[Binary archive contents not representable as text. Recoverable structure from the tar ("ustar") headers: directory var/home/core/zuul-output/, directory var/home/core/zuul-output/logs/, and file var/home/core/zuul-output/logs/kubelet.log.gz (a gzip-compressed kubelet log). The compressed payload itself is binary data and cannot be rendered here.]
i12U]$5ht2G26ƆI}Y{ f߂;:wc#ΒRexCÅ4X4hPMǣ܂S%CuU(*,U*Z+B'.yPb?Ü#¨dsȸY4 184IdT6t)[oŤ'yDO.{irv+ :H%v|gA?UK!*,QF}qF+b~R9m8[z}UW\]LxV,Prwm$I3.+<]OX1={o'6`慑r4Z17oy fov>~t]vtWZ89;M1<<-ine^)Y?xsY̡m|no{}B-~R$Œ4ieєHzfH)}Ϊƺ. t/n^6iqI}uҦ^.qX~s+VZ\l{BLKF %;od w~ kX e)dM橤G&,m.OcIAH*e)xWe?qZqsV"~*"aVb&¥ (S~pH:L><1P9;rR[1c4kT V(sH:;$之)++d L@M-9U+lCl$y_L#a$; 9#f&TjtznJ4]ޭ{F|.UE ӝ-Ҿk'\wzŃ/<ϪAR2g@3b9!1By ъ툽x։ϫȨ1fAGYrAГBˑz]&gp:E[q^|HFjl WY K l{_}UaM1UuE,_FYrZUwo jl*ZxKG6Ʉ AU:r :ܻrEew\o!9FHx'"`H,r3.df3Q̥10ذG[Y|=ٸ q۰]e;_dm-ERxf. Z!A&Y{a+P|sz`x` )1.tW0 Ψ36IX&d)C)h,YsDCJ!{r>E͍} z'Amj$f75C!Cz)zi`90_%A7(|N~#8C #K uRDOށ6 pҖf)6Χe6]F{*(_р|X2%U2#s(*v*l2Aw'fLi&t b" sqеӰx͜lHg2 ܲI]Bo Fw5%hL,xKx&fZw_fvGN]2@G5W//;扠I#I%}o U("d$9H): r=\ͽ (w7荶Ix1$ђ>-TJ*DmP ~A  žt!O@o*w7O/ P*]{zڎ!w_hx:kI/&XgLɸ+@0/zή 8Oj[g$Wle:K YJ*r)=0:)lp^C c S=>ݾ#~fGe[*UuX?K_ۭ>g_jm.=;z!v}p|^9ƌ/&_+PL )yn@dl)y@ú@}WڞJJgOR6)Œs΁dG2Dr=D$q(i],#*p!+ 5Ƅ,Gh̵U3g=;$nn!+V f8n2=;oK}7[8r)O>zBS 0 F/L#K5k:Uu2p#'Lr4[tSl8=Dfߡ $8nBk;?n0E5| Q (zȾ/ȟexUHu 9@s#\ܢkIن#oE@`%.g2S]?8IkO.&lUeۆǫ9~ (vx^ZE3+8JAïry<+w x]c'FC}Wʄ/g՗ߟeeu?|*~ug??ZʹخFYϲ#&)&ɲ-|X9s1,;:ã:G/ Bg|r-Z满UGMO0kXFހxsK0:9K߷';?+̾`vf.g&W >|i07A{Myrn*-D.ghGz dkn\KZ/[KR-]~U_u%~w%4BQN?7t?&SR1 ct`䋷у8lL sMa"QV;:z?~zK%W)oO;_G}78*{yh5)N_p|AGޜMg@R,hBc6?3qؗ ~"B[r1 v;D[`bme"m@ wwOSU*]Z)y0TZh &k%=&+aAT Qik$ܰiN Z71"bORhɅ\+-am,n}'O9`4LKS,;2~8S a-ҙx: Z|G0%%%wR 4e5̜ޅ+g^qrG+I܀.=qSq7ӹ AT @38Xz6w^,h@X"*M( 1Y !b9^̵p(<;O/V yH/9 BJ:2W \JMWoNpU1 \ Jm;hU*8*S%*꽇bBEWo0ʨ>SlJXx])6Fo.Fgsp'ņcX>f582Xrs0T+&UX {OՊ joY%$WW\<nj)V.\Jnx; /ͮe.o8kA \=J% #JpKϕT\k 8?A8xZ1m`F Ltrz~;0-`#`ઘ PXvXlWoJ*+:<͞O/5s94~VG90k ['Я4Ji&)xvg ٳ|DYOgz9_[RG~x X`<<"$nS,Re`FQGjí]6Balr/铟6]kP+ <6(Rp&, }Vᄚ6QQ R=p,IdYe n, L`x/[Wla2V[Фb%eɱ8zwk'Ş+FD"s%J!W!lI}H TZԤ8'eUw )Fx6R-Wʼn{}M3?þENԶovMU;鍛7h[> X4vZÇ<)Ho]YxqCimvQrG]<0u̾W˚KT5UQ/sXΥrѤyps+ B'o$(H}X?ǟǓqG2Pi{Dޕ!"5o:ztay d}~*^UYZm{B{Tb37bZ Y ` yʇQΔp ,+:q,ۆ² P91 Rϲ[aHɎ0^c]]J,)FeZ HN߼6G+uNTgwQ8 Ge)RE.i1@(Le]lwеb#dn!G@ff|)H㞊lF &!5z+T%&4C4^BA Y/7 4:Iּ+C H40RrȜSDMaٜTTjٮpvՖu'Z WM$y/o_VS==e>JVGekvأuҎgWah}]EI&gm }ŵ--Mt)=FFËLBF5$*PZʤU;{Iֶ6b#+qpƹ H.9MCTB&%e@62VvdUaa5 XxP,\<0 vëNoi6vO&Fp:Av'#JyɀZ( YkR HUD`J,*sYIƭ*j Y046g"L^2Q(PȒup#vy,]M:EmUU=$DHuc 3ƩfѠ0 %rd0.%axؐ\L̐+KX%#@d8F!Y"lpak/C 0 "V}QWFD#b5'ã3<; !(2θ#M+QQI/Ȍ{B3f"ܣ\fHgb)0`Ƞ98J[eD6ρ^u$\6 :&_g5)me\}=.n> %5J8Y&0D M'9CbHq>N!PL. V}Ϣg lu鶦\kVo<_<A]|=ݷ[ig< z!~|GzZя!:.N^5/2`TxU㰉q6J.b7JX8cŁQ'0Ƚ4 r;P^cu%Ø֠ΧfW28G!hC$N&Z(e6FI hTiB@"8A.A Nw\|e^]m8;A^Ő˫GwqGZO/-ۇm;*/r<5CڻH ׳=k__=enE ,CxrYqn-cF!1:8Pര,Wb<Hx  KX 4D!Gd jTv ~g}z|5̰#ڻկ>gibnaveF-wVl7ii^V<3 (U}Z1gTj9&YfjQX#do(cI )y-Smt:,n#٩XSnFNr(w{n>P:۔ ]C-GNɹx@d"$d0 82Ɉ91lw.JG#b?R?Fø9ߔrou ol}?y{ znC}Ā&"@PY)Oat"qdhDžQ&#YzD8"Y^q =00xU.PD0ې8&MX O2!^u#1!<"tNM=mJL>oW4tk-vY$@]mN.w ]G PжA=նq2d%Tċ/nz]hf_#G n3mRd!B Gd[RrBA)|r2k=K80c"pLp-;.;?Ù?7x8B2 3%a*yCht?W&&=|#):\XG"rm)^J6uGa4*[Es-B٣Q lZ3s|b4"T;ƹ-3}|tGV9Kj $)"sHDOX*eDE OdJtY6Z,vB[e,MbpE"`29-1[1{k6h( __A=-#֖,]_.z:]Uu;ޥTOx-Z ״)e9J3RLڗY;$c> ^j8RPA&jm6 Q|M&DtUK ɉ<jIY'lAu\#yg9'+C Eh%H2ԣs(3<}1#e5o;?; 2%:kC,04ZI3{ގS/4HDŝi5GҒT"3:fp0dI lҖTWBE! 
wC`U sjY^0;YFF(%OiO㧫oMO;]*'*>ۧipƙg,Y3gm9z8s/dC%+: "RƸd䒝YL'a:8=/N`ŵOgERGcC &f<=\!y[aT$Ƴ2|/aB"fҝ~ehA}m,[tvqy/!q1UI1xv\Ѹ:ўqVv^:qN\}Qل =b_[W?Wo/|{=;_|q=1%] ">hxv>[vI4O?^}$[M(:zR'颫ٍ6ua6x`ŲOƣDO^ gN[٫`{]>dWjz XHXmrJSLpP`㔟d6j5$76/OG+>q>}O?}-8AEkCNj` ]Z]SˮS;tmyͧ^L.ښ#1zmIlaZ-I߿Q]Fhw~ʆݪI<˯аI/6AZ4юoF%_t[q◲3]H7?h7Hxn Wvv?PAŽlyG\npvJz Wk&40\:.7{<៺ (YNds*LL.B#oį۠%Jp$KFe" }t1;!9ІRjJ/peQboнgdMwȏhx 'b:II-A`<m ?@Q{2}JTa9'߀uTG3^G3hh/ن:wgՃ.VOp>mb#t2Y4kJB}s?AXٻ6ndWvG­qQTVƩM|Rqre]*\eFɐm*4fHD"e]#40׍ %T DgĀ9/SQ=.Zᆳ*=دhzKQsTSv&R(0:"FYdAkhDu)S ZB%pOa)X{ʠ\݅3bF/w7*8Pc#lx&XAâ6.r%GW__Liky0lZ'`Qkyd3 +e;Dyӟ-!3?/7>c[z![垈lq·PoGPB歼(ai/p]ދ.|lVx#K5/%tֱ$'NmS%0O'Y"&O/rDKM87:H;\1+(sţVlK+EKPP) aFĈpE\<]$Q(he&PDUFI]:c58'ZZaFb> CYh%RS^gp;x)JlS6PM~@>[]M i?z;0no(' I0II%*tZ2'MQN 91ޥ'{L4\ O9Cq=+DdȀ kO2NO Dq&ץV+ㄡHI!F PX>цqgc܏;'NE_i9`#U3dZf u4*(R4lgغ676i9e8*(-svOZp F&gTlxPF!JY@XPy9R!d* ?".V CbTYg膧| ^:rċls$(9D2%rV1k Dc{cXGQ9vk/fHhkq15Ńy1OkC *&>?LM Wm/7yu?yk.E?1^,:z;P q3rg?}z3jaˋ;#،JoLq|QQ& G(ExgՁ8GZvp\[>6U<~墋.77-o[{Ջ<]Oٍ>ÕlKyꏪo⋺*g5yZԾՠ;1Pkg_ ~axJjəyzFR5Gбg]<:~8W= )‹r4_4QG;W&Y)8 A]zhк|!e ޏ@'~k (k^52dosV y˫ ||TS`1HEdKBVXRz`Vӊ{Q:.jKɬm%[\ֹ*:flXmm] ;}-oͭ7`3O5.Io~5*h:OPߢ&rptp7̐A LӛՍ_@"nJpsG.:GnihZW+V`l9,2_: FygQr1cÙT/Հ"?ů&!eTuKDw.M-9f}ڟfmX)N3k2)3\Z| a\ejCL@f٠ UF9Ey[Ê%8w7G y-m/3X&T[: H@gaGߙmsDq؃ & vj?z~x7c\YeJ O&M/5fʯ#OH.xɹ$QBKPERDbs3eJʍ$ܹ| c7Eg3D,QWX.5gz?(A%v} p6j"W{HZB(]ԶtRn\xp˧.- 7MRS<L5y2P+A{NIE menU3-GL)6"^bWOlEhD]|mdx9.~}vXYUwDĐ*]u.BmD|(lfIqH5*0PNa)-㥻>P^ SMՒD+pW%o,t>]\ J7h"`ZpIF!BNms#UFC17qnMke~zY`v1k5%馋zmY#QuU9ZNpM"v_ B$=aa@nȴ!v{|̫O^zÛ1;ъtTFV:QUR0NG拌8<0gosWXR=EOߋJaEįugnG܎_8:}s߾yJ>=(\s bs~ވI6 ߝq\5VY|qUS^2-0s$VlO܊8PxǷ39IﲟkV7?Լh*L.+H6e~EE?b=B!_ rd9>.P&\-){{~g&qvsG$9B8((cxRX`]ЌSJ]Lr 6fHJ/D}LL+ {Q9(36ٿqw!"J),iَ|0k=5[؁5T0ikIGѸLZ_xxarjQPWR٤UEPO]vPGJu1՜3ZPE!䌢z26R.p{o#\Ͻ*+oo^xF/u={nBun!͍w m*k3`a ߯ySZ\LjPP B! -o, #OG(C!t"yo.Lq5r .Ձ[4> 2rf+h8`p%-zT1;QqYUӧ\}ɆT iJrv;Sjtgm8yS~&OYR-J4!J$CFˡV)*dÔڲ}Uh*OK#=U8&ey9&  (sv>Mzr  ~St#B,#glME IfdyI*X&dfirIC0sro:d0?!3HLZ&dɮp6|`#VԶRi^~u#`[7d/ˁ/W;WO ŧx[cz ay0&W=jӥD^2!3UuH2 Ɏƽ!qDfx8X1cx'e9(SS]44zƩtT/D8˹~S0dIiW'd !uy5؍U쎵]4:rHʬD^@. ء5t Tn2Lb:ږHVrՙKN6 ypv\[`[_w1|35 bCJ=_=dv9~jz:Ie>m[}6*ftwj6ﳬD>߿vơM pu,Naȍڑփ_k=,Zo^ntK7m^9-Ci9Haiwm/J0oyo3,>p ~ ڒ̌cC)U}zKIU,\8C~_^uW:M.miҊ:_Mv\S,ł:0eRj2P\.A\0F_Kk;~I^9^L j.m(a^H´Vj*XTz%5uTZqWZ``c}V*HG &!er塭gh¸V5OUk%w_䕅ON0|ak6f2,Lt*54jti,ˏ. U", cØ\n $Rcڷ21蘁hoPKb"Jk/zE:PG&oJrZXOPM>THTt>Y vuTR;Y8Z_IZRs}qAE݆Qd^ Mh)M??תҾ[KC >_=z™6ex3BP*&]I,]dY@ZlwEFʥLFfLL52CTn,R2 HFn!=7 YƞXZ75bANٗƌD,11g7`OD3#vDUA,N7L--40x*"PL 3Fj:sY+64QcMMz("IUS*&6 C]ӹp#v+XPuaDnx!gueU;DTlhcr`ubgJ-'J@QMeĺ g;"~D:1uv}q1tqsōSLd¤CL(}+4 C^Ob i wf#..qXu싇~n9<am7~r! ~41A"\3~t\f)cH.=*JXCM-}!bpm1ȱXQH+JTW@K٫V޳K1!b ORTtqi 62SʡzIsR'wm(&_;,wXu_fmvʌן~|3)`0kr -jՊiR%cEx@͈Qt&Ճ1i%<%}JYg#%9 L8bkYhTJlyVTUbyX`|XS,z]1*Rm_=W3*{Qƙb)0[dJ w6*utd:ryttѹ =?t>CtfW8F({]۬W!b>gx.ϧ]J3CH dkA*Q.xe`Qp#"-;  1fHyR:d-nTM(m5Zd& (ԾG~ފ1cB{D yti{_o/?mb{#jY_<3plD1xXIIW MՏx.'Ou$kgD hI!v85C0#-8RnEΛ;!H FW$A]`̀>D2I[1F]AQJs B.bwۅcrzu$N?wֻb?zF Q;-(  ; x.azg4{ 1GagP)JJT:Т-}bku TVx7AJ^upLNiYPNɭ~,W7]Eߛo1S1Or\=HaD sdޥtKwJ `6ei}-[m7-!7I.{lg 3F _tM8_8Ѷӗ Б6k坻Jt]bE<}_.ιH$>b0L)39S#%!?Tk-'v#too_^ x;}-1$2E*u !y\aW8suI^jH%/3N|"4ru,YHƾc qώGxJ-1⹼"흇W?~+=؁%]|v(ke<(J8^ZB뮬Pƒ(W‰4YUV0'բ&bĮ(EH;I2=w~zzbەjfZSs{̹Zwv!I:F$Bf+XL 2 ҋkP?֎y-Ș\&ϷxS?b~ߎ nʠWEu'w p? ㇓?\Zsm܋12>8AٓS?L.gULN0-?>XA/ @|mn{y5lFn oImG_CZύ}2 E_E^ W d.TnP[Gw勺F8\gE8ڠ\q)\=JJdm#z]P5S}Xu]N9v_׋ן8ڗyDTiyy~v2>#ข2B@?mZ[q߬N+:Γggi{zȷ}6AY4ߴ'Ԃӵ0,ɪLѲ2Cz2ѡ'/>-d.MmAsZZk˒r˔X BIC+V?n "WPRs}5P䊣GIQUЄ? ”>p?hc|r[Y#WQ{R#Wn8˙m.W-r`T;ȕQ+Dk+DifG!W]ěxrr.ER-Td 1. 
Fe߻):Z>9Y|7`ϮJ%^ru~W &s7m7y=rYY|ǿ17/x||gIOX)w omv}NY_31~+غl2-Ad}aX3q ˚o7g?ge2gٶ=xS#|0 kָ8Ѷt3E[[d[wǬ}ފ17oFqm=bmh {{zǧ7#THq)FtVΒv(⇮D͍VB2%ұ[R CsVPی3.R7M#s?85tZ}|8fRZH~hCwd*S dK68v4Gͅ?-6ڔ,}n"՛7pߦri j+0ZjfN@uM+5RšDjY{~gмb4cF6cW}RёJUF-9+ZpKFk$3%2?C- m*F)]8vp#*K"LCv(.e`0 +.|KqUlnc0j!S8fQGBp˨+ȈK*=EM>_=dCU,tT:f*Jv>Bn $J!wus2ͫe7-GpNCTG)Y7̶6E5ȍK6Jpۨc;SSkn`ts=i:wK9jkeߠg1X6cBFUy*ڄҙܜ*_*T %% HQR$dHTp,ֺ[7.m|i&%h2X'.V[ZQW!5xKǐ둇 -Ul%E6u:ZaPB B$vtFzY.MVNDP9PP[ r=JNwx A^kaFx^vil5 "(e*9<7J|YX,X]8&cb!-j CKX ݑixu dx.R#uߡ˶NJjJ\`A4ozTF)*:t16hS5B-E9􆣲AP:,_@;Üt FWkVَt5TMPܝ&EIʆmBGkjz4.i`!VУ5:(*-C(ziZ26Xc9b22y v%4/j3ki m 6Yo]1ska$ $ %d&\+Bi?hx"WX)[K1 :,,tϺ0LX;*s a #JB :)$ܙ# 4dt 0j:XpVdFznVz!轩Ŋ6YRp(q ,s& fYR❐(h`ʗ'M_CTАcݺ8'Yq`J*D{m>38 *+=[ BQtʚ9Ͱ"qʰI&P}E׊X{F}f[ҠuthwXg%' cDUJ=II / T`x &^X֓~~?'?^;W0o+?8;5͈ ǀ 3~1\8tiZT -g%sJٷ]e cZQ n*WZ%\)䌎`Y) i:yazr jhnw*Xb30ce@0 9yVxOC4cvf:PF{MLO #i  i3o=ZH)FC;T."1,e"ԿYKؔM1@\G÷h5W jL!zs\˾L0;yMOW^/3˴ǫӾ\ϝIvXf`0.`5£gekOi{0S?3|vVœ_Qa6f4Yd0-7nPdlֳڌ4IMG>  <%*`O~rXr* P/h7VhoSyD;D;,w[Qz낆0dj*g':SC}"XoުͰc5A'suSjCn`ẟl;:YOP Ӫ` GʅPmQEɡf$Uca<Vu`\SFSa6nmVܢhکHk֪WVm8|ͤLсX RVcږzQ=G 9\sA]|ӈX*~ v!` EH.֔70dlCk3^FZ _RrXzGIz`lCi#4 8%߆RZ7$ȭ E>!4T65gs˥ZhU>]\TTǢdM:Sj/#) -(`ZX- UZ*y׹H]O|u"CQ>@J+e?`L7B<exQ-/Z6%* иl17ލlm<-SqG%_QQۻlg=,w}Z6q)6| !+G@J|@B $$@B $$@B $$@B $$@B $$@B $$@B $$@B $$@B $$@B $$@B $$@B=6? `H 7C!@B=F;`,$@B $$@B $$@B $$@B $$@B $$@B $$@B $$@B $$@B $$@B $$@B $$@^8C"ppC!x8$qNO!Jz$ORB $$@B $$@B $$@B $$@B $$@B $$@B $$@B $$@B $$@B $$@B $$@B xI 2VC"`H K`H D̓'f*Q@OAd0_ǧ!go>b󣧿x٪]ܨҫ_Nc ?Ϲ Jyic){˖ԭ^h΢,?ڧ+|'i{I'{I'{I'{I'{I'{I'{I'{I'{I'{I'{I'{z<}~B.G?|͛7.6/ @B>pC:J XOiy:(֣ cl+Oټo_b?c q-VGwAb|~ TW.42^zɔ9# e">OѢb=P V#oon6%}}Ϗ Bj׫Ggw'GocExGKR(gPi֪fKE=w?]oTQ.8֨Gm3LMw)r#ݷs}o%Z˼Wmc[ *%`hi2uRIIU;3I̹b>?vVm߯c=ۥ{S|2L?z" ;tt]lg^^v:>qُ,ǻ/Y[a6y|H?{6俊,~? \fv3l0 ~:˒"qurݤ%%)#}(ukE<5h(D"yX˖UZע ^xg&'1E.Z'OJϛl]3bvu*Omգ!# ^M4 no4vn҅wfz!M]K+| Lo?iܣHnӭ]ϋ|-,.?:$s2" DģX\QCRtM!*aĖ9 㰓ҋ@sGRm X10neKUf^#:攄U<|4!SiH[ "‘V+йJJc5S$P^Fu lO%?Ҵ  r)H ɔH\,JZi#B:l$:A+HV}7@,SS΍霍:$Ah%-fuΫs^ >KkkujT;c\-Rݟը^9Lg|xYK* _e>Era.z} =|ЛmVæ33͐kUgo_{(bz ɴz)  a}L3S)(0o%P0`mqkBJ!zr lAy4 .X ]w~{cS)E^WiyQH!tPZz5θFXūM§[X'`%Cof{us V;onW l 1* |[spiU5bra!2c#1~aH0\aևO0i*>].^yK"%=Ũep0zC/K]ߖ76:N|Cxuga}+`?:Oo~N?߿/|KLz}+Xq30 6$v@WCG s ͇09z6Ọ+sr˸_%q1Xr+CWCSt#|zt3Y@I dYunH&&i#@6M!/{nEE_PEn QCڈ)zT@o WE֭jU4FIF7,B9,#4<'TE1!JgC,XGkwFSz:S86]˻eqQMΰPg3t׭oxf^&o_n>rS؛(b-U406B`0ө\9ɹOj)(:} K6ulaz;>} Gy!O8ЎnՍvkkx:1uN-cB]vP5zF&eE3Wk, Z\W`4 uԸh{? BK݈Z4( ~9o,Bu_9b`Qh*!]XnuIJ`ʠ(~K@cv1lc}֮q{Ӧ;ңcuZk0ILaG|r5\)eɪ|1qs3@cĜEjh`{&+(Sg,~߯B#ٍn6 @]ao yk9VItx+FdvS)b^HP8SL. 
x%*g`p+ Lvx)qS>rZ){Q%?X06+*D$y("@m)HJ-OX,qU(\F(PpOΤpB}6no?2b*H ,(*ʶPgBx vj y{CWψ;w΃^ǣAݟh6C0@`B !l4X`Eь`X$ `=r򠌣(taj| A:I9o8&C;!uw8)|YeM)Nƃ)"@ F/4_S4aw{GF`&#A]>v8,%wˆh,͌Ki RqLEd>*b$XB0+aKS@] <@@,NnX<@@ESmQcD,t?Q:HF+ 3^2جL2(:o;cƌL"^ˈiDk45DddFΎZ7R:Ⱦtr_!ذ_Xԕr|=W[:\AXX$b>-[»w=O>m#LFYy-I@hsשkU '@ZWZut-.l9ˠo7f!,e1ݻ4_l];h&цw7;ė Ϗq7?-{nb7zן7ua}ЦXmaz*0 aЋłid7-6&\g8*3 dK!sŧiTj+`Lx.r1 DgzY{ؑhoslHpRiAbP52ǁYBS1' #-z' EcD2Y+냉KM-a$EV T ֙lhOqx2rBO _Ӂ簆L,UkuZ̷Ru.xC̅#gSJHT;e E0`gL9ǩZ#$cb:b3")1ISNX,-,xGI4 ,K6r uN-{aN`q+u4~.Rp;AWoLWmBnl]`:^ɼ!l2`%aՙn4 ωYKϗ/4_S0}6܌&, ubtyy; _~:ZqF%AKK\T D4JKB%gB„16 _yf* .$&fTYU[mŴy] VauT=՝_^]Ne|/`_Ջm}o/AۇKYw?,Hs?L73zIt /%s;eyয়4crNǂoBR y:rcË?q#" N-X9-3^joo> e^[>ZYK0X`"k"QKx|s+&9NxQ>$8;8f vg(A>ZrG9_{<`Є9- S}g)';3 +Q(r4"$sFé6Bs4¨huI˲:q2|Z[%.|{d݊[6yjD^?m;,5KrIr:gPqKdm\ IL aCi'Yw޾y6Mܲlt[?2~b ]1pξyl9@S+R[0EER&zGq]ʻHy3WW-lוE=|cam it0׫TuP z8Ǚqi,M(K,"p'\ebdo ,eS)* +`>X2,(S4rM /{Ag9O+Ӯk*w0T#}FSy2!j!=.h8'QhX<0UPCR,6H@.(/ cXXC"2 j6r j8zX& V:IdElM7Noƕbx|_ꛊ˪w8(ȳ8& >8{FÒ Fqf`6^,HʀEV(Gk#TOpYNm7ܢ J*fFǏe*ta6S6υΡ tQuJmB- ɯf0 Okt]D*,I&MTD Ȃ%YPsv Ɣ',4$cx6N0 $`) Ӂi*"'UfێabjmYkNkwv(p=18Zc1DP̪GVD"XkB9Gy+>E.qVqBdXю  D9t\0WQ1a6rʨ|%x*1yf;ičqV TcsӍ3Җj#E\ ڀV #$$8pFBb%=cud,i$.1!n*ySuf}̬;_g7'CQ(#*?{Ƒ$͐~0Y(p1~J\Q`R$(q(H sz{~]]ٕH%vRcAC2tᗀ0HaM ㎧aw?tu=@XeEJ ~Lޏ |+.ȫEU&hYlbJZi<6WNti#i#ۊymżC%`BJV9.5lG㰕TB-A ۜ_x'!_*"Nl^?\CZM|nfӋVWJ/Z?2QESa,%BVZB܁p&1qdʼnDQ vh5Z`(3*h2:&0%cPBj1r{/@SѼXqh嫭׶ɠߦ(-z,?eUϭF`lZ-)#y@*#`G#.nٲ}tn"}3ؗzLkмҁqtg[0)0"R).SC8nއ\lJt4{.ݜCkx=lA?^uhZhD)udo")UL$K,LνTDHh4[I,/C@" ,) >쨋10YY7z{H@p/x}&ɻEu0:O[bs7UFi^ pl<xb*BPRP 5irRPjyO33O/||BO<_Xu*VY(AQޒpb?7I/o}]φRO߰0{.LBe-n͖Z?;4Nsf:gq6I2ñkj0{?} 턷1MT%ٚsam[ XthiaMD4aGO=XYE6o<ħI`p2c9WAqqIVD̉ђx4b(QJ =hHbr`.4sixJ@,6dȱhAPmV0{T*2 -℃ā.0]J«xNTNӳ 4o1yW9KLtViޒ0P  n4.;/BK\lZޚ,’h|ى`N#_-lv#W=N$K[W%)"y)v*i%Xbp5}g&x9x"( -J)#RHDcW4 ?9Y IgOϱÍ uy.pWhO>IatDq"\E-DeZ_ELVAg8Dg&fbq0M,9.,T?fa[y5ڝWvv&j3_60JqIQTEKqYF$@YLL2fp@e\SK#x\a +Al9 sϹ.2)np0-:Rʠv@#gGɣQe%*9oտPcRi|L`KՁ!~p\Zz+UvşveX*QUQAp•PTQ}DpVW\-ZUBJ0rLU"X\%r]+?JTJ)!N5: j|FWWa+`2\~)YJ9XWoj5'`* 0JN'?:ٷ$+]To|OϦbwӸ :;_Be!-Gq`_f?fi,CAeld43NsJwDVu_T e0TpQϒ niU-ߊTUQq?@_)41}|tQ5_/„aLao} 0+tɋnD%9랆sO%a[5QkJ*Al .lsS5c"sn"| ըjZaoi0:/"֪%]$%<,p'l1gw3ŜeZZ@qn%b ѻD0\g2%NńYJ9G_~|ËJ2ޝ\Nt6o-K6SOR EL&99"DRQl:ǍFW"f闝%4^d%N\:F33H){Y`p2j1yuD>'[OO_~+ x-Yq@2oyFI:-aN~<-F— YTNF!]:rYStgkdVռX+0RvPtQ\laf2 WvpӝLao%P0 *:mqkR ˥CR%T֭Ϊg°.bK*?G2dצE^c7^I;(Ь7LnV0;Sì5R-6>qG4Wʫ">[S:+<7Eëeb- b̧0R}KDp4i@|x o0|aH0\aև'XfГ1WR;*AGOnԚ uLui%X}6ifb`8cS(Щ?,g} ٛW߾I?~{{3L٫:;}=8 LM[G۝_w"@3~zмaT-YW ͸)׌{G|Ÿ1w- Jg?>}?ʙ Rֳ]֧+=9W0"(U%sW$Ed#AqU)o2+ŗ6G=6m;FkEJr#"#2HjM#H06;o-lhRMҖ5,)JV)hYf%QɌs2ZиrDЙr AގFM>zã_~7\M7LSdWůwӜ r y/PmDV#] Ϥ{9N#⇘g}Y|[š{,כ{봯^c@˭cbl 8w"'A8~_EmƏi]x>Tu(FX+,40AG 9νT3iͤ݋5f)y/ϊyx+CCe SȨH!ED Z cC*}ZsjC"o3HqoN=­+pя9It:ib| =]6&Y x'59AlΉeR:ˌ^x8-I<B[K҃VH-I>t\yǭQhR339$J3BA^ UY-AOWEnȃ*yGAvJØgCj솂]>#i:*6ywvSƫߦ~ҸMմBW>끮]RRT~o /z^Tg',kч&X0$oC[0΄>:pv}C%W I,Z`Io# 8Аq͜dHy!l* ]iqҸ\ S2#-g`&+jƝ2ޞiCf0{GG&W x}bQ!l^-!Ƞ)g "з#Z>KgEȶF:Cΐ%YYPg'5ƳJ;eFsng qnfDl?JHu"He9Yjc9Y\.shP'E` t&rcxIl{W܋uZjD"3$ZV%H,cZyA: \ANQ }]8xps>\vl`Jz1hK|M .jtvfz u?2pSqW 殝e1D`t7?av#7if[gzO~''n'G)/dfy:˟Yr.+*yRyAzJ$ʧBt$z֞YSْ7oP7z`?e!:ҵ4z+].^.zY~XEѵ.:[Yz7yRةrwݕ_ںX}QeSzl2_w~hyZ6!}~;8~E%4_=gqv0Q)à 5)$ 5.k)!uޏX />J>F$tGsVum Vf\n/<_->u$h f2A+9|0 ǐu .zj-+gӝ?Dl4V$Gz2U5ʤɠ8d$Ҁwq>aS|Sf`3TתiWߺ} Czݐ>5^Ya; !-Iɏ36pLݽr]gY2 eiaQy.EI(}R Sqo**5n= JT/IHb"Y8!d!]BOn~=3TTo)P3|t$1pE>eNf@2تW<פbVh%x-eO#`$^)9:(XYQ[QY `i=~~Lj{=}w}n n=tzOO UOŪZ= )x}dNh0+J x΍ @Ix:=9y]2jk0% Y %dzr,H9]dnI9dj[jQVf mfߴPޫTqO➅9f|-^bTXؠxIJPR/: l$r $ AHQ̦&jDp:28M Y4vOZa:I\vqՆV= m;^Js >pP`J[6gX(4Du(LĺѨ1XŁ2!CĢ̆lM,j#(z6!Ra5qvÖԯa<XmjUe8Xĭ[TX̃~ UZ1xziFG ˦򦤎6hTq`i:J %δDRAkVkTκcy- 
U:#-")ATԉ5)]K=No٦{Bc+t?>Z|X٫2^}Pع|̍3a(#6I~X0AH"=ZE~Q@eaGqg fa1&hc!kDTYg(٭o@KS ļjkgP>) W D̒ Skf!&J6]=w6e9{y?$1NwLO?fr(AD_pޣNa#No`~k ػw?㾝r<{3|P}u!Ld dɛRYi%]6ݙK#dOu_dPĔ 1kB,O*FK${D&|Df5JgȀ)2ɰ_p A{(0ҏe^~Nzsb\Ccx9r.G_1HFJ+"GQ0eS {@b%Y""`rltly3әq#/59-:p,4)/MʚG3$٤oIH,<# @oZsBtorD-m 4%:gkt&1)7::5&c'攉YzSLgig<ߚ uALS "<&d96j31AĖk}.rCz G,2P9%WF3r:@mAicҷzZp(`hPOEe- ,!+F(p7)B3ƫ.A6Ȕ0Rl12)l@!FH%sD0?݌쥬&̔I\dR9xpE jRrc}ș{\,}O{'{In ,MOJ wLR]ɡoYj^? 6oc&E&L+sd4,x2t붘3]x zOȦzUmgQ+-e&sl9*0N9I"֊91Z3=X4J)!!NŞE//c_bURba (G%Fn5lf5|n H Rqa^u"xar{/u:MQ6/!.{Yu9 z0ϸ~kű;3~ᥜ󬂖ȑspq~Rԡq%/?߀6pGU7G))>FXnc"s9~)܎v2ﶣv Ms6y()ee8c>+9SSK{2),$u>x.Tm+ k6V*Qv>U+L^{ ^vcEbKxX|.̪uތ|qV-a^R\Q[XBZqc8yXp[901Om;م\?peM?w_͟7Et8 7Go ?;gwCht3]Jeܫ]7rG4N[~4wfbWϭon W鞘)` a|4ޔ-{836Ù1vSp?zcQi0ގMqߌO"Mwݖض~귣Meg֖嚬'|/x]Ob2iT=;OzKZR-}TK_/җj9/}s]AR(;]hs&ST$bt(I+s9.WRY=ٍvo.VlG.h j,q^pRHQEOQ~1 )Ğg d\PPGCǀ7 'sVA- ƽuH ÌМCY{d8i|&xɽwG2Pűoύ۸^tS>kfk@LW_*ixhiS_5= )Dμa6ZƐe(vк;H'N9gH!};t@QT0t%paVk^9>w)ߌ>b4|5QmRF>#=[;$J.ZkR-ƫCӣ]7-eGs wǺe{NFJBH:y!"m V[܂Ė90xdT^b$x>m<j~TxG$"%1,2*N+18APvdb^X2TB)! Oۣ*^z|4 ! 4P8` 'V s&k8H(8aF"rZ{n.մFB# t d1UB2%EIu@ZK9qDHgDH%V4 B T"E9Gz^[tFnxgq9 |h1#:K֗gE˓W#jqc\4.Lߪ?Sș <X+rs:UNSfI4مzx!YpԱfH25|}UOv1 a :>0uǦ(µ;.& ÕgJ`pUtb K KdSr)9gpRHkXו$qrS E4ͧjp~Q-ճse$ćo{oX}H#] CZႵ3,`0 V0b]l`dr*-dۨ-sk4#4{$ ́}>7nb >٨:m DNįUga8s `ǿ8:ߦ9?g{{7q+0 q l{~ފ ߽ C󮆆񚡩bk -q|qeSn|N%妦*@~~пn߽]\'8ʹfuQ7_a_W5E.J~*Bk_E>b*GAyU^U.+[Wj޾Xj-GlLF9H9)P$ i$P^1!J* 0~Go$ȼ>$r>`Dk18F ?`:Ök0'9J-eEgwZPC"Lnodߙ(s4t aimSyD5tޤS#_z%\]/*4JoNQ^UD:?⼋*9sH[G‰uXTMa{ӒЖuVɱ:INwI|N]0. IUR1qs3`OLj9+ͺT%@okO=3Y@p4{"v+_JytJe.0@"?N YmXRwW__ѧa I^!=ͩ@``M-F"J2;lc%H (C0!B*Caj3֦{&yeĀ(hj4BZ":vvFΚnJ6 ʆ]yrP3P@J%[{kt8ޒyWLu]y^U6k5Z fRi%MRҊZ7S82k~sh]?Ϋ˻+:Y?WZ2un-gWw^AFs-x4Z_t4:<z翳؛p]3s^r&=efMuϾ)>kr">Przls%RtCT"!*ѡjoJT* oܕra_\a%^ ?O:)WK戻 r=ϭb8g)(݉ٸg#E{^Mqۑq#Zw6rlHpRiAbPe/x0)4s1:uFJ{PT9F$㑅H>HԔ1тFQ4`pH9k:ָ=r8|QD 4Q('ΡTr|ٕ5:h \:rF=DS B8AZ4]wmM iANv x`M jkdxV)J!QM,jtUuS2 aByk KVT)`E!H#6 ,E#gFEALIPFYtR2}ʹڂ5[+]h,Uz7LUz8kT]32!= ?u]%MN-J:۸\ڝncbß[bGkB98 LD˔1Y -<4vt^]0*z%f$yg5s] mg< #G"$Zz~ζ\/DCb%ݐjAțo^'0TW'NC/N.^P(5׽LpfydddDQ5+PJ1fѢ@:'.*z^X")C$ 8$2'2Z^9}<ʼn;o)Z xP` :}P"%N&v,͹b# K0.pMT_ 1t1@9 ڭ H L@l PF*2DY.Vlz_H'B&~2SMK-mb C :i}`:Uo]BJI{Ȃ8o%ϘH6 ↌UM`A\ڥxYcnضAZQmvur*-]R:ۃ%uKݫ^My-I#w~}f _0ߑ510={\P\rI߾KRG㮋}w>ĩa TB !g pڨ% ׉'n$X&;u]AX 2'wg@ :-~ ڨ<毊VzQw4uwbOzɑĭ-W6w.*@4c]xԯk؂E*!ǕўS^ӘPbZ9imK2qur'!so*88EvJBR+4D0ģ^&knRBTݱ'~It'm#viцXݢ̍}X{g:5,8J3$h@ZgS2p0-)'qBLX6RsVTY.+3Φ &f8l"c#ܡP@\PR/' ǙF.mOX{1QlS,(Ͷe0%#2u_A;(㽊Vמ{S g2jRɜ#ؑȓZloʿvt}GyQAy:q g{4]!4K01(̹O1 la77 )݌t/cS8dzeHQs+K%v'O(,imМܰ@q4 p"n:D8 "|u 00X!E&X .|458{`xR T\'M"(mY„B N暀:E) $bRQɉ}#WU-2{S␕}RTڲ \&1y# K%@< p[*FY;MfQBD"4L 3SR>TZRdKo`cѷo2[35F~ȼ^vf,v4L.\{[$[0֫ e;H-fIWXI7R\j7]UkUj?5anm˙ߪ"im[Q"\\Rn55H֔F 좛U.Ɯs[X9,sʻ^W޷rnt87Jo~nkd!]5rGM-RaPgꗷ-O57ak>, M3mn_0RTד<֋ov<Fa0ZMM%z/h6 A~k~1^B%lԇAn㽏R5g 7_4c׵pԙ]WZ-p擬zoyҫч?r~a!_XlWX{vGj.Ȫ9o͍+[rc-5TqCx *zc#fؑ/x ~ &8|8Kʚ7ճBMMąݼi^z³ rrnzp(Oa4[=z%;[sZɏ%eZ 3eplY](KYΚ%JsV!/nHϞ xN YQ= & m^^vx^ jX/7E%1.P 6, VUmLH헋Ee`~.|Q˻ոߙ'|{?V OżeO{ c;{Ql.F~dqyR94NbN9<Ѳ-b"&N͊Vн#V|Xܻ, ){?yڢH*NIIZQNVFA.AKD Ookl>EMCs^|4ٛ{%%s<;yDs!fì*[(+m"5YU[SA֥3nFBiǛ_|tc]vO.5-*9LYŶӴHӒ0]>|j4 Oqq-+Pa >kY 3I:Q.Ȥ mfc'd;-)f_Md4 JD\q"X0H&;8bL[8 24"AFb͹Ap[tqq$N:%_gYr(.¸:;\}K3 %`NviЖSE`šb-. 
aὟ[ lY#z6#fXQ4A7&S9?h!>aϫ1>S;̇E3Sj:AМcvb<}G΋WbPx6'ċ3zul>w]x$:sBK =Vª8zb6l}L\Gkub6דkNzj͵Tb,6l[, \Ljz(\ X("4WRWĕ_Q}ͲPm geʽb30:(S9W{&xˈC&2~y}]4.[3,1V;P>X.@ &߾wΈ0͵cWIry0^%&Jtn!sV=V :*3Ɯ<3sRK)S[̮y|=/^tۅb[ۡ`mvpQ5[pjdiry=猒Z+︌X2\XK$Qcr I4s [J$ i][ Pb*\Zx\W2Vʐ+Z!B5tڈ+m4/]\5"tj!wEu8sW$׸Ppjk|᪅R :: PжB|=r;H`V5>Tn ŸWw}iZ-?ulk=~[=Dz*M\ \WzV-LGN)fV ȷB0%`J I6M4t 1e]@BZ`pEr+R{z*pF\JdtcNrn:)r *o64i9ͭV;; 4W2ixH \GjjG*ܿB v&\\y7zS?Z-TW-ĕ,MW(XW$WPpEjm:Hlۈ+܁W$XaBrU0KV7W0i%,hրk g0Hr WT/{ip(rE,P@".v WTnz mpE:NiL!}\LgS,X+)/?zs^/LF- n@ܧXbC{7mH"(C>nք.*bbOl"zkgn*oF|>oû9ೣdPAncNg^f,@+_?ϗ@WNw<Cu I$WP\BRky]BRd%BV] [ WVTj᪅R M\̺p#wJ\i ̄]`:\\ Hiw*9Nڈ+#3!-g`]\5"wJ\Yii0 '3Nj?$Z+8XP0WppEji:P"\U}d:j vćԔO&juT*ƚ& 5p[,As >`(\Zx\JUqevڤSE%82g`w$aki4Ux֘cTZa- ? 讌=1`LvB< BNVg NxZ$#.IE˙#J:qӱ"V+[+RuqE*E7Zm#̅A`pEr Wɦ Udڃ+m4W* \`m0": 5Wwj#qVpE]8A'+R+tI*+]@ZLHS(Xp$\("B4WRvUqzpcq~w{<H fMp'?yYg0wUK0ZqUO=fZjh0XOexV= z@ Vr;WH-n7o'dak.4ɕ InӤ-Ĵ5z9G̈ӋJhģ;T'Nb Ġ4 GO~Aăzj9kG*E7JOXU^VVunB >Fh^޸v>4%݋G4҂37Ͻo~ "bar/zź'3#zı3:$;]>yŖ4djxҮ)n0H=Tү׳\4+UReNGWP$VisΣete+_7u't4Yn\6s%n ^W7%SmbȖn$_Ulx-٨7JcV'Zi΄H 83wq*JaʃK˴r#rtUߖ'btY<̿L{g=}_˘c0Bѻ ,`IX$MP^gr:ɌEo9\4F3]Cz؆T~v5k{bjTa0 #~%H4U2[8r$7R?eTa>w5#3;g44A:S0_Doʟz9Ox|9/ZJ1[)l[U ih@ x]Ɠ kWѬtIy{3N1wQUo$ŁVS@0޿fw*V3K Do˅',Qz E1EVk}ďSt)( Q2˴xO \?+Ǡ|Cm%v~vu(o\NVwGxSu?W~(.<]..WW[HQOsMeYm˶2IMK9ߦp~D2'%)$)]KMFi7Fɴ "Ӻ4GWeoݝ:RݓW Yؤ:rmVh>Ogzri6ЇrY<7Pwl4)/Gg?<߯^.>u^OK`S$%|͎Y?Y#iPIcz뤅$m!s*C5=~|t`vnܢ 6zt=)~|zu=b;88"EdQ f~mLW(T}Cb{UV 3pdPw10 //lu|qhכeT,>H_qdB|0ʔDr`9yEdSDKs bo-irwǞmX3W47qaB?mmltwUﬞ#iEp$pЏ-0tF $|`szb<> O9bnXb7%*RsXt(&;qSak0C)z_RJU?PKęt@@>+g ntzt@,Н1Fq{a1$8tGh!T;Vga'L狑I)fȣ/<{-r֣f2Z[3I;8\dM9(J@+p$ lVwdan36AW7"HqVP>R`BчK?xuO|hX7}U_/_'=> imfa՛T;bg /[xa}.owI;ߡXxeGh'`Z} k#v%iQn|ƍIWr4P̌g4EuMGG) B2Jҩ8P>4`tHci|o*pYm*\\62) 3A*m/\3Rŵ/2*~KiWm7{h jрۉc(:0EC霬;Oܲt,Ņ iqDF%1 C>ˆ 1{V]IM])kHyᓗ@1$@DԔeHK6>o&Α k|IkLW=ݱ?wi-yʬ wY+ Ǎw{mJ|D`84泏-koO$NK dg-v]TY.aCk|)| ;4޵pi;{?IȨ :¨$UVˮeMEMҨ`$BU Mٻo2{(CPHKr %UT^鳰(|T {3qd/f| ӓ"z!1.uwzg^J1׭i)VȀZ+E3V)D \{d<1 "dTS#Yf WA2pַ2ZYH.I%VAFl]E8um/--tcֱ)_FXj:X6@Lr[L"u|nnn\^}UQ1̓% ņNip~.*^|~o/ ϔFHy#E_<_uS'XY\Zr]qEb/-m94(X:SPI#)`?I^:_LAj61m@a89#YޖUetN3} ۅ[2*7֪tS/ꪆ%|cڡvCnxaۋ퀴i!67l1W8#aIZKSAB"N,ɸdY1sDnoW*wlVz=(/ G "k2]ɘg80bMke~9gLV?hehU)Pp|-eMXE)ɘ ٠ 3|SC- \aSIZ X y_Y%@+e:& 8 ]w1,- v`;YCIq߃WC~E{^x~dgZS^B,^.c AtFk4yŎ^$\gQΔ&"3iW3qr,At%3QJI9<36ӌ}i-|/dcpƶoRLRfR/,) I !M1$c]fȌ %0F]\ZUG}S/aR]\a3qÞߤTx)~Mch8xĽzcL6Y)gf!iB1Nio[2-x)uܸ2uRglgRNbP9f fF8=w_ub)tI{ʹTas{5^j#4(O1 +E')R}|fOdHHqx)vG?5c"zng~Fk>1;^|iD EvmKZꮒ򮦎v^ytn`&RyJ%/D{6I1%ZQ/dXdͧiOF( 6ˡw^M~|1a6'ć?4Cw1wݺQڃw'!<ᙲt !z'HD5 \RI5$XLF9Z0i(-AJBh@ՙJ"٨E3fqw`ǵJ;ً@>B0X3ݼb`,5{tTwS9.;(ZvBK狱uXqU23p<鶙k+d99w=sח4ys29F8;1E z&2 dec3o>o Joc2N)/Y竛vJ~OKx !Q$_si*W(IDR8kX}3~IF/.{c -vp`PE8R '^ x.AuY}>8 C6)`F9mBB+lB4N*{D`ɟ nn'g-tYa\ZeK*)BH Hrdi-X/|}8Ia ⺛&c\j~ '2]]4[/Uc!& Wq5[|*j$ߧ._V0zylQ6FGwo 50]/#¦E|GV~ͱ[߃[l*!(\ s |TJnvM:\!ϗ ˜F:6e=z<:77GyV_=K~~l4k؛%O=RQk3w7;MsYΦMaz:Z^}'~y,l5٤_+ReRb ^I_!siG?U9V~Ae5}.Fo>|џ4I5i.:w@TAwmZQ—C@6]4͗(zYn^Hmc[#4ssȀ`l֤;vH5zۃF5sW111ϋ}ԐB/_u^֮7:k*5y)݃_heo,S5pvtgm - N.]hݵOl[N?BNCw}PYxrUS|5%[dN߬;f7ж/$yQ/ꮖRo֯@/k[1noc7=\_z`6Mn P'ax?mkQd"?+Dg>O5#!f\\oe5ҥ\0i_Rˊ'_uyPbee&yol7_[1kG57ZN7bcwr}mzc>{?_ <41A$ld4ƸZKP1* Uvœ %JIZER,ηlYb _πxt EIXmL&z8s6/({aӶTu6zD/wR:#­q)Qj2D1H;_:ųm^4yNf"b_}EpUz͋A0hY>̾ [>(࿞ Mc`˒BZr Y۔ҦKϺ}ΪtțvE^,JBxP ]Є,K]kE _x/N^u([)qr`ծI' E,jdQ9={Lbl֭9L*hokWilWX|fvxy6Covr>WƐW:m1TC8u¶>AOVajPߊ72WK5;yТWwƯ")VI/htA0K:ɢR 2DAlp5 碣Yz٦w>ie"ɽ?>vb8Wa$.iMJ0I#jj SH13هT%nvOw}-.d_ߚ%7+Ul7 sO֚dia}uF)J5r@mTZ%jLlQ #mjLR"(#b+,t{m487C0C#5w2Oۂ룟%3?KvFk񱗘O#1AZzQdGC0)3\mwe?TCkhC5~(&Z]5t,\rW誡r骡4jOWϐN+gng p]v2NW@B]yC+RW$ RNW ?r0NWWj3fhP*)h=]u襕jmR;CW . 
]5v骡dHW>_eY[\߁[5k)OVBUE J*s`Г3y߽گ<1SA^PdO߹ch7;CӪ=aw֒vJ^[ei-e ds<ڎpL8BG >OÇaJ_9{^ܫ1:m?"GaGv!W3bVUԔSφjnW=hɲ݂Mrf;# GXuI6l$l(-%3]h`oUwfX 川5R+Z]+J ].۝+GtPzgHWN2BW CghOWϐtDW_ǢhW hS=1J{ztrb6I;ZGNW w{ztlwcwvZJ]};ten90l[zdp3V=-]m=TF(Ylʠـ̞n=_zqr8cmVڮK׾~xtZ &M.xoI(CUbIg컷7]rg8::Iᴕ%\;dy?x~4_иY8OOfu;jo~}C?vM;?@9c]-?) s"Z{ W%ɢJVBlωPmQwZ ^ƠL= }wcaTnx)a;A)M .~@_+RCh=RK؛1 X !DDЗ ")6eI A1#G-5<V@FY)dU! QCH+蒣 SH*2](NV^u )K[0]Nh ZS͡mK28QQARXʃmn]9 ڢQT 1cIUhoS G$l ŐZufR'[˺ X (aqm"\c p.CE H.9ù"zP2. (ԠR)zc':ZvJcb$ J@F;0 & B8:\Iҏ v+tz\EPB]^ ߁QfBlTzCJȸ@#MABm9g1qЕ`;;q b J\&Q!NɷL ufQ%@ aFƏ*"UZd[R2AAM%hD]"iiԞ% o+Ex@_T;RDBJ.YA2ZêQUDI)beIن~`^* H 2 JDA ՘`dV"Db;P{W0#ۄpՕ Byl ffmCף]b],|144_(*/,Rvp')<ϛQs;` tt6?-جV \+UP ^gkQ'm`-x^x:p0& f%Ӟ"Cҥ`suARnY L͆HyE < 0/,R4A"&$iyU!}`M.**"K1|OA r`t_Tɠթu<oĭ ‘ ^,:UBN~D}% TU`dND0ɰaC$ovk}wq2 !5*+tmp a,zԥn y1j$!-./U pQWS@R@"d$eP׀y(%G&B&! %Si @pȃ,K^ϐB.9B&ڜ7jF F,0Z5A΃p:Dڤ td&.a6 3*I@@R$&JM>@2 ~ȃ!2!wGzc5*#+Yk"PnT\U1dH $c]:|֔ ǚq .׀mtԞEwi`d5 3K oY RP)LUEE,u i ot@f)y͸vZ(AD[2dB@`0X8hko=MO:0iL;?<9.+ε-3 U n]\!L5J+f=gMQ7"E0QGMAZkp^]ioI+?`]40=cṁaiqM8<${'xJ"uP%eE7`bUTe2QmG1*[0?tg[CZo*Ͱ1)s8A%ɪ 謏0C<;7,M~.uUbMpIBwF*`=L@L=P`=@P0嬀z;`X4VA*D \Lg"&ꬍ  @N@6V'yX c%, cX0 8#6\TYڦr {BЎ$=97Zqz"3)!juV`%1{`PʚI d҆'҃JApͽtU813cYTWdBDr`V 'M.>w{!4@ P(B P(@ P(B P(@ P(B P(@ P(B P(@ P(B P(@ P(B ~znn,$5{~TRMS7i;~PU6)[1νeAJ`H_1Xпb!]Z{*X1}.N Rlo?7X!w`DW`Xkرt gL֝J3)Q[]{W}£XL_\VZNU"?g'[wmnY}^Z&I 7eE&i.m29ΌY RHE0f5ot)b7ۖz⦧IDž/ȹg{gU pX+_i-i)U*W"qZyMDeTȥ` #*nK_ࠋ50|y0L;wY4w6Ӵ% |S)vqFW$@b')z /AAƮ[~6G_Fh ڲfo~C#MhsoMp-l̥p~~)vdz7;]_幛zrpyBX=׳ڟѫ[hx1|^|2_/KMOмb?_Wӭv[;nh4j37QƁ8;܃iYZU..Fy7 ZO_-PHe)dLSU\l, zS5uX{Q 9dطZ,ʖHNU ccOR6:JĔ``Ɖb,UQG!f%"e@F9ĝ͹7)nk]\a]Z{tM6|uxſڥ^Jܫ!n U愲෾K߬vTcNVeԛSIxmmcH:; SSV'L´QۨbGm͜;j\y 4 4ZNR c4SKyZy퀽J Z' >ZE6s;2UZ<@,d],dQpmQyټ0㗻MZ,͗~p/NpX_ ?0bk KZ˒ER)ӔxHJ Ď1"e.!J"#{YMtP^F Lc36 rLi[3v(nǂڭyǡ-ZFm &Y1 KmY%wSxī̔4G2"fxq=ZvclKE2.\'O$\0S$B|lH|N:h$Cr "(`"[C嵟akk-5c䭳X.xs%txVݗ%i5Jɀ\Q4IwHǧ~EdS)i=TI*wl2=NN®7'/|,Bȓ6D4h)dCUVD_eEL|K>΃#wdKQz'\,!oN12@&g:;"x- sΉwOߎ(*H^'yjХ띅Xtqrdlhw\+GuF &rp?J&k8sJi'Bd$$ڃ2{URu%.IVgL %u`"c#Aʁ6Ԛ9<Ǔ=)c_@ywL1"KƱvc+:?!~F-i,ALI 0'eMpF|ZfX zzSօGC;\1\KtW6+ֲ^X5n 7B˭ ۚæ]n_8HSȧnE{owu}걦_rHBFb2z  A]΄QHCgYCX*4g!LsnẌJʔHNx{q(P|y  O VH ).P˔2+Os st\Ǥh!$IvUTUQI̔ ,Q\I蜜);OR8XM`-Ck #1!?"%|8r`qeJoo7F{gt^ďxΎSR2Z%PD*-*X uY.+uHqE 'ӈc/ h!։ʏY']TGbd d3I>󀔧)t݅aWSjGA[򉑣>2~4I&6eQTjIV(U^'^Z*tDIo:ejޕA ͨ{#@8{_jK^tpK,0~?DZ;]tg-I/n?sߚ\^S7OSx>UIhQ~_Fiuaף&Q=*ID¬M"gsuYƓzVzX6bҧU]uJn:*kg7zy`=BZj^"LrSo; ʭoLWz4 _]Wz!i^Fҫ~~$YȆxߟ2*g9U-;vFo_S?Mpڠ,y>Wog֢Rـo;SGo>{ nI_q5mI++"@*ըN-u N༰41 -S&1oQjb4hR9O6*J oI)Rg-Gɧ :D 4q0;c$:'-NBd֭<#֝BsW/,=h\;J2@22ٚQ}p4n\[7:QN hTS (FĂs `I;Yk!+@]>.Չb!R2UO\R٣B)%OqiFhe-p!(\Xύ$!\ޕp2A3⩣9'LIExjۂZwC\+*ÿ pRd&gOﱺ<N p>| 8}9*E{P)%ZHbeAQ) 16A1y!)C=wD(=8WgW%<F)ՠ텒 TQKϙ#JCHF8  Rf,Ɯ#H$HUL\h&6F pgk!2  2Jn;;.@z< ZK5eo}D+ [۫74=wΫ)Ok,Tlb AhnQ%#1ki+piQUU*ȸ1 ! 64BQyt6M2xT"SJAiJ `!i\0ъRԢi(?8J]O==U *cPXK N[HWH#j+g4Z)kD$H TzM.v]DAp$r-h)А7x3i$;AjW %j"  yp9)\ZdC |yָ@(>ʊ {,2Kj:>E/1` /$??H矚7TP.&k* E+ M^^mzD4EbHuўBeVl) lBPF('얜])-LܟMޜŁ){ZhgUQ 8''Je\ǭ/sLT-:}֖HY|mg- \\Oon8o4DȹLNF/ g;'>Ǘ Urz{Lu^U<Tsumyۻ›e`3w~Ӡ[ncd$ GÊ!nx|pqih %[ij56#$46rCG-Tf~y0|ן? 
.O~bDW^f9IUz>aoSXB^q˵?7cx=}6`ߛ)ϸM&'KgȍCls,sO4t3/oroSz !VSeKd=lA;{) [׹.`^VD{⽟C~U6z+gCW'z.Տx孝K8}x3aiG(?BwxXw5,@[lc,0KJ4ޏեӯThU%wE?3'{ C?| {9eW-g=xj%'%^E, RHNNqٸ@akVi67NS{!M a>_(S-WIS_h¹љ%&ak#՗L8ŒpC)]'$U(he&PDUZXГ@7hdq-<҂F0\u: ֝ u܉ǥYqWY\F:Vɦęύۄ Bt7-Yt}`T83«R2C) DcC4)8[ӠM{!Zy8pr pB `HpfxT^1MDr@kO eLGK[ŝ$,6* *Õq`(DP)X.іqgkَ;g>Aߟ(ل#Kf(01E 5r*(R4lgNDm='o:1$儈**(-svO@BI\[=!THP WY.3`ANZ#BiN&𓩜]J'D\ D3n`QΈ{թ|2SJ -f$ON!9 rG\g/#uT:;!*{Y~;FA[PƠB@M9^۫!qOy*2Z` C1MgBڹ" $kͬRi8q- ^>[AL4'4vipr[K,vR2F:G&-%b5Bd@,)dj;;ꪵK_7K` 4@o)n쭭*76+Oȝj,PfT]»W=u%!ɍ]djY 6Wk֓3)c,]`s*zdK2[R,+Ē,ٺ{Q͋+#wy067 x]hv/n8xS%NI#G<|jVt3/<O3W/knn2Lk]7}.I9|U0 LPR׫ ^ ]@@i8>~흓35bGKڻE\e$Q3^PaFp:Ky!iJ圀Ue*+ &N &dHfEy:ήߦe쨽x4zJs{wqhjg( B]ۂ;yK>F9cI1k"hbq*TAH;`@3C}( cgRX2rT@ dZ6-Kk.sU󝚟œc Sz|(KhÏ c Ǩ]?{Ƒ\| `n]n>Is(J~3Cr$EZLfOuw]]U}tjRZ-'lW@׹tm6iR㭇1g.APj5),UxP*R[[)ed7o_fÅ>ͺn]-/ٸ5/-9XjYsTkŶ^+k)L@yM>jb w,<0v57V"FMa섷Hr|cj?RhO&C0 uDe `Aє5HRu,e \zYD85xe E͊W`~3:`t ǃ?ۤz@̡ugF VYBۗY|^^;\fWs}V$4Nrg:nۿ](ο/tl=e=9p0 4J%>#3N#ncLNw}2wm1ʳW@6`nflM]NL& ,L^m1 urje6ʏE$C)REx'9'Lsk~ܝ ٧V0#)6gF\Dc2_fSI- ɅGqLބO9MSX4|&q6y$qXri6Zw_eVk&Tr;f }DP \\=|W(PI]\X2~#C3+i7^42@Z!9a]Nεgx8z>,NcSxsąRq GX#i Hz40 Rt40ea:Q8it`.e 3vuQpI09CYґ *ڶ{&)FwvW+)aJ**`6ϲSo4wur)ْjA7Bf!3XeLrZVyL$a=`3@0Qh? bq,_CWԿG1c_XW s6F)'Yytf(y90>6O;\ }L` N;trle?&*8c[)$߫Fc~,P"(k2Ԫ;W N:\_îLjc]vbj( Z_20A9"CZl9&FDH JDꨍ!>OB[ݺzuh2mvU-K~,?!~,6(lO:2IHJq8" +L E'%QuB B۵nNp qr$! TG!*,APm+UnCBz@YRӒ~)@wj1u;nꪽK(oV;OPLEJ scR8@!LHs5T`ż8O;3OIX ='ѐǂ1( C=3>0XP9G0I3<`gp;< OR׍',~qq}82' D#:D&衘K*QZFϹ&?vܱ~HȦ/7MbɞThCqH:Rڶzv=0D/i׍v_} ݑĄʆbβ8렢C(q㸠@+n7,}rae GeDjF 3z[ZU˳szLިYBw<s  >  +CN~a,p2t~z0龝|\D҇ }_̾F7ŨCVg^<{|eu |󺪂%B zy۫'seه˞Ϟ?VH{meΌP ,*52 $I)ϗ+Kbxq|7%_)˧W'R\A3ЉVR2IPA0"E8qNg&􋬊$3ާ]ϓy!qˆN?^Ъws3* +.@Z_oC_խQ)g=ZLu-Coq2^]uf\,kl$yڰs׍_?;^ŽMEbKٝJvB,`8ݶ/ l2~cg4}Wf̛zL/J;'ם~6o~Y ^S^j)o 5J |`7e\`r[fY>aY4Uw.C&F~XD&ODMq #򤀉PE/4UoϹq~k.|w1Iu3Mi+&|$Ԗj3wEhwh6(yF:eѸFyp$ [gYdΠl=&RTldM-e[N^_ ц#STrvZTNmkL:,rUP1`r8D@ψ%*hM3JiRBB<9lZ`NZxS]Hfm&,Tsjho꾓b *0j #A9[ s iC@Z$JSIEO %tc^t3o=X:lqm -UsS4.ͅfЯѶ8V.F}{dK0s9;'%4Mksi%gMsIh9WtQ-/`du0ɾ~FXt0ʻދnߘN~,H儽)ڨS_^(g ?Ϳl۸h9\! WAsh.bw1˃62GZ,#4ʘ:@$3:xOE&1iQ"59)w(wdUwj۴צysOrky{kZ~(A6\p%9$F KI8%ldm\ IL#/=ykGػ`ԣ)/.gMQB-4kV'ajo͆KolTcdø}gY:BFjtBJ׾$l>!fJ^Y rVk/;^sP| *TM6f,4||YyhĞHڐI< K3-P;Yv0% oo;2-}6oër}Q j.(#(o^Ժ dZTU]==㯆3Z5VCP&;iZi1 ֺU'WR`*i{uԂ0 yc,S%Mj!=.h8O4(y` IM Q0$Pr@*ō97j cPK8Eqr%ީ5>5I$v=䛌G2(/V4o]Iy'˵W5ʞXbK~,95[Egȳ8& >8{AÒ Fqf@b^,HʀMm'GkQ*&brԖx<[`>FY*aak#y=[]朰p;tˤ/9`,x7w[\n/O~ 7+&?8bKC԰br\JᜑY0$ Js!1I˺,4$cxTH'sXAa) Ӂi:TDN5r#|<n;vEm2jj4UO 3ܓX}$c.*őiĚEPn#O☁ !f`E;$` rG(X1=r`TiLx94!_ []'D\71=* c cΣV`q&BrCmamS& #$$8: z "0(9 tR5r6N\m$I"_vXb 82W[kYR<&)YrDI)J8AHuOUW?U̙HlMKe:{02bpyvJ Nj" $#( D*ZF]bWakڱ=lynLz2\#FObWieIp}㑲&ZE7v<ʣeqa)SU\!os$OLIMlaKA%Ki[8#Tȵι >(d<*F2ĝ/{]:,?n$;ěFh[s3^ew!l^~=e,3i$a<XJ#ߏHMRjịUh4BkM̨3蘌IT.9kD$50Cm&g}x2OcC㵓_}>erOQy~EYɳb, i0/+KI?7-)R ^?_*8 -_Oޙ,P O'&O)}6 a=OZ+Kѐ0yQJ^50<+q.+WO_ჴ `m0mkfh}/|oy1,ka9,müT.G_W/+?!y5U>t6*S=%wf/{.ex6y,W4 n*-,w3ݚؓu' : !N- ` ld{  g"r;EGVXC(M )%W,JZi#B:j$DڇER0D)<XKl!h|pxyF(h=% |wB~\0fq7ɛOZ"Kf[YRպa ]9 '|VqJyQ';Hi穛엿}ȮÛ YIw Hi4Yl0U"1/Y)R?; P0E]I[3SΊ)۳0y}m(Z85!!zq l!q:2iPMTgF辌N߄ΔF|m-w^\ct+A{?dS *0 /z-R?HꘃyrCMҙŒ[N J?f?~3 lGhy\5@`M+ 6Lݠ\D)՟HE^=wKǴQ,H3>(7^"p_0}ځڝ] _lVH䰥7#YsYF01IiL i$P^1!JC7!o 4ql=bA"b>?h4 $:<t*-Mr.Z,zﴎN+{:E:CږNE?|ņrcΓN;k`Us$:b< s8YBu~oU<1TX04s-:'FmbeP}s~J2󒞆KlTh:\dۨX 9Qh#xstei׳_.y`$*Frp)BSϘ8SwbĜE N5x4F[&+(Sg,r_CH#Ƴ/ Yڤ0ʦp_Z"aM gYmw.Mz7nFQIs[#g@$`5=[0s/L*Ѽ&|0)1~~%1|z;\ ̸P%Xzؖ*vX{w$'"'@dB aV3&)u@-2 Lpp7Mc!0pį[pC9-r͙A6ƒEiC }_vP4~1A1ԘA +.Sm0 3J3ކ-DRB 2h4~=#yjkZ>O| ِ஛Q%^ l.j2mQ9.wysr1 DC CQIatN5b:A,cD N*"HXL޲2ǁYB^Ĝh.puY{% Zds}ZV9JyA AGpKq|W9m(4_v~Mۨ91zD:J&22gSŇJ*;rU̞L쉓q؜Cͧ>.;wݪ! 
u- -.m.nzTXh׋mCVR& 1A}0xxjnDh o3ʷ-ɉs5NM4n.Juwa): #Jv,pWfRrJ>"Jに$TWsp"\i!\%ݾc$-9$% \5^ &cw{޻LZ'j#)邿AtmS*8"eGWI\?\WO\517`X}7LِV5h“ħ^yf.)87q0W@Ə336_Rۋ,Z.5KRc.5KURc.Rjlv]jlۥv]jlްہ酢v]jlۥv]jlŤƤ⚪Lkk+:D)J $`JmR[/KmR[/KmR[/KmR[ӞA<m>_aC-R\hr{*5GNpU3*k\Jcwݰw}h}'#,;dž'T N,&E`&f4bN48F#^geK= Uxd!R&R/5eDDL ` Xy$R]gfYqp4|Nr!~tDeuwvc7o8˼]ÿ]PBБ3)%$*睲N "SE{Ɣs52N[N*1,Rg2DO"!)Hnb΂wDðq$gn%[]oI;lJǧKx,y 5SUI3[Lr[ ww%X-{dZ5 S SŻSܽ>RudֵqW؟\>wƮ^мmҒ;>:~ yYlu쵲2aDD&pX5 o3XǷ#ɑ/_A'Skq*P=Q*GLMz$.ǒ/I+顧KR*^=|0T=^{͗5ǖ͗ zBxCĉ8;DW/6zqFG6-pOĐS,؎c&MqܟZc/_,Vde GE.C ,+WJY13teJ>x1`eXQh^z+)fُR.Gҕߎg喊j Vs#Z"0肌d! M*j`Hi 9N?`>JcXX!$ȨĹ[QYڰG`x{RVt1}ur۸V[4l>z™7 ?CFiFD4(c<8=V:T083ol֋I0yh-RK@H1 N9A9%bdF5h@*72fgllXmelB  d٘\$xI5ݤ7^ j #GT"F )qI96JᜑY ^9ÎĘ$-K2x OP` @% &Mґ0`CEʌĹ(桠vձ-j̨ j vՊQ >hG=B1BPY`FY>.捈\HC3`ю Xr0 .E@4&3fnswa1J? x?Q(Mh~Gl@׮ѲJZiHy"kTЪ=HN՛`8LeU]XfRSg>bˆ./P󛞛MݪR03|V.j[UMޣRT{l#cNfvwk^ [tj!m/i"wT_ C3uݷoΜv|@_qtNTMn@6%$ZAAם?UlNqZ 5F+ =5fɝAVm7lWimɶ]m2D+YOհ_WVTqeXbWcUЍM˜([.Q#3>i0 $gv{Ʌb7ƉL}f__nl3|"LuѣwD-s]h}wЬ?ϫmv7XjX6,T~j[ݽ͟wmz(2Q8̷71M2 I!egۍޜ@NWx։+m`1c9WA㔓!vFĜ-WA3lŘQJ2bٰɲ>>0#2exseaURba (G%D rn#s R!̀H e]sКm77Bbo.&^b5y=Ʀ|Oa!TR1Œӥ6?ue,:u}? z`\s6 o3g6`?\Eí?=CxTo}4COzt.B.`G}.'W+{kGg)e<]셁{ߨU/1f\pzeW{2^aH8GߖeY9UvPR[&W&WGbԃߦEH!G|* 1N97(F䂧lqq.]֥HE4p UYd% )gy_&<{uopCWw9 /8 G?_]#4nOG#jug_sƚwVؔq3qQ (s…..6N]b[{P)&:$ľ~LlgS-nVЊfJ2*pq󘔷΃"wYI w%2Q%JrWuH\&.%\h(8Axg ؒIX|ԖhI{V{9K64\ʎd*Rjpsf?Q ~I`?yO/uTe7,?p m&FiWX(znKzp2 hZGbH|B5bv_Ng,Lfb&1%Aht=ap" `G4z-N۴=.S3ŐXաZ& ܃&A{ks#:)9Q$+)<Jd KzkiD$zC8MOQZ%4"knV@Yk\`Jf߆r(Ԩ51\xd[Lpsy JLDPy˙o/Wqh١7:r!piIp85TBS(_iPgn# B+:d7{]q|qOksb^ Ё]fvTx_q0pg QF('䊜ж('n:Mtxy.l5Q$9VBwK6pb 7~rr34wz,MƨlVכYާ+WHD]mV ˫;+~*C QVn'a HTHK6ˬ]@?qG>QuSiyg=D73`7z :ѻl-ەx2n߽[E?$D{JB~>˶dXk2RdVuIO&*EAOo]f_t/2TlK%Jk2)Eyr]h<@8't0iFG U`?5ۃ0:sw/^~}|gϡ,N !}' J[$͍X#iMyg[(%(,ܺq!}>/GӀ?.b38rkG{Й]ΏGQ8L<ȸhh W'*t{aAyUz~Cǃفy3ü|~hX:(x1@MJJ=c ~Ng/s: b7*݃m?/N,'[4f0 w껯Yc{x|~ۯK?\f\hssjXoۉ֎5,G7KWvk!\RBKkhJ"B8KKQm:g*EK܂'slkN=D嚫<=YʧNR@Bivha\Hq$H#RC<))8E41g!g A|η,ԥovQ,}13n:Dn t"|ρDKf>c !jAdH^WsBe`p0{0Avƺn{w[aٿT u0kTfDH8P}VVuJ> /qaRflt'`5ɪL=2: ;!('5 ;KY#_]:אmbCZϽߕ_>5#d/غM(eGaqv"] ې U!(%b "\J /h]/(u^ JeJ~耝{+kE)th1]+DHOWGHW lOµŨ+@+i QrҕrR]!`c+%!B.#+_ۂ KVR Ϋ+DEOWGHWndAt(DuhM7CWWj3LՆpafh9c Qn+I֧ٞz$|d<4/rȊ|14֢۷7|LhU M#\kKi@k:M#JNz>BF֣ BRCWn@nꞮ8%7Ecԍ\ `'PPCoށD5Z6>Ɍ Pas[PUFt!`SgGWWklQUsFMgǺz6s+ᬪbZc`B'>CJBQͮ.N^Rojmն4^**I 0'eRBxAsS$DFD+;"JezIxPXATIt)).--#+iR% `++UDW_w+Dc+e-c B0/BBWv Ʋ [Qt&•BZt=]!]Y\O +Y1tpM1/t] !z~Aj#8φpfhƮ6Ci;n@Wmk(.)(%Үdc+R1UW25JB8\5;7HYM#`ˊiK)/-뼪DciNTv_\ ,_(L耺U"fOvɁ~M[~ƾ$c~y.ױ]R2}rIيq- R`vN+i_mqZ^T؈K: ӆpe1^:S˿#1-MAtWEQ r[ ]!Z)NWR$(gp jY ]!ZcNWrEtux5tc<6d2WrOڀdEyZ$R$!QJK#B7н6B֔BWVRuB] գ@8+Ɗ+[΀5+@I?vt7$u5/u%=]!]i%5]!\U"9@ w+DIuOWGHWFiXAte! "ZvDٵS=]= ]YM8-ifăb_,(EkV=?y:Kq3Ї ofh53ʮ tmS͵6CWWR P*Qҕm`Cn(8ft'BnX*R454 (RixhSW|ʮƴQҩBS@<" ڨJxjì5+4u緪xPvP5\E̓ :$t5 )\ӶU 2+7 ~~? =AX_-_\4vT$\URA ==w (4LjM]J1g89IĤhGmiDqJn )۸˦7A/M.~C]gi֯ތq^|U^2i%^Η߹"QsovOSU]41?•(=^8NK38$0i2=g4x!f<وn Dzw-mIP~>' .,؋aZ"Ȳ_-qHJJT<1"[f_UWU[!HPGEXg!D)oB .Dc :dT t .2,=-fQ_;w6xIPO1 (Xr,dIc%ĮY,S0˖g5Ӽ帬~αO#^Sgj}݃Ɏ TuT^#rA%zr6Q0QIr꽘eZnPh3l ׃[{9/J#M8!喗Vњ}v=|%X<;g%\ɯqϵu2m=Ճ?_]3(f a3_Yot 7w7fG-W/pocbaT r,{Ytq;4vtYU2}ՂaJQ5,{C5Hw/zgg%\yUpJGUvFn;q gVے>g\=1'] h1x)U 4QDIPHyQs\;%S  F5!\k4l\Z;}L:}l@ꨢ_s%>egy瞽m;LV226>r'PXp(T)h4h#%1yPM:OhYbP߃ Qy " ҏVzR8o|ˠ5HxaZooeZ*Ƿ2nz?+!LB eT'g8B%7co uA,iNlUe`W9vJݵ׶Qg 70夺m%T$(8ϧ,qIP L 1aQlS>}":!vHOKQ@H#ZɐpB+]C&SJ '&*QQv}\J8.p $Z e UI$% 5Ƅļ?JX谵G( }J. v òȡH/ .=A{O)F z8bVZ%F%eo&1"eCZ! 
] jz7CտoiLJR D ̇έ#;ixN ?iǎ4qJp!PO _rO@t>KE\h䘍!AS>RlJhrpbNYDS8Bs}4z' pn<7*(ҏKRG);SS`Ig &YY fH&J"/V/gih5&WGJkVl=WޅlgZ-'.UKp0ϕ 3=[~]g%};ܷA4)FU]˽I"˒eqvwHݢ~m,~[G??_?*n 1DYqV ~8ҨKK6˥M>`iژ{k͚gO5Z~=~~mF/'s Fi8-]sFя7Y.#-7wo! #ͯ:Gn5#hfX> Fk;w1khxr5zPX]uTFvXdӨ kUTڙȮbbh8bhnCk~ȸ%7}\S1;K[3{?|y7YEVw}-WZsv Dur/.F6^Vg.Yެkc-B  ʥi9O]);tn_촏6ģl$v οrD` FD[U""zaR!iǍt1 u6 W):N0D)KRE!˺ܓJ=c ֦L3ɔi&Nl#sNvjVp[`U*QproZ U!*to4 CjīEKнXِM+>rXWT2 #((qd@(G%qpUG{ FaEk녥8/"1DҀ(bN񍌶hom:>Dkɯ<O5 ?"hiÌN -萧v5,WZ+k籐RҞfo g#)*'!YUMaVgB 4Xab Z ܁DEMyZXt~+7[@NfH`YH*JKBN8R5qbة>lkN..@$ AɭV%CL$ld.i$54V5P21C7D/(18-(K$F)R!S3xk9\ayBxzn0w5BlߏGH]Lߩr2=7Kȓ{Q\zz<>Z[:v$wN+Y@hw׹+SqH5Lן[ֻ{:]sͺClY`bNwk{jZxw5BQ:XpĜ}uO`U+zQ4_4=H!! fLW2[HXjmι$jQc 0+D8-r2Za@*XR?nOR?h뾃Gܫ1L,S{оg:D0z NQHF%O1 (dWujtB. ƞXKF*/F폫.W7]T7^g82O{rWMxs#/^%/1>-n(#)JֈFGUTwױcZM%2 SŘ\B=ΣKhLwp'ejΉ:b?b0\76H@Y9 &L:HyCe*D "Gx&G.cbvD yQ -yJC]MExzDyz@re3$,&cJFpH ( \Tve6#2 O33Bg|RO4Pqx`hj<($TĽT:RC(,I"F:OƴQɑ(}Mhp6P{EMn;  EOT~g}![IM6 {T1aN0]Jy/%$#s`⑎H{b]޹grx(1\s"@znw_^=RI2?IN~-C˗]řL' ;t04'Tਟ"i}QÙr28ƣ3K+f&MŶL؏?y׹JϐN l}}L.rBAЬo?U~yʏ?-vL㗥S0? Wcgq~ʿ _ޢ8],xK8B/zJӑ^N* qލan1ʹRWSsH5޽{]&hщe$+͈${_<O&ٹ:QdEq†4-=C.~.}뇞Q.ݪ(82Vpv4UEy]̬XY1 4|_/6:rtj[^b.Fy Kw.ˤxݽةes]hzﰘrbr)"!y/#)_kZ}f{`a?{, xK7ݕͅu/gGR}%+)we|du}#&2ťdju+7-v)ΊIWzt2ci.QS;;x<;"h%e0Wܝ2H&To ]ל8bLuӣwKlc͆qKW0>Vlp!U:)g{PF-Ť 3c{N2D !Bϡ =O&=/pA !tyazP[hppXRJEJclYwx|F 5)C|Ձie"u`9F&ѢdB$Ae࡜+#T!,>}|1%N]oL6.WA>|RJgA ZJ}Cq."tKڣ?pMZ'YDGzQۓ@OшZ&ƮAwV)`Z|t-p)8s%!2Y"(!0P#\Yiurq׈q8scJuhak󚜯dv|q}[Z3P}ge=ͪru2rEI,$b\Q3HXQ%+~r`CLҰȓ7R2r %F:(fY@E DQ&p:A:$hPQ]Q3؛YWKScrrih*4WfAʩ(+oc$%",R"zrۢ "7NjBH-,tH (ROki 6*4HRM3ccp3csJc\ؘdȅU^ƸФ9/e\:H41ٰbŋ‡Iǡ|Ou(lKzeʰu5MNpCy?*<~@<]CeHE?OCe&׺gϗȃ9r$_K^A8V$D1%0x*Db:%Tj,^E}cJ Oߘ#{YLrh9g]]߹=UUw|h=1-T_) bn!a¤Id|kBX+Y[$[{ aʴ05t֬g;]ewt骬 - xYZt2nq !0t` j? C~(]=wtuF*65tp:݊2ZNW%]=ERFbB/.oxU+7Fūr[a:= "V'Ż_^=/Ҁ9#z@{x)}'DsZP>ݔh!hh23\m錖c録'IӌN`X2)QpoqY$p.q~*ƣxۂj>Tcf8X>\r=ͧaڣ]./V*i~\`/f_F>S^>Lʍy%?Ն|%T%n[w/wmrt#{#[A6A  =yPn-h @1"0V5&!դ5 -nf[h|&! bd]4Pd{+B++I:G^풫"p8F9'ި8Y\dFs_Tav؛4J2#%}yٴ1.$qpQ ۉ5LkZ vuY.t}lQzr߫Rs2lMŲ5 "Vq[HIeW##zDr掹>[j8 0y,g\?5Me0W@īo&G ߇p&Zu2FNf ԁtKVVbIj ҃eOi?߿+Hq!*黂v? fr?pfsOկui`\n%ML4@ĤH@ '&(¢8D(͖cZYHև\d*Snw|IYGA]JrAjy1ρ 7DD/1 0 zvH H S(]f}JpB;뒄 XQ>赨 )-fC` &4˽e;Vi `D td%.(ά$1SQ] !JP b@88"㬪p*p**BȲ$`3A6:Ft3%VmZb n_+#lipF+ j3+Tjj5]ZzVEu?2Yn$BG 4ytВ&Zjmmˇz̃vwqk.vs{LX_{uLkW PC!lJ''f= ]Z(`_{$|v%[Y:6 k*%'ՓFho1& P/F~{x6l}YeFI*ÁwâDluHmkB[u_s ]fuA;( ,3 UbF =>A( =WZyaKl+TOconw`E_Qh"Ndvik3Xz7*V1 /LB)E#6 Ef$ӽ°w:H9-0'GJFdUi{O:~"iusbn8ܶO׸֫jmI#teaRҟ+yn?=UA}vۏo닲jjWS~2g]rTWeݗmS٘^ν}ԃf=e|yL=^jX{O%eIi=5% d@pD@$'8 I Nq@$'8 I Nq@$'8 I Nq@$'8 I Nq@$'8 I Nq@$'8 IMyH9R' Yj( :$PD$'8 I Nq@$'8 I Nq@$'8 I Nq@$'8 I Nq@$'8 I Nq@$'8 I Nq脒@_nI W\VSvC.;UY>#Q~{?Np p8G\J8t z55K+B "]EM [;ν+(tNԖCWSmyy:p|i3x \/oqh}:Yؽ+w]9NTؿ<֣})3xʠ4)UzOmm x'JfW?mm‹75)sW#k?iu|kh^brnNRNRdmgZPOnf &6C * mKi 4}4A}^\xY}IW55mB!fK*?W+*Wv\׋j^뻯_[y7ނQWϹg܋_UdEJ3m?\#_\AZטo>_|ϟVw>w[v3mA3k~/+jŤΓIM1>kHSE5a.wC ~cVuWS A%"DoWIB0+S7$$Z/]JY$4z9sơ+(tAӕٽvGj?]b"juEh\:]Jtut ̠p0vrF ]'roЕ7őv0}FІ}tE(%o;E tp b"WW]$];+0tNQ~@Pt+ȩgo܋QtQhx ǡ4 #3]=uSQqJ:t"{.ݱKMLgJs#ϵ9zoSn圲a `04Mp0ڰt&^0M Mө1}lwyݗ1blwg\ߗ}.K}>}^_T8jOf{+О4iPyqۛ 6ɩI'SDqꊎ1*5n/ȳY~2|YR-Db:)6'uXkJxtH 7Qj).R,NP-sa"/ed:Eh+Vn"^BW6P?Xf:rtHl]BȖa*GS+j `'ÜGUj. 
LW'HW!xFڿ ڌF]ڰ[턒]$](M<m4KCWSD4FF<{a3xBtuEW LWOz{жTsΕ> U878⺣W DӒV4}qh^:MJOR DWaaFtJ/ )ҕb.ϓ:b&#4 'uƽ<'u -I~߯?ôy!zlu DmJ6]X [,vRl*iԣfM$ }騍2Qv/&:1%aBPY+rv tF+Bɷ OFDW؍ntVD+/\)Xq\G"V/^]JtutADW,8[WWSr#jw-m#I{ ~?w9 M`F3jRO[- 2")vuwUWUՔ<te&f$Q+D۩箮-^aL<}+@GS?QPWSWwzM Z$M;0ay|612%ܬ~Wf^?i.'c_1'OE1|M.,X<Z-*gJrTqc/l:´{=\ r50(ۋ<O~w?|3XH"&JpvGW;?_M]zy[O3 ܉QR&FC:Psu6PÃpZrJ [̢"!%9!6Y$@T^dJxzg1έu\Efr`*q>)~*Jb2O t#v G@鲁/|=vMAue>՗G9GØ5uÖ2}o^ik.&&PҮb>\־+B.`1trtVⰺf+ n\˺ONOٌ6Er(@hYO׃PsSoRWOڇ _)SxQkt>!*9)h=BLE1"0f<0WD@Ѭ^ﹹHk}1=jʐ?v"]$\TՄ A2 `A2R St^jFm'&'1Ez|Wa6xc[~8]2` Tֺ]u-'ϖ6΋pGg7/5͟ƣY%`gL\{aq#rU/Fby;DJmx}z˯*EV{n̈Hre#X "ɐ,3ШHGӪp k;cT` 9a3a_/6eU8Rn+帓F%ZW@SPa څ%X-LP)x`Q>$T;!D8âipOazaivBXBSóq9Ѥ^m~'f!?+ u"SjO[#`sֻl{> V8_+ g9y`ZyvI,LφY>x >'.ap.d=y/]`m~W}wo}"V/_V7f@]5Lh&/̨fcilqf4J`OYչ2rS }(fEoHVz޲| {, /:&eb("{⁖&&?x>{Q^4ok/6K  .g?-r!/,dFCabUN].a|6H儽r@#ga l%rɳ0K2rfC;ؼ0{ 3LHJokaxڿ|!2'.d[NpbR9]* H[S+#ERa3 -`^EQQfԪ  b+t@Hm^ wE%KUe5rn.+q3,NxbtۀPіAO`-,7e7u˳&U0\0-gif&;΀|MVKv܍cg=1fБ/~5!79AU9",$LS'ZGHV=Dmy1#wU3r{;>:$Eό T`dH[GN[E3h,&-59&r6ȹM,"1rbrX+?"D4ak܌AC[=vyrH<6:T|`6X}[׀* AL!% 2זw7ۈsș47 FK Ѕ>bLvỻѡf\KtPQb2RՂ JC<5hIA<`m9J{h/|wM izâWx&:#"sE "H[V $fxdT0Ѥ"dbh|BFƥU|\TxM "htL*"+ϡ̰@a8$`=`@3 $@2q}v+u4"۬]$naüh@Cbhqj#'Vs&k8H(8 ,r0c'a.XЂQH k d1UB2%EIu@ZK9qDHgD'H ]cP!J*#u^[tFnx3FɸDc a,V'׌5c70V~ؙQ c_FWB՟l`”A٩)<ǐ-8b TO3QpfÑ_q؇>!~2H>2EOV`$ 2)" H-L]ϔe8^9x #/-UÀPdPX.*&\\6tNjA>y ,$.HD5m6l(Ь?=wt K4{),azQu!~.Fˁ.mφk:Kc]겓M6U`QUer9iuͼ 8fImqg D5// 'utѷ?~>߾>O~<~cL58Bv~a"w$J?GӼeT[4-q^ mڕmv+#ƒXCZr+C;,뤿eL6Z֫&X.|?AV+*DzܙT"D#xaߔSWڴ/^6yН]Jr"#2X0O#ZmP QTd(pр4v6c;*/Ct8zN`Dk18F ?@:ÖW\”Z,zﴎݞN+{:e'OZ }vv N yEYי z3vr)‚dGuP!h e6G=(+ Q ?Pd{;%"=X (r< tʉU_\qQ s@1grH8+7T%= |pv͋j%_ g#c}h^\&>@!o?nn?=w}o Z.PHx30=d:FY^1TDonm+L.|5ruJ@?}<!1:b^O/GqI#F˙2r(*U2;S.@ȡei(N58Ɗ GED+ b# _H TSrk hPFc*h%{qø)aqo(9( VB ]ϥ}} fظKyW?+>kf]Jb b:DxBSi sJk"Rt:QtRWXt0L{7|~ (_V1ҾjOTGxvp $d],9DӘrjk`LD33P0 (92`pfCQ{ :$LۊxaR[1ЅA)ıH1O ȹL^&Lƣ;ƾ,TWk 1H:biu4ICbi#G,8B]^Ũ 3D 0EŨ @:XC0Z y hY#=ہP/y) p*,O"5@M#EjX[HB{L*,WZk1c9@@B*`1<݊뒞j䬓ٿ9=snZvuO{М0CWk@,sc̱|68N{ 浵y$"kXr1G2-,L0sէV0#*)dFRaLs1/#RExj忯6V_kc~jצۺYŗmK oAh9W%z=1%1 yYb,eqHh<ʵ^Ln=ͻѧ].P$ŀa1FA7-VR (6j~[^u|P1bjN}QTnHf5Bz]p,6>Di`QTVC Iv[,vGE3Á``$ 9)PPY.Gqϕh1}=~2+Yx#pÊ/_' sinfD&(d<*F0j Nx0y5LvC1/WoٲY!\o|iV;=pMK F("u)OA0!+-@zw$%8g`1VYE 62(c\"kD$p0P5ll~ T49wֿ\G(sa-|&f9FOVzF`1GIeZڔ@2% /USM{ym[dmpNSo _i ȱ ϶@) Ĉq)A ̩!x˛9݄t׵SAY aEQ[Dt1x|gU-w$@^!- [}'( scR8@!Tj<\@ X1/Z<ܷJi!1( C=3>0XP9Rpagx.(H]AKzv'wsC>ZwόGoh%u2Ȝ*cYH1Ts-, E JZ}6X?$Gbv-VG!9CzSufa_ɀFx.()wHy#zNq`\ ͍Y`< feY$6m`|MO\i #ĜeqNDu W"fMOxMOt fǿ( t~fO^3V!kv.ZL87Ѱ?àv, OxН/{vn_xW}Fʠ_pU'up/`2p^oacpx Hz~:6uSzN>&U]ipB/dFRJbyM'^i:JJ)g]q˗?u?%}/:Q~+E"8_&D@L }JcYYys#&֥aU)x2V+|yEMw{TySr_u2;)9^Wù6>D3-o;Yӥ pΠu x½: `PJfÏ}p tdž,hR}9PQnIU FM ή*6M⻓*|oΛ^7\]xظn4K|6fXFKSoۄL67N6ᓫ"r Joe=Y깯+klijNSxry~dwu}9?A 53|"4,p1ZvD +\hwhovLI5 ,Rc }56}ΠxZk5Qurr{zͅf`Ńu'NBdh'QvDY,\w_Շ_V;͙Ufjvo&Q\o.P|g*j{jYuN z0,4Wˣq͏=O^:"GWI\ʏ:\%)%j•Bc+Uc$Jb}p-\F " \%q9Jꃟj)bNTm}jw#?/\=X`Ճe WgK)Uj[mjG\ݼ/SS.|U.@YQmw7$@i)m1jF /1/zZڒ-Jm|,#7޵q r>f_8I b!SbM*Iq;>%LGt;;æjH=Np.}, C1 ^TDZغe[0 JP\Xdp >0b5`<"Hȍ  $*u,G 3TtBBTVB#RWY er]]e*% TW0Ǝ) hU&WvuTr:uՕ"G ǃ2hNr^L%坺zJͨ>"u dz=Uִ^]!S+C)ԕ<ѨL.cQWZzu+SBAl£ȕcWQW,vEhv]zJzf*رL-7mWWJ0z -`tYvS*Anx/7ӫWZ709{qE`TDN : )1yv/;s QtߗD-W2"}ʳ_~z7e_N Nz+_5ܬ6+Pv/pX9&' Ob*;KEy8n޾=y_)E8ߞ|g{&q*ʜ[J{-.vF݇%|7~{jfYˢNU f7+̾Isz{3,_S:ǻcdқ59ȃ˱cK٠O7NvyV";EcJcXۛOW7Y;Tɷ0](*dh|'U88]veH"dqv-7Pxb5I:RLavַBM2eMpNh"4nH )D.˄POn$ꬌҀ "ڽs5>@(}ߌӶ{lAvq϶=kxg(>:v8,q9IdžsEov:@JBij >Ǖ^RaQm$"*NY}|8};׸ 8>{tmփ1->[Fa6G TzJes{٦341 -SI@$kjb4hRIE)QPLZO=trSdB-B|0wItGpT!NAMc|UH9jm% 4a޺ull'٧J=Sc]ka=ƱCGτxi \p =~n%4$Hr,8DaE BuU7awU݄i -ͣCF!O()| `]ts,6R9&\F#4l Aznl $ 
UBCe$gSG'FutXm{3{Ncu:xDA֨A>:LS_9ײعDJ umgG!Za2QYAcCy"XN]$V/,lT$.H[AAuyQa&j9sD[y^!c[ (q?6\h.i4DUͲ=ޢyTJM{aa4-`8F5 NKu-XKRAFCS:~Q!+Y$.G.2Ofj FVLEAG~i+{ivw=2Dku7l1ZsmӠyU-7˹] ϷY/ -&wGyaX0BB06O8y _=YٿOx9Qz8*#:ɺQkURa~娓q2>ߏIIR/3zJ{jgw*  L3@9G.r}\y;*+4ÿmB!r7 YY t]07sg« j//JMϋG"L &L lp'+Xğ )wy!w"*>n]hԒ;Tx1v7o_gr }|7,{@$SBe80@$\k1οK_NOHO2\ÅKkp3i ~ -U)Hä!57N&ep[͚ٹtxi efj} ̮h;\I )_,7PxV}C-mHh5)Hi TB4"w'΀=SAKкТ9+ ryT03F(Pr#X Xtй'zʩFi+PDP)Xȧ #k~f4v }=ӻs'Ԛ༄8aV9#ʲM< 3'n}IJJ eWo@BIԭm( C-Ebf2ݒ Z#BiN 'SNAOhUBchX e*hSR\K1RRx(f(o1:NRhxȗc q}ֳFt2"z{{ hkQ cs0 >S8P `B^q~Ps $x(AݴMwEV'0h 5"VtywJo-^vֶ]<NXD M(oHSI R5ɤb1J&ijb (w[ %QMnIlx1r R~޵mtBnh" x_ nf-CsV$UA!)Yu<%AHhxy2|-5b~i7 _ǰnjVz.R ZMç2 u_32v޽xZirݼ(Z]rl!녪N@ d;Kh]?.֛9thxdnuвe;[wvU흷ɧvy067?yCj̻0~A閎[&;#d%\Ԝtw 66./səzn^fn7nN\e6W]x`L{)cd%#J'* o%CN1FaNtB:сg"щ%K~α!II@8jJe`QBS1' #,<T {Ҍ/r }9Xͱmry1BБ3)%$*睲N "SE{Ɣs52N[N*1,Rg2D4SDc>$M9/9%:b%0l\̒ٮ u;4?9iǒM t/5RkL΅`Q173뜥f񖎻>wcளi\z;x.cCSxPOhrֳlK[#\,] [uQ]cn̳˷ͅL笤]ov*v̄}Nsey0kʣl}GCKK)FD§,x *0`fAD̘:@$3:xONg [ĤFEOÓ'êopfF;'? p"z%Ѿj鱙rE.揷Ԡ;SHR5@43z 9ə4<"fQޖ ?M˼fKkmhh0^R{-"a}=~r)Y>2I\ &H?2IJB2/ &q 0-̑Zڒ-Jm|,Mhw1<iF?.8 Nyus&Gv!gP7Y͠j, pqi,MVT䲌8(NOr2;#|>yrs*E%Aẏ VqFߴ[Iw0+N]z~M~xT:LFp1b5>e-j!=.h8O4(y`*N. Ѐ]QQqp (IJ taa9DeVllW1mX0Kf{ZV><~**YZoO;#_g8CzFD4(c<8=K~K*aRǙۀbqDR s^dEFd>ZPG @Op7(ěUܢ C-lJ6,63BBb!A>(^Ki4%x?Lsr,,M4(`0q]Źկx&BS4 s=0K8f~,H ^RHR\ZL.8UܴQ%|G\82snI1݃90r:-Pds "61"b\DJ EP,sjw6t@F7!stn3[9zy?*<hsm4QO:2I/*pD&ĥ u 6s/U"<"N:`x`Ќ3o&MyaiPw!hJv,Hץf.Zo !t}H/+eGSw{Zl/־^T {d2AI09,j, bK#ͥ P<3L}g|yX:3-2h03Èc(9& qRtODF:'[ER4?ȝ\FRaz"KsI% Z9²\0;ϦW뇤@Va8$chPw2r3. 4LBfrxR.DYE>w%)oaY)+:  2! B:֦+ĕp01BY'tP!JQXq\Rx-Ҍaz;= ow GH*Q;5̪̏^3.o9Ō{ZLxVdUppx'^^xn_X+ǣp4LeP/iu}ݺ_x2hk1`$Y쇁e|MPN'+އ>Wмxa4Ä/5N9Ca0M{? ]gYo߸Mҗ^[Y2#T % qͼLAa6IJoG7Κ 0C]ߓ_+G'vD+)HN Z/#CtT;(t0xX6^] #:?Muf[A L/rQ۪{zck4%4XNg{WǛb8W6߲S[Ƈhfmg kwT_ M3ץ_N^pS7!|lǖiЯws"D@ Mrϯ*6@}JΛAmupB,!۪}%Yzemqmp\SWK4+d徑lA.8z_^;9~I/nɉTfD-ήXZe7+C_hܤҒ'OEٮ,#{N^њDW?6Yg}[vъB#M3d4VK/ XٜES$@G _Xm>? 
z9Fu{mƞVJ"=k7\`]cL1 3kfy>ǃaC>1qt*yYVKwu_ߖu?D!kj޻GK5EX>CG׿ d,Xm6-ڹSl0.AicI<7% 2n1݄\ Q)z{lXmz|l 0M X6< ʚԶ>]YtmnVΪ&m6Y 7u ]yp_MB_Xǝn^xv )D0c9WA=_㔓!b%*hM3JiRBB⳼Rw??L^v/oԁXl5ʑc тȭe\áT#9 -℃^.1P2=fVR|d 8 gΡxc,rfWgf5iK[xF@r}+I^r&o^#?Y[чCO{ zpߍ@ /ާI=}v~R.y SЋZ2(HR]JzV/YyNפxMt p  ?@ALpnMX߶Y<57yq[^v¹%\ jl#\Wkտ~x43봷 ldMfaՀo.P}sT&"vf]֝0a2\9W~"BLUc$-cWIJ.;zp%(\J$RW *II:zp%* ,Ux mKsWa &<Jrq,pwWɹ#$bGWI\-@Z~*IIDWȎTB魫; S]M\ݤOWwRÂ+r"\1%dib0X?Zbl&UpUqsr<:ϕ^+g8{JNrs4}}| 3|<؜%X9IKcsr]X/+uZ[DP\wlWO>  %,cx0oK/ Y\?{׶Ƒd0%%ր< {z !o"ifHޚ⽓j [J&"N8]aS yLQg}O<0L^3z3 Vyd@d *!&\m̻~w|,eIFaf`,?K:?=Oy@[>DC'k80&fe]~n ]4\9j]asx(3x~()gi].dT`wӇ_#ѕS| gXO4_F?/-xbߣuHA_J>jUѳm'ao3gn]e//K沧͂cWsɓ2TjpKw1/??X|g[,>Nʢ<⽧,2b~'$_Hp*r!uCm\< j~G ]}NTD9NqOS dIuRD@73 eB'P#/uYǐ SpTÏ#I=bQyݶv$v#~- jbGs%OHCe 7`[\k_M}pu KBWS/vݳޖI$2GĕPpd3/) kJR aU'#C<}adFtt>iO<޸] S2#,h%U>'+OW30ZMG~W3NoX'4Jlvhm=ʩuGr20ɘMdܻDɄ7xم*(GrXa:DPp3% `]zTY,Ձ41XʱR͜#%ݚ̳" 3CI'A(+Ck0lҗox(XPFZt+cC}@S>"b N)t]VWshJ.Ʈ)"""d 3tE$aǫr .ƂFm6 X\+t5 kso*{&êE'='V%$Y봻ܓJۥ~'H|>-Kg|?^ZD% CLvC7ޞ)a㙁MB:b"8%*tfN$1f^ ܧ hT[5P30V5EZI?+^~J|D2k"0lA *j5sj~_dT3!MbIҚgb۲M~?yҒmKhl_k8gܑds˹szV4Gp#ɜ`V*GL΍ @Ix=9L C`%뻶6䂠'GtA$Eor"%=dHFjWn5Ba|!e ZE(+lCə&`Ȓ7%icҊ_*ttryu6ٻI+3T2q$ g>-E*&f.x&`SnnJWD8tc MdzU 2f>Kcz:\9УH/`H ##(2ZY= ͲW.ŗ7耡 `MLgƍ{L4)keD̐d.{e !kByE@ٱ݋Ζy)1x{8TͱZfYqEtY~Aۊ2~eA^rsFgR(qQ`2 /}bN[H:3nSG>vBO<:ޠ K\ KdYVz0BD2ewPƎTL0(~~@/$\AY8&drlh\iXv`R9G ]W+(~~A?bG?jBu)ᰁ԰ėjx e/t?ٷ.HM> 1$%yeS޹)DI@X'es^t0~:IROIA L zt.[L uQD"YPlď$>v;oJw'kXˏ?[%7:axֻɸ?|7?ڽä͍ sm~i)OnaI9כ~,kOFQ 7g7Ob1 Mקܧܼ7zOy2x?nIXT?q~ν)n~oywww޿U( BMj mc3T`+]nHү?QuPz)H1aY;;V0$aCoV8H4I iP]eV@»?e/f$VF:'o?m&_ G'`;~( Yu<8ﳷEr.ndI;ifOi,_˟,aDM 5 t"H [??ßFʧ_ʊڣRT{PLS~1 n/z)o٩lC4az;]*`@-2 ,{l 7e񸯪Mdfxzo?qKYv)|8:'O? gLjB>3=_`$17?6] .l6wB%w>3zedťLVos'2 꾛r(ܰ$YLN/,m^2؃{S)q?-5GYY,+{^1O/ը_i:1LMKSei"9!"+xnesq^gLajINhx|_d~9֣W ,&jܻ±Gr`.0.h1vsՠ  eg#8 s/ .̕Qҭ ?b^uâABD) h«B aƝ qW-Lh_~ qWҟ_ϭ\퓑oRR;M1q J2Sr `_<@&f َI7PT%Z;4NnsfLAUT4LPv%ʼ3hx9)D͒u`.o6V]l;ܗL6lܝ [~5s5Pᫍ:G[P&sl9*d5N9I"֊91Zf1F)e $ʑd|G;ڒE N`߀_*0j #ǒD rn#.DTpXzā.b}ol[K\Km^E\ gѸ(s(t93)gDsf})Gknȡ9\!rh[JL.0">ͻ\rhl`d;͜.ý YFh͕1u"G#2H2gt: L-bEλp2qt}Puqzc 煥Y[^4>h5!X6\p%9$yD PqKXڸ@&O%'(o+Bo_5z⤶AYnpƛoZog֢Bv#7;o|ߜan2nz9Ȍ0?dsm$IQN)93Z'!kw1)7QnmX.qv46S=޵\΀L CA`*Ye68R$#Y@SJx @Zn mKK:<՗Y3nC6=g0Ba&Z sa~&ݫ̦W_wѧa}9ѾfߚݲzZMvQ}0ld.-`ki]Z7i=Ri:I;v2$jr1XGPHcHL~V)/P,)jt{vF.zfldЧ4>VQ$tL+r9I(Pʍ"VFA`YqGG#RHDcָȹ:__Is$:HScbV6'i2O}(LI̟}ze$zЖtsLy.5y~2SчK?~ sߌ`a)E~JVt/*%^2@)2(PF)a-uV+q/:蕸YBa i~@` ֭Ъ*kn[.?IO˳P P4ϟ)/~^!);Uw~*j/%|Spl<·do^gl\Fgi{Π|^6-::̀lH0}<*p_kA?{p<멫k&$c<>\Ԝ=x7=:@l X奼<h:?i (`<`J![nu '1bePJ§3֓j瑞BД IUp;oLƌƁZcĜE NV鸴 j'h:}RPyOZza@fߗ8jږv=c!{CXbmKl0g-:/aN(Esfͭ$G~[euu___{c1#8fl8R0 3J3ކ-DRB uJ*]BdY[}A-ff~#A\PwڟLǗe3OQw\ѡ `5#U vL ,gK! 
)簁yIs`n!e f(.􉄐Q"&@A GEJ}V X+DGD"%qB:]ZᵬևQx/4PuD{!"m V[XĖ9P\xdT^ {V /:\`_fDžbEwD(RrNaPᴒP mˈU@aʔ!]4z' : !N- ` d${  gB1,ˆ} F# ;!p&T ɔ+%i-ipV!6Bjw!m) )V"P ڂsJC[Q2.Mtfwd}H|Ʌq/aZ,Oq`==?ߚ5`]Lz)Q =!8pa=5CѬ~Pge20>dL`+PA@ tx*amadp2x~:WÀP&W\:$UXO.0o}wR$ i"c~>?%|f`>֩20)ZyrIWrMCwV0dGk?8>} 0>Yl3Uy2ΫYux!ѲeA̹A[sdɆᏳiKz 7A>gͺk#7 C5,oa0 f0bGe=ѓŘ􆇃qTn~ȦQU`:IVP40hR׳IF1KblTjQ~Cx]YX\ ;G'/_z~_yu˷'9>y{qH`'{ =6ߏ`h0^=4UCKSW~qe0#ҖX(}7rDER&35'ekg?!+@6,2f]P{"Db2#8x}StQ7Ghb#o\G9ɕP$ i$P^1!JC7!EXdH76"//VhpPEt~hHt!0hT[^p^̢N؝r3 ޤj'-Q}۞|I ozYyޏ(x 3+APj5q3+:ީO]ހl~M_He鵸Xa6X/ I𲊓t?NyPM0eBP٬oMkrptb=Ht&> П"S!g(gg}ࢃ/%ZwhAuvu=+;O'Q2X:!f*,"7شM.f0}eŀYJm=@v%R9VELjuW-~Vo7 H ëߕ=/}9ίnL'SmhγivY 0+ [zT(͐jQiQ;qD~vlw\oO Y^)9׹fւV?ypyXA}eZӚcK[Bl%f1a*ۇDFIJeI{TZgG~݈ ʼn)fͤ:8uDp'k$ Y$p,Evd$(I %!%`p-~̏RAWϐ0%U+n)t:ytQzt nO5[][w9:گ4x(ݠHWT0vAt娫-fpG<]9Jۙϑ>Fy8&$e9aEhE.:J=câ d1t[ ]u/*vn,*>GDWY ]uDKUGՠHWVyDWVr h~tKtO"}Y/+o #J{V|I7M~WG^㧛1ǓA_hګ:L|7{2~?704 N֏xzs 1 Ľڵf!Fs@ 3YXxR/O@EDr8{?[& âϻϿo>?߸k~A4CAC9} u]Wg|_5`u4bRѰo>G"#Um=(m lIY4룶YGb{׿y??l ~DŽ}}v//jyyv/qv>>-\ִRA7[Wߒnv*(dd9$:*!b]jAKrXJ4PLbQmFR1i7ͩbc~l-HЎ\ TA@17A1J4]µd92:L2h]VD'CDd FU44cK1P KNEz p_TSԚH.ɐuMq1M* k%%q9VZ-BRKwA(V01ИM JٴVkΜU8_{;"Z#9^meɺ84dҩh[І4G?TB4P&RP-i>劁Ƭ5ZEvm(_[B{;z]F$$79?~d坏Y,lY(O`ɐXl+n"A).pޜک[Sb9 S$G]0ڪQY kAh# btsaZz7[ֈc%Gqu$~`SĀ`ndNʦ VCJq 6;KF }"jK566P*< I[ML9gU\:٨lU.CV13,HX&*Ht=n) ;*D{R]J y;a*kCL-ːVzaL9'Xv EA)V TTPtB[ Z CsmxAYӊËJ#Z**6һ_A2h ±D6VSlNS`1uV6 ]˘ Uthc]VП ^38B@p'*qHJƬE6,I̡m2" ŕ]b-PP]-hEآy , R [%7)"\ "U3+VѬ8l`a*\dk=: d>^Aq**ui𷜠2 $_XqVL&jUTC $sz&]R ל 0ȸBæ|+cK@ -ɁHCcm7,٨ߚч`1g#v nQ>TKsv`2Wv9Pgžu.QM~t!3QhB@gJ-ޛϰ$JAQ"Ł.0͑&r߳ 7kϲr (EzeoJA!N9:CuW6!;$u Ρݳ. R뚋03/ULeHH/'֑FjyR" ̜`#dA?r؀How is6õ+|[4qZgz$_n^bB\B_1k;SK;L'a /HLEugu>yۏ\eu-*d`1uL"'i`xV qa-Ԓ"4I"eLȼ&c>8b ׊.Le1zOyL\td"$בCRě!q2pt/ͪd!*T?Q<{SbNTe 7YrfA;> dӭ?kiy\US־'X2 /BFo" A]jS6hrxC$*r]/o5i51 jcYBm.F8ZNR];!z1^ |Jڝ] jFHK&K A]K,oFt\LCRjZM`Ɓ5n~ux]߃Oz{^s$J? B]AICjAGE5Gm Q$HuZ*5fFi9œGբFCoo1e(G'= o=-Pk8n0Ȁ5ARbH~(CG؍Z-&ѬY.*zau!e  D{PBz[ \߼Yall 'V$b5ciOI#ǘ^g].# UA {ߡ0ۢ 8UnFB,&Lg=gus[sOҥJ&8U=%x hNrR 19K1si@ <֬Y TJKPkS=td1r!% 3Y]SNO%@ƛP[!@ DOU.:JQxa2S -+iq#&0pgF֓<8 N²oCI F @:h NCiN;kS&pUj].CC,2cR4KPY o#IEs't\5.K.?"t-+#`Un`j?z:mg7{qfpĵQ%H!1;]`N%Gi/Lu 78hYa22@+~|,SN@ =o:h͓fq}Yl!{o3V* )>в6)[0νeAJI  feY&,ˁKoׂyl4m0YTDC6g N5-+lU{U[(P=DI3]#]I=m]`u=vt[ tLWgHW-h WmAD)s9ҕ5tpi ]!ZCNW s+`VDU-thh:]JJ9ҕ*kBF-%+Dڿ"2[v$؜z} Vhj7aفL}J5m]`Auk rB5eNte]{u^B_VW.Imf:jz3i5e-i||j|8M#JsiL6/h1[ Pʘni4-; SJx# Zvځ7yĶ-; ^W-r 0?bj7lK(uv %0)ZDW3B5,5LWgHWR+M sZCWWɶ5t(ΐJ֊nM0o"Js+mdAl \N[ "ZNNWRdXMtO~npm ]Zh Q浫+ky6Yt(9tЕݲ%-Nۺ`)NȰ\u_wCkO R5Mnw+j߮JjEI z;#7ఄ+pezս_ū0NE9l+DEQ( *pTkJ0ҪXN.䋃eW[y/Tퟳ |_-QGmL=ޘm]@dteLE nݕ1b~?yWi 3m7O@Shÿw+wX?\,DxWD4 Cz?g%JZNCkTqy;Gu}pY|x~Yy2(g{ǻ]Z~X?"W/dV{a%~Hw>jUw = ԏ&>RjL,&{Ű{ue) gBjBu!Rdr$* Jp PS(c$G {s(U\7IELW0*%)(?&׎AR_y (k녥1%c#d.EL'e>kjc#&uaj,c)ɌV>!~{' []?g_3m_X oA\ԹE$5 3dYGqv Tn(?Za^s-x`Rt e sy+e2.pj+Da+eD.4qs.iU>jKhIGyV6 Rc)DϬߗy]J#KDS,v 0P PTrY2 _^pXP&{8^X# 8Ԕ.fp2j%UܔKrgw=̼5O wO/n{*Kq韙1p508ngD\+˟o:U%/d_(z1 X=ZeREt;ot4ڵZp\;P2I40^P?]ۭ_@wځfsQCv]5j:vMu[+}fS0.]A>d9J{<U C(VnםAe2KiGD [X>4)*}W?/c8Q("|4D%I  ڨ<Jꘘo7U7pHlPg9t7]ZFc5OOiЙ}|8'x<~9xi8g}f04w-|Fy<Մ>DXL=Toŝ;$m'W#*Ʌu7pr1aSQ{lI>_c.K]˨z9j16ļk*j*͑i"_!Gw=,m+HA79fTVUK|]UtR|Mi. 
S^W8_> idWF._IFr c*_BBPLox?( qb}=6a x~8I:f.{A&S xxWyJ:KG>~ttJSVvoe&|=gQ0z ̛C۷ZZK.+{ V٢#:M}t< !s>;I7z WLߋWT/[M&tf^_,fKt 6%Ta|Xꍁ;Ĩrm񈡀 ?]~xhBl.Ywpg1\~V 8mQrx8|ŧgJ͘~.cŒG_(x6o-,l[gBCsQŭ&/)9O]W;< apLX_+ʫ'Z ڼc ~@-lJ >GX/\[Id(1JQC݃녯^I mo{%^X&c)hcWD2OY"e")N,$&W-LYŜ`DbPhv48[ h<19qWw!^"j3߽f|s,v "dK)x䜱҄1ɈU 2 aByk KvT#Z\N%4{Ąa9`H)؈S)R NPؑhp^65y4M1u|J6qtp_d Ťզ{֑>Yu%Ǯ']CTɲ?_{1=Bʥd.EBoMsǒ"RJ_+C*Tdw7Γ Y/_A.բƹ2҆Z;i]~&Ǎb15ser0-S2dAkJ"AEo3x=|87|~0EP3n{Ρ!̉B Qo|)Wk?~Gh% C)DrBQ4-Ys@sW`I7KTxxR_D'-)F(0'B;L y=/Mei8qK Zuez㒜~]^|= C~}@T2 O}% |`օDRR?{ƭdʔ?x?X[ݤvM$ h-Фx](J(Q8L̀ nO#V=%&ptÝ!}֠K(s God= wyEᾌFYRVoZEL q~<]'QYi18NӫǼn4lw1=YlMUrέQ)}.aqhA EG噽_2vy)ɠ2/+^낇3uމA߾潆l =l5fUN20F}B)H(oƼ${}z?1*Xg-maFlKu`u9("!񁸕c{r+ߩ-ʓjaJkP{8yesʎ̣0y9Lvve/9kw/7s]C9c!ty*sL G[3e(K煯1(dh)2IJ촰,U`\ct1x=C*5<eW& AzD*VLZsK8qIB<=Y_޴AwMڱlW ;uSCſ.FmW.k%}`GnG\xEVW4MMg/)A}9+,q dH&rH"9Ü6B2b3)6=E@|Lf Ǔ2vN$gk#4IZQY)_at8ik#rPHf n{e._34(A1 iGwPj-5gLATȴ!0)\D4@F""9Wr R~ Ry m~n~^ѝ׏~#.{s}d>X6a[`6)O9/G-./hAi$-nӽ?'++;9o>,xMY<Na/+22&a:?J4.(t7oO:PmU0uYVDǴWKe Nɨt^ʆ'}3{_q6/Rnߎ?>=wti_ygBh4#mrCo4Et&ThW^3Ɲ=8LdV%iJͿ_4K}~T<5:'Y=Yzٷh9Y5__$1EU1N/{:-ٹӕR:yl2{.$NmѽwW0=~Jt^cKxMgd֬>I8d.XߐCq_򲜒ďޢL&(~Ewp~r_)ۻJvɾ6ɾؙ&..ԵF{$kb `N˫1yy eR%Z mBNHZ0Mo,,^J*ZWs wE/%x>eI}Sv75=|aiSۛ4ya!V\[Yjr N@r\4E 0=4_7F_\?B^buork{iƾ^dhibd:h5]) =0yz H (@砀 p)V y.9_]Ɉb9Nhj]{Ζn;[ƽm~gеqR>i6__+~?W[7;Mq|ݧ滾^-"I701 2܄~b -7%/^;.yt=n 0- I=uߚUjCO-4VgQt;k,+b5M܊+lNUqSw–oiD=a=@Ǜ1\.48Rl5BnMNᰄ|’ceix.$hq~>s s%;e2'{e<ؿg/4}£M&ZB. s^#d! %a2ZZ#"Aֽ!,alo;GXktrٮ<{gӃ+6CwzppHBio̭,ꅖLVwdM[k 6fhVNg+ȸف܅;yĽ`noVc^%{3X`ɁJfW0yUPi $SdA 0dב[!XTk-ͮ(CJyReWfQ >0Ж yzz֭~ӇXc#,b`uq##2%tUv{-N*@ D9[)KW%/p֙ ̵ Аrkyy-,a¶TYm.{"d#WG"o\HHrbu,nPQJ`#/ "FK5IC\9!i~&- .)5xKz8|$j?q$c+ חS<"o U6(o1Xrឈ?A1e)Y H FʔMF0)I.<\r3C05;Ӧ?x+l0,j kȢ) PHI;a> X΄lyTT#9+rF'n8 BĈ"$r=3|J""j޵tLuWU_Ȩ0IExVb|A.>NU^G*&)9IEd9T G ,Vdy({z:B]SV/֍ޛ,$V0rj6WIils3#w #Dse rbdJpꀴ48@sZ+㈐΀]H B T"E9G`\wHp!#^6J%ZoF8߼5Ќ56yIWGŸ/Snߙn]9.oG9`bhjH@#H>TΔU x%*gdp+TĶch;!A I9ʻ؞*ﶒ}(`ym4VT8-"ZHQA Vy@2?z*CY.qBA R(z̝vX-"ro4 n!% 3BssvEVm$Βb]Xϣ߅ǟ|݆:B6Zq̣L2S*LALƫIO+G r2]ђunhIZ+ZR vE#O\%4qLjq* XJR-+,_edz&iMR24W +NLKz2*NLj~ E4W $8UWS1WIZM\ etݘ+6{!%=zj ť(B)<*s檾6m^=&Z)|B  J v**I{ g%RR& h$'jjTɶb`a%*{Ü5bj'w2^!Qx^2DiARTB V/_;Ggpw̋' fI?<ܯ|1YtM4],ŻMf^e'?OcȝCRP!08.M`l0$ƈ2HX*-(`RqS)$-'leBO3#W/) +/)~xbחzXZ]x@F/`GXAza$rk%FɭQrk%FɭQ2]cn[dϭQrk%FɭQrk%FɭQrk%FɭQrk%FɭQrk|[(5J&Y"`-HxrkL[(5Jn[(5wh=*X`}2UUI)M2yUUI\&Rj'SvCs')wrʝr'mxjp%r')wrʝr')wrʝr'H񪄭e^SNnx`XJĬ)gUL\e5SX#ga0;,o %T[eLg1f(,@ 5`'NtHRxaR[1ȅA)ıibR=-ywNsn=)%^S_p`Ch87iWHNa{<(( 9˾=椧$#m05.*FM` `֐<s Iq$\@KeVѸTd~jB6x6CM74V^Sʭձr5b",3Xo!%')x OEKځuJ*H;56 uuf;Z{q +VY)`P(^lr<2+.ɂ\0sBPP{ e$"_jؿJ15& J#68R$#jRz ^s% Rcc;=ODsjEc=⭅mtkV!a ER3Wwd,m%$by܉XqjxKYOܵ}RL2(:o;cƌL"^ˈiDk45H$Z>djMhN:NZDDȣA#xxJ@djz%!@;Y.ױ>+at`I~4< tJÛ9U U]bzh|B*NHCtɐ6*úbnFл%9wbaD >+.~ 'J_BO8ƌRPf%Q] _'Ax+1:S)*!e8 y)FeK\@n λ6K+[N(w?^v\;K&{l{kdﬧ:cXꪽ]X7>T^Cx)AMu0xxGS˟opÙ4d?.ٕ*s#|kE| ֬u뭂.&.j겾zqFtv3}:vӬnWpe9o>o>a Wׇa~Oo缉΃Aln>sz+kW YjN?tշkk^Aގ\BQ 4&!09vHt^pR#m& cCMVGBf"^(np 7{d~z2͈A4ƄB @n.5-F~wXJ_o?Zڒ-Jm|, #4 WӦ5#BUjlcTy7wAy Sz9TJ`nfq+DOTj+dT1KJTL9ȉQt<5q8;86$8qb1 (z/EM<҈9\IMu$oyPT9F$㑅H>HԔ1тFQ4`pHb9yGkl<ƕ}{8"n|<簇 a'. YΩYqy;m+Ae̅#gSJHT;e 8UqgL9ǩZ#E%T8 ygh C&"^!F,s$kȮ5q_35)8ʸsZ py:^H`KCLC^oD-,Up3%(tB, 0dXԓL zvߜ')$3aQ/ĉ[Nn%S?er@_ظo=h%J. 
!yIW.5 |@Yb]l%_['RRRRfrUe`+F,X#D[!QJ꼬Ce$DzҚ"BHU4+xN*S./.9˵ڮY QB"p\[IfFw aVl| )-BPF"8]ssfMz(w`ғS,O 7D2砚 '$FOO+l;}Řg|ƸoLM\+u4#ԣ0^GTgoY## ᣇ-?0D..z"!rK1"Q)Bslil;fHIQj;3&vtbi)z tV5=Zzx (Ɂ ;IдaiHa&;*%M?7թ B=%cYpx Dmm;ۚDЪbi8i8b8鞇߅1kV|3#?_\|mVԋ7>Yއ:Pgy߮<u *_UR•X:8JWRT #UƔZdMNQgJ'WsQY }C!O Iߝ =+[>{/z׋{V }Beխ7x-9r&RIƈ.9 ŠPUEAA!+˲HֹJ/UOݓz>v%tBs__'n8Xj'ɹ5>j`bp-Go~R#}!1r{ŌSGA4%woy–_$A91RZ)~:\JT "cyr˯6Ew2JY~yJ O '+=R?{d7ˎ, Z,>|aEPETq}?&Aou K6goz Lh4:\tCzv߄k3sMLkmM DZde2[8An8xGW:Wn=SgDfeSR!J'1NK5Z/OJv];-UJ%N~|yu- sE^3xEQ@)UJͭ)0i8C:}DfWz:ʍ5K$V~b)+{ݔ)@[7j|&{ݸΆ)Q5Y,d#Q.jsq,J\W΢͵%%*YWSv'25ep jNhN UiAM'_P4KYl<$dya}.\IQg)-R*$TNyFI_9Q&\% I(qaĉF|.1<\V*V'&jҹWN`AK4PQ/.UtM}4QqUјVuza]q[n#.x*P]҆s򚳦 jfe0jp.~]O 3LJa&[ZK 3;Ǯ)߳]7Ytfub3tlJ#4<㮥92NsQX01^Zig^afS8%ϰ13]Vn;OF YzB ܞQ'ubuQ3M!u tမetN6fj 2%LîMsҥNHd;9=L)4q#EmD c9e \ ).΅`d3R.>L8@uV\ C>;N8U pQe(+9BDzFUF"MBwU$9 oCIcc؎#7^ti@ SͅQYOx0^"^XMպ5=1vоPvʂCͳ&`MCzhHA6'c?3ށz;rp4tQgTJxC_ѼO?uh>E}87t)P ufRĄ}I8&MLQsGUUYk\ik^y$cS^"}t>6AOZӳOIg7Г3&zѣ}}K2#+-LHme[0Ot8VE(&ݦ-R"HA6[{.pZ:~J._[:؟fo XoPw~"ؾYW_=Z}N{kaDq!#yBҽm w>xgvj 7b>̥aϪp`y pjG.)KqEc6 id9fs'3TsocQ sJ)ZD b$pVxp ^lw(H;}輺`c|`y4]W uUxWȼ(5jcƢgBs)c\׏FNP wY>u?V,h{w RXdPY&cw5U)+*nǣMZk7!R>h߲u @3On/AEœ[=AI^G S͇};NBz! )1֎Cf>${c?oآP#a{vƦ~x 6>||_W{3{*^6nOX/[Ufշyx,dϖQGŭkp#K{{b8w_csz´Ŋ_>!?j@~% Ü6|W3̢ϚTUHw7WaIr37 Nl2X?<^xW~o3fyfyn)J][4~n}Ϣ1OcVz;ԵZQwxFZ 5<ĩ'>{5XipʙtnŐzzi&lEp>:qa/s4J)[{NKN}&OE)f;NXITP (GnG]:b]ZgH8Y"K[^JTDXxE "9 #J%.DZ+GIk%XRDF)s䙄*isd iУ^=]z٤@ԩc?4`RµO D)f;N[IVL~IipvKrV)5ıI ĢTnbzG/i٭haо'98K2ß0g+NYqþCzjUAK˹Z0eW4tL}gV:f ,ϧFr˥ ŅHnL'l9}"El) ؠ܂s2:)o6H|U㜼 @Ꭷq!5p MMQ,3ZV")Q&jH8! ]4AD:KP4B?{ȑs5u/?e&AprA@baIݲ$c!X.G,HY:}m;.i! -JR-[ꭅoڻdƚW G#Z|#TG&;1[h4ko^]7wR׀Uۃ@yGu[HK0"TEhOYt ws-Zyr]D4;bw[Br[7{ H|7FF*j}2T7ޏ.=}}ao{'pfGw:ҒWSħǹ6'%AM8<~U6'ul=Y̓nc6j[6pW54|!5K,\67κgm@*Ie[FJeJK% ? *Eg\ϲ۬ϼyߨ" QOZb!믞H ֫BAh=&i 8F5L8D}Bh:<u[_zEٹlnuNb݌o/fb/=6Ur.ehaW6 =ѥ{].-[$Iq1qu8!%us]\ߌIc(!_,ZThxZ r& Bh\ 78Sz!M{wn "Sh wMUs͚Z`!ˢeǨ-.-e#"w`+pӠ(ʂSPE2F]M1lHqnP ,}NXWG/I1-^U+!qs~½N tJ0Fᐕ0biX<@NGL€PbLW,%P$1>lOz?'R)/t[DDGZJ-Iv*Oؔ"\}(S8l4/>怤ō^9:1H)yoIzO(MtPT ]h~Zs=m*2Ƃqց n$:w;Ӵ8#o3 gjL.,JSu/ $Wo_0Ɉh9o]AZFEWI'd5 "{4S: r'sti*UGbg'} %<{q<>o}~% U(3'YwA@]g}^~|?߯⚽Qw'197[x<]D%퇫gH0d{ilϿ? f2wl<rg}?,n|ԞҳP-,ET IZ2UUwUQŠcvKɌ>{8ݍcnz{7=B{b:={h.M/2{¥߮8|}qLwf,,nRn—Z 륄=lzwi^ʎKĚ&@;NyACb`Z@+k NRX(Ҝ>yA/qr7:rf{Gn<\sK# 1\niJ|xeg\+p̏}nRAUe bH).e9h1mT58A 1G,0AGrs?2zb-)oQFH?ky:<3D#rbdp~m2i4\ބj%i&!j |gs?WygTEހpNJ'ߥ\ ?ǟ>MyGKrr7jO&;4zkA ^|OZqBĥxٌd;*!y\/s{;(Z=Ŭ6QeĉȢ4`$K41.9I`1&4&cM5\irR ;%ג5G }v ψTF'5Rz2p;ə^?8yˤ&#t-\>O?)~PjA s?a o [R#7 42@]P:L|ddN 3E%[˛W݁z A36GkM/2|!6PѢ4'0ENcqw+b3@U}ҿ7eZ)nTqʎ/-nFYE4[!З=d՛c?RGۉ!] a )L phhņ1", 6z@W+vΔīlawZDs@[*O+fO#:w'LV\a ]9N7e`:t$-Oj98;}Oy x'@pIG(V)$Jd>,T2!:OLs(@;cyk^|hZW12G#ɸ,k芳$co+a8f@()DNTTbNg"~쟫v!ORq&ZdyqxQlfWKh:/X!Qʤ‹.Jpn ( L_ Ŭ1^#{LV sbfz[clMέ1 K! $d}h"nJ:sd[T %nr҅xy2f=>i~ˊQ5VgoR%_՚rD=w*ʣͭߊdi(+ )@3x.TrX°/JٮN [%w# 㯃mT Z<ȁ"ͭ rGs8'0"ki!EGOb710ʹR7\~s}a}!AXf hM$A$>c+zνZ(+w;Pnbu.?.  Nь݉.I'JVWW]RU 0Wa* r`Z9%Q,h->&D'ǭT&u NCI*TKU|2mh;2T(™DV)^w.\e ]D8XKA)(מ  2U$<J!c.5'5[Yc؁F @"IįrV\N2JB2|k@'+F {u s;$8wVwB׏`]3phf2U+uAs٘NFKiQ%AV)Հ#a)y͝"%4M̚Y)%)B{ PCZS&)ZU''!uRMNn9N,! 
n擄ŷ{Y{m$SQE؇>]~0~Z4LS"p7d`ǻ}Oi1#l7#;~4Tonhrw} 'VXr=s}@qUǤ1Gl~֐B*뮮aWgi\U5UJvIE8*wpA;C\ AZ;$waPh6;ty]hAD' D}C$% "J&9mzg>Q"ubك/cWFnXcA2sdTASf2tT@?l=kbIQh`s aU[5YhWJ4G[7 (J m ۄ""Ʌy=tts%q#qEQ/3`Ϥj묱PD @uUCn%Fv:{hmx=UC]Gաx[[f-jxfUl\ҫy ]CjkZf)1`@-A.~#.TdqG_=9| ps`fnc3 v˲dS$˹ib,VNF KYcd&`usrCZB@ Rg$8nkd>T$ZR=U-z+ hI𡶃RԮ#bO9j7{* V=7\jԛf`!KUiD s5q!:} *cҨ`>`u%Ұ;*6;l؝+w2jX!\{@Ha0.qes6`tWozadNQ>)S0s{뽃Hjg( v+ C UB] ''U)UR葢JQ| ۏ*)8籇qv+  :N= 2H*X^T"q $\ S0J1ks' Ғ=>ԥdHt@YbuIY zI#EF:<ĞN  52{¹"SCI0Es hNLx¢\RXMrm290kh8Vv q֓(VV9":ӫh{1!$'<@jS*&K Ș0ʄu` p$$ĥܤ?zPei1Q ꔀ%cQn)}`*,I}:0GgS8++ |&a2) ќV- sVT<\#GNK |=,S?W=JJ8I<ueZOa1F_Lk;ٵimV(2DnJ#Qݣ|ȜSbZt0ѐ\C0ݾ>IuІhfWF4-POu+#@UW^iA@D g ~YӢխ#]"^ c$YwEzZE{AZune&c &J*'Zg!+xC6ҫs- coId'f=|lqNP_VѬ ߌx(BL+< xt 4kOԈyxˆ^KSP^XhG*Pu%k) PW VV,0jx$JjK}]-ʀ j%J}{NH6uUHP6qv\R0LzwdrgN[b9h.B-B0Pw3 {e*I˕Tʣ6dJ"s=!g q7q[q>d!Ծ9"= rt}aBjuOe"|Ou&sF1?0}>Y-a+Ճbz&sĜ4P'NmGlvȒ7 V>kudgRAL +-j{evI>=L.9IzH_)16ʟ^ h]w(ڌKumcJ/hUR"A?ߤ. t5?DQh7c3\?Ґ!gx|>K_@~|Z??z+q4pW#*G?nsPCbnf3wO02A )9r=08%=*J4TΪ}7j6dH.98-Ap4FjLaSKaZ_i.LIv5n>蠚]sR^yaS1qNʉ8-nxJ.1 c^:nx"6b!9 y3J O\ $#{-xFYˣ'B rrkyMixǔ K?А+!kޱ_4#Q^4'B1(3޼hNтsz_ղŢ 6cY%[#wp /}Ν`_I\nL8ȁ".I-A!|D+/joڽ>[Ӷu^n-¸;F uioxvqۋ0QcOZh=;Eױv'¨!h<vrl-¨gF Ccf\vTθ܎}>֓1~%H .ǹݎ}z֓qNƓhhnGSOe$*GhhnGԓq *-Cz2.!Q(v^v[>d/QfQ KQSxG!tɣ6BB?J5-fPp(QXK*uM5/I礐A!qũ-,; ᔄP$dyjsDnH+ɵsM driw[>7"XCHH+B.\a͵c1)&ǰ[bPhhuҧ+0&5D/~?2{9w lZRˬ FdR#f2$,`ʨ5AW%]np8έRB0&;DybHmv\m텫؅rڍ; N̅ VGajrp UYPB Q8qh@ 7T84?|{n7יjnTabXn+sLi42"cBasBR #czs`#OwfKs0mog7&>0bck~篽4KbMŌSb?=yy-ghC5AoB{xл5)cU|G_7|Z݇swmc!V/߾5Q@!͝Y},q<l Bm~ixءbG4DD9fWH ϔLBԨoo6M~gjW3;+DnTUN1Jf02VxTjͫJ@0XQ* iPw0]CL[&(oN[(DC`hGH!ѭ}Z[r\JUUv|5iep*%otuP:5BbA G 0tӺm* '&!0Zq1Syb@^21ht;pk *w9b3ҟ^I -!Qޖnat,DJ6R*u0m:A,[[=u r{Q ZVx4 V6 A*4TAք)![kǚ M n8ʠK{9'w'3ZQ\&%SGga3ݧj3EؠΛLAy/xϗ?7V4/oo|57wsf77wvfkL|~%O=h% ՝|T|r`X*`r{1w0׽``Cp6dw30|PZgCɓWA]> S264iWtJ]c>uys(fতOeV>YJfLRf;#}Rfc."_Dfiγssb"rRx(K$'tb<'r )DB:Ϟ{z.ۆT8UHjȡSG}ߜj_I+CJS|X9zޞ KZ/*G%0GCޕ  uWT.fdI2QuuK+U hƮ=4֐=Fky\ɏ PJ(}gĴw=̞vKNzf>.>|g3U:êHL@6%H14ҁzi3c<ʼCF5!X9eE %;#qQ X1.`cɌ H!.B9 x}zXgsܙ=Ev{J=HDj}.K^hѺqU:/^{|too|;ZksXCP _4ŵ31DioornrOƚqy?/s7 oӷ.>[gXDSCDDt!VjD$t"jk2hIR)Ue&?AOA%WX$TƠґrǵlݵ fT(NEWJ\6X$(C5*Sj%:$M.PT=F4I.ʱ(wJWBЍQ^E׃u4FIZ418{V(KrV#m JKNa {Tp%2Ri243a:y/0,5,R~װ;Y ҴU@*HZ=m1`塘Uv`ɉТ%N.S- {?<W.r,Å+rqH"8=K&r2IH+QdGw ZSYA KeUroݨ+e5gB5_IT/32Mn !\[Grkhe{f|y^#ugy#H)gȥlOK!̙̂-&'iEڛܹדv3?mڥC̈́o}D焭'號Y1^45bJc4ՇuZgwu0>M;B,a;e05hݾgN|x*l1c:Y ?2=0ªi#D suNhXB°&܆."$Im0h` Q!m:&t\)e+rG.g;Yu`Qa͵` h*ñc^Q Uu K"xHJccAá `9 G5J0X|28EOiG$j cgg&XK,6 1za$<ZX,2Rpo? G^N #V bEJN@p4%+6LN:&vN\ (j㠼 ~z! 
"VEN6RK,&GcpHG%2 +T#3[gJKjTbz`)Jbә*eanae1hNbpX8ǘFLJc>bU#Ds r&8)5 8 k:GBʣ(RCcZ_0 0mTTS8rctG95kK)fq-`ZhP5tR3E1 { ;YX$Q `+Y"!wHBFFD K-(WfDpPbk)4W5N/)}X{K/1V>_Ƶ!&fuj\@3I=c2%\,x03lЖ $Ws" !oS~cJ8͢ع~ !DT G8hV i `0Mշ5hB^B &Odz<- 3t f߿\\L};{SsM&jnCVW g/W7K}0CWZ|KZ/<.| F}n.JYbȴ…2u" p^ W ^!1-ACn A_y/P vh $^9"W[!dplՖ!mmJ\0$>bgZR' xa E2~2=&`s+dGa26PhIڧ@z֌bK-¾&`B᱃zRrb,E6EPvQB]@$d LȈtnR^W8~PY<ׯK(lOH%Q @*vRֶ\`WJ(r=FVc )f|4n2?]o<0a'9};%30>zcBbk -n?Ot1f,٦=Z=tB{VnVSyՄP!V0 M͟~ M[34J~uvUwwc#4DLk=fBsZ2xRO`pp1Iy^N6o[OJA;rE!˧;WՐ rс'\@9k#sYh{=G o?Ϲ_}z- &OR[xs mwMB*.V\X\珜hP*)FdPa G>ޞ);W٨Q5Wvjל"|ؚ(R{ٗ=[Km:zѭ::9~9eqE[ ϡ6ۃ|Eyk@WU@З.P=X֓ a%U)됑GB9&Vm9{3ݝ?hڦꍜJ~ jqSp|:zONa4Ϛ40SPRs;IrJUԮȩm0Q뺵yƱ"?GS{8-hc;r0}4`4AZ!ETԔ'Z\x='1Vq긂 e6Ϡ/~D'<"j0p)[wzfΖ 1Z4Uae(1_6z[_ůFƂylfO5v=&8μFD ڱRo؞Ҫd0mG c|g0oiW|:W~X4W:{yYQ# < O3FT< `s"X eF)l6aA0&':Weh'?rM6IccaELcu[q8ƹDEw4UaP]Fϕ ١3g2xjW9s8֧|B-{m՘y'rķ#nGvyg~R,ʟr(5!Mt B:֏R1ߑ\ a)7XS;\k`7 $#X})eLFԘB~W%W%/Ʃ7$SҖVX8m  (Τ>!gc8-9F6MUk*rx4'_wB/e|?˙b-b杮/ Vw.iɞ\"UR@{1?=NuYFRmqu%:+LN!W\qQp7-Ru Zi߿d dŞ X^L-mJ%y21^GzuK1d=>k4&)9.1u,GIRh # * Kpulu.?I>C=:nw Pr@85#NSlK@U`*u!œEU@me e+8x(Ja^]eWyVOS\L}ۋmob_8>ӽn~o?zʁw?r R|k@EZ1S3&^ܒ3 Ƴs O_?#{Cvw7wW/|m\|wo\X'hg|\]lz]-Ⅴ] l!qO'vJ1ȯz[XFGԎ ):4Zӽ@y:p|?ff|]&?~[oEѭ{^tk5wj Z{lpi\c|ͬ:sgyQe|sdRXiG f&4mg>1yx&pc$h֬q@!y)/rkD,δheoxEBeOizxr?>_Qvd"Ln8uF)#g0af޻$ez` BCI_T)YYA(lR*?DJ5Rr܋,ǭ,HpK&~ò.{ طy>$lS6Sua':ZJʼkE DPH&]]igJR'1DME:ą0 7Mo:T'ثF]R{ &SJ/[q΢X扌ѫ5D-uBr7 n5͹{hgIڇ~XGFf:a\9{ov?;d u4ٸ)dfG*}bh6)X7!`Τm[kyk|c{4*7<0xCgGkAw㰛mǞY rzۗ\FaYف`ԑi*([RyJ&JMYRJƤ3sF[E< 9Z̥@<0;ȕ8vfk1\kۋjXGꫛi%6T+bI|/80H^n.<&i/{w3!U18_r~̓>ȕ€^RZf`V)Psŋ0 Ɠr*V=PvQ|f܍ \*|OM>K!OvϤ|8q&(g3 I7.d2XҲvQh|2tZ$Sm<[[ĢL48d lAc6|m> [upw&ZPx9PӍ8%yE9A(̂ d+G^6+;4qL얰XySdb^@T$`֬q}NZ: #k<;Qs=!bQnqp zWȎFN2Zoyɖ hxnjZ0Ԕ++\ڜBn,_Rߦ[X^"]g%5kGvcxo!5OBO{pn W#PZPv5T (ɏ~Sawxl1Ks2]A@<ȄɭR اsk<hSt^r󈵗c!RK$ε݈ 9۲BȌ%`#W<Q Ig-4Vy85h薫Vi\ñ N{xُM(h4;2B靉Je.C,!.֧R.cJBB=T]Rxi<@M}PJthLM/صż%~Iiqq=_pkd2c\dҡ@)xһdv9 oj2u7"mss79!}Ջmj X&%;zB@D,Yi (vNٳH V|σOV$2q,n!݀ X'Sc4?;熤䏫@_H[: J0I=Dc bcs 6wfGr)ȗ1Aoc0ulCֳi8kR{(K@q prXL,dȭ4]./JR 7S|~@}Z6 tt3ܚ5J՜мw3X[R#wJ; EN!AL!%1#m2:7y}:A%8*:KRAV;eetj`\J49JjozU>D^Dꀷ5TN!dsq`\%4;`%lpl& L2Kdf +[sekָWT< ڋ,؜C?ZN?tQXnes8[q Kiš +&$P23ȁ\! 
FL -w2)|ưF-7o"[Z:EBC)tUX 賨i-cHClZgiKΕ{PBPZR).84FtvnXAں!r\rn}9+nsssssV)lkM=Pf[)}R'ms,Z{Rpnr`YQ)PRbHi>zmHj]wZfcI I;ٻ6nWXͮH>TZIq։\>J.{6¡l+mPA$1Ӎ>F.,"@867a>)i >e,RIiDl Pb@H*p_nqPv>8a(< nAۏ#tw F:g)|ܮەwvnW+Ps_SF{`a j")O1hâpCP`-`;T)Pz0iΞ13;{p?cGH`ߋ*kр10L=9 & AU⏪_(tKYfp V-N:[%$),rqHS1!Qg#5`}@ H:D<)4H+n%W2p.g.U]s\z+@H< 2`b=wX`RRJ#GŴ n։> Hqz eͼH5 ȣQ1 xHOn+qA*21GB*0|?J<_ 9 ѺLc( "D8ׄ+ H:q_0w sr̥OVII1ߊ%4*dR*8H`IN4Ա XDb#H% svZ)@"9쿀_pc8 'a(=Ip< prmQb 2T`'P00t02³~A&p6o#f;}ߥȼz~مLt ~$ ^xe^/N"+':o^_txLg4yq'-u:O%a|phmВc%ugpOL1.)T:Nn];]XЩ ,LNV81*M򑳱RG0@߅fhޠ# BIk$=pd#ñSm^:ȥ=mJh"Nv6&UI߈D"mJAp -Is :Ō+qNR27ֶ6DlaSD`x H9E/%cs GAbۈt@DB5(6ن#0qйĂL*7 :6EPH< lHD"!!H+-69¥icQ*0Z0vf;p86ߌ 1&)Bc0,˟9bTUf.46i7X;0"a9yoBrY1dkQL^Xԩf5ǝ:28HQ$O{gphcJgn',O*txz5 [@u7z:UF?IN˜THN\JX#ʀ oNV$PôzFy0j:0S2TEq`G42S4<9+|S5cfZ᜺CJbuI3/'V;WĄ( xZ+QZYp' 0TIQLҰRP>dSE5@b:ݱkKXfB^)I=)XE}rD$zu#4kQ2NwJrlf%S @q;:X(2Ta$eF"W #I őF.3R!eP`tHNQˌT9@F򣍺\fC.#ʚrze- ZrO?(*1mRtWљYn0ڧ(# dg"3 4Yӽ茆(80/m5HAQCKp]4 ζ}iu?QpVwG -A`C^6.ԛ, /g=a܅GS h:N:EFQyvU&1>2L(|ԝjFL.|Ʈ_PjOXo܍&c٭L1Vv[W]k(s^.ls.mw:E2wi}kTTc`ղoDeʢHsaMHUTcVzYM6+4*ϮdZQ)E6ͪ\k9z}6t;\- aGT]Qy.HY*cW-ZEQ˻*L#/Y ކ u~ݚE5h-"~MK9ݚW *ox!s9gʼno'v+MmMԸR$T-VԨI*dFBƇLBP.!\ѭi֬ux5E(i3}w+Y֗g0ڧ(J$q(IIּIBFfnv9nx#.} wk^rۇt>`icwv$qR^25pܻܭyU!'7CZ䊅5djv܊/lK睯7K(iu+Qp#n4 Inp]>|};MBBr|Zs'tK=ӚN[k\ŬIZnYg$tj|$$27;Y։Y<=-}™Ya샢!Ew qόn]g}"@ʈ6w3 -8{ͿT}V]G=#B"UAF},#9Rti[=Vz6ޛN4 :q֑'w" O:f28b4R`+ $hw2>ꨘS:;ɺ8)Ês8MxF t`t6%E+|:-FL+!s0)].ʼns <IJ|[R`o`x8oah'|cq(d!%]An0 NS<{Wa\翟N/'߻A>GOc{weDJ:Olhcޝ:4'/F)n>MOoS/O o?W_^8Mͫ/d/x>i+?gO/_Wk}}$ W/͟ xO=;|p\;d'k_d@wCov8NiʽP rBy{?m4k?N{1P 'I!۠48Oeg/TAu*>KL01lti՜:x-–aJ?}Xi82C޽6A4Ϧ`dš-/F8x8< QӇLÞ?Y˕7OcO\*ޝ^ì?^ټgߛ0y<|;I/7?Ph4c?2hL0_Oo3 ӎNF- DIѧ$? (%?_NIO_MOLׯ*T}WRIBV!fY b˳FCN?OUQvg҉e6]gŲN೼[ جV]1e^i:cgpù5clvީba*eR\p{AX$fvE 9$1hܬ}{fN?gdRS 2Dok߮6.$gMvr;5W!.aIe|a\/엢V;0916+͔(𥭦]U*@]ƥ9J#at D%!TEy! e<Q8Bb&vownVq= ~EVpؗQ}daI&+o"묧< E֩10U9Ze}7ҦQ AϭT*'|5c*>J1!$z?F{1Afʵ:;Ўi&'9l.U=c ޢNW ^8=CC'8ŔR;0yEXϾ'v:6iM& >Ԁyf?Zݒ䭃uq2&s*]U'B/5mMZs70l]K۵%,o^бI:rtoݍ̦WP24Y' A"Zu-i*$%RD%7-JNh\yr]RnaE)F*jOS) Q٭Dn[[Q*wKa7BuBڪEG)k\Իj=N\Vr lA\ju(S8'aCJ*-NPŹXvX"%H7C,(-ʳ:'Z# FqXQ#Su0|_;(;;onCQ Tw+u v??G{hda#v<9;q#{$DﻑJ* b*jF6ߵ~M+`9-Uㅉ'~18JS&>r&-EmYe(UW`0̼\vCFq|u}L>QU.لKifםДI DaGG8E>=t߉,qt\)VC.Y=\o[(X㧲iTD4DK`+j녶n,qSe ~؞@P眃q\^ sn.K-KIkH;#L w}ItA )I r: 58<ړ&j$_nxy%?aR [-W KlnyDx4t (<|ə!Z|?u,KNQOٺ k?RVvi h>gn;,4|c>ӫQh.Pq5d]Bu 鰱燘Y#> n(;IJbw XއNއ҉ 2*=A+H 3+X_{fȬ- ˦Xx5_fjzU"Ft9`f",_bR!Dt ޱ^Fռ c'edDcDpDAD }ezǑ_1rvήrx+^s0ٳ/;DJL;I{Pv(dS(K ;WdXU$ <{TsDo4k>owIt[LISSnQ(J{<A]!U{|3Ⱥ;wHUC-HW?+2LekwN y]2G!W[Vx[֯~jYw Բaߖk2jY_ z򽔗qqD,bK5D9^9^&o[ RSXZWÄ1&mXC!Qm-a0E2+}9UtXNI'Wv~yjCSf)1s44VHr LƱVqS +Dsa#Y Xqe`ֱQ6?-dSZZRʩ ZrRq20 Bb{<.) ܷeuي; d0Jan᠂2I%?~r. 
u Lxg~F0G4ܘd;_liTT1j]-뀮v#9;rE'T,ϽhbD=db9k2qHBv-WNn婔4F+9 S E]~^RTPg4{hrMR%a<Vk3c12wOK gB-3`L6jϙA!/pclXkeQvd:`K>lJ5[EOA* uH֢AnU7 A狏 vl [k( 0J ܃V;Ph`YI!'Yݙ7 BP5/u3p~m )HF'Z *|ZjB5s"' =Ѥz5lo唢tOo*l/WqZ<;x2_&!{H |3 Lo).:OtǓнHP}Qa/C^Ht7A+O@!琋h7Ԙr8XZ*'b8H([a@tc RIԵgv⦲;^eg:rX ؇ӂ}tsP `8(ş!)GQ_@O~4M=S*܅Y`̻reѹ&GmXO.@?RN_e]tJd>oi$Lr}Y~yF@ LЖ}s͛/!;ߢN}e \lMj!,B'3DD(Jb*uq,/s X~3?CZD̮Fqۻ(vz.x{P<]O]ύ}9餄?;Bw+ҹB vykt%vҤ(mւat5oiUS|BoWpSe%st"Z4i~tf*Xe`lCͻJd<godrR`pmMrL[;Wr`CU 'F;`Kx2ơU&ͣ:WƫwDzaT|@NSufQSx3<J)~*74U ҕjķz(.\BIJu8C̐Zn 0nRw8ؠXcx&n_L>-Y 2aʵJ>h`HC\"\۶Y// gSi&K (э1A@TGC ɚ=셸%Fj8 ]_L4A< ]/ZWI S 5AzuSWtnXKoANu'\r: Lİ(fPJSdN_ݑSd>*Sӥ ' cŋ9zȚÙ~]d:SH(K#Ƭ4w6 u({/9 YҴDSh7"-<]=M >۟ FVXS2G=j!YM{ZkIXg0\h `̓d V6ՊxLlir>0r^Kv%|I1L9YR:&+}&=~yqR$tB'Y`="Â]/հ9~SsA0"oߗ(`~YA0B.&2-ǦZg(t`/ʉ0:rp~Ibl,1dKHhB% O/9_ڨ$`[a%'ϏԄ+Jc]) XzC"'Fzh1 "]=k4>X"p6_ Ow} +Q2GOCG r@jvΞAv(z.yӗ}N)QTJ㟆"?CH?rgc |/kh <X.V9C-}{^8áަC@Ó'LPcн;/81 %Y.dT$5' C 4o,߲ slw[;/ٰ*󞰂PA (~(vgxjP%^GGz&@̢|z?8ny=sӇ Eto`]} tz~gߣ%ss[Mߟ0Xi0D2|ѧ#b JoP\탁}S'k=!t  ")'[,:CnҊBzxvN8 &kqLӷ.9dvv& AzA=hepO6iFb26濑NԺ,$r&5v%N!*]%TRO#8z`iʙ^P(VL% ҜIDO Gލ>糕K;ѻ[ň!6r3JQ)Z-?IOqR{-2i-q@ckj ,sf۟z7o$nIjꂉW^/OOpuopⶾs::.eeBX$w6[f屆R@PUF(-@vŴ1W*~% *⬀Zkq"0bA,":YqRemmC1BoQUwZ[Cz.ay@npRhN՞?!$Di+PJ~?@R9dpm"S.M1SA2 %Fm,s7sVRƓ# rݻ\Li| >QY;O҈*,ug3Α2H9RbbKS4I7Xj&_U}h=eQE[enYw_:hQRJM=[Qf%FeVb}kWA!z(*6TݗOT4˹sǿ]b]ٳ^ngvT8Ԏǃ Q 9: 9 y/wjk UTunIH+*oR-7zym e`4v~rv>X'ΦFKgZ e:+1;kR_`HZ欛54_&!KdqKyXBXG(@)6IB}S:G"X&ig jdJMJLll0P6E4"I|/O8)w;{狭.5kPluMK~p0^%QKDǹ;MgTKka ,vnn:o"wYbW* ]SMOnNzue0z ЇZ>غȳ.?xyr " ;B^ߺۡQ#vvMM;_m[n@N[ 5rlCgV>iYhtvi轛J=R")4~)ĽۙNF)4+^ !8 ",q`f6l~hV G3̔2BpΥ>Xs-%m0o%yIU \ȪwݳY(A- PJe>\+Q{CW\h/~ጰ<:r~ZALI2qWvt6gs>r̸<Q>19:BJqB??SwؼyN3Y"8t8„SQrŊa9!œ.Bd6VHc0)9bF` bbMnYih44TDj-؈r-Y $1ACp[i4ޘZrcIGRRs6a)s۩&->ݓ<9"D#Aw96$6q(Z0!RL HpLtaV32>Dw>^!RɩxspFFٸjߓj`t!EDfb-l(L”c5#enx&$ B"GRI.FHRBJN#)쐧M7>r┇'>V>ֵZ8 [z{;j#% yc @0a)a1#.]2hO` \<_]Br# qa9a &mI"*=7\ϔQGUӄ IW[(u=[Ύ:)ՙwK@<ᣫtǧ:`NE0:ԱVt QKOQ  c2҂$o;jR' JUĩ>*:kЫ)fۢnoQCg&0`&hDs1e&PЦ ] +Og+ai: }k\ gUk4 늹: *j{G9&d~{`݅ E]$'h۞/{1RJ߄~?sA8*/6n ,˾- i͞//cK 9TWmC[!͑.+ޛmoƊ!F^)x?w($&яAl޺뇰4 ~f[3 63BjS?r$ DF:E0 )z"S?>֌Dȉ8qDQ`"L,i(F E֐0$H`Xj'=iJ"ii gyL!,2zQs{) LFhAMx2<`WZQ6LYrATi-ֆ$wM\xyG!BԸb@vp(1sHBSG\{ҞM5A:;TϬ1`uzCQM=)l!jM=caAak=]u (֤2\Uξ{ʜ-"u>RF\W_Nsu9֍+r rt1G6b d 7H׉ GteLcXjYddLDQ)u{|mmVi3W]p}?XЅ T'[j࣐й [zeL`m#Btd5KBRAC@1BE̸hG2ъI(YC<~Oڝpl+wɥkB#ɥƢCy6/u>)jllwpưo"saӇq<oB ?܄zC.ko8+:4Wԟ$͛O0Y*>6y |-5X)iSĺL*Rt?|sU&k}ntWYS)4K.*0JV))Ll ty HWC!Kѻ"X)T^'}r ieA36"FD,0FSST GdvN5Ӄ$V;!vhڠIstθ~ T,!Mk41; x%ÆE3+NVK"4&@20K̽jH>"%A F0 *”[*&,E8掉h"e1$+oF>;r#&mZض2?U3UC^[28r"0,di-ll4G8hb͂H(qqx،QF y _&K. \P {P. 
yDJCsbVA)"bY{MGx >NeڮM3 -R.3*/3KIEЭΝ'ECߓո" q&Z3M񌩇Y-UK#ܽؼ ܱN^@^%.amьmVhqE ^mk+hŠpu9@U(`4A "KSk5 {K9MEw)eѸK0{; pa{_kq9FcwUD3-4[˧-;]dY2'Wą(e{`(ڞk-G#u:BACEG(5Fsq!fcKްVOr]$fb d& jߵ Sp )xOчOxZ7]Rr7ɔDaU#o{7#KC5ELT=P$In/v&7 {-nX̉5}J7ue4V/4!p*-G]ҹKϯQbte<ȫ -$qg㕕 'P  :MބsIGɎoCxtRĆŽqwHzZy|.arql zkyny?\WR [kQ3l[{|Gخ  qwfNNd%3_Jo>=3ZLf % 4QLF$ LplbMGYHK Zf61tq]\3Co)Ub(#=: /@)( 8V:<Vmx稛|l]wrZM¶upC-r6`=x}1)`X<u9R{I9BoMI>L:NQhJHg&Oq[K GְD^{P22ۍ/Gsq ןb~OHXr6z><=S4&LO{7I@V-hԎ1Hq<*\tgT٫^tz}(CM?wp#}Ix/o^_zza2jX*jBg n놔iHki\PHx~gwn58|[QaX3vu }{~wG~sI~=?P7߽9+޼=-)П~=o߾=%ѳg/>}_W ~}7pS8$uM.?f;ff:Vؼ-zzpNE ߳S^h߷[qJaOoW_W07d8]qz ҇҇8L09MݸM2`Nl;OC(C:%*f2;Ͷ>mw LPǿ-/:V뀨.q^f?(ąga74OoO#_Aۻ>Am.z;o\4m=IOd_L /;~_6~o`|yxybѓ׽Π eӱrlΗ$+ Tvnp_{n;W}ɨ <_OuN}*=qfl_Q~0tM$i羄̸vt8/h?$ms!7Rk8`.KYm-Ld8v.GW{ 8!XP ~s}P2[޵6n+bCrxmwQ,Ѓ]tNHq-'{)_Rrl".VN]`_hjp8Cg ?tj-̗wOٳU?gʐ}͇\`^Lǒ,shq/cR=23󯮮j:]WO[^W .,!o18m ǴR %hAJX7ib_ R@!ॻάs+ g;A#!m ^(Y~1qV5o:5"*{ӻw;>Rg<)vH_K&|&Om)S:DH$"]{$ EDYPsdfp|4OJfg⹾ލ.lSU2xGb>NV 'lӠ'v#"P]̕wo~e3wZ~2aL2 @+!CLT~| |:oڤ3t@px w[0W>v,  9z>ᕕfpA 6.U-O=\ ?d\\ 9e=1Ko;~ =‰®x6.:/~/ws[ M8[D|szpEIs&1RBqH݉P %2t&˘F8rmՓ\4/+L-- ˥_--[Zgl)f$Q҅!b(\d!aȅꡰHQa*5Q%PR~O."OmT1;sP?(N8J\E4ϦP! rhx!Ȅ/ā sy*lT 5ϳ l:;>u:>ܺ%z*-;!f14uh(PҡTgۻq)Ԑ{s_+5t-}?e4$Gt9;>gƳ۳sv|Ύ϶ùQD0Jyvs$DiȘ B!+2Y.t•#%``le""Hz})]lu"PV%sKVI*)j&@*w8o,)ظʀ! Bʢ@Gٛ٧\6*!꬀F퓺E.xF!5bRӘrؠ(SSHkEl%a;#؎qZ&v|׺.kdxA9=g>Ovgy7 ĻGkpf0uƖ<zA _>?ތ&7mM_FQWKZ~zu.=?}%lr#e[dc9*Y;06VBFnk< Nqn-fvMC,TvKdVmN_ H˫DdE\u;|/m̻C^zѶ 8 4)/fEr"_(cJQKPǑ} t㿷ŚgX!cPu/~lE8v{Ȏx— w"(vrS) ;:xvu,gɟ B6ńt~LAC🍢#!7al1PKr0>݁_ [u"$nf! Gsz`?3e@u ٌن`kmP v䌞{qʶ3g3by=N{0}Ʊh؋FX|B=ڥH/<\}`r[=BˣKfnȀ:Dl>v 㳦§n9|q9x%cNPڭr(<<˲E-Ğ˻z2Ҩ.s!lk`T?x. (>t^g^?p3 9b>N&syҫq˱BYOȦt:)dReJ=! o6KY%^k^C;PG^Goo^GoHLn/:u|MFMXJqRVª#}B^h:#wPG, OѦ*c(ڦS@QSc4Ąm,R7nk$"7Fe"THrWFvH1 F($%Pe"jfy9(/so~)x*P(gZ6쥗9RLl_n9_֧4^)de ,uE$IWjo`gVu㉗Lvߪt4bțpb9.~ͫ8.Vсo^E9ݺ*06aAB Y@8 /s5 /^n.*o$Mh/ i1M_/ovPAʜg2<ˋ${f83+:( @m%ڢˀye K>}.Bm]R/'di/^8 $64r^[P=/jtR-koy^ӨutHM#)7HPgVxf i1bzc3=hJkjk0YTCz։Sē6 <){$ Zj'ѡC\h!I8%;oY;pө[vqpch["KHބl*Z)2!ZbᑧxG |z5ktMFzspu(Ӥ6<&ܮQ ?f\MiA[ jJ F1aﭵƀ`ūG)tw K$),?M&! x'-rD(,zL]KC_ZXqA(bI5a4T XI8b4*Q(Do-SKmQHgt)^RS'ڒj̄  DL3 " !#XGV(0$ Dd0_;LKp*i)K'x0Yu %[!1jI0B2C($\i Jʙd/4A X?ϷPc<Vh" U1bA @r-!i Uṅ&2zʼ{ˤ{?3le`L=3K|uuUai2I c'R4v-sx$rHޥ #^M)l"2$y0*e2yDp[[Q7c$vhoeNjpEIRJ: ]syYjTv!ܾf=?e(U[CaV;bV3ݪ7 -PǙٹ˷{=YrV1yYZn,f|>,\aëeo"<|bd1qk!nUb,rt{z>&Gʮ?!I չBs*:=k5op@$FZG`rp&^xfp4TCfAod ~cbo:_ơwmn<ט^c&BE+5QPf<\Lú ,9긡*7wM\xvG)eׁ% 5RRBQ'NToNYpuIodIhT>sAc *G*֋%=bs$:IT@>y$1( & &d(b#!iT$eyh5v1sL˹@vf-^q.%Z9իnWgXoe%;6y7} ukrMCȵ0z G%(2 !P.X\3M X?28qȩ}ƓKeɂ .{m:ǙVK ǿ3{a{h*| <.JLRc c@iHELqd FVV%h c3Bj`Yv'nBY r7 &Z/uE1?y[Ca lZ: u _[H0$&X1 t#-)LZ*A432D"yUTiMEO:0uAXOWQxr+(:Ln:ٿUhP~eNmr%`iDJ(ƙᒮI]q,Dva" cLrb3v#Ek b@#0BR!#r F -Ը/#G+PP-Rnyw(62WP" ^[6p\FZ{a$AV*H k#+ g\(tU0DQY#!ۅ $A"h@&%7!Tm#+,}BUTJ%'i\D${4 Vʕ88l_KiEP)0PBQggс!Vڒ(FG\  : %-#C.sCMA Gb[%GbOJ=+6A=`leAʉu8D" -5Vt7n\ݸ"qEݍK O!t1'u0Fp0)jC-!RlqBRJi!9r.crj2@Jd?EҊ &n"6i:*EZ:륇}4U! !8) $$CauaSYpKI A3q -HKw1T(VhkY)N,˜U(bezR_PC\"RFr/%+@+&cYpiLy%; JL8c&Ȑfb xIİpe0$K(7X^Zҵvrcu6ӿk UpFd^<-0`@kd @@eS z*l:h z Lkej,ȘR7'qTe 8mv;|2lp/l #KdZc-X@%!9[`?B0 AcDTؑ{,UtKm 8 9"_naQ 0.)qPH7zEE0ȂvΩu+J K{A>,Hiis̲*/XZׇB*XoMS/\C/=| $r;'oaoc7ӊbO?~s1+Ն W>o.෸X!z\i Ksx>@ۚ 3/?`F%#pE.}  *?Kcs*L5Thpe0ѡM!yv8oӢKMEؿC)QZI4LUS=bPlI( .)/KMybtP>avZeംOIV3T'dM28cA&Jd9b23m+ęUX!IT ؏3!@` !Ve:XϽ!}VSA\ĕ:jFJ] `ZFJceA>ܹ CN*Jt@/AֈZgԔmKUAtڀd!'14<$"%Nǜ$s𞝁վVx4q)"."h&vZ_js)#3ؓrp& C 1G42rM-`;2P\5D_!౴Vɞ%dB#EAhD𥀵$'{oBYz&/HAKv,#*(!kj5560A.yBj]^HAdqIqL5o8.!&Z.1uܹ%E5c%0_s`H1w>מ5_sTP#DܚsUtlXT܃P !Qe[&^1ndybdSJnrMRzܔHΆ}'#@xyS>ذD36haI@H)BVQ(_:#۬,Xb+A8Įs}U9EZ˥]6,Q؝XI*ޕvz&%滂w}yԘԊ?y? 
] !MnE\/j6w̮|g!)k]W/T/}P/,m60ihupuj=%[¶3|{CzJS,H]?N!rlނIB[plzwel(k}g{g_ t7v`_K.W+y}Vݻ>B0c>o)Z㫵b{ﳦ۷j$}{cোh@l$$BM? @ 35u7қ*}Jv K/xaZ+';NWa<83w|2ƤV77wWn?>zr%'d58ή>+;KB; 9 vbڂtW?8Ik'էl;K; 0a%2ۜlYm>[ރcf_Eu?Wcm07x&\Th=|k7z9z^/8>?~=rvkհHǸEg6Jp*6Nտ)U//$jRs[9\'`t\9V6?=:#x[TzdbR{q 2Gs 1ޞ;>Ksӣ;'d`r{=OsD{ z{'gLj_>60ŵªIpk?=hLeL@%SpB)6ѱ^dHaeϧ9q>sϨL9Vv': #ƺzo"y)!+v;#X_޻qﰟgCX!^;]īCNE] it5Ivv_;$GPk9K=&;sUo *qAs}i ^hw0 (8NkڙMGfKN҅%-+U` +K*qq nH-,YnNB͔/oxgI^<8koBnZ!>هŚ>g#01 # ,MCbL-vإ8lS; SO;}|Vgaa'@qb@m"cޱ_n #`P rF' t{-dk˫$[cS] )/O6mIiq&(|G ']n@0oKX$E$&=dxt<x\mX==7Wa*›h)$$8i)$SAx%VU4Ǭi8-S q,0Oa.5e #% "'BRTp b:>4d#kx h!|ZStnM[ǦC!1HWs `+)uT6CZ=gԭV8c{8h7"+Kg LQ(ž<8ǵfTnc]M::^^h#t`$\!嵝4N̻}25ɂkӽκ_uKMD}/JA$^g<El5ssUY ֍{ ~3XXX0\V}"nco}8dpoZu [$+L``B b0Z`_9{q V+?c}>?~5<*Gq76VRLŎݶ1+4vK: IFCp<^v]E~}~\嬁XG}tk3orR.xmVӒ"閼~ʳ DUF 71x_%c |RƸx$Յ wO.)6b{lXw.l&n[IȢS!̢Ma1W^p65hWeOٻGwk[ o]w*Mz*lnn2ҢV9M| N$.?}mQiã=iznԅ[L# GfP-_oqb#XGB (Ly?Rڇ JF S䪝H@-3`(a>:gr嗏|~ "j_I.ݮtig?$g`ESV >nvy*N5\UÉT둮ªe%~H.2E~q́vUSnNwtn',*67xqHpv_R] y/-OqAƉ%˳_ %ut?Pt`@V5/l ZH Bľ<9cԬ~! 7/TݣyIOA0?N CoY9:9R,$vKk pc&-M|N7Gu(Iwm_!%kl0އg{'/~PCN<%653<, A,3=5ꮃ7ݾRjkY0*2ՍZO U VseRWjJ/鶮idX^7Iլ]!\ZPU5;{\.Zs^ZkZWHc11{WuĘcϒ&ワGυOڢ?fS(Km>?bPIh3} .~̌oTG] `X*ZFGCZT n-N5ܒWf7j+i{ʋ;oĕq'G$\O"wLǃ|Ċc]v8b u\ o]^\zqvA`i%yR̄Xv0CFJf,0\N9;J^0(u;];qBϩi^/;3h7׽}={Uby4-- ڨ?z ?sn'W=wVLkA) e%(fP3 ˬ+& y"V޴`JhƋYd C.*1*"HKRќFN ٨q1#hK Pw2w)f Lj<qĀ64OL^EV}ׇåR<{=yn}(IQ۟ #ssw&bX;?mbw0C:^lx;5XGjۭ!{9kUP ~%>H'eHdl``$qlOZޠDODzg:k77s*X޴f%bKnVZr]_K2Xud}3őקZ9c~V!h4'H{b))Ȁl0ǔZH Lr86'W.piQ]7@is?L?xQl 901@-0ZQ00!`W9jIKVgGba<H|%Kժ.MU* J0{B-Γ3"4`dLxǖSB |긖F Նa`N#`AWNtU9U_2v4+,ʾq[BMCE (', )D$e-ᨹCaVe")H*}ԒNCm]NWE=v,^p р,H'$Óख़ A!)BTNӅN@-HSNb#(#[asY+RU QI%%,@. ڭɿmpg=kr~~F+H뗛u~&( o`/axn4H#n;(16@rLi2{XĖ);Hݶ#e)Kh|q0K :#@ʙlDMO |-L+߳Vܓ/@ o kAw *9|eҙ;L+!sؙ<.u0_CqW6# 9qTܘoa4_6bz|u;}gU^@)քfkr(@?ڋƍ=GLƔ)޹0u" 3VST6e+F'EX؞2I"H0,MO4hI(<n'|SLV*" F:1 L#X#Cq .\x.\RV /)ltbNp  ,H& ۧ& ^oҖ.FLZOSAWO1>ĈfkTUDg$飽BH%n I%f/vd1 ^?%kQ#FAQ!?fP").I GyQiJ/_)kQԳ)')BcK#k:~e=f朠A97b.ժADldDZƐ^CD~vqS ɿp`nG7"a< ?7ez xwN2>`<^h?tMo WV(&9 ΰs-kf{UZl[a,Tm:t$i#\o6H]ь0)>FinmNѓR)S9۔Q|¢Eaƽ%!GiCYb9w^د0tq!'- =ĩoGSftwJWJ㍥w.3 jFͽ.iOn|Y3a8M} @) ' )Rc TJv)ӧ.oyrϟ{O 1nePp)>ʮ7\x>:՗_ƙdB+C?C^ڠ755̓ hA=c!J Ub ^yC<|g)J;j9XJՕl@W7Ї$Cqb]Y>j9]V(?V$f^?)fū_ Ʉ-ʩفW )I G;_\5kN= 3Tcuf?F\g_$+leώܷ'=a 0jJn3Gf{X[ >x U7iĕB5#~zv_!šݙgY>?w#2AԚ@qԘ"0XBaH?/iěgM[\۝;d1eef{Hy7Nv,{[c#/g"SK=gvnPD] hlOlfoɦ?Oԁ@^՜A>KJ>/M1}BN1#1 N.ZKfEDT:\2\sB lR?ǹ@6R248I}4^WEck AKVϊ*E۪,j%@0Cu${+ w Sd@ނaj`Ac(3l` (`T,ϝ v>Wrܗoa0NQR2[ߺwwW;sӲhIyZM?'Az?O:MQckqc !Di^l{26Κˇ.PYeM%Lst}PM^{k++iJnla*bGa7QX9n>nlH*5âڶjX4RG\RXcE օ!:e!>\9B&ň/'Q gߓd{, F?Y(G1Sƍpeu剦2 FHÅGYd# X|1D5Y3A]  R6Ąhy;7(BǐAv،UKnӱ>;!A1 -7[:pK/,i_Q^w6٨ԣC&D!!U:{L۸ ).lL<Ш +,TTjx)Ŗ3+%T\ qd  ߄'bw`^_5uaw={ֶ|ƌl?5:`qW5kPiߚCpeJL5k\wY9V.b<9&.jL\sy}-^͆ax mտd:Ӽ ]>5|*ʨjjB+crP1pV ⬦vV"X=mKxއEL./)M֙)u9%RNaEi$R-cBS%0!vH\:)(&o Y8>A;&JID'v-@>(tí>Jٻ6dCxz /@8}Rl+[MRr3Cf'ݿ:jz&Wwuu'48ֶmF+MZLnZf2s}Lk<<Ք؅EpE|=L)[yr 7g;h|i4.gs7o`"ot㻦>n5H7&c=](suԡEmoXrqwBFdF+_Mtbs~Z.w@Setn6mrJJpk#G\("`i6Rhosf55zK /`zirؐB/4Ow`ztK;0Lw:--d[L8Am.d-%"=wڸV4O`$_n>wc(#?ZVpcZ0 Q~o&y+otWgwd:iȮ ~* ?{JP*jh%iYɹ ZyS@G)qz:f8JQyL*Щ\<\^ )%UZ Em$Pb,E3y2Mf$b{7Z"jD$J~8sפ񌪋َ0`!8inC `۸дJ%GNchg1))O^ 2FRܘ[Daq16r;z AҤُ>i2C#}0-V;ܝ-)0#QY6UV,M8:"6gO$AF˗K`&i|MJuI¤܂#a& !M(r]ߪQ,۪^7oM^65 xDmbWP[F"b-󒂋91Hd!<.Y$3c71bCGUߊsVDvsѸ5Q3uÜ3A/3Kcw3s;()#Iǀ)Em%Q v1"3؇G)U0+PVV7.1%:l$ O uI*SIDJ WgpOQqjF%tE$Mg](%MWx׊".t{vZ.E :XgP橢qne"e1h4+0B:nLA- cKPJZyN9ƔՉkDlE_.=ocU14 F:kb[%GQ ,lm$ #X'&Z0+}60lIG7 VCHc"9r1إ{i=LzWFQͥW6]nӕ3QNUgX̨">rJj,އAu`O"|i Yl8 .j7leP5GΆY݊>Gr'݈7cܹKXn+HJѴ]X;0R3iQԗd᳧;iJ I>&/߾œ)Kw/\埘v} 7}? 
"2E[N3:?x|xh+g:LgFvѧ̐߇=a$@-y3fLnHPAMP 7T0:/"lN+Ax.ɬ"cBu6'RA4)2(4̃yT0,Trl MyV6&<&KMYj0qՂ ܞVYjWZX 3)&4װyU/ "FT:с] 3sj\X QJPq b3&Qc^ $|ي(ïVĻR.Q ц]vodΣH Re$8Z #5NI-5vA(RG 9E@qA&;$#)6#Z4Ȩ@94iU}*@IդfRR)]Hr}*tD%o(WkVabETkة{et2r\ kk,T")yM|a;Z ,V|tJ~ q(aGG :~JP>5ʱL60P)Fڱ^饊PƵ V7,TgSYY~|j 5u (-d4:Mbog k0 =I eiAнI_z]?ftuUO2.ߥ3tf? ̦M Lr< ܃+L:ݱ %)*DÈ?z>NAtɹ_^qe . h5 !=lR1Ղĕ0+pYss΢3sw4&g<NS.XOͺיuUtG g_'WnW_tz2_R]|g%%/W7Ct7O2پng@} jrR8)<\P S%9CPP3\:l+MeUm@UmQQ^yy&JʉB^b MMIj§y'ix=|l`k 2 k8!<DJ7xeT]uWU%).U:UNoպZ͜L_o*69\3rNM^/Xd05IrvU0aѮ+lw~?캍ܴS$2fs{G%fK:)cJwU i|Rn?AlٮFI\8A9jgB'l=^:b'j{V6{OZs hWo=O/s' @*ή|B_B`_21)& ْ)uo;zO+KgO֢GĨ5,Zp>ZJNLk0mdraO&0~>w]`ƟWsY|큀9Sm++ٗ./oBUڦPD!6?}pn6rQiD\ͨY& L<'H֦p`"eTaQ& â̷rKx,+va*nQSYLZ#~w޸'^ 1NrYQy-)wmW^5;I9nY j#rpK'3M~?U.hOz_?z8H^P! gfqiFG!O)v~Zk{ʚ>s {1wj9zvPH7 |Mmt-~9d*u[(dqoeeG'`v+ ӷ &I'mi nLEXIu VPށ}Ri8E92cV,w&\4t+0786,U>Y.XcZdvSlrU{ I@>? 0k~ȩ s,pRC`O-} 4 DjH{5 ȽA1 rtՕj@M)I)`܍5& ԑ5V:\م:%Yִ~aZ۸_anHx6joV\qO OkTT R9>d*NbICL48 4xTƼT) U^kBFft="5֜2JJ+>cٕ4iF&OPYW =0{i<\>,=@Aꛙ|ӽڽ!!fTii$'ZzIs)0+ A`Ѱ5>gK_Q Qx&a5FA<`^?gQ2K=݄\MLܥ :!GjI#ű̞;é&.y(M #%DaAE{ڰ3}!mjoqO_ƣ~俧Ne ` D2]hxJu$y2A\7=-iK mYѷjV=* *^4ԑj^Nߞ&E&6+;`iR?I-,| ;4]ˑKhZYW*AvNlZUG!Nen,]? ~mobj(k'bjC-+-_TWZM#w_w-e9U֒ B B\[ F'fQUR*6PhaM iϕE+U2Zu4ZL2D_ z|N.F4eӪ<(VwEZW=\_׷>aw";@Ӛ6e;\a}r-EZ,ZGuO2Pڙ+[6Fw| 5Lt7Ϥ2zW12'"RLlkH/%]E|(ê-%7G|TB 6' 2'sbɁ 񷻐q D +%C6^@)8 ݡ T< I~dtCRN Lt-^aO 'e&}h} VA)FR,ҨN GTYR&1imQ1&(я5ޘL UYj%7[$(LGLIISKU*R7)YSTyN3O{d8""}/Yr]jtۀ@O&+r]?GpRp?~LB)LPFV5{'q2L|fcg4?)Z̑'o..(9gTs㛬Ӌ`'WndΦW5L/Kٛ^Aooz+ɲ =mmșMbh )3Q5 Ir9%X1% ,(E#Zeg 801f7Ww6Cҭ{?O"VH&I B$ UD M!K4i /O6%]hIA+A')w*?X;џxI@I)y0 qǗRZP1x2ȥ`:Y-(u\ pir46H,4)Q}!X ˊLv-7\ ݃hZ6S~cWg_bwwr\ܽI%rj}uZhZ(P;b9NFZB>9~LdWB'xx3]6Q\%m¹ z\2Ѧp~XlO?Πq1=H-W!b:45[bq^?b;7zM`r/-rq PiEKe*ጧfm5ȬOs|Rِh^8/6Wzg"5*d-w!ㄩ㆏CGRb68A N[C$M_ v2g#dl= \nYotϳjoy;y=^D9'g[;W?mFeNA\dwҽΙB+2MZ$Tf:Qc)O¼a#rdQWqMl#|QGɮN[|JE fB9XS&$=. 
ZRs6aRIxYɧ?2y%Lrt +'D, TYu{w!H5 *R4 |!D5qL;$ R_2T U z1ZAWj.Щf $:Y8=ifwB:CP.:YThuQbGțg*>Sq4,j o8>X(0\G A-֌  W38jC)VIQڣMHUmjB$4A褗w9֚j^\\֏2zirs1x^s1Zqɨ=lS5Bʴ`{p+k;d iԭ7\(b":M:.Y:jmUCޕ-?D:~zoU}U{X'y`?||-Z~>D4]y7m:mB=]P m#nk\!"]N6қׂw%T1Eqkp/\&"FCw4Ƌ״ ΰgZa\!cmwYDq鱝3ĥ?G h /6yn>HW>0rco>~[!/ٍpw'LjЋ;6*ڪzv皫bZ4ZJqHZjumRd^?GvB!)ƫ>6e6\7¤-&nt{ƈ؅) L5Fa?rg0cT̻PW'#pfh .y4kJ|V&" =J1hٕq?=1dW|p4]<^,_9oڻFу9D+LM]ĄK Rz_C!0"HѠf N_ʎr]cDR.3e+v/ ꉦ: !j8'MƥȝELN`,":إ.c&T)r؊HY؊*Xըαre隣GS`ڍ,֫I愳Ag${,q7vh?-ƱJ=A8}d ײ7İt n4Q65~׏YI|id4Fcu(=0}4Z B'*`:#-٬·kat/ee-M$4L%(/f>}*X [HL 2Bt)o;RR\NE-* WNJ2d,BL9 J $xQzUyCJu^PtEӆJQTSPZjT3CAFC-ySKquvhxȣPݭ!ڲDJL*=!q$w?{={c Ĕfi:>8,2|ç?B7>sJ<̖^:7,~޵KJ -(?=YmLB"4V*nAIHe; r`%;@V %8-g@gY׺h#Ē >0W/tZ+rGN{д$'3k>qy6 z7#PAZ2CG{y eq-m"*oB,ݕh:11pXyAk73OJBJu xN%%?vr*@=~p!BA=FVب TyǽR%ySv: &%42 Ig%,Ə\19 ?[CǮ ؖ5q+Bajxj1/srO}Oˆk Ǭ"?WM(A;ɉiׂ]UhճġP7D ZТKsbGS OzBNL>J.fU-|6;-' >;;E.O Gtqh*JN@RvssѼ\*S/d^TC8^refi|MM9#T3zQRē*Dp"qI[еshJ6Ncυ=TMRÿZ zyka_#nŠ_=fDiw--y+BSsYP6ǀa`Ero_C{a)1WC:}Z;WfGq1y۾UeЊov MV Ձ$?x2BߪҔoV+8s(W_+'@+ԯ߂ tLprV9yMWHfJQ_s~~s?xH2U= E(QƴO2Sւ쥺lA=o5Z[*si^+f EO4CâR}mǟf`}pePƓb]T5j9<:d:@" @ZW͎d:Z,9NfYt6FMfӛ?sS:a[/c$DҳyG'?֕?{ƑB_wC!8sp{ $%OkH*vvjHJ>Fa GkF"Ù_UWUWCc,[P4>n5bY+V˛;Dɘb*A⅃'V(3^?2sGaITcN1똷caBEӜ";pؐ,c|jH5$ƄUg$!l>l 6RD_|$TIȱ@;%,Ph|a^_Bs0vOnn{D#ory=B1J1F/Y|33~f~2i|U׋ݍnr#ǸS_J'^Uj#[`E_-Пg %ك N$'sPc Ƅ2YY8^[/zR#I$g?OoOYg _.=qȂ@ލ,Oӣ7IXNH[:@t47/7`@ ~5CAQ.],\ֻysV chdv&LJ%рey&ˋ3q>0/eޙyD$SCcҁ=^-];q^**3xđ$,ȟ-ҙ/9ZᴥS8$nGN wq8XȔA)3H9pk)c1S|ZjD)sjqTI3a,9 & E/ˑv 1q0QBG@^8 (@ǝ3 XY !#IGtc#aB IE* @'%B`QJBIm-e2!0UX]hb1XP egPM9&1d+y cWCϏyoq 2oԎlO_$Y+sGaYFJ@߃T؆ E(H+';XRRkaMa$Ď O6YCA93jTs|%,3 C8Kc,C>k`N?EU:n5o߾\dj4=G4 KH"cc'` 吁wcGō2iSQZB)e\пP +qk& ☻<6 ʥwFe66,!:XRՊeuYx]V+^W7wpɬp&z3"$0$ A[- r9kElCG/JBПWleuFVɾ#{؍QX= w҃\8yӴuƫfb/7,e&)jqy/(LT1ٶr)E)V.WP\bKh|Yh6,n$X[Vią8D $| Z  >cBf1v2Tz~e+p[RDU>&?,|CcX~on_3N؋4Rbtco7Dpd~aK{/F ?Opݯm0cT1.FH zپOnJLGHK){VAK]RJ|BW#II]GcK#n`Woe1ZnW] n\(r~wLN6S^թ 0N&_(ٕwN>xND5椒K~ym||i^V Zl4CKlObYNˢ1FA 2%wK$Acp˰j#E{9oRX@V?_LHIѰE3NSQ):M|0&GP>|tjY( Ұ@In7"Eω[yH3:/}t'赨y>4}h9PvyX:cX-ELMzYU&{t2Rvi%W t}5k'wE>rJOEdJv^{#Ն_v.D_Ғ 7ї/;.@ɴpe5FRCǷJPfIo:^JԘvGB.2c1!#rJC<ňrRJN?tPJBY!)b6D`Ф`E&YzE*C]Ec4'I8: [,g$Au, ^h8,(y?j&졬uY BVYm,jMye4yWk&t%m;ڮgk/(a$X ޶vT irj{M3Te~蕜w2H"GDrX.RR{cH0))ҵjJr1l*IvS7Brncʄ'ը1e{o< 1$&,FB2zLJ)YyM%UBiӅ)@`MZ|%ջjĤ8"?$N -NI{n(qW|ݽ2R,k )&8ybi }fFiOCZ f'v 9eVR:)Kx3&,o&jy1XVj{XD^Yn uKbL0IDb6ȢUNq 5NwpjB*eBW,Mp07벨r/|+w<~*\xlY_' |6&p`5z^0h`SRp5!"~ʨ O٪]xG+MQEtH7kT¬֍92\G YjlUgr$s2Xq63ۏ̧7LJB3YBG+ Ύujމ߿)'JKğ+A1mAtW:#Կ*t1_6НizlPG`V]Yo#G+^lyD^; !ϖjJ#Rc'HQŻȪѢa@d"28TGs,TUƮh, fU2|߬*uG,U9tOS&0GlI%]m|~@FYgAR`I!m;o7vC8%o[^Ͼ<ȜqNCJIe|m%UbY&p+5MQQ* |(#6IWĉ9B\ὍΜO% g&*Lyʘey%q8mΓ&Hڛ7YC΄NJ8ᓵ4|14MScEINM{ Cp pf )BZ@4xCĎ!] TW~_~OÒvNrD/O1 {|w/~('Mg/VH/~^xdy37)emeR!Ffզ^vV:;۟/ّ/6Bdɶd߯!94U鵸0 Zl~dz'_󟬞H4h&rR- EL(`՟nd}M7U= Y@ M \[mYVva\2mmErȌ2h 5O~>X_?pە[(we)5]7zdͤ6wo Yl93R.;c#|wmm:;6/e`>\t?LSڻYIuk;7_o׹B󛗔=Wxo>rڝpEPaE H]~B%6%(X I*p%(^%֩sJ T5ZmPkAP֌v|<kb|SZW#D`l>oihweߝ tu/bVYoEE隋m\3s99E]JPi_.Z53wF.aߛRjI6s2on2b1E񻘴Co  >*ţwaw5Q ZG!wAf(v -Po 'm%?#!8l,2j}.s(Ü@tJ~ث2z~|VσhGzT:h>#Mj'N7Ś*#C0F+Y|tj0XDgO.'sV^ j>߻>>Y{ݒ+ɞu&A"ōD%JR%V3cs?~[["%$QŁAB$҈hLZFb|_?uIAғFv4z +p8X2YP qiyjReFa]90d`)xk8 N=JB)gO_j DEHtxS/lAWlAa԰nxSA4Bڥ"/X"BimӺ穚VMCh HAV1'=5:,Z#-"TVfV)hYq#Ē:OKK>)_sK>w\--jmW1R7Zq<"t5̜/yvLND.|r u>sЈ׊1z3F=6L?92ꉳBd ڗ4*x 4[ꌖ2‰H~56R N_a-3'-[̳3SX'unM[KϞ޶٭UM>AQI{3KgH|8{<P-}뢯!gf as>w ƴ-=wqI~6d<'wwj93g+_1̨.FXLj:8:fz1><_O_mJ%51-6Tiqsi.k>'}d6cii灂)V:W_ ǧ7+6kZXĉCks8~w0hssj`]'cF7Swu. 
"رMʅ\J}_ϾIytm՗ mPc=u4cHjjrA-= ),M}QsqO!)K^k~''ҹ0b9n~~7R)B+]#ɂ3 1&@(,%TI#snhr ?hܡ y?qJUO%\RI4;u)nAWpg{#[H~1}BE14`lٙ^&p+VflY]V5Ԟ M FW}hbxcI"<[?iޭ ʰguɩ&i `Q9F RG:C ŜdiTDGDt<ৈj&a֧a?}שڝnJ[zRRE_@ePTZ4HV$ ٜj!ٓ)ߍ(TvH $备/ R߀}RߞL;Mr)c2R*)$Nj뙕UQ3nڱBKxv}">>7$~zVzl_k`dG' exNZ2B&3ѻl(_<=Z~,d)M1 !m-5's-0h_:GGǜ.eD xqa:F^~g$d4i,~4YLR82&>YɬisyZÏyzyC0O(ݶ<<6:c9iՅ3U~5m~>_8-(Е?^|R&;;49S866o)']!v o@ >V;_D9'N|=bUSVl/gжh sv7)e0׶aGӘ͞1t2|Ƽx!As̮\mmeH];?6YɏfrWS[)T_zߕ]r):oŎ}˿?0Mޣ?}[8ńl/?n<&ON\CH’,'\;(3 cL:sHK|(+UC(ts5]lQC/y;fTʢ_02 y*!ueAYvyI᫚5mr7X[j L_o@TLrJ2-f =]mQ`\IP@"j P+Y0$!Q#efՐ׸~C^5G[4WkNFS/Ó-5NC@ ;TY#mҵ)1y5tuƦS;FzX+`뙕ǵ'&89\Ƿg!꒙K}NC1KكzA\w]Z 1rO% gh ؖcQo&%ÿ3@_j# # !)pؤ7Im?|۳{[!$38g:ŷA`utf0TB,AN( ,: [ muƊ0qQ:#OaaqjZO=-CdBj4Z c6g9TrN?Pn"def#P5uU@P g6n 1rYT^ uf% EN-rnQMV6Y@ n"k(:nļAs2Rx |(HМH~%Vԕk*~Օ$R9~R6$E,@:v:|KmMɵ7Z[Q-Jvy-IPD xA(A{%@ڥMjT:t9P5X]RTv[i֪Kx;d&7wU9NNC|?Ϗu6ԓ/#agԿs,3:['k_~@^Yg,j0I [&̲I9I+TWj4zhT0P^b8/=L rgC3r@$=[ӡu˜j |QƧh(x pAE^hM!TDn렕=0 /5cDنs6DZ,nO6a2)Ǜ\^8݌l7s NcSu((T4ǫR ^_}Nq\rQ&^5E_T'ev`$zw*YLc@ K AeLH3!"IJ%2:mamz 3eʝIDlzZE"`R[aBHT)M Ii4,ƻLm0:i yѹR]JHS#㒤1JFiQ0g(-U [X*8bK|`)4U FE\*eP^mgt%4(.5);撂vQyUU~0{=+;,āA W)6Han&hg@[a6owgp3:$}{C{̜F̤Jy 3jkknl-/C6lff׵ەAO:`? cYsjMz [EYF*9"6X*AJn-U}ʂBWڲEQV{P΅kd({ DSlSNZc!ټ:dKl\3~ i VX wCrm*8r"k$9'c3 ΋1ch asBP[VjR8-%&gU'zxnp,9_G[Ǧ1|+Fōt<|8PYJg-0\mpc7,'ln:PFm"G\r H!j) <רb*,bXzCQ Z3yt6 [4ύQ,%,/< qmQ jN*F=RMU:dA9-Xl-N,DotJSndFiJ1(t /\qxٳy=N,eXdW3{# EyqKl쁻I$dE~rR 6\]+RfDw~Zݝ&rZ-|{r@|>ZK{A=:1 }y}DZլN=aC}"4Qg3za f7Ӭ4j/^_bDd>Dm5ey?M-w ÝCն=V;idq[}-Ɲ0w]$l[imǣC)pipj;:; >bv=N kd8v.~RUp$k}8(`toɵqh7|^Q/rwV5GB e5wߚo Ca}33V,m͝S2mI) u0RIŋZa*+ k0:ZyAx^D 0. I6E\R0r WB(- xQR덀~Ĺ+ eѿ|c?:;`?i=s-PTX ɱh6tOZ^GK-B:q9BDMR8,-"r Oe,t}e,|P"`k8K$ Eƌ1XR#,aYƢNPEP8-*&5uSa,@mB+A92RJMLΘD5S k ~OFI*t]x;(运+zvH11M2w܉>I}apu*h  _jێ0ͅ GgWڂU0n8>;yإNN3.um0`X B^m2dv:jS}OiOE.jp qwc}JsLNL#P^H_`}eG޽& Tjֶ]e_.c~?plb!zł\ʇ}z8g3܁;DkOrt|P+,:wJHְ vV>b_ތmΛ#mn^=|n>Cΰh !YoC=9m& :щ7[V\JFxwHsLkm@c)שUpHg>M@iҥQUL2#*.!=.& GJkCZ-s7Ώ җt-Zgdx=cL5}J/uwzaպBx#[quS T"MH}0ޕmGh53^jq-}`n>CN%K7jP5b/f~R=%5<3O- 3z)Q z%Py9GoNӝ[L0B 3,qHeᶌ 3,CXˮ9ʝq4G9!%}b(MlguCr`;$-ql[\a;JgÑ D7-l0xRk$p5wbiՓeZu= 2:v}L^SL8 Jf#ӈqT1NF,8#4jK\ E+9L IW%%"- J*JB"JnΎNX659.&eBXP* E"[r$Z>dS`|}t 5.{@k{S?h s@dyz{^_䌱S⌱E^HnJ$"VYɍeYAͨ*I.A8V ƴfIK*6*6?/1?[Qe:/9*,Qkxnh*rJUR&'Yo;ݦn6e~Igd?R/k}d6i]tzZvCb^)adޚ1р7>AuwPR"xc,Oޞ(擎$n5U*z'V.=S&)׺d`$Ӫ'$f=YcQQקkC)0T;/ɟCq3`\xYܫy# ]џ c#3CkQ}z\b&Y5%0^7Z-|1bK ), T-eQa*8崨d5)5e-PaJPkkQkN!ў#{@DN!Mqcp\7Lxf~QJ=ҡ nTC LIWCq3*7i8ij>ۀ03Wf2:G-'!R2QE+D2frq&0ܮCwOy@Gz'pA8[ΞIw$l&< gd_UrD=;*3%CJ“nY9auydy^B蜉 .ܡ8!.ܾLŚ-Cc[aRSO'7}Q5;̫t<:tZ9vu \7n&f0N_ÝZM&o?6}=WN@gtx5KYf:37:й?9?opz4w3瘀ˋvtW y:-=0X?D'y5tauz!- oi|Kúo)BȀHV`V+a|Ӆq4mf2\v%tO\aeo=s9~wقT8y44p~߱ {y56?F8aZd;,Ja ET-\tF}b1:uLbmny[§hԜ۶Ԋ99瓮uԂJ\΄˙$V XX&+2f2Sʬ* Lqxs=f"; 4>`x>0^Ok)RQb}!XҖ0ƹK և.q_&DOR-MKtIC ^sqdm@r'˟ŀffQGc5x^i5o˙<3OŇ.\}T0^_7o]W]}O[ خ 2 UH ; ]%X) ɥ(qG83lOj0~aZZ[ft3w 1cUAm>5?R1]Zufus)7iP~oSu!,4!׭C*HJJesNfֽ78K3XY$֒%DK3PiIBT2v9m͛+]גZEAd,/IZWuFU"~蓊󡽟 FGM\>e# GdY͕ٿnHdBvfՖ Z`b;w˽Gkp]=DXrq)ocA5eA<li䢸!q9Ɩ1LY2(.Yw<_~:eD ݷ̃>eDhELl2P."BRV= ,yв/|Z1r$>k=Bc+;XgOcՈHi9Sn@#wpw-MUfM}7bs"F)mbq6 (*˹"YqK" /vDϧ0o"0UAG`Zdt6M ﮷i$].&io6pc127O0xRIyUGQWg2e%{f͞9;K$8ɼ}FB=8-O9_P阢H>_f SܕAUp4@_~ Ҽ+eRC DUVA&V"Ƶo S.b>,SHOO۫ZCª/ː\B*h H'-p+8Xolf#fv0^RB'^ˉPZ3{\@%6 LλvU)$Ku8/<IJy- @ )-ۭ^?NGN*yD O='>zn8`FiĨR{Lq)LGǶ>/<-"TeUDGo  Na36"Ag1 n6v]qsmXPǶ\1tEYom/e1!(8w 덡 qSij1"Aj)PjY1$Ucϻ`q/JE@ru`t>u:xf1>H`ZLbW>&y6/d G-gVi1cV♱8`Z +=!a@=6TN¼QHSh8=h-V"P.0-7zAݬ&GQ ByU(X cBFcH$ʀT2LFNSŜB% sdۻy*q9 $!8(@݂\F{NdRgAR%֚z\ۇZGB=IAA%5 67DŽ)<"Bx$CzEԵRLAlC.Wp^ RԊ-gI=eΉ)^||1v`=VBW'O@nxǔ!Xݧ)i[1Co?| K2ȋ>A^LdJ$F>?tʠ2ٳU3U `OWiչki5kpv{5[P45=rޝ>9iӍ 
?|ka@ovTlfLu(?*bx.?>G7ڹny"_kWc%Qbͯ I~x*J2w5N%Da!EX;a{wу) ְ(b1?k<(7Fb=o:=V: 8V2HSV}=~4bT(an L&襳4j1_a ukoIr _-\0RGLMm ?vr!C6:LClF7ְI'wXdr#On :jF֨N!<ֳRlYe\"V/S 7n : {s?wOf$ۅӍ)N0/NX՚X<-pf^U}xҴv;p'Kz)3$˥;B뭦ޛk*o\ҥkR.; w!_8m >j*{/EO)CtnWX!Er3挝i- dAh cֲLrZP[vYiwIn熾㵚WG"E;Ί(6A̝\W8"Hɞz{lqt/Aq%_3QMT bqPYst2O mNHwQ0q"2ptz:R NF@# `(04;F[y=u˚յ⪕APu?4e_ EѸ*lG\ TU7fn,.:e2C3tAx:TV1"zxK=2/`q !BOxVX 3['aqL85ʊ!LssV̀4JK82HeX` n7SxdւUW(wUSQmp2*s{ {va8QgiX]Cо;} D[?"㸽_9/$<_=R=1j. \^27s{zuZxVջ/ ꑡn)th9O(_] ηeB]}7"MKxϟ_,G-}dx "f֫%E?z{y%z{OsW՟wn𢡄MwV`t{  (A9H.3ůḌ쇾P0'4g;=0-D0ndީLfJgk *MQoVP]<=Q\Frdk5a iCJqZjWXcNkJOl䬵Mjv}Zn,g:`p}Vy Tc )Xќ֔jjKyvkǰU`Y2|Pn.w$R~$-4*z56)Wڍ A o0n<-3;g!Sb+dd0Rza$!S[`EuA♳W=kmmB-&΄l2fJ"%$xCX7]"SY+cXv9>K.dICJ1v b*v^('jɔͥZ4 ^s >^z5]XU͋f n~/+>^$Atbz118֯Z:21cWf7j8ē/t˓a;EO0 ${~!"-A̋L^!5r -i3rxLapq}FzdonxsMvl|f ߵ[۠,B쏃|mRlYr ֛;K!Z͓O#廫YIo{'Bu'5EL1c?ñYrS i,Vޫ΄,|9#mke1O^<kOiڣS@x ð~[>rʀ2D'-2U_$=戴c,aɺSxYxTfa\ = Cc 2L5%R I+h׶h +B# +"Qszu.fz0i<\XU]&ZUf n:\VԢljJ~,4:y}`HFC 3)q+!r>j v2dG T3n0V6V"b5ȘXxVPӿFB$;T^H ?HϦSsmN"+ɹ[Sj@PR)TEX"Ph *R~d2r (R[  G"5ٞGah%k\3nV[XLٰ! 13G1Dag:Rd98tφ]_Yf #\2z˓Rf3C0$~q;ڃ.ľd d=_n0n 0p7A͘qXbaNz!rI.H@kDJ^-wޜjÌPB(X k!\cD$9x:"89ޜj0Vd.GX?]|7w@:U]us%9O/_u[dir trV-wN1G])MBۻO p).JMҿ}]O)eUJv=~1YY3[ 2+D:^GfIbp}ˇ8!:m2چ\|j (mrmƊQUFT8u~|v߰TIA'c t_Ic' ~T ARAe4{y73gE-!xuJS#%hr:,loi>a~ӻH΋ad?65M/5A;Ry/NG阽Xex'a+ʱ=$lAFg<%I^Eswۻ~9{ns?FWt6i*o8o0y;ۙ|FMژJxCPV q*+5Fqihsiygl髧|xA(lYIzXHrMH5R"FʝΑU!r΄RJ2 |Z rR`ȁI pEHe,"א3V3D >1hQtQ]lC UK H5?aLsT_t&׍i$T<(È1; < #"C) 1JIt$m *-ʲ4 Z*9lCZ:Hfh`gT܊8s5EzR k&,}H6r&a "8[PD`=Z袍|QwG` fM0|LF,}Ȁ}qe}ԕG?{xL6/~+ޒN>^^GOQ0 27/{h=NP\eA<2+l!GaӆY{KK)x,DLEN3+-rRTHt\w+ 9Mk:9IrYry€ V{xYb IeAE|n|W݀lBEY#bߵ܌ՃT822/ICD,dK3f*Y]K J:bet;OxbdW65fWWDo++ 7‚mYkVMd"!ʌW˸5V9uzs)zYe uAh\-]ky4V!J[&,3Z1!-%LG[]񐣚ZT>X-} 8bLmë[RJ ]<*К[Z*/Qe+Xp C 0^`wG,D EQ{b8v.P\J|x1&nRY=LwFuƧ}':/рJ+,mgyI&8 _B(\mu9Zٟ|q5q9缃҉]},Q.Z Wnq[JX-eumլ4^c%| &KtF+s }4 k%ugGAZh^:;Ԓkuo$h3U@+T۬RU vC|E`zW_JY13 8_uQV BTvRN &~}[-lGTVJfdݨ^^Ef% k`&l1t *R '+,54|bkWtaVhy3݅*0)5ȯ6.<8)E\f k E ̄lKg-ޒѫ2jXK%VA{9Cd!+(.&LNe(ҠDH+ЭW;^54t~z:\ƩŦ{l 7AgQ Xd>W`uZdf)1ʠ/1OyŰY+xlT`KcВRٔe[1$q5+;yt9qJ ߒ:-2?۝U!˕تjjWVrwL@cKHC"U{?aYJqh5+.ڱD XTڗhwߵ@ ht}P>>׵t%9G.,FNa<uDsRWQ~g&R_k_Áuk8R^>NR IA:I1H')ePAC+Q;4L t4%9PY>H>Cɔ1bH%N` wRf=ZjɘAJ R2f1r2 q(V*FZBrZV(V`K^u5Rk:B,$?{WHA/F6zt4{vK=:z0}i1T-۬`8P-IYzNL;kFcޟ7xʈ+CPښuڄ =;%Wr"C6=g9bbCNL8X-A9ge(,:lɽ$~V\\nkwRj';h0'M #؋Kƈ,)i| 4h}  \K ';{*o|[x:̒mH(0b1e4RR3P:!c(rSM/ S8"D#<ȇ=Q4h1X2NYx͘<(ˁ'e[T &؍`Ibj@B )mHdSJH&/ F偩cAsŰQ‰iSa"c= YC,v!eb0wKIKRO޸"@2 626=F:,0y'X. iցR| El+| `s TI {91eo뵽\u-JEA°&6U3 AS}~FAnt[k3Q]ir̲S-'!*i!/ ؀$`C; - gQ [^F0e[2=SQw/_W;W8u)d]^Mqev[QŘ5]̮^޵hPMRS(V9~MB hTT)u,tHgKVUR9Yӄ5Kғ9qGdC@-cHƲYv8/P8lF@s]x&?;rĕ;J0d\7G+)#DZ(,NF\JY/D1JbFD%A$T6G$I&2$ 'U'FHoF4& Lj,KO8VB_u@W<|8+{j;r_s:uEĥZMx"V ! >p4 tyK3ް]{QZzv0DžS`yLj!תԹʕB`o!|7on?ϫ+]u$_ݼÝ&=d/j6 |_˯>t}5o~_3?N6OjadƔ[OFF-G έW[cV sգb5/q n9$oEkY◘PKQA:?=X oj OnmP/,4'L$Rh .{ ijA׹oZ<*5{VFHX Wßn&B2$zV+.4 ,2c\&֒"y~sA~~/ߍ4 JAS-*AIK d- +i4@维0MXgs?Quj@{5%tSMj1̺-$֞x!˱/;T=Ւ(bb{|ݯRi-jBV[ٟ寿Fݶ.M j4MH{J̿$=&eּ1$9!ɍ /@%R Ri;!OyF y>*z}_$RhN$@P({aiCA0= ^%x!! 
%ñh4IGuӹ!vtacxČ6Crbfj3p3Bpmox'st3zpQM8X9c[̬=\\]3Q۳Bgӛ۳GoWo\]S烹^My<e^u+*jڐFlJӤ ādxx?_׸_u\((kt ¦L*߸DSG!V(e1qkץSI\34ɓ`X $"47Q+qC|ԼSX48MgTɴLKS7eƋ>pmk|EşNH~Fnl4#B>WH8鼨7skΠw#d'XPD8$RLOs)i2 Ig U[χ"N>԰4Co$ eH\~ψVq BO8 ӟP}m{G=;Pkz?סH0yZX(y~xEbB$WkSUAՂR4xI""Gn&zd7\:C{ΰ=|W1|:pN\c)<`Pj[spRRϡ”%=F` ly!= QSX;ی*Ķp?4)gED>pqYVAzG& {`% $A)Q!Aͫ` !)*</*I xAtc,ER=IQ/~ DֈM1v;x f(h?*dLN-R*x,,()$QA ۯ"`fɚ6ВV9Zp-XHb=Q,GG= /CEG)==yDw;ԈP{:5(⠎ߙN>G(Gm~Z2g0=+s7!k^;y`3JWiA.$c1˄ӀY&rAL#:IDc`Jj"5l+hM9|* 2x&,\XJ4fa{I6eY1h8Prf̬&8&ZS2G7װd QL h bsyǣ͔͔6֦7bVIL>/& 1#{`۬a~dї^yt_~sI ~f0}VՎ6cێ~z] 2r3znr5}Կ=d׳q׸HyûL9Œ?OjCK,&mL.@͢WIXgQǿ=[ٍ?QA:c03wO<, &ፓ&z fKpo`#|r7'xNV7NNxˆ" i41zrG=Ni+CPfS?gE1_]-<'g,b6_Lel|c=P^ʓzext+cPf,O1QlSy84o~~ډɩkL-nʨSxҁСu|hjJ NU0'lX'K&^G"Zrc9L~y` ?y̳IADacMwĜRJݗ+[({7[3{,fTAibj7>jw1k/,nUpakƺ[I\ʞ۸ĥ칍L& pncْ`=\KmO@2}BxEc|l7gE>|vqVIux!//l~#,XB`?FW0Ժ[lXJƧЩ Kj ⍳w41 $RŋeXNy[e->5M&)|<=$c5Fk+$\m}1ra-Fч{] v.gwTckjEjj|֘hln L3.U[3:~z'{nm8Rjts,1m_p^#jOM&7ٹw i;`0s!XΝe6VʜC!;agw/tZ!QpCiuWve`XKdhS_>֫ c]5}Q+O/ۻMH7O+&1??ᬱi# O~ 1r%Ρb|Xf$oMa-zq\00x87=8fKF?XzI.ט"nؾVw` 'w;_zwPEf*ޡ>G6v:Mє?ېjjLUUfu)FxQ R1F-[5އzfx=zm,1U7ƺFe߱, _=@ Ї Q@fM )6AF T&

};ONwLG U10E!B!FGM̏?"Ҫ9$qޞG'0_?_'8 YǺ}񩽥\7h\ zaB=a9jQ\껂PiwtGt%4HJU4VؠLJ0'Ua&B&q-Eׇlwvv^s9u2}pWW<M0s6+|:*8̸5\[4`D6ӊq8f RBrF y*h@#WQI(-4%|K(pǠ4SA30\(Fʋ" \lߘ#9|D5{jD7!]? ss؛Njޜo֟~S4L}B`G&)<(1 5I4@W,cV8p0PEi1ܩ=%E1@a PBY g\edHB*VqKBA*0\kS0=$BVDUQ*`&R1o,(.V{/q\rRQ58bbp2G]"2t|RX0+(D&Js i2Rr*sR" L1>A+ PiѸQ! D-&Y2:6tTsV[3DXEp;d\]:°\^ea8a륋=Ό{猶z.qX?q1GGBt~Jq:)ӸĚ('iMpda6!iy1ߌHOS< D'9`Α#E|c.ܳtv⾇w>+v\Ѽb/){߰4~)2F58;4&4p$j7&.C0b76&&Za7zĦGާ \դBp\qeDÔ) N^T9UJ[큛ל:D7SV0m^S-@y@Z_:T?ZP* 720]#\a!ZLcR2CnС,EDWeݲzªNZ1[q3Bl'lA NZGyG" 9gn(M8B_HdzGOt2#tgPlh; Ζ:o-ub6It]4ss+78Kpx xe ^Ǜ4(s0Ux(ohH86kZY?H dh.2n ;F!n Tjͮk&QSLG1o`:Jh<̛,{.s$ʖX`% ue %E%j]i BH:5JS*ivt$tz :@VS@ڒ3UZBUq2Li(6)< :䚎VKt+$R!q%H|Q, Dlee¢<@oѫ0՞w0ӆ*0/gNE~)-ʹQwu|M6wmi;`9"ãAiXgS dRQwU He }wRJf]_4y=NR./qlSc0*ꄢsOPAm εs΅?co?X$ ^H"Ca7Mڄ_D8qKAi)q;+zI/D1UQ!D57Q2PJ& q6 !p׍|eАcF0l7,od090n2;Dk S rs Gf^y> 8KDAx(`rUoȏaӠRx"LJg^{{vk3IA8%uC#o@jCڨa0a VGZW{ᮉ5 yNCil|YMQRœIqMN'-='ߛG<5#rJhk8=?lm"-7O!p8GLňNr8GlZ7Cȍl؍N]||bM ZwF=E遘S=3o=b??}oi&bM^;u瓻C&yH1fc?D3==13|G g綷fݽo HmcQ&ӟ8gw:j5}Ϩf3ɡx sDnI%Ņ:L|4/јih$x pCt:>} ;6;Gab/%+d)sz`qcV8p0PEi1̸خ"vD۵$6̼V7N˰AX7"fie8x( -ʋ$ggeXR"a&iPQa2UߏPmuR;PN/?dI.h͠쩶q d3A{܅Ǣufn* i.j׹Gr܂\ dR'pؖǜ$ZEg%"4kuC0m`vjda+$ji (]Ҳ>q#JڲcX٭d ǒ6?l=ϤrZ[ /i7y_(&Z"ri/BFp%fs,ܴZY-+6l&B<ՁqG.2lфa#2V>;u_*xOuD}#c;osP"4ӕ'C&i˄ ʂDЪt%Bgݞ߲]|lJM]ذ.T0^ oZi췵IN[Vg$FJ+0YiηŞyC_JQs*T*(lE!LR:煕䜳2@ҹ*B S ;*Hg 8i"[Zz- 'u: u+ T;bBҊiG7pL3 Y%"Z**U`Q _㰒R1VF2bBKֲ(+'Z%bAK#Y2 fTl 72^9[ 방0 BECx ;!/..KX)oLe6(5Jٖьl(cGW¼AHՓ?p0 ̄>W`I4ad-7,`:y 6Rؘ VF8j6fz/:]~H ŶU2 c"̌@h%C\TRoRUܑBb3РQURnoL$CrfR%L'"c~-KsoLȽp95!7@r7f VS)omVȗT)fHȧnNQ^tˇ"DjElcgwhEv䘊$[N&3p83@5*0D!#Nsn %]rč8dL%P(WodtTG-\'uȞ26a!glX,|V]m2ۺMEn.sz5)9niAQKZJWiF3hWq(gՂX)էL$P3cݕ|m (s&lC22sRm,5cW1DTSMM\'b{DUy~ax­mkJnphNJ_K%'9ċ{ZGsC9= A<aH:fr)u[o:}'fXTJ6Acp%+y +aJI_ .t 9WRzU0F =:/07Ϣ@vqm׫y.侲h%{E}hI#21zل"Ūم=x Tإ݆G#uOq rܭ64(tzLikXν(~*ogKy9V*mwQJ}IЯcLKsH|AClUi&* ~v<2qsQWP]JɭdUmAПU@c'[ĬM Z9 y= |Jazw_"I )3ڎi7ڋ#|Iz6fq[sWhJ 49"S=W|uP,Fc~C%=ij05_: P+ %et|̴&hW\ RI XYr'l1Zγ<"#Ai+\)CBV B*CO.WF/g\ t1ӠHY^x) @ JR5)NfDjmKq C7*rŠw=_g9wڦ3$%1a nȮzMgfK(|~ҧH>VkakhDK^0 rْ|#vMbKp{vziijقϺ ic!}H9|Pk?/sxS@ns նc_\u>~mj0͘/Dՙ@JYr='lGׂߋHPFnW~ :3d{`f7ɘ `ܢO?ؑu̝E[fѧ|*zWmt+1kg6F*xk`עnUOt.t.p0u7pAvs#w+:Dx?K`Ԡm:}tѷ9X8s2ql`4Ծ4Ҁ=$I CA(%]jD:) jM6 *o<Oa2aͧ9IRS~cc؍i [88ɛx䨰6gBN"A+ŝr-H]Co2Ua?φ/S:kD@KG3gSzF6-_`B]m{Ժ7x h̼T?qV hm|zA[Pr ilgråһ|JZCYPĺޮOW\ XuNbqΧ)qHjU ֌񲘆P`8]W`' tњ}厕{w=6YJ9a€;%U`ƾZioNaoXT۫ꊉ1 e)GuS6@bdhY)0d9Ҵ^U(VEpu+,y!m6L/ ;l◱JiU[+IT~h@%0.*#`E-7j {ѹRNx 㜶&e{0\<4&x }Kъ4Pw*c$F^K3 [OKK@FNvNx%2%s, tɹ6] |2b)fS#9"% \>41H9Ӛ]*= 9P b.#&J[wc1y<&jXՒ-5igJZ!:h%REU`@!g4 %[mjeɶ3J˥Ġw<&Lg4Une)d?,rzouSEW˻{ĩ%'3q#~_ˏGm_7GČv NI c'=n:x2~Ho&>XGTAX8<&+ˀɣ!8>~ ģC4(]jeDTm<bc #fn摱N&=ծZ&Qv17xoξ?kҀ2Z防Mh$ xݱ0g}N TfD^H[Я2癇(r<Lb]CXQZMCU/xxĩ<|y4<="-A.)GQq'yz# IR}ذ^zDVt!Fv9$99%#oo »RV/Go&3xF\Ѱ\)zn0aM~b(隖\Bɋ̮N5֋w};Sɞ< ʐ{d D/EK(-S<@ "jFOudh!m߃'X8@$ /Q5s$p&y0eu~wȔ29P`|!RB 3X afoܨʗ&k^8'_0%a;-_5j5] ?zw3Z{`U.smNƀ` r9\x AO~Y^VӞ|#4k(rn5JiyE(\Q `iԒ44jKѰGFMLKnUhMLrv*65c-<%cb&Fǜd1#A<wc$1w 9ݧ$h$acqE1?m!mL%/ikab}vR IUJrPfwIX\yZ& :#ch.`w=Nuϗ0E]–zEZu,*#½k2ɔ'jy&r&-r$X \44 ZBLl I ר[PF]BЄKV\}Z9#J2M[Y"'x 8Frf0ʙHlb*sePjEBK^n,gw=cVL$ʙ8f;Ihjoi;' 4@Cږ!h+·8\ΐ<-~yZf]h81 Gnå=lq(4f3M_Lt5?=P,俷PBȅ 85?O.{]ӺrZoh 5@Tr;GKϓ1#~p>b9nLh]duQGo/nޏ'?yţF_ ]َ=&hl=!̷ר}*hBâ`Մ$?gul&|᭛}?j&[]imГbnռWo]דnG[ݬUo3ҭ3@[;0>!Xϻkv-͑lzm}!pl<MNSPؾ<"*4#|]mϹ4^W.N.u~e$D#l,%̓o+Aէn.ͯ,~26[׊G[eIAno- ۛ /fǾm$'ْ-R[n "USV7Dty/5;s<*xxPZNҍL7/F-+_3=(yag8>9Bl+:i8it8MؖFUjzk'۳m预XJNץZ~(<ši%) օJJ[Bנ.ZE^39 4-.ap6(?G>L`jA"'o2P%A]L!]ҡzvF64Ӆ:3*_ ގߵn$wӳQa^ӷjF;%8@@xݻ8ۘ =+r,> \9@* ,/9$g %9קy7R>5]کcw(Q!API8A)4뀢C3Ϊ=- " h_ C x͠! 
蚂Zĕ fgVi)oWv (!9hHLm̢# ^92KsX4)(q6znڹpDz!a/DܺN_^⤴^qI%S ~Hv}XU)Tٽ;S%$5 HI8U@bk BֵdF2VR1Ys4D;!8hY~J> fR3@*@0L!3Ph_K$Ҍh^8GeS%D S+Ɗ/>FaWG֖Q $j9 \C3RPͦ"|4Ā-x( {vL.|UO{-- .!"},/ H΂X}p1ň{A`J%,&t 9\49`aa]ćQfݐܑdJ.7R K[]'e0W @2QBQSWu '`;&v_l` Jg0 ̓Lx7]٭ɺ(XWwvtA%c{ rKs&KR2hfbU?S_ew* o〕mF^`l &DW AIi3A4WVԃ_k4敖ή^B YWp,ajSV%2ʹJ[ŋZֺ(™% vFX:$Cu#$B|i{r o.1X~}]_Tw/ )a*iX^RZ+ -SwZBvI/'r  [y!.ʋÔmDFrH) [uзz\"pb^VŽDi.?̃kVWb(ł!͹,)]J(8 *Si8b%C'=7ԌC`zQ#ye"goO:]$A|vd[^?Z~ 1@~WWݩ$dWxS%/\ |7\ίJ<ܶa!Cpw_Պ7a^ܗ˛d~t&]8Ȃt[=_tft:hX%֯Xof~O.b|uDV.Gȕ (C=0y, p z'-yemd?wdsO%9Aʻ?O<1[]~ ڿ=$LLI{Vw]rĭ̔&,3Jo_p N@y!~Xddw`a#!_V~~KoܿuH" .L T0ܔCGWҏ{=QnJ U'{Paq_yEk@mt W`X cE8DR) *! ,KPYBNDN8s2t>/G>ϑsC dA#QPOAL^`<=/53wCzgV6hIdGf4}AX_1=t? XӝPǥ-:wurs2h;@rtj=^#:7ݦΏNv|.;;>{gPV2xn& G8*LӁWڣ"V$I:%M_}m`Jhcbr2MN&ɠVtkuИboS)^lΥ V٣ JxCj>~3# 2K--R)(>.` a%5/gFRDf2F4[ kI'⭈OVڙ[n]{Br0WyB0o_И?LW\;96F/(P d CN eiۿT^>gr8t#=8S2 2[P_3a|'C 7e!D钏Ue ф98TxCF"_fN.V jQ~*$FՃHӐ2"b1Ӈύ̮[\;w*&$:[meWv9s$x>.z?7TYhfv)juݾEdDn{c|m=z?Q*A{k};\ݸ0;,H]xO>lbj?|k8&?Oҵ>^\'McjmOw׼+Zb? uq ~rA[HJ;Ʈ_YOL#j #*QPHb @=޲J\Հ b t]WRPTc B\Hcc?BPkqJ<h8Z4r)e*u_Is#J[{APK8*`Y[ N)ZҶ=C`:*G$7<9A>@OK#x.Nǹ G1?7̮:\!ȯ-pOFխz6ڹF/}w SOWخ/xg3*ohgnuoǙug7. ]7S˛䋚Od[̂pM> J+>^f)ӛok[kD`R?M/v.7R)0Bܞ1"Ғވ7SAx\K\]S<5W )P>0\xCL.w#`F)B"֕ПwJf^"ɀ%xX,CO>]7=U"T)|Hi@%0 ]Xl  L ~%0 S$_#[1`K: @#9^St6:&i% ӵIHu 2;wPcy 3WW?A;OncšaJCZv}@ = N{"]AoFVGݝ:f\]G.JIT-x\BE.aD.;:LJߝf;]C28 ]i7HCG0E98Y]^kršJ.  Iu,v@TWUo \ZaB0,qW5Ҝґ(cN-FNx@a摹{d{|3>?\vK.MY±.#Ǿ_\Q1иK]IJ!FcZVR (˔!Jg[ɩJ`JEf.TYGn,ut!@ F@EaR>j+{}(A[5%X_盒}#yF=FZ!]8Jh$)qɡ,BZ qZk55LR0$W@ Ҍ#MhڑftMA!ŕa`sv$LQ$RWfm4 - R YΩۉ> !Z|UqQ;3vT(5Q,utɀL9N{5M \O]!tl=Tggrnymui}z\E+Y?O߽ƍ<=?ǧ#m9|n}zl]^VX/{7+|\ c),?#~hߙ.;k>;Tݦ\n=#_Y eb<>M=̎6q\[/5CҞi"ij~=|3ܢyr㥆:&"ST]R6$fm 釄9)N& ^ u;I9ur]H6s">{=jYRuJBR3w>Lr=[>Q `A^ Yjdqw2&*Hs:^j;'s]a0Arأ|N=E\hhjdeLPB|ys mţL Bkf،(uQRK5)b`v (FK+^SY>={e!OKwp:tf5 3N1XC vlAYa:7-1Y^YGXrZ|>1rů3ߖ~igy!ߔd5D},1 dgt#cq23w&3GY,v ;wl6_Fx5Kt!VZZUa95E3zyz؀۷w2B$]<*ap|2|pK .*^do(j-ؚ#$S$!9m!`[*%R]2M*\-F/iW[LڥΣC6H3XV pgzLѯA=tMXo36?*3vF_}C8H?@0fӗ R[ǻU%gt@n~YC}0e`} KFzDGo;lAGv% nl Fw.s216scdrpX/CKH-NɿqI*sߋ8:#e;wMN35 K)܌@䂡!o;rpVt M 3vs0ޖ'DHPogzɑ_2;(O=5 `zgUWE $;-lPʭF]J1_A2-:Tt#v>.:y5d^|N ;/cz 󥸒\T)5@T«uMA~@r+!Op_/gS~-lXBRsٻt.{eҹ]\vӂ[L )%A(A᫄,=>R'5LT2Y 6ٙ(bԟ˛co&QlE&?' 10"{(ê|d)T^u4/?ڜ%%Xˌ4AZ=;{x9_Dyy5epgaZ=芧F%R WdZVIE'>208x8닞8_p*<|Hu@[ !' =&dlM 7KA9dp'8 똞ҵ7L4}ߑXJv$7ǘw.cޥc̻1fuGReREØ>bǑ!Q !ՊDs3Q\;m_jGˏ7]& =ĽMv_,^)]zx) c/ ^3{!A!XH]"SUD Xb.r 0eKZŴps2$bԎ8_ Fn,Cq7$x7Q_;_Y\Էŧ@6((BT"B``Ee01E*bW4Xq@|Pb$a!hc,L ,t*2ܠo2QU$QAiMsnY=c6m@SL3 Q+ 3> QcVZi0fSP1IsIf! < s0s흀!L"R60Ô5Ǫ%PFl/4  % ޱ5VWr;/#3g]>1Kȱ$Dl&ˆXKA}HA% 8tI<%5OA^jgJ5uPC%fyi+EW*tPǼDw%uM]G@V:BW[Tq|C_ ɕQC@LTmCBӎ NFӕZ!2[?jq\ IWX)o'CV~mvvJY"BK@RT].Qd>sbL[䰚Bc4_CQ_g_'8 gPO:߸7*@Rʙ]}xlޥicDQ]D& OFS=!Ljb) Xhy4%֕CxӔL]Dy҂ .tc1d'_(VkE?o+9-ߠq8 Y*C"Dy-AhlI m1#iHawK Êk!KҍM]9h/zQt #RW'Džo>ܢhN h]i=O\9m4^pYzZ:ZVP LCT Vfmk)b;̈ [/a TX7*x j*&;NN4XD f0#BeX#yX5ښh% GAˢ x`N!,j Ƅo!X[[ÆYOhJm;ߕLJt>wV bsu 0' /%\/30J~nY) 7l?H>yUeP~)"VC J(.e+b=W).&}(~gflo|'kdKÚoO> s%S)s+Z_)'kU1i0Z"$9:#M| 1pv$ǰ(2#%= 3+D|op5*Ki,h"8aÞ+ 6ZYs11XZ2ViY^9N 3lY`u` ;'b9!MC ,5 6jR*eCoQ۫cj6(^Ik82`)xDyyFxбNN_)A<h6g35mb[ #3i'b0GiM" NcJ,awIRpb !5znbjTㄉ,,cjlj[tװ\B% ?_/LL(5ڕ [7Jl_3Q5\s. 
*A},|`X)4 k%pi\E‡d N"s>ҧi\װzTm$,9bYf/M%֔XA'NWKAM MW A'Xu*"^Hbi ZcwQ ALvAD㝨 +<`"7uTib㱴xUD3U "IokHE!~qauqW:衹xM7zYPł I{hktV8"=Z!˔5X7Z }ycyi#zcWR5s%Sxkc22aT:,|#0Ĺ&oY~ԅ٬d, ~/'.?&OھfF(~FATD҉d lqIL-)Gv`FRnyo12  ic"ё%U$Ex3\J>G ,a!-rɫ sČy1m5CZ қb<Gs*Bn1}X-;Ew1Ŵ?Lu=,`+Z+j/V5Z3sRI0FlQu5) 1䰢eQu6sݞp学7nYv{~uG̝ʹش_cW=_}mQgxͳ~zp-֋j2, IPW]_6WQihw{XW{O=֟BQ2&u<{ nB؛)osl'ҌMY`dWa}^AxlsAWx[ G!#t\QCR7Ж }c@ڬDA\S&IΏ jU2Aic*a ŠP"V(`FM;Ϩ;nHǾֺY+k|+%k]x595w&K"WjR-VLʎELUb8ZWWOwPUӫbXYنY^*UK}QRbթ%o-N﫠=q2O\u0X0=`CR5;V9Mߺr~r u70tnuTgƇnV_ @լ^Ҕ^T9k)G'QmTB,a&r ;ΏVZ-[W>ZjεQp8Z"L3FP`"uIUH|ZbnW*&d="Wn}P+zYvf!TU^͖Ej뀼P-5y/.rxdDަhd@ěVLF4rBMMw2?q+BBp_.h=JG3]h(}Ά V5T:#'V ]}oʰHZH+,ല#0拫0e|'n񍅕kJا[;e&f@H4y|/>?'<8If>է۫/fq3ڛͫoRz^ӛͻ<3N4%Aϥ 'Lg-۫FAYj/|7ek|n` G̈ips5:)>()_}IO0s/W 9Бi$49 6O|5UʯbVCrox7kTL5Ci>ЍcV !Q#ŝ86b=-DIEHT>`I9fz4qh.:FBH>zÁ1MQ"DDJF(&AhoF 1 L 3׌@ʍ 1Po6HSjk!Ds Y {O% Fb,k >*a/`>B, *Oݦ)Bm jvUY9d|V <bMnj:xX++pf L/n> }>E,.j2iNg/c&?N2k-n>4I(k (ku k?5=L^>Q*a)Hqt},~d|PBjM %-NVψdB Vrl#{_r1Щj^froSrM}Ht}y6K&rJ+4}P0CY#R9H8 .sN(I=5@%K=Ve?.{Q 8賔sKUnW?;Rԁwuią=Ѯ0hJ=^p+yrчFdmp׉9?]^Ss=)9gkzDqBH29Q܋ȍ4 Yϰa{mSnBfIS&0RDy$V! 0e윀NXb:)#R ;AAB+ s#6`‚b ,qFx%9֞ o",T-.UeQzE^aL8o"ÕaLe+x .ǹ[ܵ,,uvYY"H}T1D 4ČY+YtyҘGڡ+Ӑ -HJ{mzLKX>Ψjݻ;WsTp߭K0?eX|d3nuLu*?m_s4:#WٻFn#*#:ߗ}+/qm5LRqR߯19#b8]54~htipBXLy!T;FUi;f@伥[Z٣>6hYyĸbI{.HŕO+c6ll,Hjb%Njց 䍗"ݙ&OU ٙ=^/C7!T %gEC<;x!5v,X9X0|iУMt"_>̥}780\dA/D5ݠKĖ9O?Qwu 4W׋i3QU?yg*n=t.)7IqrN" ]xu`,5tT:qU(23`RIsKܷz7лʨ21m=N2Ю x ] FX06Fϵ%>JoRAgFRH$-CDjDӊFj;(LjˣR1) /dfvjD~.WhYx)Q +3Eq6shk>Ӓ[淓M <@ 4ZQf 1`fH(m1TIEI;أ]45]F:T)fjБ`xbF ->5AF*gY*r9-]nݘ A<=$K6d.~2p'L1{AITu;_|JiCث5|rrk.5 .z"5垳BO noi..Br{ˁdIkKL`G0CWEX̓O5\.1ošw>ԁݐkXCƌ3K=mV);vw, $dTݸG(1jHF![AHbg/+VM'>v3nwUbN-f oj@N$e w3aT.t9#Rz ]N3ɤ\UFrLQGĄN-|9H HLfmTIG ЬorupWdbL;[8(sJmC1"^ VIP;ፍHP K,<};g3)%Wѳ( DHK^@ khtL[ (- a2= m) iAo@}^h^~"lofP"mʡ!`**nź1S'w<%) pԟU-/Rگh#D/W-?%+&D|k 7.LUT3QLqkcSB]-B$UZK|j K@Xx=帅:%R2P{@TVGkɨT/g.**O RjeJ+Hf(K(1  Q}P:-e(4Z8qpSL15Qw4 x5 w63P+0 2T+ݕG9tAОP+S; ]>^g6̂H(JN5L1qjqj5ZMN- 觽E+|3cOSpe !FMNY,Z9S q!`ZU_!9S?ߩ"2W!c^XF6&Xf?ŸgoV7?&:<]bɏ"ɷil;W?=(דX?үXGDM ml|nt.n0ZC;Z+(mQ$պxI@Jw1JNg`j> zu%Np;o0"> $Dj.h J2(fсӅ )+P *_LvİRlq[Hs_/U S͔zV})n$>nA~ *76EBV3<F=HRP"qK(+ʥ_<ɫ?u!o^U<꛰y_ʣۛ*cR" ,?d@M*D+̂yS~^.B]Ulٱ#H&y EU{޻AJChH^8 ؈X3nR)WFP^Qsj>m8МRZ6F"6G 0!n Cp(.H-Nqo֕(K:Yq4'v^1Oa"q>G{ϓ /M$ąǟMJyv6^9Iut}NJ4wipG/G# OM!OICHkuyu21;#.C#T1nKj],nM%*bf~'Tz6ȒP-fD6vbJI?NH*g@μ0 \}X5𻸉~m3?MާHR 4b#=O+Kޥ W]ǚVy\Cme9OtM%z3&6\s cְu۩epwE-J]BEṞJG)ׁN0ESQB8I8T3SzVr^T[lu83tN|EmY|jxg*A !J* Dtw>< *}>,ٱ{gyiA53v1@ ՞A2w3o.!Yż "oMl.ÔNѧyjKu/3٘{ ͩ fJJlQW:g4J;*?#| ]_noJ&!˟fuwLö㇕S(D&4LJɕX?8I{ZF<_]\~˛kw~e 1w_=yj4P͓M|orSԈѪi|jQ(Ƿ6=]Li{['OLj4떶>\̈́pt8qsբlc!P-4:`6f%_f=T|k_ZWtro}8WOz2*ق瞚gYW}[ [M_n-=.nmBo0mnY!#FN'R5MG[2m-AzXabq5\ urO7P圌Okrr_q,^J nU/_aeA\rH~h+3RQY֞G%0YߋAYLSE ׎hkb ;SϰiŇ(ZEdLj]z9ŧ|s? 
w1Pt9xp&/k}^py˫?6]$?2n=[xo]H[/>p]M쯠&VOz2ƿ@1;awMOeO/,[grH DïQZvA>N?սٰ7j2T[U::'&uCxhɜ(1.cw* H8ń;&Th['+i)' _ NQ7 ֓[$mb \JqGQRNup aD9CԞi !IA4EA9ܮI.T C(ނ\*G0x._!U~H)N!)i;WVuY//d8 &/30xu֖=E)Jf7[؆DR,~U,ɺ,WOR UB~&N,/$^{{Q٢7k"@W)^1j~h'׳JYt:9Ox;Bs3 _nSx>6c*T.N~cJ8,,BeF8hR|S0VjPcu&l?w=?wQ;#VP:ׁ"NPj1hߊ@%D`=kpQDAc>"g7˝%.@ |DcT 2uTb-:D'SX0#6TDV:0@Aˠ`a<+ͩ&`Tg)Z}Զޜ(jrDUV^ũmS-$è Uq|AGo7rѬA.uu}וR sX!G4#+H%j\e(E%T3^IfTC͋^bN'} *f2>T2kb>ԣ!-6-蜠EV4t)W0#A eXYvXe`Rjʁa$+F1Fy =T("Z.Rī{>qWB' |֢-))(BD%ĹZ+gMf./]!jhOƌ26^zL4xOJ],p4EN&kKr`5ZWs^kfaUΗi<9xhVqmqxG?=BSU73֫&\S9);s]#jBbUD]eW:/FA`9j>]Z1ж-O ,Eee_6a i (\x{_Znmı;vb9'9X T7{/IU,+Rŭ5LIG3Gt[L>NF}9)1ckwysGU}ϻߊ1f]>,(cIyԸ [l>.03);iR/u"y鈸F1cm/n..W9+-ů y"Z"Sɔ֬OqJhT9rޝT܆S[\ au+9:!fͮWnmH3 `SID lR rD:p?V)S!!\DdyǤMAndm u2qoi匬ҁfL !\D˔=Fۋin=]&Rգ}4m\aVc3+ؚRuHP*_+7A9 jWGdhY x`yn,Uݲ0Y;T d=yˉj?ZթM%_Oj߬9\9\tT/WYc?L]1k>粐Q~Te-;n)5]ňD}ـn@]Mײxm\жm^4Բ37쀮&j0w<TnťBo /wQgIAt3_O-dKv~>WT(MuJe3 B)m%ԙc5%Dk(.d%]觵HSeUK$?hWD|؆1Zkr,U42pGbICXZ(:z `Mqk]rX[+0*hpKb ,lM|q獅RV`R1r"g`xnFWg zpvWf~vrꋹ~u>V]J2Uժ `Aq*5JsVӴҜ洺T5b5"a{$JD &Z8&r6b4 }32[3vTJ;o<9@>T%^Ndi%Mg^ l[~8-=9J5h|9৳ 3c Oșcw|(p1rTԑswg_>o[JUuߟ+ˠk(;k5\E=Fޥt'ZC:U2VJ 'ׯ^yoGI$ Bc^]yKgT(>,FvQs9BR=DŽQ {*,^ -lm},;9X)H 9fEpjl;ǥ ' +ª- T!L-MCZ1|$6_yC,0Wrۃ/U$aT~+/3oS_Ja[wႌƲ3,9 ;աV{HapsʺaLw"X8 =} r1U/yڥvZ;S 9yO\sITgǫ=wVg.x=&8}8<=,Mr\NGӔ6&xRzf~oV~ԩqp:V@34Y||$^}^{)LH0tˬzgF`Ӥ\2<7;0faT!9 6x6=AD[ęKQZi[͕LvLCu U9a)D俳c%s^[ Ry:NYX5N_)]MoTpCӪGܻI6X )Eɭ4iT4*KPfLtQȘX;mynSBD| oNBxK@}HcVX5:@(<֊|+I XZ% *)X(BRqE[wbcȔc 6^ϐ&&DNpAd-F+ HxF2ưmt8p8rkP`)A"G+fʹf{LGZFELA<QбH)*ƀ2Nul=BܯXoNS!z:֭Q2/إ鑿Y9|`}.hiyEozI}\*91Fx\.OoWfxrwo;f6ie~r/7)<AvL%ۣ%JkƥFoq(,͂ Z *T2<۠t"#CH]AN>>Cqېg.%2E*C].nNiuM2@@s[ELWՠRiosޤp7TRN(i;`4ќr9xrRiE,,Fd 4aVZ)'GU pـlD&68*!F[g4-z%7S\mW~Uڅw_{yqq3̿ޗ׳񗫿oj0*Ҳ0L1OJb#H TuÏ4ZY\bZ>(aV2@bcR.σDSF)jy_ֺc%9l5)摊<Ɣ#=gEG4XKZmG12 EWT)= DwDѨD 0")i=ˤ3BECZ9Öy̘E%yKlE_`C^/}F;u$`0\7< <(Lrya)99CVy45XaeU ͩZW2Ԣ:ZDbVǨqp `U(]j%^4IFqǤjjx}5g{NH bpay4$)J),Z.`^e5X|?el2Y$sn\i61{4Yo<ց!](r 'A2[fLhoӟH> 9\wT%썅{bg^" qCqg=sK+C|v޹BpL6x3<R r.Js|8S==Sͼ_nCBɔƒmi7 gJ1>hmr׵v+hvkCBa?&nBWn'J1>h%[@s[ELqBN62xMpV <ؿѼi"w ĺ^.j 2A#p7ꌄ"N U(xla+# ̆!$k- XsyKD[IUN 3 UNF=4k9Hمs 5|lcz ǜ Q;N1?/^ zVtjF+3ǔMD˱yC9-,1~*<TvHEogὧ!C8KJݙRR SٟODsX>:ȳWgYD,h'Gq򹗁βێbrrHkD8TU$_>;-ftXڴ`֣ }P0PK[6jCv[|P+g A47ӳ`g_WB)n3i@gkrKQ4Hrj#DEq!BqjjtJ4ౡذ4vc> U=Q.Gifww GW08}g> 2~/!JlvnF"QzV@uC痟DW~EOQ Bv[5`92ʉ&чzB4)쓣 Y?ҹ09/ݰnA\̯nLjbR__+PWӻ?D*;TjFI"%U*yߛE,1X-Ur׬E0o-Bn5` 9=4nQãxL˹KwNrX]G*͂>"4?c&R-yrZY.og@E2? #/dn8[LfU\VfY5HpMH4m=t΢^^9Hu="GxKJyp h]0j:]g+"Ȃ9G1p˂+)w zh(PH R o5ZcV8q.x1K&]f*5stƈqˣTM/ʋ[۟%h ^Zq0¡#Nh-Z" AYB6/`;QQK]_B,fKjr _}+R-}hWANgiwFGks\ngkBBp-)*NnFTK$@Yhi eXlg9?ʹAλ}fH#;,<ϐsLzQ̔b1%.) 5D[Ewh1)wm@ӷ٘ZYdt 0(d/;cw~8FH?qniqTS eWͤpu<ݩ3zrcgb[ x ٽBwջ=9ܚR r<Ϩ? 2ΜLg|hL #hOI gnN3h3\J]h2S5!!_,SRe>N "8{JdM$(suzVс|hMLP&&hthidI [iN& mͷ߶I׻p:6fp}e=`mrXqO՝,S X̔Y' 3lw}n7矂ɢj>䙖4n%1 /ؙEaJR8cpߪ"v0XʎgO`ECJ"""8Қ<}su[ S PZ/a%`dIQU.`GX}V{1&Vϰ}0֨lgi9Z$/ܹlݸ&.]н>Z7%-aC&W+8eΔG<ڰZ{o."ͭDHfcieLd:x%=~-hLѡ,, isP\H{'ά_o'z>LcE_?Osy1J3Pxq&cS˘G6?|M^c(c#ذ?0%F=Xn/7O1x=|XL<}X gNZÆ @c`17ĨޟzcYή%%89Jp!RrrH+OUp%ꆟ3p03YPD\}-l.1j WͲ27b1ʡxU&l`/~.xz/0i#qxrhwM f \0p djɝ L lY16kj\>:Aia뇪zIXY6$?F1,0q'h8M$llk< Y? ُ 1 ء)KxzhY 9nȤ{cdH$yG>WQŞ _ZhZ?ks,Bޢ{Zi WC)i◭00b8rf?'%ǟ>ӽP &Iˎ/Ă31ǣ ˧SFތ%Uq Na{$u =Ujv9գ/c\+e%|?gpy O->勧!e.m>ir]decm+ iyrbxV"=! 
10610ms (10:57:14.569)
Jan 21 10:57:14 crc kubenswrapper[4881]: Trace[1415831381]: [10.610859008s] [10.610859008s] END
Jan 21 10:57:14 crc kubenswrapper[4881]: I0121 10:57:14.569965 4881 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160
Jan 21 10:57:14 crc kubenswrapper[4881]: I0121 10:57:14.570569 4881 trace.go:236] Trace[270957976]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (21-Jan-2026 10:57:00.226) (total time: 14343ms):
Jan 21 10:57:14 crc kubenswrapper[4881]: Trace[270957976]: ---"Objects listed" error: 14343ms (10:57:14.570)
Jan 21 10:57:14 crc kubenswrapper[4881]: Trace[270957976]: [14.34349883s] [14.34349883s] END
Jan 21 10:57:14 crc kubenswrapper[4881]: I0121 10:57:14.570622 4881 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160
Jan 21 10:57:14 crc kubenswrapper[4881]: I0121 10:57:14.570621 4881 trace.go:236] Trace[60241337]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (21-Jan-2026 10:57:00.503) (total time: 14067ms):
Jan 21 10:57:14 crc kubenswrapper[4881]: Trace[60241337]: ---"Objects listed" error: 14067ms (10:57:14.570)
Jan 21 10:57:14 crc kubenswrapper[4881]: Trace[60241337]: [14.067103256s] [14.067103256s] END
Jan 21 10:57:14 crc kubenswrapper[4881]: I0121 10:57:14.570668 4881 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Jan 21 10:57:14 crc kubenswrapper[4881]: I0121 10:57:14.572206 4881 trace.go:236] Trace[827334694]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (21-Jan-2026 10:57:00.237) (total time: 14334ms):
Jan 21 10:57:14 crc kubenswrapper[4881]: Trace[827334694]: ---"Objects listed" error: 14334ms (10:57:14.572)
Jan 21 10:57:14 crc kubenswrapper[4881]: Trace[827334694]: [14.334322514s] [14.334322514s] END
Jan 21 10:57:14 crc kubenswrapper[4881]: I0121 10:57:14.572228 4881 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Jan 21 10:57:14 crc kubenswrapper[4881]: I0121 10:57:14.575546 4881 reconstruct.go:205] "DevicePaths of reconstructed volumes updated"
Jan 21 10:57:14 crc kubenswrapper[4881]: I0121 10:57:14.580548 4881 kubelet_node_status.go:115] "Node was previously registered" node="crc"
Jan 21 10:57:14 crc kubenswrapper[4881]: I0121 10:57:14.581014 4881 kubelet_node_status.go:79] "Successfully registered node" node="crc"
Jan 21 10:57:14 crc kubenswrapper[4881]: I0121 10:57:14.582469 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 10:57:14 crc kubenswrapper[4881]: I0121 10:57:14.582529 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 10:57:14 crc kubenswrapper[4881]: I0121 10:57:14.582549 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 10:57:14 crc kubenswrapper[4881]: I0121 10:57:14.582575 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 10:57:14 crc kubenswrapper[4881]: I0121 10:57:14.582588 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:14Z","lastTransitionTime":"2026-01-21T10:57:14Z","reason":"KubeletNotReady","message":"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?, CSINode is not yet initialized]"}
Jan 21 10:57:14 crc kubenswrapper[4881]: E0121 10:57:14.609406 4881 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"message\\\":\\\"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?, CSINode is not yet initialized]\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"si
zeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v
4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"26a8a75a-20da-43b0-891d-353287c7b817\\\",\\\"systemUUID\\\":\\\"5fb73d3d-5879-4958-af84-1cb776cbe5bd\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 10:57:14 crc kubenswrapper[4881]: 
I0121 10:57:14.617841 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 10:57:14 crc kubenswrapper[4881]: I0121 10:57:14.617886 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 10:57:14 crc kubenswrapper[4881]: I0121 10:57:14.617902 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 10:57:14 crc kubenswrapper[4881]: I0121 10:57:14.617927 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 10:57:14 crc kubenswrapper[4881]: I0121 10:57:14.617941 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:14Z","lastTransitionTime":"2026-01-21T10:57:14Z","reason":"KubeletNotReady","message":"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?, CSINode is not yet initialized]"}
Jan 21 10:57:14 crc kubenswrapper[4881]: E0121 10:57:14.637074 4881 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"message\\\":\\\"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?, CSINode is not yet initialized]\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"si
zeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v
4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"26a8a75a-20da-43b0-891d-353287c7b817\\\",\\\"systemUUID\\\":\\\"5fb73d3d-5879-4958-af84-1cb776cbe5bd\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 10:57:14 crc kubenswrapper[4881]: 
Jan 21 10:57:14 crc kubenswrapper[4881]: I0121 10:57:14.646389 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 10:57:14 crc kubenswrapper[4881]: I0121 10:57:14.646889 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 10:57:14 crc kubenswrapper[4881]: I0121 10:57:14.646972 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 10:57:14 crc kubenswrapper[4881]: I0121 10:57:14.646956 4881 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:56862->192.168.126.11:17697: read: connection reset by peer" start-of-body=
Jan 21 10:57:14 crc kubenswrapper[4881]: I0121 10:57:14.647172 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:56862->192.168.126.11:17697: read: connection reset by peer"
Jan 21 10:57:14 crc kubenswrapper[4881]: I0121 10:57:14.647089 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 10:57:14 crc kubenswrapper[4881]: I0121 10:57:14.647306 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:14Z","lastTransitionTime":"2026-01-21T10:57:14Z","reason":"KubeletNotReady","message":"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?, CSINode is not yet initialized]"}
Jan 21 10:57:14 crc kubenswrapper[4881]: I0121 10:57:14.662970 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 10:57:14 crc kubenswrapper[4881]: I0121 10:57:14.663014 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 10:57:14 crc kubenswrapper[4881]: I0121 10:57:14.663025 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 10:57:14 crc kubenswrapper[4881]: I0121 10:57:14.663050 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 10:57:14 crc kubenswrapper[4881]: I0121 10:57:14.663061 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:14Z","lastTransitionTime":"2026-01-21T10:57:14Z","reason":"KubeletNotReady","message":"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?, CSINode is not yet initialized]"}
Jan 21 10:57:14 crc kubenswrapper[4881]: I0121 10:57:14.672993 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 21 10:57:14 crc kubenswrapper[4881]: I0121 10:57:14.673866 4881 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body=
Jan 21 10:57:14 crc kubenswrapper[4881]: I0121 10:57:14.673975 4881 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused"
Jan 21 10:57:14 crc kubenswrapper[4881]: I0121 10:57:14.678175 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 21 10:57:14 crc kubenswrapper[4881]: I0121 10:57:14.681586 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 10:57:14 crc kubenswrapper[4881]: I0121 10:57:14.681629 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 10:57:14 crc kubenswrapper[4881]: I0121 10:57:14.681643 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 10:57:14 crc kubenswrapper[4881]: I0121 10:57:14.681668 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 10:57:14 crc kubenswrapper[4881]: I0121 10:57:14.681680 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:14Z","lastTransitionTime":"2026-01-21T10:57:14Z","reason":"KubeletNotReady","message":"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?, CSINode is not yet initialized]"}
Jan 21 10:57:14 crc kubenswrapper[4881]: E0121 10:57:14.695198 4881 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count"
Jan 21 10:57:14 crc kubenswrapper[4881]: I0121 10:57:14.697277 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 10:57:14 crc kubenswrapper[4881]: I0121 10:57:14.697333 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 10:57:14 crc kubenswrapper[4881]: I0121 10:57:14.697346 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 10:57:14 crc kubenswrapper[4881]: I0121 10:57:14.697374 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 10:57:14 crc kubenswrapper[4881]: I0121 10:57:14.697389 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:14Z","lastTransitionTime":"2026-01-21T10:57:14Z","reason":"KubeletNotReady","message":"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?, CSINode is not yet initialized]"}
Jan 21 10:57:14 crc kubenswrapper[4881]: I0121 10:57:14.800767 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 10:57:14 crc kubenswrapper[4881]: I0121 10:57:14.800844 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 10:57:14 crc kubenswrapper[4881]: I0121 10:57:14.800858 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 10:57:14 crc kubenswrapper[4881]: I0121 10:57:14.800886 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 10:57:14 crc kubenswrapper[4881]: I0121 10:57:14.800898 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:14Z","lastTransitionTime":"2026-01-21T10:57:14Z","reason":"KubeletNotReady","message":"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?, CSINode is not yet initialized]"}
Jan 21 10:57:14 crc kubenswrapper[4881]: I0121 10:57:14.807122 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 21 10:57:14 crc kubenswrapper[4881]: I0121 10:57:14.812477 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 21 10:57:14 crc kubenswrapper[4881]: I0121 10:57:14.904163 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 10:57:14 crc kubenswrapper[4881]: I0121 10:57:14.904239 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 10:57:14 crc kubenswrapper[4881]: I0121 10:57:14.904256 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 10:57:14 crc kubenswrapper[4881]: I0121 10:57:14.904302 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 10:57:14 crc kubenswrapper[4881]: I0121 10:57:14.904320 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:14Z","lastTransitionTime":"2026-01-21T10:57:14Z","reason":"KubeletNotReady","message":"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?, CSINode is not yet initialized]"}
Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.007417 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.007482 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.007495 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.007524 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.007534 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:15Z","lastTransitionTime":"2026-01-21T10:57:15Z","reason":"KubeletNotReady","message":"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?, CSINode is not yet initialized]"}
Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.105131 4881 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 18:55:36.033715398 +0000 UTC
Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.111255 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.111308 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.111319 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.111341 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.111354 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:15Z","lastTransitionTime":"2026-01-21T10:57:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.143285 4881 apiserver.go:52] "Watching apiserver"
Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.147294 4881 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.147914 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf"]
Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.148293 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.148460 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 21 10:57:15 crc kubenswrapper[4881]: E0121 10:57:15.148611 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.148948 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.149015 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 10:57:15 crc kubenswrapper[4881]: E0121 10:57:15.149045 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 10:57:15 crc kubenswrapper[4881]: E0121 10:57:15.149045 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.149164 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.156569 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.156703 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.156702 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.156844 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.156875 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.156893 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.158838 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.161350 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.194035 4881 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.205025 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 
10:57:15.205322 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.205350 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.205374 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.205393 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.205419 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.205441 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.205520 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.205546 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.205568 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.205593 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 21 
10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.205620 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.205640 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.205660 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.205683 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.205708 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.205729 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.205750 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.205802 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.205831 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.205859 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 21 10:57:15 crc 
kubenswrapper[4881]: I0121 10:57:15.205886 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.205912 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.205937 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.205960 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.205983 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.206004 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.206028 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.206051 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.206072 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.206092 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.206113 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.206135 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.206155 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.206176 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.206203 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.206226 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.206249 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.206270 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.206294 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.206317 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.206337 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.206456 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.206483 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.206512 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.206572 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.206597 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.206618 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.206639 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.206662 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.206702 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.206729 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.206754 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.206777 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.206821 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.206848 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.206870 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.206896 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.206919 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.206926 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.206947 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.206971 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.206994 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.207018 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.207039 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.207062 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.207084 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.207106 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.207128 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.207150 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: 
\"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.207172 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.207197 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.207224 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.207249 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.207269 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.207297 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.207323 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.207344 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.207366 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.207386 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.207406 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.207428 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.207449 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.207472 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.207496 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.207538 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.207564 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.207606 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.207630 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.207654 4881 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.207677 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.207703 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.207728 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.207750 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.207773 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.213495 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.213629 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.213658 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.213684 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 
10:57:15.213715 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.213739 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.213764 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.213812 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.213841 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.213866 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.213888 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.213913 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.213951 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.213982 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 21 10:57:15 crc 
kubenswrapper[4881]: I0121 10:57:15.214005 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.214029 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.214055 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.214082 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.214104 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.214127 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.214149 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.214169 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.214193 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.214214 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: 
\"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.214237 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.214257 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.214278 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.214298 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.214323 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.214346 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.214370 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.214394 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.214415 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.214436 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.214461 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.214482 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.214504 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.214526 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.214548 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.214598 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.214624 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.214645 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.214667 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.214689 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.214721 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.214745 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.214767 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.214807 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.214830 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.214853 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.214876 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.214899 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.214921 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.214943 4881 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.214966 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.214988 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.215009 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.215032 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.215055 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.215079 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.215098 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.215121 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.215142 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.215164 4881 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.215198 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.215220 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.215240 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.215264 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.215287 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.215308 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.215332 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.215355 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.215378 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.215399 4881 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.215420 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.215443 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.215464 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.215486 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.215507 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.215530 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.215553 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.215576 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.215599 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.215621 4881 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.215643 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.215666 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.215687 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.215711 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.215733 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.215752 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.215773 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.215820 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.215844 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 21 10:57:15 crc kubenswrapper[4881]: 
I0121 10:57:15.215878 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.215984 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.216020 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.216043 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.216066 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.216090 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.216118 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.216142 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.207223 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.216127 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.207436 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.207819 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.208245 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.208496 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.208904 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.208938 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.209141 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.209201 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.209386 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.209387 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.209632 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.209687 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.209827 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.209907 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.210014 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.210933 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.211207 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.211284 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.211285 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.211376 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.211496 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.211558 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). 
InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.211593 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.211603 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.211746 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.211893 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.212060 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.212072 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.212105 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.212259 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.212283 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.212329 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.212420 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.212416 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.212625 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.212655 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.212686 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.212774 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.212848 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.212890 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.213035 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.213059 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.213100 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.213130 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.213249 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.217032 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.213330 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.216136 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.216494 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.216733 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.217052 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). 
InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.217454 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.217758 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.220717 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.221008 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.221144 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.221261 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.221345 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.221878 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.222132 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.222180 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: E0121 10:57:15.222266 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:57:15.722238804 +0000 UTC m=+22.982195473 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.222790 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.222925 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.224088 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.225431 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.232242 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.233963 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.234819 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.242085 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.248281 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.251779 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.259368 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.260266 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.260487 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.260495 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.260554 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.260566 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.260582 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.260615 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:15Z","lastTransitionTime":"2026-01-21T10:57:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.260969 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.261250 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.261492 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.262554 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.262795 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.262841 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.262943 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.262979 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.263076 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.263248 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). 
InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.263359 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.263687 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.264175 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.264223 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.264272 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.264356 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.264629 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.264656 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). 
InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.264439 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.264670 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.264693 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.264879 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.265335 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.265370 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.265487 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.265496 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.265898 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.266095 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.266127 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.266118 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.266328 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.266594 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.266626 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.266714 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.267235 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.267318 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.267389 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.267557 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.267738 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.268020 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.268096 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.268358 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.268382 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.268510 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.268549 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.268629 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.268720 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.269259 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.269322 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.269530 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.269588 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.269651 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.269756 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.269866 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.269951 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.270046 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.270085 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.270157 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.270252 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.270384 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.270434 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.270524 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.270608 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). 
InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.270797 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.270817 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.270995 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.271211 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.271250 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.271253 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.271275 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.271472 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.271570 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.271595 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.271596 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.271712 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.271983 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.272089 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.272210 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). 
InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.272292 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.272668 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.272646 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.273018 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.273108 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.273348 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.216168 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.273454 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.273479 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.273499 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.273533 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.273531 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.273562 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.272889 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.273743 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.273885 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: E0121 10:57:15.273932 4881 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 10:57:15 crc kubenswrapper[4881]: E0121 10:57:15.273980 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 10:57:15.773965832 +0000 UTC m=+23.033922291 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 10:57:15 crc kubenswrapper[4881]: E0121 10:57:15.274032 4881 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 10:57:15 crc kubenswrapper[4881]: E0121 10:57:15.274062 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 10:57:15.774055604 +0000 UTC m=+23.034012183 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.273702 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.274094 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.274114 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.274132 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: 
\"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.274152 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.274169 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.274187 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.274205 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.274226 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.274242 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.274264 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.274357 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.274380 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: 
\"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.274394 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.274403 4881 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.274422 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.274442 4881 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.274455 4881 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.274468 4881 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.274483 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.274499 4881 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.274511 4881 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.274524 4881 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.274535 4881 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.274548 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.274560 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: 
\"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.274572 4881 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.274584 4881 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.283322 4881 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.283774 4881 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.283816 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.283828 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.283840 4881 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.283851 4881 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.283864 4881 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.283874 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.283884 4881 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.283899 4881 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.283908 4881 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.283919 4881 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.283929 4881 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.283939 4881 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.283949 4881 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.283961 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.283972 4881 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.283981 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.283989 4881 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.283998 4881 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.284007 4881 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.284016 4881 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.284026 4881 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.284034 4881 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.284045 4881 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.284054 4881 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.284063 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.284072 4881 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.284082 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.284091 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.284101 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.284111 4881 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.284120 4881 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.284131 4881 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.284142 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.284151 4881 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.284160 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: 
\"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.284170 4881 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.284179 4881 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.284188 4881 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.284198 4881 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.284209 4881 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.284217 4881 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.284227 4881 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.284239 4881 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.284249 4881 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.284260 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.284271 4881 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.284280 4881 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.284289 4881 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: 
\"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.284299 4881 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.284309 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.284319 4881 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.284335 4881 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.284345 4881 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.284375 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.284386 4881 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.284396 4881 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.284406 4881 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.284416 4881 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.284426 4881 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.284440 4881 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.284450 4881 
reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.284459 4881 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.284470 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.284480 4881 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.284490 4881 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.284498 4881 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.284507 4881 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.284517 4881 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.284526 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.284537 4881 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.284548 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.284556 4881 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.284564 4881 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.284573 4881 reconciler_common.go:293] "Volume detached for volume \"certs\" 
(UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.284582 4881 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.284591 4881 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.284601 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.284611 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.284620 4881 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.284629 4881 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.284640 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.284650 4881 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.284659 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.284668 4881 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.284677 4881 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.284686 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.284695 4881 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" 
(UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.284704 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.284714 4881 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.284723 4881 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.284733 4881 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.284743 4881 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.284751 4881 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.284762 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.284772 4881 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.284785 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.284806 4881 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.284816 4881 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.284825 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.284834 4881 reconciler_common.go:293] 
"Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.284844 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.284853 4881 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.284861 4881 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.284870 4881 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.284879 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.284888 4881 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.284898 4881 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.284906 4881 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.284917 4881 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.284927 4881 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.284936 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.284945 4881 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.284954 4881 reconciler_common.go:293] 
"Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.284963 4881 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.284972 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.284981 4881 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.284990 4881 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.284999 4881 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.285008 4881 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.285017 4881 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.285026 4881 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.285036 4881 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.285046 4881 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.285056 4881 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.285065 4881 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.285075 4881 reconciler_common.go:293] "Volume 
detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.285084 4881 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.285093 4881 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.285103 4881 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.285112 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.285122 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.285130 4881 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.285139 4881 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.285148 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.285156 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.285165 4881 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.285174 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.285182 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.285191 4881 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.285200 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.285209 4881 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.285218 4881 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.285228 4881 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.285238 4881 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.285250 4881 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.285258 4881 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.285269 4881 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.285278 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.285287 4881 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.274278 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.274525 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.274604 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.273696 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.274661 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.275342 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.275976 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.277177 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.277472 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.278848 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.282225 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.282492 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.286986 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.287217 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.287434 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.287877 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.293137 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.293231 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.294923 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.295009 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.296053 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.296779 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.297161 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.298134 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.301256 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.302494 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.302810 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.304354 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.304493 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.305881 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.309651 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.313960 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.314628 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.315952 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.316642 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.317832 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.318507 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.319598 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.320930 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.321690 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.322839 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.324225 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.326327 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.327931 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Jan 21 10:57:15 crc kubenswrapper[4881]: E0121 10:57:15.328518 4881 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 10:57:15 
crc kubenswrapper[4881]: E0121 10:57:15.328572 4881 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 10:57:15 crc kubenswrapper[4881]: E0121 10:57:15.328591 4881 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 10:57:15 crc kubenswrapper[4881]: E0121 10:57:15.328685 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-21 10:57:15.828653874 +0000 UTC m=+23.088610343 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.329471 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.331387 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.332119 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.333470 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.334097 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.334855 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.335976 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.336555 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.337412 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.338281 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.338776 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.339988 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.340876 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.342266 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 21 10:57:15 crc kubenswrapper[4881]: E0121 10:57:15.342486 4881 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 10:57:15 crc kubenswrapper[4881]: E0121 10:57:15.342502 4881 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 10:57:15 crc kubenswrapper[4881]: E0121 10:57:15.342513 4881 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 10:57:15 crc kubenswrapper[4881]: E0121 10:57:15.342590 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-21 10:57:15.842569035 +0000 UTC m=+23.102525504 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.346480 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.346472 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.347262 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.348062 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.348509 4881 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.348610 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.350892 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.355814 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.356280 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.357683 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.358352 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.359980 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.360600 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.361577 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.362010 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.362564 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.363045 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.364029 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.364631 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.365440 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.365988 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.366834 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.367555 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.368583 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.369261 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Jan 21 10:57:15 crc 
kubenswrapper[4881]: I0121 10:57:15.370287 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.370929 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.371494 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.372344 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.377312 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.378692 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.378931 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.379052 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.379183 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.379295 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:15Z","lastTransitionTime":"2026-01-21T10:57:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.386558 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.386766 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.386913 4881 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.386985 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.387059 4881 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.387170 4881 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.387253 4881 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.387325 4881 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.387386 4881 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.387450 4881 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.387517 4881 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.387589 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.387655 4881 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.387718 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.387801 4881 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.387877 4881 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.387942 4881 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.388011 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.388074 4881 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.388136 4881 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.388209 4881 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.388279 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.388347 4881 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.388409 4881 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.388468 4881 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.388528 4881 
reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.388595 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.388653 4881 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.388732 4881 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.386824 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.386761 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.464497 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.482853 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.482894 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.482904 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.482920 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.482931 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:15Z","lastTransitionTime":"2026-01-21T10:57:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.485079 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.491541 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.505321 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.505301 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 10:57:15 crc kubenswrapper[4881]: W0121 10:57:15.516486 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef543e1b_8068_4ea3_b32a_61027b32e95d.slice/crio-d6436754ed5ebf6ee90a2869240521a066631a25d2a4b654baf6933f752a4400 WatchSource:0}: Error finding container d6436754ed5ebf6ee90a2869240521a066631a25d2a4b654baf6933f752a4400: Status 404 returned error can't find the container with id d6436754ed5ebf6ee90a2869240521a066631a25d2a4b654baf6933f752a4400 Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.522627 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 10:57:15 crc kubenswrapper[4881]: W0121 10:57:15.525526 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd75a4c96_2883_4a0b_bab2_0fab2b6c0b49.slice/crio-80bcd2d67fc2ece3f2ca34c1d85b071ec2520641e88b7bbda14251a9114c6f17 WatchSource:0}: Error finding container 80bcd2d67fc2ece3f2ca34c1d85b071ec2520641e88b7bbda14251a9114c6f17: Status 404 returned error can't find the container with id 80bcd2d67fc2ece3f2ca34c1d85b071ec2520641e88b7bbda14251a9114c6f17 Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.538160 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.560763 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"adf42b80-04ba-4164-b847-3b7f8b94816b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5f503293f457017fbc87cd1525eaaf06d41044a37f87127e1398bd50a228ea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d93b0b0bb02293efe7c01a9dbc64ff302a7b9fab07a2fe9bef5d0c2b5e8ac30e\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05a4a29ae6a2b0d4acf843ae40de1e262c2149dac28e15697dbcd6d237235f29\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfff7b0c4eae65a4ed6e28b94feef5a5b0c4fb224f9cde2cfd9072727968e754\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:56:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.576029 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.585765 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.585838 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.585855 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.585875 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.585889 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:15Z","lastTransitionTime":"2026-01-21T10:57:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.591483 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.606515 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.614342 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"d6436754ed5ebf6ee90a2869240521a066631a25d2a4b654baf6933f752a4400"} Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.617369 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.621584 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"96404a7900d2841e95a8a7fcf083d01866feb5906844e55c1617d9f30bafd933"} Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.625004 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.625567 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.632993 4881 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="676e764186e37083591f2c779b05dcbf5bc065bde85efade1c25a27a9bf74570" exitCode=255 Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.633117 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"676e764186e37083591f2c779b05dcbf5bc065bde85efade1c25a27a9bf74570"} Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.633191 4881 scope.go:117] "RemoveContainer" containerID="d84c900436f03473de2cb7e61d5cacb76cae260a4b22be5debafff2a5cb4d98f" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.634241 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5da31bf1-60a6-4d73-a425-97fe36cd40ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://945977e9e86c32b08a633de6b962033c71982d3a129ac3e5b2705e19b50d6534\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f32dd767af23c59f021d583a23309b2180db86ae71b4ca4a5f436ac1c77d6e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c47dfa90d18e3e30053b0250c7b986bc877ab6bc3a553060f65468416c8105d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://676e764186e37083591f2c779b05dcbf5bc065bde85efade1c25a27a9bf74570\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://d84c900436f03473de2cb7e61d5cacb76cae260a4b22be5debafff2a5cb4d98f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T10:56:58Z\\\",\\\"message\\\":\\\"W0121 10:56:57.509137 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0121 10:56:57.509724 1 crypto.go:601] Generating new CA for check-endpoints-signer@1768993017 cert, and key in /tmp/serving-cert-3442157096/serving-signer.crt, /tmp/serving-cert-3442157096/serving-signer.key\\\\nI0121 10:56:57.842593 1 observer_polling.go:159] Starting file observer\\\\nW0121 10:56:57.865464 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0121 10:56:57.865720 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 10:56:57.868508 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3442157096/tls.crt::/tmp/serving-cert-3442157096/tls.key\\\\\\\"\\\\nF0121 10:56:58.276304 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T10:56:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b00c4c20b4212a307993f164d3f2d23b9b8bd823111bef7a385ae8811e147f3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://164282bb15c33a383e2fc11a6617ca4008eec1b59e1c5577f7b34b0fcbddc99f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://164282bb15c33a383e2fc11a6617ca4008eec1b59e1c5577f7b34b0fcbddc99f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\
\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:56:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 10:57:15 crc kubenswrapper[4881]: E0121 10:57:15.642425 4881 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-apiserver-crc\" already exists" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.642839 4881 scope.go:117] "RemoveContainer" containerID="676e764186e37083591f2c779b05dcbf5bc065bde85efade1c25a27a9bf74570" Jan 21 10:57:15 crc kubenswrapper[4881]: E0121 10:57:15.643136 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.648362 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"80bcd2d67fc2ece3f2ca34c1d85b071ec2520641e88b7bbda14251a9114c6f17"} Jan 21 10:57:15 crc kubenswrapper[4881]: E0121 10:57:15.655635 4881 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-crc\" already exists" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.660661 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5da31bf1-60a6-4d73-a425-97fe36cd40ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://945977e9e86c32b08a633de6b962033c71982d3a129ac3e5b2705e19b50d6534\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f32dd767af23c59f021d583a23309b2180db86ae71b4ca4a5f436ac1c77d6e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c47dfa90d18e3e30053b0250c7b986bc877ab6bc3a553060f65468416c8105d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://676e764186e37083591f2c779b05dcbf5bc065bde85efade1c25a27a9bf74570\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d84c900436f03473de2cb7e61d5cacb76cae260a4b22be5debafff2a5cb4d98f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T10:56:58Z\\\",\\\"message\\\":\\\"W0121 10:56:57.509137 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0121 
10:56:57.509724 1 crypto.go:601] Generating new CA for check-endpoints-signer@1768993017 cert, and key in /tmp/serving-cert-3442157096/serving-signer.crt, /tmp/serving-cert-3442157096/serving-signer.key\\\\nI0121 10:56:57.842593 1 observer_polling.go:159] Starting file observer\\\\nW0121 10:56:57.865464 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0121 10:56:57.865720 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 10:56:57.868508 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3442157096/tls.crt::/tmp/serving-cert-3442157096/tls.key\\\\\\\"\\\\nF0121 10:56:58.276304 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T10:56:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://676e764186e37083591f2c779b05dcbf5bc065bde85efade1c25a27a9bf74570\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 10:57:08.940903 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 10:57:08.942374 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2558080694/tls.crt::/tmp/serving-cert-2558080694/tls.key\\\\\\\"\\\\nI0121 10:57:14.590178 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 10:57:14.594387 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 10:57:14.594447 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 10:57:14.594564 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 10:57:14.594575 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 10:57:14.615981 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 10:57:14.616035 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 10:57:14.616045 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 10:57:14.616060 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 10:57:14.616065 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 10:57:14.616071 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 10:57:14.616077 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' 
detected.\\\\nI0121 10:57:14.616503 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 10:57:14.623960 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T10:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b00c4c20b4212a307993f164d3f2d23b9b8bd823111bef7a385ae8811e147f3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://164282bb15c33a383e2fc11a6617ca4008eec1b59e1c5577f7b34b0fcbddc99f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://164282bb15c33a383e2fc11a6617ca4008eec1b59e1c5577f7b34b0fcbddc99f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:56:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.680044 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.688876 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.688914 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.688926 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.688947 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.688959 4881 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:15Z","lastTransitionTime":"2026-01-21T10:57:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.692234 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.706386 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.721517 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.737838 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"adf42b80-04ba-4164-b847-3b7f8b94816b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5f503293f457017fbc87cd1525eaaf06d41044a37f87127e1398bd50a228ea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d93b0b0bb02293efe7c01a9dbc64ff302a7b9fab07a2fe9bef5d0c2b5e8ac30e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPat
h\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05a4a29ae6a2b0d4acf843ae40de1e262c2149dac28e15697dbcd6d237235f29\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfff7b0c4eae65a4ed6e28b94feef5a5b0c4fb224f9cde2cfd9072727968e754\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:56:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.753334 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.767290 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.791798 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.791842 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.791853 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.791866 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.791875 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:15Z","lastTransitionTime":"2026-01-21T10:57:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.793219 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.793280 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.793321 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 10:57:15 crc kubenswrapper[4881]: E0121 10:57:15.793442 4881 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 10:57:15 crc kubenswrapper[4881]: E0121 10:57:15.793492 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 10:57:16.793477034 +0000 UTC m=+24.053433503 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 10:57:15 crc kubenswrapper[4881]: E0121 10:57:15.793716 4881 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 10:57:15 crc kubenswrapper[4881]: E0121 10:57:15.793847 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:57:16.793772642 +0000 UTC m=+24.053729111 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:57:15 crc kubenswrapper[4881]: E0121 10:57:15.793907 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 10:57:16.793890705 +0000 UTC m=+24.053847454 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.893998 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.894044 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 10:57:15 crc kubenswrapper[4881]: E0121 10:57:15.894201 4881 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 10:57:15 crc kubenswrapper[4881]: E0121 10:57:15.894212 4881 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 10:57:15 crc kubenswrapper[4881]: E0121 10:57:15.894267 4881 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 10:57:15 crc kubenswrapper[4881]: E0121 10:57:15.894283 4881 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 10:57:15 crc kubenswrapper[4881]: E0121 10:57:15.894234 4881 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 10:57:15 crc kubenswrapper[4881]: E0121 10:57:15.894348 4881 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: 
[object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 10:57:15 crc kubenswrapper[4881]: E0121 10:57:15.894362 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-21 10:57:16.894337108 +0000 UTC m=+24.154293577 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 10:57:15 crc kubenswrapper[4881]: E0121 10:57:15.894406 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-21 10:57:16.89438711 +0000 UTC m=+24.154343579 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.895275 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.895342 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.895360 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.895385 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.895402 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:15Z","lastTransitionTime":"2026-01-21T10:57:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.998042 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.998111 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.998126 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.998150 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:15 crc kubenswrapper[4881]: I0121 10:57:15.998164 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:15Z","lastTransitionTime":"2026-01-21T10:57:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.038950 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-v4wxp"] Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.039644 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-fb4fr"] Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.039931 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-v4wxp" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.039946 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.041685 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-bx64f"] Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.042299 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.042399 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.042851 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.042957 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.043174 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.043377 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.043641 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.043744 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.043880 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.043937 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.044092 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.046704 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 21 10:57:16 crc kubenswrapper[4881]: W0121 10:57:16.046896 4881 reflector.go:561] object-"openshift-ovn-kubernetes"/"env-overrides": failed to list *v1.ConfigMap: configmaps "env-overrides" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-ovn-kubernetes": no relationship found between node 'crc' and this object Jan 21 10:57:16 crc kubenswrapper[4881]: E0121 10:57:16.046939 4881 reflector.go:158] "Unhandled Error" err="object-\"openshift-ovn-kubernetes\"/\"env-overrides\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"env-overrides\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-ovn-kubernetes\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.047488 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.047889 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-fs42r"] Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.048310 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-fs42r" Jan 21 10:57:16 crc kubenswrapper[4881]: W0121 10:57:16.048414 4881 reflector.go:561] object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert": failed to list *v1.Secret: secrets "ovn-node-metrics-cert" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-ovn-kubernetes": no relationship found between node 'crc' and this object Jan 21 10:57:16 crc kubenswrapper[4881]: E0121 10:57:16.048445 4881 reflector.go:158] "Unhandled Error" err="object-\"openshift-ovn-kubernetes\"/\"ovn-node-metrics-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"ovn-node-metrics-cert\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-ovn-kubernetes\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 21 10:57:16 crc kubenswrapper[4881]: W0121 10:57:16.048733 4881 reflector.go:561] object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl": failed to list *v1.Secret: secrets "ovn-kubernetes-node-dockercfg-pwtwl" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-ovn-kubernetes": no relationship found between node 'crc' and this object Jan 21 10:57:16 crc kubenswrapper[4881]: W0121 10:57:16.048764 4881 reflector.go:561] object-"openshift-ovn-kubernetes"/"ovnkube-script-lib": failed to list *v1.ConfigMap: configmaps "ovnkube-script-lib" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-ovn-kubernetes": no relationship found between node 'crc' and this object Jan 21 10:57:16 crc kubenswrapper[4881]: E0121 10:57:16.048769 4881 reflector.go:158] "Unhandled Error" err="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-node-dockercfg-pwtwl\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"ovn-kubernetes-node-dockercfg-pwtwl\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-ovn-kubernetes\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 21 10:57:16 crc kubenswrapper[4881]: E0121 10:57:16.048819 4881 reflector.go:158] "Unhandled Error" err="object-\"openshift-ovn-kubernetes\"/\"ovnkube-script-lib\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"ovnkube-script-lib\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-ovn-kubernetes\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 21 10:57:16 crc kubenswrapper[4881]: W0121 10:57:16.048958 4881 reflector.go:561] object-"openshift-ovn-kubernetes"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-ovn-kubernetes": no relationship found between node 'crc' and this object Jan 21 10:57:16 crc kubenswrapper[4881]: E0121 10:57:16.048988 4881 reflector.go:158] "Unhandled Error" err="object-\"openshift-ovn-kubernetes\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-ovn-kubernetes\": no relationship found between 
node 'crc' and this object" logger="UnhandledError" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.049478 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.049824 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.051382 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-8sptw"] Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.051766 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-8sptw" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.053455 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.053666 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.053836 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.056271 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5da31bf1-60a6-4d73-a425-97fe36cd40ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://945977e9e86c32b08a633de6b962033c71982d3a129ac3e5b2705e19b50d6534\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f32dd767af23c59f021d583a23309b2180db86ae71b4ca4a5f436ac1c77d6e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c47dfa90d18e3e30053b0250c7b986bc877ab6bc3a553060f65468416c8105d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://676e764186e37083591f2c779b05dcbf5bc065bde85efade1c25a27a9bf74570\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d84c900436f03473de2cb7e61d5cacb76cae260a4b22be5debafff2a5cb4d98f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T10:56:58Z\\\",\\\"message\\\":\\\"W0121 10:56:57.509137 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0121 
10:56:57.509724 1 crypto.go:601] Generating new CA for check-endpoints-signer@1768993017 cert, and key in /tmp/serving-cert-3442157096/serving-signer.crt, /tmp/serving-cert-3442157096/serving-signer.key\\\\nI0121 10:56:57.842593 1 observer_polling.go:159] Starting file observer\\\\nW0121 10:56:57.865464 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0121 10:56:57.865720 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 10:56:57.868508 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3442157096/tls.crt::/tmp/serving-cert-3442157096/tls.key\\\\\\\"\\\\nF0121 10:56:58.276304 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T10:56:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://676e764186e37083591f2c779b05dcbf5bc065bde85efade1c25a27a9bf74570\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 10:57:08.940903 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 10:57:08.942374 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2558080694/tls.crt::/tmp/serving-cert-2558080694/tls.key\\\\\\\"\\\\nI0121 10:57:14.590178 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 10:57:14.594387 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 10:57:14.594447 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 10:57:14.594564 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 10:57:14.594575 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 10:57:14.615981 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 10:57:14.616035 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 10:57:14.616045 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 10:57:14.616060 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 10:57:14.616065 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 10:57:14.616071 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 10:57:14.616077 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' 
detected.\\\\nI0121 10:57:14.616503 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 10:57:14.623960 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T10:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b00c4c20b4212a307993f164d3f2d23b9b8bd823111bef7a385ae8811e147f3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://164282bb15c33a383e2fc11a6617ca4008eec1b59e1c5577f7b34b0fcbddc99f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://164282bb15c33a383e2fc11a6617ca4008eec1b59e1c5577f7b34b0fcbddc99f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:56:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.068679 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.078408 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.091149 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.101396 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.101437 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.101449 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.101466 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.101479 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:16Z","lastTransitionTime":"2026-01-21T10:57:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.104147 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.105392 4881 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 09:57:43.532369125 +0000 UTC Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.121107 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-v4wxp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c14980d7-1b3b-463b-8f57-f1e1afbd258c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with 
incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd36
7c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-v4wxp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.135511 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"adf42b80-04ba-4164-b847-3b7f8b94816b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5f503293f457017fbc87cd1525eaaf06d41044a37f87127e1398bd50a228ea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d93b0b0bb02293efe7c01a9dbc64ff302a7b9fab07a2fe9bef5d0c2b5e8ac30e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05a4a29ae6a2b0d4acf843ae40de1e262c2149dac28e15697dbcd6d237235f29\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes
/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfff7b0c4eae65a4ed6e28b94feef5a5b0c4fb224f9cde2cfd9072727968e754\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:56:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.167527 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.196669 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/09da9e14-f6d5-4346-a4a0-c17711e3b603-host-run-netns\") pod \"multus-fs42r\" (UID: \"09da9e14-f6d5-4346-a4a0-c17711e3b603\") " pod="openshift-multus/multus-fs42r" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.196705 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/09da9e14-f6d5-4346-a4a0-c17711e3b603-multus-daemon-config\") pod \"multus-fs42r\" (UID: \"09da9e14-f6d5-4346-a4a0-c17711e3b603\") " pod="openshift-multus/multus-fs42r" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.196723 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t6nwq\" (UniqueName: \"kubernetes.io/projected/c14980d7-1b3b-463b-8f57-f1e1afbd258c-kube-api-access-t6nwq\") pod \"multus-additional-cni-plugins-v4wxp\" (UID: \"c14980d7-1b3b-463b-8f57-f1e1afbd258c\") " pod="openshift-multus/multus-additional-cni-plugins-v4wxp" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.196742 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/09da9e14-f6d5-4346-a4a0-c17711e3b603-system-cni-dir\") pod \"multus-fs42r\" (UID: \"09da9e14-f6d5-4346-a4a0-c17711e3b603\") " pod="openshift-multus/multus-fs42r" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.196761 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/c14980d7-1b3b-463b-8f57-f1e1afbd258c-cnibin\") pod \"multus-additional-cni-plugins-v4wxp\" (UID: \"c14980d7-1b3b-463b-8f57-f1e1afbd258c\") " pod="openshift-multus/multus-additional-cni-plugins-v4wxp" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.196803 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/e8bb6d97-b3b8-4e31-b704-8e565385ab26-run-ovn\") pod \"ovnkube-node-bx64f\" (UID: \"e8bb6d97-b3b8-4e31-b704-8e565385ab26\") " pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.196826 4881 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e8bb6d97-b3b8-4e31-b704-8e565385ab26-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-bx64f\" (UID: \"e8bb6d97-b3b8-4e31-b704-8e565385ab26\") " pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.196918 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/e8bb6d97-b3b8-4e31-b704-8e565385ab26-host-cni-bin\") pod \"ovnkube-node-bx64f\" (UID: \"e8bb6d97-b3b8-4e31-b704-8e565385ab26\") " pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.197011 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/e8bb6d97-b3b8-4e31-b704-8e565385ab26-ovn-node-metrics-cert\") pod \"ovnkube-node-bx64f\" (UID: \"e8bb6d97-b3b8-4e31-b704-8e565385ab26\") " pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.197044 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/09da9e14-f6d5-4346-a4a0-c17711e3b603-host-var-lib-cni-bin\") pod \"multus-fs42r\" (UID: \"09da9e14-f6d5-4346-a4a0-c17711e3b603\") " pod="openshift-multus/multus-fs42r" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.197069 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/09da9e14-f6d5-4346-a4a0-c17711e3b603-cni-binary-copy\") pod \"multus-fs42r\" (UID: \"09da9e14-f6d5-4346-a4a0-c17711e3b603\") " pod="openshift-multus/multus-fs42r" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.197124 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/e8bb6d97-b3b8-4e31-b704-8e565385ab26-run-systemd\") pod \"ovnkube-node-bx64f\" (UID: \"e8bb6d97-b3b8-4e31-b704-8e565385ab26\") " pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.197144 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e8bb6d97-b3b8-4e31-b704-8e565385ab26-etc-openvswitch\") pod \"ovnkube-node-bx64f\" (UID: \"e8bb6d97-b3b8-4e31-b704-8e565385ab26\") " pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.197163 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kz6fb\" (UniqueName: \"kubernetes.io/projected/e8bb6d97-b3b8-4e31-b704-8e565385ab26-kube-api-access-kz6fb\") pod \"ovnkube-node-bx64f\" (UID: \"e8bb6d97-b3b8-4e31-b704-8e565385ab26\") " pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.197187 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c14980d7-1b3b-463b-8f57-f1e1afbd258c-system-cni-dir\") pod \"multus-additional-cni-plugins-v4wxp\" (UID: \"c14980d7-1b3b-463b-8f57-f1e1afbd258c\") " 
pod="openshift-multus/multus-additional-cni-plugins-v4wxp" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.197208 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/3687b313-1df2-4274-80db-8c758b51bf2d-mcd-auth-proxy-config\") pod \"machine-config-daemon-fb4fr\" (UID: \"3687b313-1df2-4274-80db-8c758b51bf2d\") " pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.197227 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/e8bb6d97-b3b8-4e31-b704-8e565385ab26-env-overrides\") pod \"ovnkube-node-bx64f\" (UID: \"e8bb6d97-b3b8-4e31-b704-8e565385ab26\") " pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.197253 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/09da9e14-f6d5-4346-a4a0-c17711e3b603-multus-conf-dir\") pod \"multus-fs42r\" (UID: \"09da9e14-f6d5-4346-a4a0-c17711e3b603\") " pod="openshift-multus/multus-fs42r" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.197271 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/f19f480e-331f-42f5-a3b6-fd0c6847b157-hosts-file\") pod \"node-resolver-8sptw\" (UID: \"f19f480e-331f-42f5-a3b6-fd0c6847b157\") " pod="openshift-dns/node-resolver-8sptw" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.197291 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hjr7g\" (UniqueName: \"kubernetes.io/projected/f19f480e-331f-42f5-a3b6-fd0c6847b157-kube-api-access-hjr7g\") pod \"node-resolver-8sptw\" (UID: \"f19f480e-331f-42f5-a3b6-fd0c6847b157\") " pod="openshift-dns/node-resolver-8sptw" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.197318 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/09da9e14-f6d5-4346-a4a0-c17711e3b603-host-var-lib-kubelet\") pod \"multus-fs42r\" (UID: \"09da9e14-f6d5-4346-a4a0-c17711e3b603\") " pod="openshift-multus/multus-fs42r" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.197380 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hml99\" (UniqueName: \"kubernetes.io/projected/3687b313-1df2-4274-80db-8c758b51bf2d-kube-api-access-hml99\") pod \"machine-config-daemon-fb4fr\" (UID: \"3687b313-1df2-4274-80db-8c758b51bf2d\") " pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.197402 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/e8bb6d97-b3b8-4e31-b704-8e565385ab26-ovnkube-script-lib\") pod \"ovnkube-node-bx64f\" (UID: \"e8bb6d97-b3b8-4e31-b704-8e565385ab26\") " pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.197422 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: 
\"kubernetes.io/host-path/09da9e14-f6d5-4346-a4a0-c17711e3b603-hostroot\") pod \"multus-fs42r\" (UID: \"09da9e14-f6d5-4346-a4a0-c17711e3b603\") " pod="openshift-multus/multus-fs42r" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.197441 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/09da9e14-f6d5-4346-a4a0-c17711e3b603-multus-cni-dir\") pod \"multus-fs42r\" (UID: \"09da9e14-f6d5-4346-a4a0-c17711e3b603\") " pod="openshift-multus/multus-fs42r" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.197470 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/09da9e14-f6d5-4346-a4a0-c17711e3b603-host-run-k8s-cni-cncf-io\") pod \"multus-fs42r\" (UID: \"09da9e14-f6d5-4346-a4a0-c17711e3b603\") " pod="openshift-multus/multus-fs42r" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.197501 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/e8bb6d97-b3b8-4e31-b704-8e565385ab26-host-kubelet\") pod \"ovnkube-node-bx64f\" (UID: \"e8bb6d97-b3b8-4e31-b704-8e565385ab26\") " pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.197527 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/e8bb6d97-b3b8-4e31-b704-8e565385ab26-log-socket\") pod \"ovnkube-node-bx64f\" (UID: \"e8bb6d97-b3b8-4e31-b704-8e565385ab26\") " pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.197546 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/e8bb6d97-b3b8-4e31-b704-8e565385ab26-ovnkube-config\") pod \"ovnkube-node-bx64f\" (UID: \"e8bb6d97-b3b8-4e31-b704-8e565385ab26\") " pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.197564 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/09da9e14-f6d5-4346-a4a0-c17711e3b603-etc-kubernetes\") pod \"multus-fs42r\" (UID: \"09da9e14-f6d5-4346-a4a0-c17711e3b603\") " pod="openshift-multus/multus-fs42r" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.197598 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/c14980d7-1b3b-463b-8f57-f1e1afbd258c-cni-binary-copy\") pod \"multus-additional-cni-plugins-v4wxp\" (UID: \"c14980d7-1b3b-463b-8f57-f1e1afbd258c\") " pod="openshift-multus/multus-additional-cni-plugins-v4wxp" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.197640 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/09da9e14-f6d5-4346-a4a0-c17711e3b603-multus-socket-dir-parent\") pod \"multus-fs42r\" (UID: \"09da9e14-f6d5-4346-a4a0-c17711e3b603\") " pod="openshift-multus/multus-fs42r" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.197655 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/09da9e14-f6d5-4346-a4a0-c17711e3b603-host-var-lib-cni-multus\") pod \"multus-fs42r\" (UID: \"09da9e14-f6d5-4346-a4a0-c17711e3b603\") " pod="openshift-multus/multus-fs42r" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.197691 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/e8bb6d97-b3b8-4e31-b704-8e565385ab26-host-slash\") pod \"ovnkube-node-bx64f\" (UID: \"e8bb6d97-b3b8-4e31-b704-8e565385ab26\") " pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.197709 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/e8bb6d97-b3b8-4e31-b704-8e565385ab26-host-run-netns\") pod \"ovnkube-node-bx64f\" (UID: \"e8bb6d97-b3b8-4e31-b704-8e565385ab26\") " pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.197731 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e8bb6d97-b3b8-4e31-b704-8e565385ab26-var-lib-openvswitch\") pod \"ovnkube-node-bx64f\" (UID: \"e8bb6d97-b3b8-4e31-b704-8e565385ab26\") " pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.197762 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e8bb6d97-b3b8-4e31-b704-8e565385ab26-host-run-ovn-kubernetes\") pod \"ovnkube-node-bx64f\" (UID: \"e8bb6d97-b3b8-4e31-b704-8e565385ab26\") " pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.197802 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/c14980d7-1b3b-463b-8f57-f1e1afbd258c-os-release\") pod \"multus-additional-cni-plugins-v4wxp\" (UID: \"c14980d7-1b3b-463b-8f57-f1e1afbd258c\") " pod="openshift-multus/multus-additional-cni-plugins-v4wxp" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.197824 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/3687b313-1df2-4274-80db-8c758b51bf2d-rootfs\") pod \"machine-config-daemon-fb4fr\" (UID: \"3687b313-1df2-4274-80db-8c758b51bf2d\") " pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.197874 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/c14980d7-1b3b-463b-8f57-f1e1afbd258c-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-v4wxp\" (UID: \"c14980d7-1b3b-463b-8f57-f1e1afbd258c\") " pod="openshift-multus/multus-additional-cni-plugins-v4wxp" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.197919 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e8bb6d97-b3b8-4e31-b704-8e565385ab26-run-openvswitch\") pod \"ovnkube-node-bx64f\" (UID: \"e8bb6d97-b3b8-4e31-b704-8e565385ab26\") " pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" Jan 
21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.197939 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/c14980d7-1b3b-463b-8f57-f1e1afbd258c-tuning-conf-dir\") pod \"multus-additional-cni-plugins-v4wxp\" (UID: \"c14980d7-1b3b-463b-8f57-f1e1afbd258c\") " pod="openshift-multus/multus-additional-cni-plugins-v4wxp" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.197966 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/e8bb6d97-b3b8-4e31-b704-8e565385ab26-systemd-units\") pod \"ovnkube-node-bx64f\" (UID: \"e8bb6d97-b3b8-4e31-b704-8e565385ab26\") " pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.197985 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e8bb6d97-b3b8-4e31-b704-8e565385ab26-host-cni-netd\") pod \"ovnkube-node-bx64f\" (UID: \"e8bb6d97-b3b8-4e31-b704-8e565385ab26\") " pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.198004 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/09da9e14-f6d5-4346-a4a0-c17711e3b603-cnibin\") pod \"multus-fs42r\" (UID: \"09da9e14-f6d5-4346-a4a0-c17711e3b603\") " pod="openshift-multus/multus-fs42r" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.198036 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/09da9e14-f6d5-4346-a4a0-c17711e3b603-os-release\") pod \"multus-fs42r\" (UID: \"09da9e14-f6d5-4346-a4a0-c17711e3b603\") " pod="openshift-multus/multus-fs42r" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.198061 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/e8bb6d97-b3b8-4e31-b704-8e565385ab26-node-log\") pod \"ovnkube-node-bx64f\" (UID: \"e8bb6d97-b3b8-4e31-b704-8e565385ab26\") " pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.198078 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/09da9e14-f6d5-4346-a4a0-c17711e3b603-host-run-multus-certs\") pod \"multus-fs42r\" (UID: \"09da9e14-f6d5-4346-a4a0-c17711e3b603\") " pod="openshift-multus/multus-fs42r" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.198096 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7kt6w\" (UniqueName: \"kubernetes.io/projected/09da9e14-f6d5-4346-a4a0-c17711e3b603-kube-api-access-7kt6w\") pod \"multus-fs42r\" (UID: \"09da9e14-f6d5-4346-a4a0-c17711e3b603\") " pod="openshift-multus/multus-fs42r" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.198113 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/3687b313-1df2-4274-80db-8c758b51bf2d-proxy-tls\") pod \"machine-config-daemon-fb4fr\" (UID: \"3687b313-1df2-4274-80db-8c758b51bf2d\") " 
pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.204135 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.204179 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.204189 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.204204 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.204216 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:16Z","lastTransitionTime":"2026-01-21T10:57:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.221806 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.250016 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.300532 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/e8bb6d97-b3b8-4e31-b704-8e565385ab26-ovnkube-script-lib\") pod \"ovnkube-node-bx64f\" (UID: \"e8bb6d97-b3b8-4e31-b704-8e565385ab26\") " pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.300919 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/09da9e14-f6d5-4346-a4a0-c17711e3b603-hostroot\") pod \"multus-fs42r\" (UID: \"09da9e14-f6d5-4346-a4a0-c17711e3b603\") " pod="openshift-multus/multus-fs42r" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.301050 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/09da9e14-f6d5-4346-a4a0-c17711e3b603-hostroot\") pod \"multus-fs42r\" (UID: \"09da9e14-f6d5-4346-a4a0-c17711e3b603\") " pod="openshift-multus/multus-fs42r" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.301064 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/e8bb6d97-b3b8-4e31-b704-8e565385ab26-ovnkube-config\") pod \"ovnkube-node-bx64f\" (UID: \"e8bb6d97-b3b8-4e31-b704-8e565385ab26\") " pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.301192 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/09da9e14-f6d5-4346-a4a0-c17711e3b603-multus-cni-dir\") pod \"multus-fs42r\" (UID: \"09da9e14-f6d5-4346-a4a0-c17711e3b603\") " pod="openshift-multus/multus-fs42r" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.301236 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/09da9e14-f6d5-4346-a4a0-c17711e3b603-host-run-k8s-cni-cncf-io\") pod \"multus-fs42r\" (UID: \"09da9e14-f6d5-4346-a4a0-c17711e3b603\") " pod="openshift-multus/multus-fs42r" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.301275 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/e8bb6d97-b3b8-4e31-b704-8e565385ab26-host-kubelet\") pod \"ovnkube-node-bx64f\" (UID: 
\"e8bb6d97-b3b8-4e31-b704-8e565385ab26\") " pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.301302 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/e8bb6d97-b3b8-4e31-b704-8e565385ab26-log-socket\") pod \"ovnkube-node-bx64f\" (UID: \"e8bb6d97-b3b8-4e31-b704-8e565385ab26\") " pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.301328 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/09da9e14-f6d5-4346-a4a0-c17711e3b603-etc-kubernetes\") pod \"multus-fs42r\" (UID: \"09da9e14-f6d5-4346-a4a0-c17711e3b603\") " pod="openshift-multus/multus-fs42r" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.301357 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/c14980d7-1b3b-463b-8f57-f1e1afbd258c-cni-binary-copy\") pod \"multus-additional-cni-plugins-v4wxp\" (UID: \"c14980d7-1b3b-463b-8f57-f1e1afbd258c\") " pod="openshift-multus/multus-additional-cni-plugins-v4wxp" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.301385 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e8bb6d97-b3b8-4e31-b704-8e565385ab26-host-run-ovn-kubernetes\") pod \"ovnkube-node-bx64f\" (UID: \"e8bb6d97-b3b8-4e31-b704-8e565385ab26\") " pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.301415 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/09da9e14-f6d5-4346-a4a0-c17711e3b603-multus-socket-dir-parent\") pod \"multus-fs42r\" (UID: \"09da9e14-f6d5-4346-a4a0-c17711e3b603\") " pod="openshift-multus/multus-fs42r" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.301442 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/09da9e14-f6d5-4346-a4a0-c17711e3b603-host-var-lib-cni-multus\") pod \"multus-fs42r\" (UID: \"09da9e14-f6d5-4346-a4a0-c17711e3b603\") " pod="openshift-multus/multus-fs42r" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.301492 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/e8bb6d97-b3b8-4e31-b704-8e565385ab26-host-slash\") pod \"ovnkube-node-bx64f\" (UID: \"e8bb6d97-b3b8-4e31-b704-8e565385ab26\") " pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.301518 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/e8bb6d97-b3b8-4e31-b704-8e565385ab26-host-run-netns\") pod \"ovnkube-node-bx64f\" (UID: \"e8bb6d97-b3b8-4e31-b704-8e565385ab26\") " pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.301545 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e8bb6d97-b3b8-4e31-b704-8e565385ab26-var-lib-openvswitch\") pod \"ovnkube-node-bx64f\" (UID: \"e8bb6d97-b3b8-4e31-b704-8e565385ab26\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.301575 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/c14980d7-1b3b-463b-8f57-f1e1afbd258c-os-release\") pod \"multus-additional-cni-plugins-v4wxp\" (UID: \"c14980d7-1b3b-463b-8f57-f1e1afbd258c\") " pod="openshift-multus/multus-additional-cni-plugins-v4wxp" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.301604 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/3687b313-1df2-4274-80db-8c758b51bf2d-rootfs\") pod \"machine-config-daemon-fb4fr\" (UID: \"3687b313-1df2-4274-80db-8c758b51bf2d\") " pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.301636 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/c14980d7-1b3b-463b-8f57-f1e1afbd258c-tuning-conf-dir\") pod \"multus-additional-cni-plugins-v4wxp\" (UID: \"c14980d7-1b3b-463b-8f57-f1e1afbd258c\") " pod="openshift-multus/multus-additional-cni-plugins-v4wxp" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.301663 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/c14980d7-1b3b-463b-8f57-f1e1afbd258c-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-v4wxp\" (UID: \"c14980d7-1b3b-463b-8f57-f1e1afbd258c\") " pod="openshift-multus/multus-additional-cni-plugins-v4wxp" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.301712 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e8bb6d97-b3b8-4e31-b704-8e565385ab26-run-openvswitch\") pod \"ovnkube-node-bx64f\" (UID: \"e8bb6d97-b3b8-4e31-b704-8e565385ab26\") " pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.301752 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/e8bb6d97-b3b8-4e31-b704-8e565385ab26-systemd-units\") pod \"ovnkube-node-bx64f\" (UID: \"e8bb6d97-b3b8-4e31-b704-8e565385ab26\") " pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.301821 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e8bb6d97-b3b8-4e31-b704-8e565385ab26-host-cni-netd\") pod \"ovnkube-node-bx64f\" (UID: \"e8bb6d97-b3b8-4e31-b704-8e565385ab26\") " pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.301849 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/09da9e14-f6d5-4346-a4a0-c17711e3b603-cnibin\") pod \"multus-fs42r\" (UID: \"09da9e14-f6d5-4346-a4a0-c17711e3b603\") " pod="openshift-multus/multus-fs42r" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.301878 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/09da9e14-f6d5-4346-a4a0-c17711e3b603-os-release\") pod \"multus-fs42r\" (UID: \"09da9e14-f6d5-4346-a4a0-c17711e3b603\") " pod="openshift-multus/multus-fs42r" Jan 21 10:57:16 crc 
kubenswrapper[4881]: I0121 10:57:16.301902 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/3687b313-1df2-4274-80db-8c758b51bf2d-proxy-tls\") pod \"machine-config-daemon-fb4fr\" (UID: \"3687b313-1df2-4274-80db-8c758b51bf2d\") " pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.301932 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/e8bb6d97-b3b8-4e31-b704-8e565385ab26-node-log\") pod \"ovnkube-node-bx64f\" (UID: \"e8bb6d97-b3b8-4e31-b704-8e565385ab26\") " pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.301959 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/09da9e14-f6d5-4346-a4a0-c17711e3b603-host-run-multus-certs\") pod \"multus-fs42r\" (UID: \"09da9e14-f6d5-4346-a4a0-c17711e3b603\") " pod="openshift-multus/multus-fs42r" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.301990 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7kt6w\" (UniqueName: \"kubernetes.io/projected/09da9e14-f6d5-4346-a4a0-c17711e3b603-kube-api-access-7kt6w\") pod \"multus-fs42r\" (UID: \"09da9e14-f6d5-4346-a4a0-c17711e3b603\") " pod="openshift-multus/multus-fs42r" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.302020 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/09da9e14-f6d5-4346-a4a0-c17711e3b603-host-run-netns\") pod \"multus-fs42r\" (UID: \"09da9e14-f6d5-4346-a4a0-c17711e3b603\") " pod="openshift-multus/multus-fs42r" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.302056 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/09da9e14-f6d5-4346-a4a0-c17711e3b603-multus-daemon-config\") pod \"multus-fs42r\" (UID: \"09da9e14-f6d5-4346-a4a0-c17711e3b603\") " pod="openshift-multus/multus-fs42r" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.302082 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t6nwq\" (UniqueName: \"kubernetes.io/projected/c14980d7-1b3b-463b-8f57-f1e1afbd258c-kube-api-access-t6nwq\") pod \"multus-additional-cni-plugins-v4wxp\" (UID: \"c14980d7-1b3b-463b-8f57-f1e1afbd258c\") " pod="openshift-multus/multus-additional-cni-plugins-v4wxp" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.302120 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e8bb6d97-b3b8-4e31-b704-8e565385ab26-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-bx64f\" (UID: \"e8bb6d97-b3b8-4e31-b704-8e565385ab26\") " pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.302149 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/09da9e14-f6d5-4346-a4a0-c17711e3b603-system-cni-dir\") pod \"multus-fs42r\" (UID: \"09da9e14-f6d5-4346-a4a0-c17711e3b603\") " pod="openshift-multus/multus-fs42r" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.302207 4881 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/c14980d7-1b3b-463b-8f57-f1e1afbd258c-cnibin\") pod \"multus-additional-cni-plugins-v4wxp\" (UID: \"c14980d7-1b3b-463b-8f57-f1e1afbd258c\") " pod="openshift-multus/multus-additional-cni-plugins-v4wxp" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.302244 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/e8bb6d97-b3b8-4e31-b704-8e565385ab26-run-ovn\") pod \"ovnkube-node-bx64f\" (UID: \"e8bb6d97-b3b8-4e31-b704-8e565385ab26\") " pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.302273 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/e8bb6d97-b3b8-4e31-b704-8e565385ab26-host-cni-bin\") pod \"ovnkube-node-bx64f\" (UID: \"e8bb6d97-b3b8-4e31-b704-8e565385ab26\") " pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.302298 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/e8bb6d97-b3b8-4e31-b704-8e565385ab26-ovn-node-metrics-cert\") pod \"ovnkube-node-bx64f\" (UID: \"e8bb6d97-b3b8-4e31-b704-8e565385ab26\") " pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.302323 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/09da9e14-f6d5-4346-a4a0-c17711e3b603-host-var-lib-cni-bin\") pod \"multus-fs42r\" (UID: \"09da9e14-f6d5-4346-a4a0-c17711e3b603\") " pod="openshift-multus/multus-fs42r" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.302348 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/09da9e14-f6d5-4346-a4a0-c17711e3b603-cni-binary-copy\") pod \"multus-fs42r\" (UID: \"09da9e14-f6d5-4346-a4a0-c17711e3b603\") " pod="openshift-multus/multus-fs42r" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.302390 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/e8bb6d97-b3b8-4e31-b704-8e565385ab26-run-systemd\") pod \"ovnkube-node-bx64f\" (UID: \"e8bb6d97-b3b8-4e31-b704-8e565385ab26\") " pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.302415 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e8bb6d97-b3b8-4e31-b704-8e565385ab26-etc-openvswitch\") pod \"ovnkube-node-bx64f\" (UID: \"e8bb6d97-b3b8-4e31-b704-8e565385ab26\") " pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.302443 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kz6fb\" (UniqueName: \"kubernetes.io/projected/e8bb6d97-b3b8-4e31-b704-8e565385ab26-kube-api-access-kz6fb\") pod \"ovnkube-node-bx64f\" (UID: \"e8bb6d97-b3b8-4e31-b704-8e565385ab26\") " pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.302474 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: 
\"kubernetes.io/host-path/c14980d7-1b3b-463b-8f57-f1e1afbd258c-system-cni-dir\") pod \"multus-additional-cni-plugins-v4wxp\" (UID: \"c14980d7-1b3b-463b-8f57-f1e1afbd258c\") " pod="openshift-multus/multus-additional-cni-plugins-v4wxp" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.302502 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/3687b313-1df2-4274-80db-8c758b51bf2d-mcd-auth-proxy-config\") pod \"machine-config-daemon-fb4fr\" (UID: \"3687b313-1df2-4274-80db-8c758b51bf2d\") " pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.302564 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/e8bb6d97-b3b8-4e31-b704-8e565385ab26-env-overrides\") pod \"ovnkube-node-bx64f\" (UID: \"e8bb6d97-b3b8-4e31-b704-8e565385ab26\") " pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.302592 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/09da9e14-f6d5-4346-a4a0-c17711e3b603-multus-conf-dir\") pod \"multus-fs42r\" (UID: \"09da9e14-f6d5-4346-a4a0-c17711e3b603\") " pod="openshift-multus/multus-fs42r" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.302623 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/f19f480e-331f-42f5-a3b6-fd0c6847b157-hosts-file\") pod \"node-resolver-8sptw\" (UID: \"f19f480e-331f-42f5-a3b6-fd0c6847b157\") " pod="openshift-dns/node-resolver-8sptw" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.302649 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hjr7g\" (UniqueName: \"kubernetes.io/projected/f19f480e-331f-42f5-a3b6-fd0c6847b157-kube-api-access-hjr7g\") pod \"node-resolver-8sptw\" (UID: \"f19f480e-331f-42f5-a3b6-fd0c6847b157\") " pod="openshift-dns/node-resolver-8sptw" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.302672 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hml99\" (UniqueName: \"kubernetes.io/projected/3687b313-1df2-4274-80db-8c758b51bf2d-kube-api-access-hml99\") pod \"machine-config-daemon-fb4fr\" (UID: \"3687b313-1df2-4274-80db-8c758b51bf2d\") " pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.302703 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/09da9e14-f6d5-4346-a4a0-c17711e3b603-host-var-lib-kubelet\") pod \"multus-fs42r\" (UID: \"09da9e14-f6d5-4346-a4a0-c17711e3b603\") " pod="openshift-multus/multus-fs42r" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.302802 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/09da9e14-f6d5-4346-a4a0-c17711e3b603-host-var-lib-kubelet\") pod \"multus-fs42r\" (UID: \"09da9e14-f6d5-4346-a4a0-c17711e3b603\") " pod="openshift-multus/multus-fs42r" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.302879 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: 
\"kubernetes.io/host-path/09da9e14-f6d5-4346-a4a0-c17711e3b603-multus-cni-dir\") pod \"multus-fs42r\" (UID: \"09da9e14-f6d5-4346-a4a0-c17711e3b603\") " pod="openshift-multus/multus-fs42r" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.302923 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/09da9e14-f6d5-4346-a4a0-c17711e3b603-host-run-k8s-cni-cncf-io\") pod \"multus-fs42r\" (UID: \"09da9e14-f6d5-4346-a4a0-c17711e3b603\") " pod="openshift-multus/multus-fs42r" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.302960 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/e8bb6d97-b3b8-4e31-b704-8e565385ab26-host-kubelet\") pod \"ovnkube-node-bx64f\" (UID: \"e8bb6d97-b3b8-4e31-b704-8e565385ab26\") " pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.303004 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/e8bb6d97-b3b8-4e31-b704-8e565385ab26-log-socket\") pod \"ovnkube-node-bx64f\" (UID: \"e8bb6d97-b3b8-4e31-b704-8e565385ab26\") " pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.303037 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/09da9e14-f6d5-4346-a4a0-c17711e3b603-etc-kubernetes\") pod \"multus-fs42r\" (UID: \"09da9e14-f6d5-4346-a4a0-c17711e3b603\") " pod="openshift-multus/multus-fs42r" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.303237 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/09da9e14-f6d5-4346-a4a0-c17711e3b603-host-run-netns\") pod \"multus-fs42r\" (UID: \"09da9e14-f6d5-4346-a4a0-c17711e3b603\") " pod="openshift-multus/multus-fs42r" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.303450 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/09da9e14-f6d5-4346-a4a0-c17711e3b603-host-run-multus-certs\") pod \"multus-fs42r\" (UID: \"09da9e14-f6d5-4346-a4a0-c17711e3b603\") " pod="openshift-multus/multus-fs42r" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.303449 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/e8bb6d97-b3b8-4e31-b704-8e565385ab26-host-run-netns\") pod \"ovnkube-node-bx64f\" (UID: \"e8bb6d97-b3b8-4e31-b704-8e565385ab26\") " pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.303502 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e8bb6d97-b3b8-4e31-b704-8e565385ab26-host-run-ovn-kubernetes\") pod \"ovnkube-node-bx64f\" (UID: \"e8bb6d97-b3b8-4e31-b704-8e565385ab26\") " pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.303510 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/e8bb6d97-b3b8-4e31-b704-8e565385ab26-node-log\") pod \"ovnkube-node-bx64f\" (UID: \"e8bb6d97-b3b8-4e31-b704-8e565385ab26\") " pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" Jan 21 10:57:16 crc 
kubenswrapper[4881]: I0121 10:57:16.303553 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/09da9e14-f6d5-4346-a4a0-c17711e3b603-host-var-lib-cni-multus\") pod \"multus-fs42r\" (UID: \"09da9e14-f6d5-4346-a4a0-c17711e3b603\") " pod="openshift-multus/multus-fs42r" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.303563 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/09da9e14-f6d5-4346-a4a0-c17711e3b603-multus-socket-dir-parent\") pod \"multus-fs42r\" (UID: \"09da9e14-f6d5-4346-a4a0-c17711e3b603\") " pod="openshift-multus/multus-fs42r" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.303591 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/e8bb6d97-b3b8-4e31-b704-8e565385ab26-systemd-units\") pod \"ovnkube-node-bx64f\" (UID: \"e8bb6d97-b3b8-4e31-b704-8e565385ab26\") " pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.303616 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/e8bb6d97-b3b8-4e31-b704-8e565385ab26-host-slash\") pod \"ovnkube-node-bx64f\" (UID: \"e8bb6d97-b3b8-4e31-b704-8e565385ab26\") " pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.303627 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e8bb6d97-b3b8-4e31-b704-8e565385ab26-run-openvswitch\") pod \"ovnkube-node-bx64f\" (UID: \"e8bb6d97-b3b8-4e31-b704-8e565385ab26\") " pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.303636 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/c14980d7-1b3b-463b-8f57-f1e1afbd258c-cnibin\") pod \"multus-additional-cni-plugins-v4wxp\" (UID: \"c14980d7-1b3b-463b-8f57-f1e1afbd258c\") " pod="openshift-multus/multus-additional-cni-plugins-v4wxp" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.303681 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e8bb6d97-b3b8-4e31-b704-8e565385ab26-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-bx64f\" (UID: \"e8bb6d97-b3b8-4e31-b704-8e565385ab26\") " pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.303738 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/09da9e14-f6d5-4346-a4a0-c17711e3b603-system-cni-dir\") pod \"multus-fs42r\" (UID: \"09da9e14-f6d5-4346-a4a0-c17711e3b603\") " pod="openshift-multus/multus-fs42r" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.303773 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/e8bb6d97-b3b8-4e31-b704-8e565385ab26-host-cni-bin\") pod \"ovnkube-node-bx64f\" (UID: \"e8bb6d97-b3b8-4e31-b704-8e565385ab26\") " pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.303833 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: 
\"kubernetes.io/host-path/e8bb6d97-b3b8-4e31-b704-8e565385ab26-run-ovn\") pod \"ovnkube-node-bx64f\" (UID: \"e8bb6d97-b3b8-4e31-b704-8e565385ab26\") " pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.303863 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/c14980d7-1b3b-463b-8f57-f1e1afbd258c-cni-binary-copy\") pod \"multus-additional-cni-plugins-v4wxp\" (UID: \"c14980d7-1b3b-463b-8f57-f1e1afbd258c\") " pod="openshift-multus/multus-additional-cni-plugins-v4wxp" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.303893 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/c14980d7-1b3b-463b-8f57-f1e1afbd258c-os-release\") pod \"multus-additional-cni-plugins-v4wxp\" (UID: \"c14980d7-1b3b-463b-8f57-f1e1afbd258c\") " pod="openshift-multus/multus-additional-cni-plugins-v4wxp" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.303915 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/e8bb6d97-b3b8-4e31-b704-8e565385ab26-run-systemd\") pod \"ovnkube-node-bx64f\" (UID: \"e8bb6d97-b3b8-4e31-b704-8e565385ab26\") " pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.303931 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e8bb6d97-b3b8-4e31-b704-8e565385ab26-var-lib-openvswitch\") pod \"ovnkube-node-bx64f\" (UID: \"e8bb6d97-b3b8-4e31-b704-8e565385ab26\") " pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.304123 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/09da9e14-f6d5-4346-a4a0-c17711e3b603-cnibin\") pod \"multus-fs42r\" (UID: \"09da9e14-f6d5-4346-a4a0-c17711e3b603\") " pod="openshift-multus/multus-fs42r" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.304151 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e8bb6d97-b3b8-4e31-b704-8e565385ab26-host-cni-netd\") pod \"ovnkube-node-bx64f\" (UID: \"e8bb6d97-b3b8-4e31-b704-8e565385ab26\") " pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.304184 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/09da9e14-f6d5-4346-a4a0-c17711e3b603-host-var-lib-cni-bin\") pod \"multus-fs42r\" (UID: \"09da9e14-f6d5-4346-a4a0-c17711e3b603\") " pod="openshift-multus/multus-fs42r" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.304210 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/09da9e14-f6d5-4346-a4a0-c17711e3b603-cni-binary-copy\") pod \"multus-fs42r\" (UID: \"09da9e14-f6d5-4346-a4a0-c17711e3b603\") " pod="openshift-multus/multus-fs42r" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.304232 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/09da9e14-f6d5-4346-a4a0-c17711e3b603-multus-conf-dir\") pod \"multus-fs42r\" (UID: \"09da9e14-f6d5-4346-a4a0-c17711e3b603\") " pod="openshift-multus/multus-fs42r" 
Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.304264 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/3687b313-1df2-4274-80db-8c758b51bf2d-rootfs\") pod \"machine-config-daemon-fb4fr\" (UID: \"3687b313-1df2-4274-80db-8c758b51bf2d\") " pod="openshift-machine-config-operator/machine-config-daemon-fb4fr"
Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.304263 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c14980d7-1b3b-463b-8f57-f1e1afbd258c-system-cni-dir\") pod \"multus-additional-cni-plugins-v4wxp\" (UID: \"c14980d7-1b3b-463b-8f57-f1e1afbd258c\") " pod="openshift-multus/multus-additional-cni-plugins-v4wxp"
Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.304328 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/f19f480e-331f-42f5-a3b6-fd0c6847b157-hosts-file\") pod \"node-resolver-8sptw\" (UID: \"f19f480e-331f-42f5-a3b6-fd0c6847b157\") " pod="openshift-dns/node-resolver-8sptw"
Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.304333 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e8bb6d97-b3b8-4e31-b704-8e565385ab26-etc-openvswitch\") pod \"ovnkube-node-bx64f\" (UID: \"e8bb6d97-b3b8-4e31-b704-8e565385ab26\") " pod="openshift-ovn-kubernetes/ovnkube-node-bx64f"
Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.304557 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/c14980d7-1b3b-463b-8f57-f1e1afbd258c-tuning-conf-dir\") pod \"multus-additional-cni-plugins-v4wxp\" (UID: \"c14980d7-1b3b-463b-8f57-f1e1afbd258c\") " pod="openshift-multus/multus-additional-cni-plugins-v4wxp"
Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.304902 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/09da9e14-f6d5-4346-a4a0-c17711e3b603-multus-daemon-config\") pod \"multus-fs42r\" (UID: \"09da9e14-f6d5-4346-a4a0-c17711e3b603\") " pod="openshift-multus/multus-fs42r"
Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.304983 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/3687b313-1df2-4274-80db-8c758b51bf2d-mcd-auth-proxy-config\") pod \"machine-config-daemon-fb4fr\" (UID: \"3687b313-1df2-4274-80db-8c758b51bf2d\") " pod="openshift-machine-config-operator/machine-config-daemon-fb4fr"
Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.305071 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/c14980d7-1b3b-463b-8f57-f1e1afbd258c-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-v4wxp\" (UID: \"c14980d7-1b3b-463b-8f57-f1e1afbd258c\") " pod="openshift-multus/multus-additional-cni-plugins-v4wxp"
Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.305137 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/09da9e14-f6d5-4346-a4a0-c17711e3b603-os-release\") pod \"multus-fs42r\" (UID: \"09da9e14-f6d5-4346-a4a0-c17711e3b603\") " pod="openshift-multus/multus-fs42r"
Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.305697 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/e8bb6d97-b3b8-4e31-b704-8e565385ab26-ovnkube-config\") pod \"ovnkube-node-bx64f\" (UID: \"e8bb6d97-b3b8-4e31-b704-8e565385ab26\") " pod="openshift-ovn-kubernetes/ovnkube-node-bx64f"
Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.306410 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.306441 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.306470 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.306488 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.306500 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:16Z","lastTransitionTime":"2026-01-21T10:57:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.308978 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.309511 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/3687b313-1df2-4274-80db-8c758b51bf2d-proxy-tls\") pod \"machine-config-daemon-fb4fr\" (UID: \"3687b313-1df2-4274-80db-8c758b51bf2d\") " pod="openshift-machine-config-operator/machine-config-daemon-fb4fr"
Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.309707 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.309720 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 21 10:57:16 crc kubenswrapper[4881]: E0121 10:57:16.309859 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 21 10:57:16 crc kubenswrapper[4881]: E0121 10:57:16.310008 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.327688 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t6nwq\" (UniqueName: \"kubernetes.io/projected/c14980d7-1b3b-463b-8f57-f1e1afbd258c-kube-api-access-t6nwq\") pod \"multus-additional-cni-plugins-v4wxp\" (UID: \"c14980d7-1b3b-463b-8f57-f1e1afbd258c\") " pod="openshift-multus/multus-additional-cni-plugins-v4wxp"
Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.328291 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hml99\" (UniqueName: \"kubernetes.io/projected/3687b313-1df2-4274-80db-8c758b51bf2d-kube-api-access-hml99\") pod \"machine-config-daemon-fb4fr\" (UID: \"3687b313-1df2-4274-80db-8c758b51bf2d\") " pod="openshift-machine-config-operator/machine-config-daemon-fb4fr"
Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.346968 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7kt6w\" (UniqueName: \"kubernetes.io/projected/09da9e14-f6d5-4346-a4a0-c17711e3b603-kube-api-access-7kt6w\") pod \"multus-fs42r\" (UID: \"09da9e14-f6d5-4346-a4a0-c17711e3b603\") " pod="openshift-multus/multus-fs42r"
Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.347373 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hjr7g\" (UniqueName: \"kubernetes.io/projected/f19f480e-331f-42f5-a3b6-fd0c6847b157-kube-api-access-hjr7g\") pod \"node-resolver-8sptw\" (UID: \"f19f480e-331f-42f5-a3b6-fd0c6847b157\") " pod="openshift-dns/node-resolver-8sptw"
Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.362674 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-v4wxp"
Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.374228 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr"
Jan 21 10:57:16 crc kubenswrapper[4881]: W0121 10:57:16.388679 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc14980d7_1b3b_463b_8f57_f1e1afbd258c.slice/crio-8ee7f772c0c098089754b613d4fa12c49ea696eef205b96618c4a6e2b9db4ec5 WatchSource:0}: Error finding container 8ee7f772c0c098089754b613d4fa12c49ea696eef205b96618c4a6e2b9db4ec5: Status 404 returned error can't find the container with id 8ee7f772c0c098089754b613d4fa12c49ea696eef205b96618c4a6e2b9db4ec5
Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.399199 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-fs42r"
Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.406398 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-8sptw"
Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.411364 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.411418 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.411431 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.411446 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.411455 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:16Z","lastTransitionTime":"2026-01-21T10:57:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.415013 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-v4wxp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c14980d7-1b3b-463b-8f57-f1e1afbd258c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-v4wxp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:16Z is after 2025-08-24T17:21:41Z"
Jan 21 10:57:16 crc kubenswrapper[4881]: W0121 10:57:16.421483 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod09da9e14_f6d5_4346_a4a0_c17711e3b603.slice/crio-d27b8e14d97b15bcb1aef61298f9b7ccb557ac67c51e1a710a96f9ba32b14f84 WatchSource:0}: Error finding container d27b8e14d97b15bcb1aef61298f9b7ccb557ac67c51e1a710a96f9ba32b14f84: Status 404 returned error can't find the container with id d27b8e14d97b15bcb1aef61298f9b7ccb557ac67c51e1a710a96f9ba32b14f84
Jan 21 10:57:16 crc kubenswrapper[4881]: W0121 10:57:16.431549 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf19f480e_331f_42f5_a3b6_fd0c6847b157.slice/crio-02885e2b14dd2366051a366641ba0be9c0f8c8bd449f9e7f0dcd7029ec83464d WatchSource:0}: Error finding container 02885e2b14dd2366051a366641ba0be9c0f8c8bd449f9e7f0dcd7029ec83464d: Status 404 returned error can't find the container with id 02885e2b14dd2366051a366641ba0be9c0f8c8bd449f9e7f0dcd7029ec83464d
Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.522205 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.522241 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.522249 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.522264 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.522275 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:16Z","lastTransitionTime":"2026-01-21T10:57:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.677152 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.677187 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.677196 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.677213 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.677223 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:16Z","lastTransitionTime":"2026-01-21T10:57:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.678542 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-8sptw" event={"ID":"f19f480e-331f-42f5-a3b6-fd0c6847b157","Type":"ContainerStarted","Data":"02885e2b14dd2366051a366641ba0be9c0f8c8bd449f9e7f0dcd7029ec83464d"}
Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.679727 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-fs42r" event={"ID":"09da9e14-f6d5-4346-a4a0-c17711e3b603","Type":"ContainerStarted","Data":"d27b8e14d97b15bcb1aef61298f9b7ccb557ac67c51e1a710a96f9ba32b14f84"}
Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.680328 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-v4wxp" event={"ID":"c14980d7-1b3b-463b-8f57-f1e1afbd258c","Type":"ContainerStarted","Data":"8ee7f772c0c098089754b613d4fa12c49ea696eef205b96618c4a6e2b9db4ec5"}
Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.681891 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log"
Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.684377 4881 scope.go:117] "RemoveContainer" containerID="676e764186e37083591f2c779b05dcbf5bc065bde85efade1c25a27a9bf74570"
Jan 21 10:57:16 crc kubenswrapper[4881]: E0121 10:57:16.684556 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792"
Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.708025 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3687b313-1df2-4274-80db-8c758b51bf2d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hml99\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hml99\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fb4fr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:16Z is after 2025-08-24T17:21:41Z"
Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.711050 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"14b423e6f144bf9e1e30c3f0a074ca9edd3c1e2cfa1674ea5b047a6a8fb01d92"}
Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.711120 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"c179ea565cc19d51f2becaf302623c94acbd70749ace24c754af51e3f0101033"}
Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.723640 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"1f8092488edbd3b7f307819f2cad06e6d1a6721d8c2fd8fa05a8f7e652949ca0"}
Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.725338 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" event={"ID":"3687b313-1df2-4274-80db-8c758b51bf2d","Type":"ContainerStarted","Data":"30960a323fe252c1b69c590045a527a2b99ebff962e226251bc9c286c0dae8cf"}
Jan 21 10:57:16 crc kubenswrapper[4881]:
I0121 10:57:16.784819 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.785061 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8bb6d97-b3b8-4e31-b704-8e565385ab26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v
4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\
\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bx64f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:16Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.785318 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.785500 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.785521 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.785535 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:16Z","lastTransitionTime":"2026-01-21T10:57:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.814206 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 10:57:16 crc kubenswrapper[4881]: E0121 10:57:16.814419 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:57:18.814394624 +0000 UTC m=+26.074351093 (durationBeforeRetry 2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.814587 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.814641 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 10:57:16 crc kubenswrapper[4881]: E0121 10:57:16.815164 4881 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 10:57:16 crc kubenswrapper[4881]: E0121 10:57:16.815184 4881 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 10:57:16 crc kubenswrapper[4881]: E0121 10:57:16.815231 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 10:57:18.815219994 +0000 UTC m=+26.075176643 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 10:57:16 crc kubenswrapper[4881]: E0121 10:57:16.815294 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 10:57:18.815284205 +0000 UTC m=+26.075240674 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.819493 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:16Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.858842 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8sptw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f19f480e-331f-42f5-a3b6-fd0c6847b157\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hjr7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8sptw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to 
call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:16Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.894625 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5da31bf1-60a6-4d73-a425-97fe36cd40ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://945977e9e86c32b08a633de6b962033c71982d3a129ac3e5b2705e19b50d6534\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f32dd767af23c59f021d583a23309b2180db86ae71b4ca4a5f436ac1c77d6e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c47dfa90d18e3e30053b0250c7b986bc877ab6bc3a553060f65468416c8105d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator
@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://676e764186e37083591f2c779b05dcbf5bc065bde85efade1c25a27a9bf74570\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d84c900436f03473de2cb7e61d5cacb76cae260a4b22be5debafff2a5cb4d98f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T10:56:58Z\\\",\\\"message\\\":\\\"W0121 10:56:57.509137 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0121 10:56:57.509724 1 crypto.go:601] Generating new CA for check-endpoints-signer@1768993017 cert, and key in /tmp/serving-cert-3442157096/serving-signer.crt, /tmp/serving-cert-3442157096/serving-signer.key\\\\nI0121 10:56:57.842593 1 observer_polling.go:159] Starting file observer\\\\nW0121 10:56:57.865464 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0121 10:56:57.865720 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 10:56:57.868508 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3442157096/tls.crt::/tmp/serving-cert-3442157096/tls.key\\\\\\\"\\\\nF0121 10:56:58.276304 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T10:56:57Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://676e764186e37083591f2c779b05dcbf5bc065bde85efade1c25a27a9bf74570\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 10:57:08.940903 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 10:57:08.942374 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2558080694/tls.crt::/tmp/serving-cert-2558080694/tls.key\\\\\\\"\\\\nI0121 10:57:14.590178 1 requestheader_controller.go:247] Loaded a new request header values for 
RequestHeaderAuthRequestController\\\\nI0121 10:57:14.594387 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 10:57:14.594447 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 10:57:14.594564 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 10:57:14.594575 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 10:57:14.615981 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 10:57:14.616035 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 10:57:14.616045 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 10:57:14.616060 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 10:57:14.616065 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 10:57:14.616071 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 10:57:14.616077 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 10:57:14.616503 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 10:57:14.623960 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T10:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b00c4c20b4212a307993f164d3f2d23b9b8bd823111bef7a385ae8811e147f3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://164282bb15c33a383e2fc11a6617ca4008eec1b59e1c5577f7b34b0fcbddc99f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://164282bb15c33a383e2fc11a6617ca4008eec1b59e1c5577f7b34b0fcbddc99f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"star
tTime\\\":\\\"2026-01-21T10:56:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:16Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.896707 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.896858 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.896934 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.897036 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.897115 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:16Z","lastTransitionTime":"2026-01-21T10:57:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.914593 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:16Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.915642 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.915869 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 10:57:16 crc kubenswrapper[4881]: E0121 10:57:16.915998 4881 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 10:57:16 crc kubenswrapper[4881]: E0121 10:57:16.916052 4881 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 10:57:16 crc kubenswrapper[4881]: E0121 10:57:16.916077 4881 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 10:57:16 crc kubenswrapper[4881]: E0121 10:57:16.916205 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-21 10:57:18.9161744 +0000 UTC m=+26.176130869 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 10:57:16 crc kubenswrapper[4881]: E0121 10:57:16.917490 4881 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 10:57:16 crc kubenswrapper[4881]: E0121 10:57:16.917506 4881 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 10:57:16 crc kubenswrapper[4881]: E0121 10:57:16.917515 4881 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 10:57:16 crc kubenswrapper[4881]: E0121 10:57:16.917542 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-21 10:57:18.917534203 +0000 UTC m=+26.177490672 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.939090 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:16Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.955942 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"adf42b80-04ba-4164-b847-3b7f8b94816b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5f503293f457017fbc87cd1525eaaf06d41044a37f87127e1398bd50a228ea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d93b0b0bb02293efe7c01a9dbc64ff302a7b9fab07a2fe9bef5d0c2b5e8ac30e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"
running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05a4a29ae6a2b0d4acf843ae40de1e262c2149dac28e15697dbcd6d237235f29\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfff7b0c4eae65a4ed6e28b94feef5a5b0c4fb224f9cde2cfd9072727968e754\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:56:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:16Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.978237 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:16Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.997483 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fs42r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09da9e14-f6d5-4346-a4a0-c17711e3b603\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kt6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fs42r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:16Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.999379 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.999419 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.999430 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.999447 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:16 crc kubenswrapper[4881]: I0121 10:57:16.999457 4881 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:16Z","lastTransitionTime":"2026-01-21T10:57:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:17 crc kubenswrapper[4881]: I0121 10:57:17.022280 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"adf42b80-04ba-4164-b847-3b7f8b94816b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5f503293f457017fbc87cd1525eaaf06d41044a37f87127e1398bd50a228ea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d93b0b0bb02293efe7c01a9dbc64ff302a7b9fab07a2fe9bef5d0c2b5e8ac30e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05a4a29ae6a2b0d4acf843ae40de1e262c2149dac28e15697dbcd6d237235f29\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastS
tate\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfff7b0c4eae65a4ed6e28b94feef5a5b0c4fb224f9cde2cfd9072727968e754\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:56:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:17Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:17 crc kubenswrapper[4881]: I0121 10:57:17.035840 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:17Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:17 crc kubenswrapper[4881]: I0121 10:57:17.049429 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fs42r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09da9e14-f6d5-4346-a4a0-c17711e3b603\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kt6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fs42r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:17Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:17 crc kubenswrapper[4881]: I0121 10:57:17.063010 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14b423e6f144bf9e1e30c3f0a074ca9edd3c1e2cfa1674ea5b047a6a8fb01d92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c179ea565cc19d51f2becaf302623c94acbd70749ace24c754af51e3f0101033\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:17Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:17 crc kubenswrapper[4881]: I0121 10:57:17.076851 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:17Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:17 crc kubenswrapper[4881]: I0121 10:57:17.089187 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-v4wxp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c14980d7-1b3b-463b-8f57-f1e1afbd258c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"
/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-v4wxp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:17Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:17 crc kubenswrapper[4881]: I0121 10:57:17.099671 4881 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3687b313-1df2-4274-80db-8c758b51bf2d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hml99\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hml99\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fb4fr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:17Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:17 crc kubenswrapper[4881]: I0121 10:57:17.101348 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:17 crc 
kubenswrapper[4881]: I0121 10:57:17.101380 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:17 crc kubenswrapper[4881]: I0121 10:57:17.101396 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:17 crc kubenswrapper[4881]: I0121 10:57:17.101419 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:17 crc kubenswrapper[4881]: I0121 10:57:17.101433 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:17Z","lastTransitionTime":"2026-01-21T10:57:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:17 crc kubenswrapper[4881]: I0121 10:57:17.105768 4881 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 18:43:24.897549068 +0000 UTC Jan 21 10:57:17 crc kubenswrapper[4881]: I0121 10:57:17.118970 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8bb6d97-b3b8-4e31-b704-8e565385ab26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bx64f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:17Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:17 crc kubenswrapper[4881]: I0121 10:57:17.133794 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f8092488edbd3b7f307819f2cad06e6d1a6721d8c2fd8fa05a8f7e652949ca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:17Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:17 crc kubenswrapper[4881]: I0121 10:57:17.146562 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8sptw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f19f480e-331f-42f5-a3b6-fd0c6847b157\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hjr7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8sptw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:17Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:17 crc kubenswrapper[4881]: I0121 10:57:17.172218 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5da31bf1-60a6-4d73-a425-97fe36cd40ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://945977e9e86c32b08a633de6b962033c71982d3a129ac3e5b2705e19b50d6534\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f32dd767af23c59f021d583a23309b2180db86ae71b4ca4a5f436ac1c77d6e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c47dfa90d18e3e30053b0250c7b986bc877ab6bc3a553060f65468416c8105d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://676e764186e37083591f2c779b05dcbf5bc065bde85efade1c25a27a9bf74570\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://676e764186e37083591f2c779b05dcbf5bc065bde85efade1c25a27a9bf74570\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 10:57:08.940903 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 10:57:08.942374 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2558080694/tls.crt::/tmp/serving-cert-2558080694/tls.key\\\\\\\"\\\\nI0121 10:57:14.590178 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 10:57:14.594387 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 10:57:14.594447 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 10:57:14.594564 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 10:57:14.594575 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 10:57:14.615981 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 10:57:14.616035 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 10:57:14.616045 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 10:57:14.616060 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 10:57:14.616065 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 10:57:14.616071 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 10:57:14.616077 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 10:57:14.616503 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 10:57:14.623960 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T10:56:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b00c4c20b4212a307993f164d3f2d23b9b8bd823111bef7a385ae8811e147f3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://164282bb15c33a383e2fc11a6617ca4008eec1b59e1c5577f7b34b0fcbddc99f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://164282bb15c33a383e2fc11a6617ca4008eec1b59e1c5577f7b34b0fcbddc99f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:56:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:17Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:17 crc kubenswrapper[4881]: I0121 10:57:17.186571 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:17Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:17 crc kubenswrapper[4881]: I0121 10:57:17.199291 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:17Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:17 crc kubenswrapper[4881]: I0121 10:57:17.203256 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:17 crc kubenswrapper[4881]: I0121 10:57:17.203289 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:17 crc kubenswrapper[4881]: I0121 10:57:17.203299 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:17 crc kubenswrapper[4881]: I0121 10:57:17.203312 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:17 crc kubenswrapper[4881]: I0121 10:57:17.203322 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:17Z","lastTransitionTime":"2026-01-21T10:57:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:57:17 crc kubenswrapper[4881]: I0121 10:57:17.270026 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 21 10:57:17 crc kubenswrapper[4881]: I0121 10:57:17.270490 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 21 10:57:17 crc kubenswrapper[4881]: I0121 10:57:17.271837 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/e8bb6d97-b3b8-4e31-b704-8e565385ab26-ovnkube-script-lib\") pod \"ovnkube-node-bx64f\" (UID: \"e8bb6d97-b3b8-4e31-b704-8e565385ab26\") " pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" Jan 21 10:57:17 crc kubenswrapper[4881]: I0121 10:57:17.275210 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/e8bb6d97-b3b8-4e31-b704-8e565385ab26-env-overrides\") pod \"ovnkube-node-bx64f\" (UID: \"e8bb6d97-b3b8-4e31-b704-8e565385ab26\") " pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" Jan 21 10:57:17 crc kubenswrapper[4881]: E0121 10:57:17.304579 4881 secret.go:188] Couldn't get secret openshift-ovn-kubernetes/ovn-node-metrics-cert: failed to sync secret cache: timed out waiting for the condition Jan 21 10:57:17 crc kubenswrapper[4881]: E0121 10:57:17.304696 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e8bb6d97-b3b8-4e31-b704-8e565385ab26-ovn-node-metrics-cert podName:e8bb6d97-b3b8-4e31-b704-8e565385ab26 nodeName:}" failed. No retries permitted until 2026-01-21 10:57:17.804669009 +0000 UTC m=+25.064625488 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "ovn-node-metrics-cert" (UniqueName: "kubernetes.io/secret/e8bb6d97-b3b8-4e31-b704-8e565385ab26-ovn-node-metrics-cert") pod "ovnkube-node-bx64f" (UID: "e8bb6d97-b3b8-4e31-b704-8e565385ab26") : failed to sync secret cache: timed out waiting for the condition Jan 21 10:57:17 crc kubenswrapper[4881]: I0121 10:57:17.306022 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:17 crc kubenswrapper[4881]: I0121 10:57:17.306077 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:17 crc kubenswrapper[4881]: I0121 10:57:17.306088 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:17 crc kubenswrapper[4881]: I0121 10:57:17.306109 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:17 crc kubenswrapper[4881]: I0121 10:57:17.306121 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:17Z","lastTransitionTime":"2026-01-21T10:57:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:17 crc kubenswrapper[4881]: I0121 10:57:17.309774 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 10:57:17 crc kubenswrapper[4881]: E0121 10:57:17.309932 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 10:57:17 crc kubenswrapper[4881]: I0121 10:57:17.313445 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Jan 21 10:57:17 crc kubenswrapper[4881]: E0121 10:57:17.333450 4881 projected.go:288] Couldn't get configMap openshift-ovn-kubernetes/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Jan 21 10:57:17 crc kubenswrapper[4881]: E0121 10:57:17.333553 4881 projected.go:194] Error preparing data for projected volume kube-api-access-kz6fb for pod openshift-ovn-kubernetes/ovnkube-node-bx64f: failed to sync configmap cache: timed out waiting for the condition Jan 21 10:57:17 crc kubenswrapper[4881]: E0121 10:57:17.333667 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e8bb6d97-b3b8-4e31-b704-8e565385ab26-kube-api-access-kz6fb podName:e8bb6d97-b3b8-4e31-b704-8e565385ab26 nodeName:}" failed. No retries permitted until 2026-01-21 10:57:17.833635349 +0000 UTC m=+25.093591818 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-kz6fb" (UniqueName: "kubernetes.io/projected/e8bb6d97-b3b8-4e31-b704-8e565385ab26-kube-api-access-kz6fb") pod "ovnkube-node-bx64f" (UID: "e8bb6d97-b3b8-4e31-b704-8e565385ab26") : failed to sync configmap cache: timed out waiting for the condition Jan 21 10:57:17 crc kubenswrapper[4881]: I0121 10:57:17.355816 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 21 10:57:17 crc kubenswrapper[4881]: I0121 10:57:17.408657 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:17 crc kubenswrapper[4881]: I0121 10:57:17.408695 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:17 crc kubenswrapper[4881]: I0121 10:57:17.408706 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:17 crc kubenswrapper[4881]: I0121 10:57:17.408722 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:17 crc kubenswrapper[4881]: I0121 10:57:17.408734 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:17Z","lastTransitionTime":"2026-01-21T10:57:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:57:17 crc kubenswrapper[4881]: I0121 10:57:17.511619 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:17 crc kubenswrapper[4881]: I0121 10:57:17.511652 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:17 crc kubenswrapper[4881]: I0121 10:57:17.511663 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:17 crc kubenswrapper[4881]: I0121 10:57:17.511679 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:17 crc kubenswrapper[4881]: I0121 10:57:17.511691 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:17Z","lastTransitionTime":"2026-01-21T10:57:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:17 crc kubenswrapper[4881]: I0121 10:57:17.538801 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 21 10:57:17 crc kubenswrapper[4881]: I0121 10:57:17.613703 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:17 crc kubenswrapper[4881]: I0121 10:57:17.613738 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:17 crc kubenswrapper[4881]: I0121 10:57:17.613746 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:17 crc kubenswrapper[4881]: I0121 10:57:17.613761 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:17 crc kubenswrapper[4881]: I0121 10:57:17.613769 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:17Z","lastTransitionTime":"2026-01-21T10:57:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:57:17 crc kubenswrapper[4881]: I0121 10:57:17.646132 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 21 10:57:17 crc kubenswrapper[4881]: I0121 10:57:17.716230 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:17 crc kubenswrapper[4881]: I0121 10:57:17.716279 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:17 crc kubenswrapper[4881]: I0121 10:57:17.716289 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:17 crc kubenswrapper[4881]: I0121 10:57:17.716306 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:17 crc kubenswrapper[4881]: I0121 10:57:17.716319 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:17Z","lastTransitionTime":"2026-01-21T10:57:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:17 crc kubenswrapper[4881]: I0121 10:57:17.728670 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" event={"ID":"3687b313-1df2-4274-80db-8c758b51bf2d","Type":"ContainerStarted","Data":"d5b27143ea6007376f5989ccc3a11947ede80be1b5fac1b738ffbf27fa05c6d1"} Jan 21 10:57:17 crc kubenswrapper[4881]: I0121 10:57:17.728707 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" event={"ID":"3687b313-1df2-4274-80db-8c758b51bf2d","Type":"ContainerStarted","Data":"7172ca109a870f3c1c8d3af0117700b7b23e7a65c3e841f3a4ef3f445e85270d"} Jan 21 10:57:17 crc kubenswrapper[4881]: I0121 10:57:17.729888 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-fs42r" event={"ID":"09da9e14-f6d5-4346-a4a0-c17711e3b603","Type":"ContainerStarted","Data":"821c7c539a796f03cdff04f5f89e6adab4d088c53f5c1ad85851862e5409e7eb"} Jan 21 10:57:17 crc kubenswrapper[4881]: I0121 10:57:17.731229 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-v4wxp" event={"ID":"c14980d7-1b3b-463b-8f57-f1e1afbd258c","Type":"ContainerStarted","Data":"a7dffa6cfe62b953df5f9734726e4b93967d3fce7fa5743b1adef693e4f75e48"} Jan 21 10:57:17 crc kubenswrapper[4881]: I0121 10:57:17.735441 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-8sptw" event={"ID":"f19f480e-331f-42f5-a3b6-fd0c6847b157","Type":"ContainerStarted","Data":"21b25675a85765cfd4abe361687eedfdf5dfffc1c9b83069f14986821acb12d6"} Jan 21 10:57:17 crc kubenswrapper[4881]: I0121 10:57:17.746843 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3687b313-1df2-4274-80db-8c758b51bf2d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5b27143ea6007376f5989ccc3a11947ede80be1b5fac1b738ffbf27fa05c6d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hml99\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7172ca109a870f3c1c8d3af0117700b7b23e7a65c3e841f3a4ef3f445e85270d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hml99\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fb4fr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:17Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:17 crc kubenswrapper[4881]: I0121 10:57:17.777556 4881 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8bb6d97-b3b8-4e31-b704-8e565385ab26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\
\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"re
cursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482
919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bx64f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:17Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:17 crc kubenswrapper[4881]: I0121 10:57:17.797582 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14b423e6f144bf9e1e30c3f0a074ca9edd3c1e2cfa1674ea5b047a6a8fb01d92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c179ea565cc19d51f2becaf302623c94acbd70749ace24c754af51e3f0101033\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/web
hook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:17Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:17 crc kubenswrapper[4881]: I0121 10:57:17.818273 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:17 crc kubenswrapper[4881]: I0121 10:57:17.818315 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:17 crc kubenswrapper[4881]: I0121 10:57:17.818326 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:17 crc kubenswrapper[4881]: I0121 10:57:17.818343 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:17 crc kubenswrapper[4881]: I0121 10:57:17.818355 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:17Z","lastTransitionTime":"2026-01-21T10:57:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:57:17 crc kubenswrapper[4881]: I0121 10:57:17.828232 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/e8bb6d97-b3b8-4e31-b704-8e565385ab26-ovn-node-metrics-cert\") pod \"ovnkube-node-bx64f\" (UID: \"e8bb6d97-b3b8-4e31-b704-8e565385ab26\") " pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" Jan 21 10:57:17 crc kubenswrapper[4881]: I0121 10:57:17.830551 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:17Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:17 crc kubenswrapper[4881]: I0121 10:57:17.835441 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/e8bb6d97-b3b8-4e31-b704-8e565385ab26-ovn-node-metrics-cert\") pod \"ovnkube-node-bx64f\" (UID: \"e8bb6d97-b3b8-4e31-b704-8e565385ab26\") " pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" Jan 21 10:57:17 crc kubenswrapper[4881]: I0121 10:57:17.842748 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-v4wxp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c14980d7-1b3b-463b-8f57-f1e1afbd258c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-v4wxp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:17Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:17 crc kubenswrapper[4881]: I0121 10:57:17.855115 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f8092488edbd3b7f307819f2cad06e6d1a6721d8c2fd8fa05a8f7e652949ca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:17Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:17 crc kubenswrapper[4881]: I0121 10:57:17.860849 4881 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 21 10:57:17 crc kubenswrapper[4881]: I0121 10:57:17.867301 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8sptw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f19f480e-331f-42f5-a3b6-fd0c6847b157\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hjr7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8sptw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:17Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:17 crc kubenswrapper[4881]: I0121 10:57:17.873306 4881 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Jan 21 10:57:17 crc kubenswrapper[4881]: I0121 10:57:17.882983 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:17Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:17 crc kubenswrapper[4881]: I0121 10:57:17.889179 4881 csr.go:261] certificate signing request csr-s78ct is approved, waiting to be issued Jan 21 10:57:17 crc kubenswrapper[4881]: I0121 10:57:17.897301 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5da31bf1-60a6-4d73-a425-97fe36cd40ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://945977e9e86c32b08a633de6b962033c71982d3a129ac3e5b2705e19b50d6534\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f32dd767af23c59f021d583a23309b2180db86ae71b4ca4a5f436ac1c77d6e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c47dfa90d18e3e30053b0250c7b986bc877ab6bc3a553060f65468416c8105d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://676e764186e37083591f2c779b05dcbf5bc065bde85efade1c25a27a9bf74570\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://676e764186e37083591f2c779b05dcbf5bc065bde85efade1c25a27a9bf74570\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 10:57:08.940903 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 10:57:08.942374 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2558080694/tls.crt::/tmp/serving-cert-2558080694/tls.key\\\\\\\"\\\\nI0121 10:57:14.590178 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 10:57:14.594387 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 10:57:14.594447 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 10:57:14.594564 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 10:57:14.594575 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 10:57:14.615981 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 10:57:14.616035 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 10:57:14.616045 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 10:57:14.616060 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 10:57:14.616065 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 10:57:14.616071 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 10:57:14.616077 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 10:57:14.616503 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 10:57:14.623960 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T10:56:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b00c4c20b4212a307993f164d3f2d23b9b8bd823111bef7a385ae8811e147f3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://164282bb15c33a383e2fc11a6617ca4008eec1b59e1c5577f7b34b0fcbddc99f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://164282bb15c33a383e2fc11a6617ca4008eec1b59e1c5577f7b34b0fcbddc99f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:56:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:17Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:17 crc kubenswrapper[4881]: I0121 10:57:17.907845 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:17Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:17 crc kubenswrapper[4881]: I0121 10:57:17.918211 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fs42r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09da9e14-f6d5-4346-a4a0-c17711e3b603\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kt6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fs42r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:17Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:17 crc kubenswrapper[4881]: I0121 10:57:17.921340 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:17 crc kubenswrapper[4881]: I0121 10:57:17.921361 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:17 crc kubenswrapper[4881]: I0121 10:57:17.921368 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:17 crc kubenswrapper[4881]: I0121 10:57:17.921380 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:17 crc kubenswrapper[4881]: I0121 10:57:17.921390 4881 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:17Z","lastTransitionTime":"2026-01-21T10:57:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:17 crc kubenswrapper[4881]: I0121 10:57:17.929850 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kz6fb\" (UniqueName: \"kubernetes.io/projected/e8bb6d97-b3b8-4e31-b704-8e565385ab26-kube-api-access-kz6fb\") pod \"ovnkube-node-bx64f\" (UID: \"e8bb6d97-b3b8-4e31-b704-8e565385ab26\") " pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" Jan 21 10:57:17 crc kubenswrapper[4881]: I0121 10:57:17.931701 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"adf42b80-04ba-4164-b847-3b7f8b94816b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5f503293f457017fbc87cd1525eaaf06d41044a37f87127e1398bd50a228ea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d93b0b0bb02293efe7c01a9dbc64ff302a7b9fab07a2fe9bef5d0c2b5e8ac30e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05a4a29ae6
a2b0d4acf843ae40de1e262c2149dac28e15697dbcd6d237235f29\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfff7b0c4eae65a4ed6e28b94feef5a5b0c4fb224f9cde2cfd9072727968e754\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:56:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:17Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:17 crc kubenswrapper[4881]: I0121 10:57:17.932980 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kz6fb\" (UniqueName: \"kubernetes.io/projected/e8bb6d97-b3b8-4e31-b704-8e565385ab26-kube-api-access-kz6fb\") pod \"ovnkube-node-bx64f\" (UID: \"e8bb6d97-b3b8-4e31-b704-8e565385ab26\") " pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" Jan 21 10:57:17 crc kubenswrapper[4881]: I0121 10:57:17.945337 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:17Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:17 crc kubenswrapper[4881]: I0121 10:57:17.957510 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f8092488edbd3b7f307819f2cad06e6d1a6721d8c2fd8fa05a8f7e652949ca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:17Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:17 crc kubenswrapper[4881]: I0121 10:57:17.982910 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8sptw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f19f480e-331f-42f5-a3b6-fd0c6847b157\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21b25675a85765cfd4abe361687eedfdf5dfffc1c9b83069f14986821acb12d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hjr7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8sptw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:17Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:18 crc kubenswrapper[4881]: I0121 10:57:18.005591 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5da31bf1-60a6-4d73-a425-97fe36cd40ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://945977e9e86c32b08a633de6b962033c71982d3a129ac3e5b2705e19b50d6534\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f32dd767af23c59f021d583a23309b2180db86ae71b4ca4a5f436ac1c77d6e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c47dfa90d18e3e30053b0250c7b986bc877ab6bc3a553060f65468416c8105d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://676e764186e37083591f2c779b05dcbf5bc065bde85efade1c25a27a9bf74570\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://676e764186e37083591f2c779b05dcbf5bc065bde85efade1c25a27a9bf74570\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 10:57:08.940903 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 10:57:08.942374 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2558080694/tls.crt::/tmp/serving-cert-2558080694/tls.key\\\\\\\"\\\\nI0121 10:57:14.590178 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 10:57:14.594387 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 10:57:14.594447 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 10:57:14.594564 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 10:57:14.594575 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 10:57:14.615981 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 10:57:14.616035 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 10:57:14.616045 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 10:57:14.616060 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 10:57:14.616065 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 10:57:14.616071 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 10:57:14.616077 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 10:57:14.616503 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 10:57:14.623960 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T10:56:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b00c4c20b4212a307993f164d3f2d23b9b8bd823111bef7a385ae8811e147f3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://164282bb15c33a383e2fc11a6617ca4008eec1b59e1c5577f7b34b0fcbddc99f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://164282bb15c33a383e2fc11a6617ca4008eec1b59e1c5577f7b34b0fcbddc99f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:56:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:17Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:18 crc kubenswrapper[4881]: I0121 10:57:18.012198 4881 csr.go:257] certificate signing request csr-s78ct is issued Jan 21 10:57:18 crc kubenswrapper[4881]: I0121 10:57:18.023352 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:18 crc kubenswrapper[4881]: I0121 10:57:18.023380 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:18 crc kubenswrapper[4881]: I0121 10:57:18.023388 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:18 crc kubenswrapper[4881]: I0121 10:57:18.023402 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:18 crc kubenswrapper[4881]: I0121 10:57:18.023437 4881 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:18Z","lastTransitionTime":"2026-01-21T10:57:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:18 crc kubenswrapper[4881]: I0121 10:57:18.024898 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:18Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:18 crc kubenswrapper[4881]: I0121 10:57:18.034966 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:18Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:18 crc kubenswrapper[4881]: I0121 10:57:18.046491 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"adf42b80-04ba-4164-b847-3b7f8b94816b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5f503293f457017fbc87cd1525eaaf06d41044a37f87127e1398bd50a228ea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d93b0b0bb02293efe7c01a9dbc64ff302a7b9fab07a2fe9bef5d0c2b5e8ac30e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"
running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05a4a29ae6a2b0d4acf843ae40de1e262c2149dac28e15697dbcd6d237235f29\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfff7b0c4eae65a4ed6e28b94feef5a5b0c4fb224f9cde2cfd9072727968e754\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:56:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:18Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:18 crc kubenswrapper[4881]: I0121 10:57:18.074587 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:18Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:18 crc kubenswrapper[4881]: I0121 10:57:18.106513 4881 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 05:01:23.240728372 +0000 UTC Jan 21 10:57:18 crc kubenswrapper[4881]: I0121 10:57:18.108396 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fs42r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"09da9e14-f6d5-4346-a4a0-c17711e3b603\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://821c7c539a796f03cdff04f5f89e6adab4d088c53f5c1ad85851862e5409e7eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kt6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fs42r\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:18Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:18 crc kubenswrapper[4881]: I0121 10:57:18.126318 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:18 crc kubenswrapper[4881]: I0121 10:57:18.126351 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:18 crc kubenswrapper[4881]: I0121 10:57:18.126359 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:18 crc kubenswrapper[4881]: I0121 10:57:18.126372 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:18 crc kubenswrapper[4881]: I0121 10:57:18.126384 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:18Z","lastTransitionTime":"2026-01-21T10:57:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:18 crc kubenswrapper[4881]: I0121 10:57:18.134154 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14b423e6f144bf9e1e30c3f0a074ca9edd3c1e2cfa1674ea5b047a6a8fb01d92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c179ea565cc19d51f2becaf302623c94acbd70749ace24c754af51e3f0101033\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\
\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:18Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:18 crc kubenswrapper[4881]: I0121 10:57:18.184247 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" Jan 21 10:57:18 crc kubenswrapper[4881]: I0121 10:57:18.228410 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:18 crc kubenswrapper[4881]: I0121 10:57:18.228436 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:18 crc kubenswrapper[4881]: I0121 10:57:18.228445 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:18 crc kubenswrapper[4881]: I0121 10:57:18.228457 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:18 crc kubenswrapper[4881]: I0121 10:57:18.228467 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:18Z","lastTransitionTime":"2026-01-21T10:57:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:57:18 crc kubenswrapper[4881]: I0121 10:57:18.288768 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:18Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:18 crc kubenswrapper[4881]: I0121 10:57:18.309982 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 10:57:18 crc kubenswrapper[4881]: E0121 10:57:18.310147 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 10:57:18 crc kubenswrapper[4881]: I0121 10:57:18.310556 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 10:57:18 crc kubenswrapper[4881]: E0121 10:57:18.310628 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 10:57:18 crc kubenswrapper[4881]: I0121 10:57:18.317233 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-v4wxp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c14980d7-1b3b-463b-8f57-f1e1afbd258c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7dffa6cfe62b953df5f9734726e4b93967d3fce7fa5743b1adef693e4f75e48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"}
,{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-v4wxp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:18Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:18 crc kubenswrapper[4881]: I0121 10:57:18.327287 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3687b313-1df2-4274-80db-8c758b51bf2d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5b27143ea6007376f5989ccc3a11947ede80be1b5fac1b738ffbf27fa05c6d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hml99\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7172ca109a870f3c1c8d3af0117700b7b23e7a65c3e841f3a4ef3f445e85270d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hml99\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fb4fr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:18Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:18 crc kubenswrapper[4881]: I0121 10:57:18.331031 4881 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:18 crc kubenswrapper[4881]: I0121 10:57:18.331062 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:18 crc kubenswrapper[4881]: I0121 10:57:18.331070 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:18 crc kubenswrapper[4881]: I0121 10:57:18.331085 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:18 crc kubenswrapper[4881]: I0121 10:57:18.331094 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:18Z","lastTransitionTime":"2026-01-21T10:57:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:18 crc kubenswrapper[4881]: I0121 10:57:18.346020 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8bb6d97-b3b8-4e31-b704-8e565385ab26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bx64f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:18Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:18 crc kubenswrapper[4881]: I0121 10:57:18.433443 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:18 crc kubenswrapper[4881]: I0121 10:57:18.433478 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:18 crc kubenswrapper[4881]: I0121 
10:57:18.433487 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 10:57:18 crc kubenswrapper[4881]: I0121 10:57:18.433501 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 10:57:18 crc kubenswrapper[4881]: I0121 10:57:18.433540 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:18Z","lastTransitionTime":"2026-01-21T10:57:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 10:57:18 crc kubenswrapper[4881]: I0121 10:57:18.535282 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 10:57:18 crc kubenswrapper[4881]: I0121 10:57:18.535318 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 10:57:18 crc kubenswrapper[4881]: I0121 10:57:18.535328 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 10:57:18 crc kubenswrapper[4881]: I0121 10:57:18.535343 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 10:57:18 crc kubenswrapper[4881]: I0121 10:57:18.535355 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:18Z","lastTransitionTime":"2026-01-21T10:57:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 10:57:18 crc kubenswrapper[4881]: I0121 10:57:18.637524 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 10:57:18 crc kubenswrapper[4881]: I0121 10:57:18.637558 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 10:57:18 crc kubenswrapper[4881]: I0121 10:57:18.637568 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 10:57:18 crc kubenswrapper[4881]: I0121 10:57:18.637582 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 10:57:18 crc kubenswrapper[4881]: I0121 10:57:18.637598 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:18Z","lastTransitionTime":"2026-01-21T10:57:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 10:57:18 crc kubenswrapper[4881]: I0121 10:57:18.738839 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 10:57:18 crc kubenswrapper[4881]: I0121 10:57:18.738875 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 10:57:18 crc kubenswrapper[4881]: I0121 10:57:18.738886 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 10:57:18 crc kubenswrapper[4881]: I0121 10:57:18.738900 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 10:57:18 crc kubenswrapper[4881]: I0121 10:57:18.738911 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:18Z","lastTransitionTime":"2026-01-21T10:57:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 10:57:18 crc kubenswrapper[4881]: I0121 10:57:18.739594 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" event={"ID":"e8bb6d97-b3b8-4e31-b704-8e565385ab26","Type":"ContainerStarted","Data":"a06b3458bc6abd92816719b2c657b7e45cd4d79bda9753bf86e22c8e99a3027c"}
Jan 21 10:57:18 crc kubenswrapper[4881]: I0121 10:57:18.840627 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 10:57:18 crc kubenswrapper[4881]: I0121 10:57:18.840847 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 10:57:18 crc kubenswrapper[4881]: I0121 10:57:18.840924 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 10:57:18 crc kubenswrapper[4881]: I0121 10:57:18.841009 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 10:57:18 crc kubenswrapper[4881]: I0121 10:57:18.841092 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:18Z","lastTransitionTime":"2026-01-21T10:57:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 10:57:18 crc kubenswrapper[4881]: I0121 10:57:18.841576 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 10:57:18 crc kubenswrapper[4881]: I0121 10:57:18.841831 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 21 10:57:18 crc kubenswrapper[4881]: I0121 10:57:18.841860 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 21 10:57:18 crc kubenswrapper[4881]: E0121 10:57:18.842445 4881 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Jan 21 10:57:18 crc kubenswrapper[4881]: E0121 10:57:18.842577 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 10:57:22.842556017 +0000 UTC m=+30.102512546 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Jan 21 10:57:18 crc kubenswrapper[4881]: E0121 10:57:18.842664 4881 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 21 10:57:18 crc kubenswrapper[4881]: E0121 10:57:18.842721 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 10:57:22.842705911 +0000 UTC m=+30.102662470 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 21 10:57:18 crc kubenswrapper[4881]: E0121 10:57:18.842866 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:57:22.842855104 +0000 UTC m=+30.102811573 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 10:57:18 crc kubenswrapper[4881]: I0121 10:57:18.942498 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 21 10:57:18 crc kubenswrapper[4881]: I0121 10:57:18.942557 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 21 10:57:18 crc kubenswrapper[4881]: E0121 10:57:18.942693 4881 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 21 10:57:18 crc kubenswrapper[4881]: E0121 10:57:18.942694 4881 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 21 10:57:18 crc kubenswrapper[4881]: E0121 10:57:18.942713 4881 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 21 10:57:18 crc kubenswrapper[4881]: E0121 10:57:18.942725 4881 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 21 10:57:18 crc kubenswrapper[4881]: E0121 10:57:18.942732 4881 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 21 10:57:18 crc kubenswrapper[4881]: E0121 10:57:18.942739 4881 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 21 10:57:18 crc kubenswrapper[4881]: E0121 10:57:18.942808 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-21 10:57:22.942774715 +0000 UTC m=+30.202731204 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 21 10:57:18 crc kubenswrapper[4881]: E0121 10:57:18.942845 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-21 10:57:22.942836127 +0000 UTC m=+30.202792606 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 21 10:57:18 crc kubenswrapper[4881]: I0121 10:57:18.943409 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 10:57:18 crc kubenswrapper[4881]: I0121 10:57:18.943508 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 10:57:18 crc kubenswrapper[4881]: I0121 10:57:18.943591 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 10:57:18 crc kubenswrapper[4881]: I0121 10:57:18.943671 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 10:57:18 crc kubenswrapper[4881]: I0121 10:57:18.943735 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:18Z","lastTransitionTime":"2026-01-21T10:57:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 10:57:19 crc kubenswrapper[4881]: I0121 10:57:19.012932 4881 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-01-21 10:52:18 +0000 UTC, rotation deadline is 2026-11-28 10:16:18.720862966 +0000 UTC
Jan 21 10:57:19 crc kubenswrapper[4881]: I0121 10:57:19.013249 4881 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 7463h18m59.707618703s for next certificate rotation
Jan 21 10:57:19 crc kubenswrapper[4881]: I0121 10:57:19.046091 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 10:57:19 crc kubenswrapper[4881]: I0121 10:57:19.046122 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 10:57:19 crc kubenswrapper[4881]: I0121 10:57:19.046131 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 10:57:19 crc kubenswrapper[4881]: I0121 10:57:19.046147 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 10:57:19 crc kubenswrapper[4881]: I0121 10:57:19.046160 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:19Z","lastTransitionTime":"2026-01-21T10:57:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 10:57:19 crc kubenswrapper[4881]: I0121 10:57:19.107300 4881 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 05:06:56.150130036 +0000 UTC
Jan 21 10:57:19 crc kubenswrapper[4881]: I0121 10:57:19.148618 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 10:57:19 crc kubenswrapper[4881]: I0121 10:57:19.148652 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 10:57:19 crc kubenswrapper[4881]: I0121 10:57:19.148662 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 10:57:19 crc kubenswrapper[4881]: I0121 10:57:19.148677 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 10:57:19 crc kubenswrapper[4881]: I0121 10:57:19.148687 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:19Z","lastTransitionTime":"2026-01-21T10:57:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 10:57:19 crc kubenswrapper[4881]: I0121 10:57:19.250431 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 10:57:19 crc kubenswrapper[4881]: I0121 10:57:19.250457 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 10:57:19 crc kubenswrapper[4881]: I0121 10:57:19.250467 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 10:57:19 crc kubenswrapper[4881]: I0121 10:57:19.250480 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 10:57:19 crc kubenswrapper[4881]: I0121 10:57:19.250489 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:19Z","lastTransitionTime":"2026-01-21T10:57:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 10:57:19 crc kubenswrapper[4881]: I0121 10:57:19.333439 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 21 10:57:19 crc kubenswrapper[4881]: E0121 10:57:19.333838 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 21 10:57:19 crc kubenswrapper[4881]: I0121 10:57:19.352601 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 10:57:19 crc kubenswrapper[4881]: I0121 10:57:19.352632 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 10:57:19 crc kubenswrapper[4881]: I0121 10:57:19.352641 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 10:57:19 crc kubenswrapper[4881]: I0121 10:57:19.352656 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 10:57:19 crc kubenswrapper[4881]: I0121 10:57:19.352665 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:19Z","lastTransitionTime":"2026-01-21T10:57:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 10:57:19 crc kubenswrapper[4881]: I0121 10:57:19.489891 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 10:57:19 crc kubenswrapper[4881]: I0121 10:57:19.489940 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 10:57:19 crc kubenswrapper[4881]: I0121 10:57:19.489950 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 10:57:19 crc kubenswrapper[4881]: I0121 10:57:19.489967 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 10:57:19 crc kubenswrapper[4881]: I0121 10:57:19.489978 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:19Z","lastTransitionTime":"2026-01-21T10:57:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 10:57:19 crc kubenswrapper[4881]: I0121 10:57:19.593015 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 10:57:19 crc kubenswrapper[4881]: I0121 10:57:19.593059 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 10:57:19 crc kubenswrapper[4881]: I0121 10:57:19.593070 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 10:57:19 crc kubenswrapper[4881]: I0121 10:57:19.593086 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 10:57:19 crc kubenswrapper[4881]: I0121 10:57:19.593098 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:19Z","lastTransitionTime":"2026-01-21T10:57:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 10:57:19 crc kubenswrapper[4881]: I0121 10:57:19.695350 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 10:57:19 crc kubenswrapper[4881]: I0121 10:57:19.695389 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 10:57:19 crc kubenswrapper[4881]: I0121 10:57:19.695402 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 10:57:19 crc kubenswrapper[4881]: I0121 10:57:19.695422 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 10:57:19 crc kubenswrapper[4881]: I0121 10:57:19.695436 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:19Z","lastTransitionTime":"2026-01-21T10:57:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:57:19 crc kubenswrapper[4881]: I0121 10:57:19.745594 4881 generic.go:334] "Generic (PLEG): container finished" podID="e8bb6d97-b3b8-4e31-b704-8e565385ab26" containerID="db879e2b1a43a0ae350c4777ee6355bf5b97b1d5977e909ee0968c77852519dd" exitCode=0 Jan 21 10:57:19 crc kubenswrapper[4881]: I0121 10:57:19.745690 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" event={"ID":"e8bb6d97-b3b8-4e31-b704-8e565385ab26","Type":"ContainerDied","Data":"db879e2b1a43a0ae350c4777ee6355bf5b97b1d5977e909ee0968c77852519dd"} Jan 21 10:57:19 crc kubenswrapper[4881]: I0121 10:57:19.767133 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f8092488edbd3b7f307819f2cad06e6d1a6721d8c2fd8fa05a8f7e652949ca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:19Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:19 crc kubenswrapper[4881]: I0121 10:57:19.786458 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"0ddaa3aae50a05142125742b3cc13e65433fcadbef5b1e273232cffb660cc700"} Jan 21 10:57:19 crc kubenswrapper[4881]: I0121 10:57:19.787521 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8sptw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f19f480e-331f-42f5-a3b6-fd0c6847b157\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21b25675a85765cfd4abe361687eedfdf5dfffc1c9b83069f14986821acb12d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hjr7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8sptw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:19Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:19 crc kubenswrapper[4881]: I0121 10:57:19.788479 4881 generic.go:334] "Generic (PLEG): container finished" podID="c14980d7-1b3b-463b-8f57-f1e1afbd258c" containerID="a7dffa6cfe62b953df5f9734726e4b93967d3fce7fa5743b1adef693e4f75e48" exitCode=0 Jan 21 10:57:19 crc kubenswrapper[4881]: I0121 10:57:19.788548 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-v4wxp" event={"ID":"c14980d7-1b3b-463b-8f57-f1e1afbd258c","Type":"ContainerDied","Data":"a7dffa6cfe62b953df5f9734726e4b93967d3fce7fa5743b1adef693e4f75e48"} Jan 21 10:57:19 crc kubenswrapper[4881]: I0121 10:57:19.798905 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:19 crc kubenswrapper[4881]: I0121 10:57:19.798952 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:19 crc kubenswrapper[4881]: I0121 10:57:19.798969 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Jan 21 10:57:19 crc kubenswrapper[4881]: I0121 10:57:19.798991 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:19 crc kubenswrapper[4881]: I0121 10:57:19.799007 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:19Z","lastTransitionTime":"2026-01-21T10:57:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:19 crc kubenswrapper[4881]: I0121 10:57:19.807460 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5da31bf1-60a6-4d73-a425-97fe36cd40ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://945977e9e86c32b08a633de6b962033c71982d3a129ac3e5b2705e19b50d6534\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f32dd767af23c59f021d583a23309b2180db86ae71b4ca4a5f436ac1c77d6e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-
01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c47dfa90d18e3e30053b0250c7b986bc877ab6bc3a553060f65468416c8105d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://676e764186e37083591f2c779b05dcbf5bc065bde85efade1c25a27a9bf74570\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://676e764186e37083591f2c779b05dcbf5bc065bde85efade1c25a27a9bf74570\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 10:57:08.940903 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 10:57:08.942374 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2558080694/tls.crt::/tmp/serving-cert-2558080694/tls.key\\\\\\\"\\\\nI0121 10:57:14.590178 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 10:57:14.594387 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 10:57:14.594447 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 10:57:14.594564 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 10:57:14.594575 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 10:57:14.615981 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 10:57:14.616035 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 10:57:14.616045 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 10:57:14.616060 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 10:57:14.616065 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 10:57:14.616071 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 10:57:14.616077 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 10:57:14.616503 1 
genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 10:57:14.623960 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T10:56:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b00c4c20b4212a307993f164d3f2d23b9b8bd823111bef7a385ae8811e147f3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://164282bb15c33a383e2fc11a6617ca4008eec1b59e1c5577f7b34b0fcbddc99f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://164282bb15c33a383e2fc11a6617ca4008eec1b59e1c5577f7b34b0fcbddc99f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:56:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:19Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:19 crc kubenswrapper[4881]: I0121 10:57:19.822704 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:19Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:19 crc kubenswrapper[4881]: I0121 10:57:19.836598 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:19Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:19 crc kubenswrapper[4881]: I0121 10:57:19.856265 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"adf42b80-04ba-4164-b847-3b7f8b94816b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5f503293f457017fbc87cd1525eaaf06d41044a37f87127e1398bd50a228ea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d93b0b0bb02293efe7c01a9dbc64ff302a7b9fab07a2fe9bef5d0c2b5e8ac30e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05a4a29ae6a2b0d4acf843ae40de1e262c2149dac28e15697dbcd6d237235f29\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfff7b0c4eae65a4ed6e28b94feef5a5b0c4fb224f9cde2cfd9072727968e754\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:56:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:19Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:19 crc kubenswrapper[4881]: I0121 10:57:19.872515 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:19Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:19 crc kubenswrapper[4881]: I0121 10:57:19.890169 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fs42r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09da9e14-f6d5-4346-a4a0-c17711e3b603\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://821c7c539a796f03cdff04f5f89e6adab4d088c53f5c1ad85851862e5409e7eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountP
ath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kt6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fs42r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:19Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:19 crc kubenswrapper[4881]: I0121 10:57:19.901561 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:19 crc kubenswrapper[4881]: I0121 10:57:19.901594 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:19 crc kubenswrapper[4881]: I0121 10:57:19.901602 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:19 crc kubenswrapper[4881]: I0121 10:57:19.901615 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:19 crc kubenswrapper[4881]: I0121 10:57:19.901623 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:19Z","lastTransitionTime":"2026-01-21T10:57:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:57:19 crc kubenswrapper[4881]: I0121 10:57:19.910538 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14b423e6f144bf9e1e30c3f0a074ca9edd3c1e2cfa1674ea5b047a6a8fb01d92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c179ea565cc19d51f2becaf302623c94acbd70749ace24c754af51e3f0101033\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:19Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:19 crc kubenswrapper[4881]: I0121 10:57:19.926352 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:19Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:19 crc kubenswrapper[4881]: I0121 10:57:19.954197 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-v4wxp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c14980d7-1b3b-463b-8f57-f1e1afbd258c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7dffa6cfe62b953df5f9734726e4b93967d3fce7fa5743b1adef693e4f75e48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCou
nt\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-v4wxp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-21T10:57:19Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:19 crc kubenswrapper[4881]: I0121 10:57:19.975969 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3687b313-1df2-4274-80db-8c758b51bf2d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5b27143ea6007376f5989ccc3a11947ede80be1b5fac1b738ffbf27fa05c6d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hml99\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7172ca109a870f3c1c8d3af0117700b7b23e7a65c3e841f3a4ef3f445e85270d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hml99\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fb4fr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:19Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:19 crc kubenswrapper[4881]: I0121 10:57:19.996531 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8bb6d97-b3b8-4e31-b704-8e565385ab26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"n
ame\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIP
s\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db879e2b1a43a0ae350c4777ee6355bf5b97b1d5977e909ee0968c77852519dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db879e2b1a43a0ae350c4777ee6355bf5b97b1d5977e909ee0968c77852519dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bx64f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:19Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:20 crc kubenswrapper[4881]: I0121 10:57:20.013033 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:20 crc kubenswrapper[4881]: I0121 10:57:20.013080 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:20 crc kubenswrapper[4881]: I0121 10:57:20.013091 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:20 crc kubenswrapper[4881]: I0121 10:57:20.013107 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:20 crc kubenswrapper[4881]: I0121 10:57:20.013120 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:20Z","lastTransitionTime":"2026-01-21T10:57:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:57:20 crc kubenswrapper[4881]: I0121 10:57:20.015670 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-v4wxp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c14980d7-1b3b-463b-8f57-f1e1afbd258c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7dffa6cfe62b953df5f9734726e4b93967d3fce7fa5743b1adef693e4f75e48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7dffa6cfe62b953df5f9734726e4b93967d3fce7fa5743b1adef693e4f75e48\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets
/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\
\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-v4wxp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:20Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:20 crc kubenswrapper[4881]: I0121 10:57:20.026729 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3687b313-1df2-4274-80db-8c758b51bf2d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5b27143ea6007376f5989ccc3a11947ede80be1b5fac1b738ffbf27fa05c6d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hml9
9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7172ca109a870f3c1c8d3af0117700b7b23e7a65c3e841f3a4ef3f445e85270d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hml99\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fb4fr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:20Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:20 crc kubenswrapper[4881]: I0121 10:57:20.044963 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8bb6d97-b3b8-4e31-b704-8e565385ab26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db879e2b1a43a0ae350c4777ee6355bf5b97b1d5977e909ee0968c77852519dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db879e2b1a43a0ae350c4777ee6355bf5b97b1d5977e909ee0968c77852519dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bx64f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:20Z 
is after 2025-08-24T17:21:41Z" Jan 21 10:57:20 crc kubenswrapper[4881]: I0121 10:57:20.057125 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14b423e6f144bf9e1e30c3f0a074ca9edd3c1e2cfa1674ea5b047a6a8fb01d92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c179ea565cc19d51f2becaf302623c94acbd70749ace24c754af51e3f0101033\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:20Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:20 crc kubenswrapper[4881]: I0121 10:57:20.074286 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:20Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:20 crc kubenswrapper[4881]: I0121 10:57:20.089328 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8sptw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f19f480e-331f-42f5-a3b6-fd0c6847b157\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21b25675a85765cfd4abe361687eedfdf5dfffc1c9b83069f14986821acb12d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hjr7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8sptw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:20Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:20 crc kubenswrapper[4881]: I0121 10:57:20.101221 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f8092488edbd3b7f307819f2cad06e6d1a6721d8c2fd8fa05a8f7e652949ca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:20Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:20 crc kubenswrapper[4881]: I0121 10:57:20.107904 4881 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 19:15:34.523778449 +0000 UTC Jan 21 10:57:20 crc kubenswrapper[4881]: I0121 10:57:20.112545 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:20Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:20 crc kubenswrapper[4881]: I0121 10:57:20.115455 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:20 crc kubenswrapper[4881]: I0121 10:57:20.115486 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:20 crc kubenswrapper[4881]: I0121 10:57:20.115495 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:20 crc kubenswrapper[4881]: I0121 10:57:20.115509 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:20 crc kubenswrapper[4881]: I0121 10:57:20.115519 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:20Z","lastTransitionTime":"2026-01-21T10:57:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:57:20 crc kubenswrapper[4881]: I0121 10:57:20.126647 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ddaa3aae50a05142125742b3cc13e65433fcadbef5b1e273232cffb660cc700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:20Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:20 crc kubenswrapper[4881]: I0121 10:57:20.139542 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5da31bf1-60a6-4d73-a425-97fe36cd40ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://945977e9e86c32b08a633de6b962033c71982d3a129ac3e5b2705e19b50d6534\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f32dd767af23c59f021d583a23309b2180db86ae71b4ca4a5f436ac1c77d6e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c47dfa90d18e3e30053b0250c7b986bc877ab6bc3a553060f65468416c8105d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://676e764186e37083591f2c779b05dcbf5bc065bde85efade1c25a27a9bf74570\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://676e764186e37083591f2c779b05dcbf5bc065bde85efade1c25a27a9bf74570\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 10:57:08.940903 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 10:57:08.942374 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2558080694/tls.crt::/tmp/serving-cert-2558080694/tls.key\\\\\\\"\\\\nI0121 10:57:14.590178 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 10:57:14.594387 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 10:57:14.594447 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 10:57:14.594564 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 10:57:14.594575 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 10:57:14.615981 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 10:57:14.616035 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 10:57:14.616045 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 10:57:14.616060 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 10:57:14.616065 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 10:57:14.616071 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 10:57:14.616077 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 10:57:14.616503 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 10:57:14.623960 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T10:56:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b00c4c20b4212a307993f164d3f2d23b9b8bd823111bef7a385ae8811e147f3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://164282bb15c33a383e2fc11a6617ca4008eec1b59e1c5577f7b34b0fcbddc99f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://164282bb15c33a383e2fc11a6617ca4008eec1b59e1c5577f7b34b0fcbddc99f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:56:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:20Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:20 crc kubenswrapper[4881]: I0121 10:57:20.150069 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:20Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:20 crc kubenswrapper[4881]: I0121 10:57:20.161154 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fs42r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09da9e14-f6d5-4346-a4a0-c17711e3b603\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://821c7c539a796f03cdff04f5f89e6adab4d088c53f5c1ad85851862e5409e7eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cn
i/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kt6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fs42r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:20Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:20 crc kubenswrapper[4881]: I0121 10:57:20.174271 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"adf42b80-04ba-4164-b847-3b7f8b94816b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5f503293f457017fbc87cd1525eaaf06d41044a37f87127e1398bd50a228ea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d93b0b0bb02293efe7c01a9dbc64ff302a7b9fab07a2fe9bef5d0c2b5e8ac30e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05a4a29ae6a2b0d4acf843ae40de1e262c2149dac28e15697dbcd6d237235f29\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfff7b0c4eae65a4ed6e28b94feef5a5b0c4fb224f9cde2cfd9072727968e754\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:56:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:20Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:20 crc kubenswrapper[4881]: I0121 10:57:20.217829 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:20 crc kubenswrapper[4881]: I0121 10:57:20.217852 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:20 crc kubenswrapper[4881]: I0121 10:57:20.217860 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:20 crc kubenswrapper[4881]: I0121 10:57:20.217872 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:20 crc kubenswrapper[4881]: I0121 10:57:20.217880 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:20Z","lastTransitionTime":"2026-01-21T10:57:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:20 crc kubenswrapper[4881]: I0121 10:57:20.310413 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 10:57:20 crc kubenswrapper[4881]: E0121 10:57:20.310816 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 10:57:20 crc kubenswrapper[4881]: I0121 10:57:20.310702 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 10:57:20 crc kubenswrapper[4881]: E0121 10:57:20.311057 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 10:57:20 crc kubenswrapper[4881]: I0121 10:57:20.320095 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:20 crc kubenswrapper[4881]: I0121 10:57:20.320354 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:20 crc kubenswrapper[4881]: I0121 10:57:20.320561 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:20 crc kubenswrapper[4881]: I0121 10:57:20.320766 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:20 crc kubenswrapper[4881]: I0121 10:57:20.320875 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:20Z","lastTransitionTime":"2026-01-21T10:57:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:20 crc kubenswrapper[4881]: I0121 10:57:20.423196 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:20 crc kubenswrapper[4881]: I0121 10:57:20.423556 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:20 crc kubenswrapper[4881]: I0121 10:57:20.423756 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:20 crc kubenswrapper[4881]: I0121 10:57:20.423990 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:20 crc kubenswrapper[4881]: I0121 10:57:20.424093 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:20Z","lastTransitionTime":"2026-01-21T10:57:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:57:20 crc kubenswrapper[4881]: I0121 10:57:20.585661 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:20 crc kubenswrapper[4881]: I0121 10:57:20.585893 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:20 crc kubenswrapper[4881]: I0121 10:57:20.585987 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:20 crc kubenswrapper[4881]: I0121 10:57:20.586068 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:20 crc kubenswrapper[4881]: I0121 10:57:20.586146 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:20Z","lastTransitionTime":"2026-01-21T10:57:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:20 crc kubenswrapper[4881]: I0121 10:57:20.699985 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:20 crc kubenswrapper[4881]: I0121 10:57:20.700341 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:20 crc kubenswrapper[4881]: I0121 10:57:20.700362 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:20 crc kubenswrapper[4881]: I0121 10:57:20.700384 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:20 crc kubenswrapper[4881]: I0121 10:57:20.700401 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:20Z","lastTransitionTime":"2026-01-21T10:57:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:57:20 crc kubenswrapper[4881]: I0121 10:57:20.795203 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" event={"ID":"e8bb6d97-b3b8-4e31-b704-8e565385ab26","Type":"ContainerStarted","Data":"d29c170fdf57f1f4c678e2902443d9ef467ac3157370e997fd5596b4eecfb54e"} Jan 21 10:57:20 crc kubenswrapper[4881]: I0121 10:57:20.798000 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-v4wxp" event={"ID":"c14980d7-1b3b-463b-8f57-f1e1afbd258c","Type":"ContainerStarted","Data":"0ceabfac4703edf7fe55ec5dd41e3fc1736424533c54ae12adcdb1f077e1f756"} Jan 21 10:57:20 crc kubenswrapper[4881]: I0121 10:57:20.805852 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:20 crc kubenswrapper[4881]: I0121 10:57:20.805911 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:20 crc kubenswrapper[4881]: I0121 10:57:20.805923 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:20 crc kubenswrapper[4881]: I0121 10:57:20.805944 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:20 crc kubenswrapper[4881]: I0121 10:57:20.805963 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:20Z","lastTransitionTime":"2026-01-21T10:57:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:57:20 crc kubenswrapper[4881]: I0121 10:57:20.815641 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f8092488edbd3b7f307819f2cad06e6d1a6721d8c2fd8fa05a8f7e652949ca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:20Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:20 crc kubenswrapper[4881]: I0121 10:57:20.828836 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8sptw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f19f480e-331f-42f5-a3b6-fd0c6847b157\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21b25675a85765cfd4abe361687eedfdf5dfffc1c9b83069f14986821acb12d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hjr7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8sptw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:20Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:20 crc kubenswrapper[4881]: I0121 10:57:20.843343 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5da31bf1-60a6-4d73-a425-97fe36cd40ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://945977e9e86c32b08a633de6b962033c71982d3a129ac3e5b2705e19b50d6534\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f32dd767af23c59f021d583a23309b2180db86ae71b4ca4a5f436ac1c77d6e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c47dfa90d18e3e30053b0250c7b986bc877ab6bc3a553060f65468416c8105d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://676e764186e37083591f2c779b05dcbf5bc065bde85efade1c25a27a9bf74570\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://676e764186e37083591f2c779b05dcbf5bc065bde85efade1c25a27a9bf74570\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 10:57:08.940903 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 10:57:08.942374 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2558080694/tls.crt::/tmp/serving-cert-2558080694/tls.key\\\\\\\"\\\\nI0121 10:57:14.590178 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 10:57:14.594387 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 10:57:14.594447 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 10:57:14.594564 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 10:57:14.594575 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 10:57:14.615981 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 10:57:14.616035 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 10:57:14.616045 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 10:57:14.616060 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 10:57:14.616065 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 10:57:14.616071 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 10:57:14.616077 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 10:57:14.616503 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 10:57:14.623960 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T10:56:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b00c4c20b4212a307993f164d3f2d23b9b8bd823111bef7a385ae8811e147f3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://164282bb15c33a383e2fc11a6617ca4008eec1b59e1c5577f7b34b0fcbddc99f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://164282bb15c33a383e2fc11a6617ca4008eec1b59e1c5577f7b34b0fcbddc99f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:56:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:20Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:20 crc kubenswrapper[4881]: I0121 10:57:20.861486 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:20Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:20 crc kubenswrapper[4881]: I0121 10:57:20.877286 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ddaa3aae50a05142125742b3cc13e65433fcadbef5b1e273232cffb660cc700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:20Z is after 2025-08-24T17:21:41Z"
Jan 21 10:57:20 crc kubenswrapper[4881]: I0121 10:57:20.908723 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 10:57:20 crc kubenswrapper[4881]: I0121 10:57:20.908753 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 10:57:20 crc kubenswrapper[4881]: I0121 10:57:20.908762 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 10:57:20 crc kubenswrapper[4881]: I0121 10:57:20.908783 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 10:57:20 crc kubenswrapper[4881]: I0121 10:57:20.908805 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:20Z","lastTransitionTime":"2026-01-21T10:57:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 10:57:20 crc kubenswrapper[4881]: I0121 10:57:20.908914 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"adf42b80-04ba-4164-b847-3b7f8b94816b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5f503293f457017fbc87cd1525eaaf06d41044a37f87127e1398bd50a228ea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d93b0b0bb02293efe7c01a9dbc64ff302a7b9fab07a2fe9bef5d0c2b5e8ac30e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05a4a29ae6a2b0d4acf843ae40de1e262c2149dac28e15697dbcd6d237235f29\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfff7b0c4eae65a4ed6e28b94feef5a5b0c4fb224f9cde2cfd9072727968e754\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:56:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:20Z is after 2025-08-24T17:21:41Z"
Jan 21 10:57:20 crc kubenswrapper[4881]: I0121 10:57:20.921052 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:20Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:20 crc kubenswrapper[4881]: I0121 10:57:20.966983 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fs42r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"09da9e14-f6d5-4346-a4a0-c17711e3b603\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://821c7c539a796f03cdff04f5f89e6adab4d088c53f5c1ad85851862e5409e7eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kt6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fs42r\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:20Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:20 crc kubenswrapper[4881]: I0121 10:57:20.980299 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:20Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:21 crc kubenswrapper[4881]: I0121 10:57:21.004296 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-v4wxp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c14980d7-1b3b-463b-8f57-f1e1afbd258c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin 
routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7dffa6cfe62b953df5f9734726e4b93967d3fce7fa5743b1adef693e4f75e48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7dffa6cfe62b953df5f9734726e4b93967d3fce7fa5743b1adef693e4f75e48\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ceabfac4703edf7fe55ec5dd41e3fc1736424533c54ae12adcdb1f077e1f756\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-v4wxp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:21Z is after 2025-08-24T17:21:41Z"
Jan 21 10:57:21 crc kubenswrapper[4881]: I0121 10:57:21.011868 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 10:57:21 crc kubenswrapper[4881]: I0121 10:57:21.011919 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 10:57:21 crc kubenswrapper[4881]: I0121 10:57:21.011931 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 10:57:21 crc kubenswrapper[4881]: I0121 10:57:21.011951 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 10:57:21 crc kubenswrapper[4881]: I0121 10:57:21.011967 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:21Z","lastTransitionTime":"2026-01-21T10:57:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Jan 21 10:57:21 crc kubenswrapper[4881]: I0121 10:57:21.015825 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3687b313-1df2-4274-80db-8c758b51bf2d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5b27143ea6007376f5989ccc3a11947ede80be1b5fac1b738ffbf27fa05c6d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hml99\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7172ca109a870f3c1c8d3af0117700b7b23e7a65c3e841f3a4ef3f445e85270d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hml99\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fb4fr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:21Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:21 crc kubenswrapper[4881]: I0121 10:57:21.032255 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8bb6d97-b3b8-4e31-b704-8e565385ab26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"n
ame\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIP
s\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db879e2b1a43a0ae350c4777ee6355bf5b97b1d5977e909ee0968c77852519dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db879e2b1a43a0ae350c4777ee6355bf5b97b1d5977e909ee0968c77852519dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bx64f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:21Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:21 crc kubenswrapper[4881]: I0121 10:57:21.043170 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14b423e6f144bf9e1e30c3f0a074ca9edd3c1e2cfa1674ea5b047a6a8fb01d92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c179ea565cc19d51f2becaf302623c94acbd70749ace24c754af51e3f0101033\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:21Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:21 crc kubenswrapper[4881]: I0121 10:57:21.109085 4881 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 06:11:25.393152712 +0000 UTC Jan 21 10:57:21 crc kubenswrapper[4881]: I0121 10:57:21.114930 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:21 crc kubenswrapper[4881]: I0121 
10:57:21.114964 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:21 crc kubenswrapper[4881]: I0121 10:57:21.114975 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:21 crc kubenswrapper[4881]: I0121 10:57:21.114994 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:21 crc kubenswrapper[4881]: I0121 10:57:21.115005 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:21Z","lastTransitionTime":"2026-01-21T10:57:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:21 crc kubenswrapper[4881]: I0121 10:57:21.221190 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:21 crc kubenswrapper[4881]: I0121 10:57:21.221220 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:21 crc kubenswrapper[4881]: I0121 10:57:21.221227 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:21 crc kubenswrapper[4881]: I0121 10:57:21.221240 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:21 crc kubenswrapper[4881]: I0121 10:57:21.221248 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:21Z","lastTransitionTime":"2026-01-21T10:57:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:21 crc kubenswrapper[4881]: I0121 10:57:21.312017 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 10:57:21 crc kubenswrapper[4881]: E0121 10:57:21.312154 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 10:57:21 crc kubenswrapper[4881]: I0121 10:57:21.322928 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:21 crc kubenswrapper[4881]: I0121 10:57:21.322954 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:21 crc kubenswrapper[4881]: I0121 10:57:21.322962 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:21 crc kubenswrapper[4881]: I0121 10:57:21.322973 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:21 crc kubenswrapper[4881]: I0121 10:57:21.322982 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:21Z","lastTransitionTime":"2026-01-21T10:57:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:21 crc kubenswrapper[4881]: I0121 10:57:21.486007 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:21 crc kubenswrapper[4881]: I0121 10:57:21.486034 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:21 crc kubenswrapper[4881]: I0121 10:57:21.486044 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:21 crc kubenswrapper[4881]: I0121 10:57:21.486059 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:21 crc kubenswrapper[4881]: I0121 10:57:21.486071 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:21Z","lastTransitionTime":"2026-01-21T10:57:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:57:21 crc kubenswrapper[4881]: I0121 10:57:21.588679 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:21 crc kubenswrapper[4881]: I0121 10:57:21.588773 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:21 crc kubenswrapper[4881]: I0121 10:57:21.588809 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:21 crc kubenswrapper[4881]: I0121 10:57:21.588827 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:21 crc kubenswrapper[4881]: I0121 10:57:21.588839 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:21Z","lastTransitionTime":"2026-01-21T10:57:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:21 crc kubenswrapper[4881]: I0121 10:57:21.697102 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:21 crc kubenswrapper[4881]: I0121 10:57:21.697141 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:21 crc kubenswrapper[4881]: I0121 10:57:21.697153 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:21 crc kubenswrapper[4881]: I0121 10:57:21.697174 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:21 crc kubenswrapper[4881]: I0121 10:57:21.697185 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:21Z","lastTransitionTime":"2026-01-21T10:57:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:21 crc kubenswrapper[4881]: I0121 10:57:21.801161 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:21 crc kubenswrapper[4881]: I0121 10:57:21.801244 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:21 crc kubenswrapper[4881]: I0121 10:57:21.801260 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:21 crc kubenswrapper[4881]: I0121 10:57:21.801280 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:21 crc kubenswrapper[4881]: I0121 10:57:21.801298 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:21Z","lastTransitionTime":"2026-01-21T10:57:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:57:21 crc kubenswrapper[4881]: I0121 10:57:21.807639 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" event={"ID":"e8bb6d97-b3b8-4e31-b704-8e565385ab26","Type":"ContainerStarted","Data":"b2c1d1496a7f7e5fad6a3d4ade4104e8e0c40fd0ca4722ca5e1ca1fbf0ebf3cb"} Jan 21 10:57:21 crc kubenswrapper[4881]: I0121 10:57:21.903737 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:21 crc kubenswrapper[4881]: I0121 10:57:21.903779 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:21 crc kubenswrapper[4881]: I0121 10:57:21.903805 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:21 crc kubenswrapper[4881]: I0121 10:57:21.903824 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:21 crc kubenswrapper[4881]: I0121 10:57:21.903833 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:21Z","lastTransitionTime":"2026-01-21T10:57:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:22 crc kubenswrapper[4881]: I0121 10:57:22.007361 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:22 crc kubenswrapper[4881]: I0121 10:57:22.007429 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:22 crc kubenswrapper[4881]: I0121 10:57:22.007447 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:22 crc kubenswrapper[4881]: I0121 10:57:22.007877 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:22 crc kubenswrapper[4881]: I0121 10:57:22.007918 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:22Z","lastTransitionTime":"2026-01-21T10:57:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:57:22 crc kubenswrapper[4881]: I0121 10:57:22.011506 4881 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 10:57:22 crc kubenswrapper[4881]: I0121 10:57:22.012165 4881 scope.go:117] "RemoveContainer" containerID="676e764186e37083591f2c779b05dcbf5bc065bde85efade1c25a27a9bf74570" Jan 21 10:57:22 crc kubenswrapper[4881]: E0121 10:57:22.012367 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Jan 21 10:57:22 crc kubenswrapper[4881]: I0121 10:57:22.109642 4881 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 10:48:35.057424569 +0000 UTC Jan 21 10:57:22 crc kubenswrapper[4881]: I0121 10:57:22.110433 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:22 crc kubenswrapper[4881]: I0121 10:57:22.110501 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:22 crc kubenswrapper[4881]: I0121 10:57:22.110513 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:22 crc kubenswrapper[4881]: I0121 10:57:22.110526 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:22 crc kubenswrapper[4881]: I0121 10:57:22.110535 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:22Z","lastTransitionTime":"2026-01-21T10:57:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:22 crc kubenswrapper[4881]: I0121 10:57:22.214180 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:22 crc kubenswrapper[4881]: I0121 10:57:22.214231 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:22 crc kubenswrapper[4881]: I0121 10:57:22.214247 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:22 crc kubenswrapper[4881]: I0121 10:57:22.214272 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:22 crc kubenswrapper[4881]: I0121 10:57:22.214290 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:22Z","lastTransitionTime":"2026-01-21T10:57:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:57:22 crc kubenswrapper[4881]: I0121 10:57:22.263322 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-tjwf8"] Jan 21 10:57:22 crc kubenswrapper[4881]: I0121 10:57:22.263697 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-tjwf8" Jan 21 10:57:22 crc kubenswrapper[4881]: I0121 10:57:22.265998 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 21 10:57:22 crc kubenswrapper[4881]: I0121 10:57:22.266875 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 21 10:57:22 crc kubenswrapper[4881]: I0121 10:57:22.266973 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 21 10:57:22 crc kubenswrapper[4881]: I0121 10:57:22.267824 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 21 10:57:22 crc kubenswrapper[4881]: I0121 10:57:22.285420 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5da31bf1-60a6-4d73-a425-97fe36cd40ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://945977e9e86c32b08a633de6b962033c71982d3a129ac3e5b2705e19b50d6534\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f32dd767af23c59f021d583a23309b2180db86ae71b4ca4a5f436ac1c77d6e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c47dfa90d18e3e30053b0250c7b986bc877ab6bc3a553060f65468416c8105d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://676e764186e37083591f2c779b05dcbf5bc065bde85efade1c25a27a9bf74570\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://676e764186e37083591f2c779b05dcbf5bc065bde85efade1c25a27a9bf74570\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 10:57:08.940903 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 10:57:08.942374 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2558080694/tls.crt::/tmp/serving-cert-2558080694/tls.key\\\\\\\"\\\\nI0121 10:57:14.590178 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 10:57:14.594387 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 10:57:14.594447 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 10:57:14.594564 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 10:57:14.594575 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 10:57:14.615981 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 10:57:14.616035 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 10:57:14.616045 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 10:57:14.616060 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 10:57:14.616065 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 10:57:14.616071 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 10:57:14.616077 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 10:57:14.616503 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 10:57:14.623960 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T10:56:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b00c4c20b4212a307993f164d3f2d23b9b8bd823111bef7a385ae8811e147f3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://164282bb15c33a383e2fc11a6617ca4008eec1b59e1c5577f7b34b0fcbddc99f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://164282bb15c33a383e2fc11a6617ca4008eec1b59e1c5577f7b34b0fcbddc99f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:56:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:22Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:22 crc kubenswrapper[4881]: I0121 10:57:22.310486 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 10:57:22 crc kubenswrapper[4881]: E0121 10:57:22.310656 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 10:57:22 crc kubenswrapper[4881]: I0121 10:57:22.310742 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 10:57:22 crc kubenswrapper[4881]: E0121 10:57:22.310813 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 10:57:22 crc kubenswrapper[4881]: I0121 10:57:22.314702 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:22Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:22 crc kubenswrapper[4881]: I0121 10:57:22.317103 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:22 crc kubenswrapper[4881]: I0121 10:57:22.317184 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:22 crc kubenswrapper[4881]: I0121 10:57:22.317206 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:22 crc kubenswrapper[4881]: I0121 10:57:22.317235 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:22 crc kubenswrapper[4881]: I0121 10:57:22.317257 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:22Z","lastTransitionTime":"2026-01-21T10:57:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:57:22 crc kubenswrapper[4881]: I0121 10:57:22.331684 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ddaa3aae50a05142125742b3cc13e65433fcadbef5b1e273232cffb660cc700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:22Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:22 crc kubenswrapper[4881]: I0121 10:57:22.347043 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tjwf8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf4f6fc0-ed4c-47b7-b2bc-8033980781a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:22Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:22Z\\\",\\\"message\\\":\\\"containers with 
unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57d55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:22Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tjwf8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:22Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:22 crc kubenswrapper[4881]: I0121 10:57:22.363306 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"adf42b80-04ba-4164-b847-3b7f8b94816b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5f503293f457017fbc87cd1525eaaf06d41044a37f87127e1398bd50a228ea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d93b0b0bb02293efe7c01a9dbc64ff302a7b9fab07a2fe9bef5d0c2b5e8ac30e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/oc
p-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05a4a29ae6a2b0d4acf843ae40de1e262c2149dac28e15697dbcd6d237235f29\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfff7b0c4eae65a4ed6e28b94feef5a5b0c4fb224f9cde2cfd9072727968e754\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:56:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:22Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:22 crc kubenswrapper[4881]: I0121 10:57:22.373725 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:22Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:22 crc kubenswrapper[4881]: I0121 10:57:22.383244 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/cf4f6fc0-ed4c-47b7-b2bc-8033980781a3-host\") pod \"node-ca-tjwf8\" (UID: \"cf4f6fc0-ed4c-47b7-b2bc-8033980781a3\") " pod="openshift-image-registry/node-ca-tjwf8" Jan 21 10:57:22 crc kubenswrapper[4881]: I0121 10:57:22.383308 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/cf4f6fc0-ed4c-47b7-b2bc-8033980781a3-serviceca\") pod \"node-ca-tjwf8\" (UID: \"cf4f6fc0-ed4c-47b7-b2bc-8033980781a3\") " pod="openshift-image-registry/node-ca-tjwf8" Jan 21 10:57:22 crc kubenswrapper[4881]: I0121 10:57:22.383370 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-57d55\" (UniqueName: \"kubernetes.io/projected/cf4f6fc0-ed4c-47b7-b2bc-8033980781a3-kube-api-access-57d55\") pod \"node-ca-tjwf8\" (UID: \"cf4f6fc0-ed4c-47b7-b2bc-8033980781a3\") " pod="openshift-image-registry/node-ca-tjwf8" Jan 21 10:57:22 crc kubenswrapper[4881]: I0121 10:57:22.384723 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fs42r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"09da9e14-f6d5-4346-a4a0-c17711e3b603\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://821c7c539a796f03cdff04f5f89e6adab4d088c53f5c1ad85851862e5409e7eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kt6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fs42r\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:22Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:22 crc kubenswrapper[4881]: I0121 10:57:22.399935 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14b423e6f144bf9e1e30c3f0a074ca9edd3c1e2cfa1674ea5b047a6a8fb01d92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c179ea565cc19d51f2becaf302623c94acbd70749ace24c754af51e3f0101033\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:22Z is after 
2025-08-24T17:21:41Z" Jan 21 10:57:22 crc kubenswrapper[4881]: I0121 10:57:22.411360 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:22Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:22 crc kubenswrapper[4881]: I0121 10:57:22.419502 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:22 crc kubenswrapper[4881]: I0121 10:57:22.419533 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:22 crc kubenswrapper[4881]: I0121 10:57:22.419541 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:22 crc kubenswrapper[4881]: I0121 10:57:22.419558 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:22 crc kubenswrapper[4881]: I0121 10:57:22.419567 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:22Z","lastTransitionTime":"2026-01-21T10:57:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:22 crc kubenswrapper[4881]: I0121 10:57:22.428359 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-v4wxp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c14980d7-1b3b-463b-8f57-f1e1afbd258c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7dffa6cfe62b953df5f9734726e4b93967d3fce7fa5743b1adef693e4f75e48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7dffa6cfe62b953df5f9734726e4b93967d3fce7fa5743b1adef693e4f75e48\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountP
ath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ceabfac4703edf7fe55ec5dd41e3fc1736424533c54ae12adcdb1f077e1f756\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-v4wxp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:22Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:22 crc kubenswrapper[4881]: I0121 10:57:22.445669 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3687b313-1df2-4274-80db-8c758b51bf2d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5b27143ea6007376f5989ccc3a11947ede80be1b5fac1b738ffbf27fa05c6d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hml99\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7172ca109a870f3c1c8d3af0117700b7b23e7a65c3e841f3a4ef3f445e85270d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hml99\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fb4fr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:22Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:22 crc kubenswrapper[4881]: I0121 10:57:22.466274 4881 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8bb6d97-b3b8-4e31-b704-8e565385ab26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",
\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47e
f0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db879e2b1a43a0ae350c4777ee6355bf5b97b1d5977e909ee0968c77852519dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17
b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db879e2b1a43a0ae350c4777ee6355bf5b97b1d5977e909ee0968c77852519dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bx64f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:22Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:22 crc kubenswrapper[4881]: I0121 10:57:22.481086 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f8092488edbd3b7f307819f2cad06e6d1a6721d8c2fd8fa05a8f7e652949ca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: 
failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:22Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:22 crc kubenswrapper[4881]: I0121 10:57:22.484746 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/cf4f6fc0-ed4c-47b7-b2bc-8033980781a3-serviceca\") pod \"node-ca-tjwf8\" (UID: \"cf4f6fc0-ed4c-47b7-b2bc-8033980781a3\") " pod="openshift-image-registry/node-ca-tjwf8" Jan 21 10:57:22 crc kubenswrapper[4881]: I0121 10:57:22.485042 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-57d55\" (UniqueName: \"kubernetes.io/projected/cf4f6fc0-ed4c-47b7-b2bc-8033980781a3-kube-api-access-57d55\") pod \"node-ca-tjwf8\" (UID: \"cf4f6fc0-ed4c-47b7-b2bc-8033980781a3\") " pod="openshift-image-registry/node-ca-tjwf8" Jan 21 10:57:22 crc kubenswrapper[4881]: I0121 10:57:22.485154 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/cf4f6fc0-ed4c-47b7-b2bc-8033980781a3-host\") pod \"node-ca-tjwf8\" (UID: \"cf4f6fc0-ed4c-47b7-b2bc-8033980781a3\") " pod="openshift-image-registry/node-ca-tjwf8" Jan 21 10:57:22 crc kubenswrapper[4881]: I0121 10:57:22.485279 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/cf4f6fc0-ed4c-47b7-b2bc-8033980781a3-host\") pod \"node-ca-tjwf8\" (UID: \"cf4f6fc0-ed4c-47b7-b2bc-8033980781a3\") " pod="openshift-image-registry/node-ca-tjwf8" Jan 21 10:57:22 crc kubenswrapper[4881]: I0121 10:57:22.486456 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/cf4f6fc0-ed4c-47b7-b2bc-8033980781a3-serviceca\") pod \"node-ca-tjwf8\" (UID: \"cf4f6fc0-ed4c-47b7-b2bc-8033980781a3\") " pod="openshift-image-registry/node-ca-tjwf8" Jan 21 10:57:22 crc kubenswrapper[4881]: I0121 10:57:22.500247 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8sptw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f19f480e-331f-42f5-a3b6-fd0c6847b157\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21b25675a85765cfd4abe361687eedfdf5dfffc1c9b83069f14986821acb12d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hjr7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8sptw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:22Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:22 crc kubenswrapper[4881]: I0121 10:57:22.509963 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-57d55\" (UniqueName: \"kubernetes.io/projected/cf4f6fc0-ed4c-47b7-b2bc-8033980781a3-kube-api-access-57d55\") pod \"node-ca-tjwf8\" (UID: \"cf4f6fc0-ed4c-47b7-b2bc-8033980781a3\") " pod="openshift-image-registry/node-ca-tjwf8" Jan 21 10:57:22 crc kubenswrapper[4881]: I0121 10:57:22.521669 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:22 crc kubenswrapper[4881]: I0121 10:57:22.521706 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:22 crc kubenswrapper[4881]: I0121 10:57:22.521718 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:22 crc kubenswrapper[4881]: I0121 10:57:22.521734 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:22 crc 
kubenswrapper[4881]: I0121 10:57:22.521744 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:22Z","lastTransitionTime":"2026-01-21T10:57:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:22 crc kubenswrapper[4881]: I0121 10:57:22.576590 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-tjwf8" Jan 21 10:57:22 crc kubenswrapper[4881]: W0121 10:57:22.593868 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcf4f6fc0_ed4c_47b7_b2bc_8033980781a3.slice/crio-01617000387d477911d9cb738c195ac6bfacdc21c7a315e15ef50fc5fb308e58 WatchSource:0}: Error finding container 01617000387d477911d9cb738c195ac6bfacdc21c7a315e15ef50fc5fb308e58: Status 404 returned error can't find the container with id 01617000387d477911d9cb738c195ac6bfacdc21c7a315e15ef50fc5fb308e58 Jan 21 10:57:22 crc kubenswrapper[4881]: I0121 10:57:22.624575 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:22 crc kubenswrapper[4881]: I0121 10:57:22.624622 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:22 crc kubenswrapper[4881]: I0121 10:57:22.624632 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:22 crc kubenswrapper[4881]: I0121 10:57:22.624647 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:22 crc kubenswrapper[4881]: I0121 10:57:22.624660 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:22Z","lastTransitionTime":"2026-01-21T10:57:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:22 crc kubenswrapper[4881]: I0121 10:57:22.727762 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:22 crc kubenswrapper[4881]: I0121 10:57:22.727799 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:22 crc kubenswrapper[4881]: I0121 10:57:22.727810 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:22 crc kubenswrapper[4881]: I0121 10:57:22.727826 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:22 crc kubenswrapper[4881]: I0121 10:57:22.727836 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:22Z","lastTransitionTime":"2026-01-21T10:57:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:57:22 crc kubenswrapper[4881]: I0121 10:57:22.813875 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" event={"ID":"e8bb6d97-b3b8-4e31-b704-8e565385ab26","Type":"ContainerStarted","Data":"599bfaf6bfc412f779f4732cf9f4273382c1f0af20576581264ea4fc63b2ec38"} Jan 21 10:57:22 crc kubenswrapper[4881]: I0121 10:57:22.814222 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" event={"ID":"e8bb6d97-b3b8-4e31-b704-8e565385ab26","Type":"ContainerStarted","Data":"e45d7cde8c26fc00bb1b5ad5cc88c41aff02acf0860856e86f3c3bfdc01e7045"} Jan 21 10:57:22 crc kubenswrapper[4881]: I0121 10:57:22.815884 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-tjwf8" event={"ID":"cf4f6fc0-ed4c-47b7-b2bc-8033980781a3","Type":"ContainerStarted","Data":"01617000387d477911d9cb738c195ac6bfacdc21c7a315e15ef50fc5fb308e58"} Jan 21 10:57:22 crc kubenswrapper[4881]: I0121 10:57:22.831301 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:22 crc kubenswrapper[4881]: I0121 10:57:22.831361 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:22 crc kubenswrapper[4881]: I0121 10:57:22.831372 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:22 crc kubenswrapper[4881]: I0121 10:57:22.831391 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:22 crc kubenswrapper[4881]: I0121 10:57:22.831406 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:22Z","lastTransitionTime":"2026-01-21T10:57:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:57:22 crc kubenswrapper[4881]: I0121 10:57:22.890047 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 10:57:22 crc kubenswrapper[4881]: I0121 10:57:22.890226 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 10:57:22 crc kubenswrapper[4881]: I0121 10:57:22.890290 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 10:57:22 crc kubenswrapper[4881]: E0121 10:57:22.890421 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:57:30.890384275 +0000 UTC m=+38.150340744 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:57:22 crc kubenswrapper[4881]: E0121 10:57:22.890442 4881 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 10:57:22 crc kubenswrapper[4881]: E0121 10:57:22.890533 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 10:57:30.890526439 +0000 UTC m=+38.150482898 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Jan 21 10:57:22 crc kubenswrapper[4881]: E0121 10:57:22.890445 4881 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 21 10:57:22 crc kubenswrapper[4881]: E0121 10:57:22.890676 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 10:57:30.890656172 +0000 UTC m=+38.150612831 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 21 10:57:22 crc kubenswrapper[4881]: I0121 10:57:22.932365 4881 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials"
Jan 21 10:57:22 crc kubenswrapper[4881]: W0121 10:57:22.938035 4881 reflector.go:484] object-"openshift-image-registry"/"node-ca-dockercfg-4777p": watch of *v1.Secret ended with: very short watch: object-"openshift-image-registry"/"node-ca-dockercfg-4777p": Unexpected watch close - watch lasted less than a second and no items received
Jan 21 10:57:22 crc kubenswrapper[4881]: W0121 10:57:22.939297 4881 reflector.go:484] object-"openshift-image-registry"/"image-registry-certificates": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-image-registry"/"image-registry-certificates": Unexpected watch close - watch lasted less than a second and no items received
Jan 21 10:57:22 crc kubenswrapper[4881]: W0121 10:57:22.939497 4881 reflector.go:484] object-"openshift-image-registry"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-image-registry"/"kube-root-ca.crt": Unexpected watch close - watch lasted less than a second and no items received
Jan 21 10:57:22 crc kubenswrapper[4881]: W0121 10:57:22.940136 4881 reflector.go:484] object-"openshift-image-registry"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-image-registry"/"openshift-service-ca.crt": Unexpected watch close - watch lasted less than a second and no items received
Jan 21 10:57:22 crc kubenswrapper[4881]: I0121 10:57:22.945251 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 10:57:22 crc kubenswrapper[4881]: I0121 10:57:22.945275 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 10:57:22 crc kubenswrapper[4881]: I0121 10:57:22.945283 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 10:57:22 crc kubenswrapper[4881]: I0121 10:57:22.945297 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 10:57:22 crc kubenswrapper[4881]: I0121 10:57:22.945307 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:22Z","lastTransitionTime":"2026-01-21T10:57:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 10:57:22 crc kubenswrapper[4881]: I0121 10:57:22.991876 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 21 10:57:22 crc kubenswrapper[4881]: I0121 10:57:22.991979 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 21 10:57:22 crc kubenswrapper[4881]: E0121 10:57:22.992179 4881 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 21 10:57:22 crc kubenswrapper[4881]: E0121 10:57:22.992208 4881 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 21 10:57:22 crc kubenswrapper[4881]: E0121 10:57:22.992226 4881 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 21 10:57:22 crc kubenswrapper[4881]: E0121 10:57:22.992297 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-21 10:57:30.992273905 +0000 UTC m=+38.252230374 (durationBeforeRetry 8s).
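The nestedpendingoperations entries record when each failed volume operation may retry, with the deadline expressed both in wall-clock time and as a monotonic offset (m=+38.25…), and the per-volume delay grows exponentially. The sketch below reproduces a doubling schedule; the 500 ms initial step, factor of 2, and roughly two-minute cap are assumptions drawn from upstream kubelet defaults — only the 8 s step is confirmed by this log.

from datetime import timedelta

# Assumed backoff constants; only the 8s step is visible in the log above.
INITIAL = timedelta(milliseconds=500)
FACTOR = 2.0
CAP = timedelta(minutes=2, seconds=2)

def backoff_schedule(attempts: int):
    """Yield (attempt number, delay before that retry) for a doubling backoff."""
    delay = INITIAL
    for attempt in range(1, attempts + 1):
        yield attempt, min(delay, CAP)
        delay = delay * FACTOR

for attempt, delay in backoff_schedule(8):
    print(f"attempt {attempt}: wait {delay.total_seconds():g}s")
# Attempt 5 waits 8s, matching "durationBeforeRetry 8s" in the entries above.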
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 21 10:57:22 crc kubenswrapper[4881]: E0121 10:57:22.992573 4881 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 21 10:57:22 crc kubenswrapper[4881]: E0121 10:57:22.992665 4881 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 21 10:57:22 crc kubenswrapper[4881]: E0121 10:57:22.992730 4881 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 21 10:57:22 crc kubenswrapper[4881]: E0121 10:57:22.992872 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-21 10:57:30.992848918 +0000 UTC m=+38.252805387 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 21 10:57:23 crc kubenswrapper[4881]: I0121 10:57:23.083009 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 10:57:23 crc kubenswrapper[4881]: I0121 10:57:23.083106 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 10:57:23 crc kubenswrapper[4881]: I0121 10:57:23.083117 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 10:57:23 crc kubenswrapper[4881]: I0121 10:57:23.083189 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 10:57:23 crc kubenswrapper[4881]: I0121 10:57:23.083217 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:23Z","lastTransitionTime":"2026-01-21T10:57:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 10:57:23 crc kubenswrapper[4881]: I0121 10:57:23.110560 4881 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 21:24:37.429630896 +0000 UTC
Jan 21 10:57:23 crc kubenswrapper[4881]: I0121 10:57:23.185585 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 10:57:23 crc kubenswrapper[4881]: I0121 10:57:23.185622 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 10:57:23 crc kubenswrapper[4881]: I0121 10:57:23.185630 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 10:57:23 crc kubenswrapper[4881]: I0121 10:57:23.185643 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 10:57:23 crc kubenswrapper[4881]: I0121 10:57:23.185653 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:23Z","lastTransitionTime":"2026-01-21T10:57:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 10:57:23 crc kubenswrapper[4881]: I0121 10:57:23.289188 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 10:57:23 crc kubenswrapper[4881]: I0121 10:57:23.289260 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 10:57:23 crc kubenswrapper[4881]: I0121 10:57:23.289276 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 10:57:23 crc kubenswrapper[4881]: I0121 10:57:23.289308 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 10:57:23 crc kubenswrapper[4881]: I0121 10:57:23.289325 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:23Z","lastTransitionTime":"2026-01-21T10:57:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 10:57:23 crc kubenswrapper[4881]: I0121 10:57:23.309849 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 21 10:57:23 crc kubenswrapper[4881]: E0121 10:57:23.310057 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 10:57:23 crc kubenswrapper[4881]: I0121 10:57:23.333424 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:23Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:23 crc kubenswrapper[4881]: I0121 10:57:23.349594 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ddaa3aae50a05142125742b3cc13e65433fcadbef5b1e273232cffb660cc700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:23Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:23 crc kubenswrapper[4881]: I0121 10:57:23.367513 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tjwf8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf4f6fc0-ed4c-47b7-b2bc-8033980781a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:22Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57d55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:22Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tjwf8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:23Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:23 crc kubenswrapper[4881]: I0121 10:57:23.382274 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5da31bf1-60a6-4d73-a425-97fe36cd40ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://945977e9e86c32b08a633de6b962033c71982d3a129ac3e5b2705e19b50d6534\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f32dd767af23c59f021d583a23309b2180db86ae71b4ca4a5f436ac1c77d6e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c47dfa90d18e3e30053b0250c7b986bc877ab6bc3a553060f65468416c8105d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://676e764186e37083591f2c779b05dcbf5bc065bde85efade1c25a27a9bf74570\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://676e764186e37083591f2c779b05dcbf5bc065bde85efade1c25a27a9bf74570\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 10:57:08.940903 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 10:57:08.942374 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2558080694/tls.crt::/tmp/serving-cert-2558080694/tls.key\\\\\\\"\\\\nI0121 10:57:14.590178 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 10:57:14.594387 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 10:57:14.594447 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 10:57:14.594564 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 10:57:14.594575 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 10:57:14.615981 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 10:57:14.616035 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 10:57:14.616045 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 10:57:14.616060 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 10:57:14.616065 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 10:57:14.616071 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 10:57:14.616077 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 10:57:14.616503 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 10:57:14.623960 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T10:56:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b00c4c20b4212a307993f164d3f2d23b9b8bd823111bef7a385ae8811e147f3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://164282bb15c33a383e2fc11a6617ca4008eec1b59e1c5577f7b34b0fcbddc99f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://164282bb15c33a383e2fc11a6617ca4008eec1b59e1c5577f7b34b0fcbddc99f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:56:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:23Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:23 crc kubenswrapper[4881]: I0121 10:57:23.391922 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:23 crc kubenswrapper[4881]: I0121 10:57:23.391973 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:23 crc kubenswrapper[4881]: I0121 10:57:23.391987 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:23 crc kubenswrapper[4881]: I0121 10:57:23.392007 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:23 crc kubenswrapper[4881]: I0121 10:57:23.392020 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:23Z","lastTransitionTime":"2026-01-21T10:57:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:23 crc kubenswrapper[4881]: I0121 10:57:23.399391 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:23Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:23 crc kubenswrapper[4881]: I0121 10:57:23.424715 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fs42r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"09da9e14-f6d5-4346-a4a0-c17711e3b603\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://821c7c539a796f03cdff04f5f89e6adab4d088c53f5c1ad85851862e5409e7eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kt6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fs42r\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:23Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:23 crc kubenswrapper[4881]: I0121 10:57:23.449376 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"adf42b80-04ba-4164-b847-3b7f8b94816b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5f503293f457017fbc87cd1525eaaf06d41044a37f87127e1398bd50a228ea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d93b0b0bb02293efe7c01a9dbc64ff302a7b9fab07a2fe9bef5d0c2b5e8ac30e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05a4a29ae6a2b0d4acf843ae40de1e262c2149dac28e15697dbcd6d237235f29\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"st
arted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfff7b0c4eae65a4ed6e28b94feef5a5b0c4fb224f9cde2cfd9072727968e754\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:56:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:23Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:23 crc kubenswrapper[4881]: I0121 10:57:23.478260 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-v4wxp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c14980d7-1b3b-463b-8f57-f1e1afbd258c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7dffa6cfe62b953df5f9734726e4b93967d3fce7fa5743b1adef693e4f75e48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7dffa6cfe62b953df5f9734726e4b93967d3fce7fa5743b1adef693e4f75e48\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ceabfac4703edf7fe55ec5dd41e3fc1736424533c54ae12adcdb1f077e1f756\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64
b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-v4wxp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:23Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:23 crc kubenswrapper[4881]: I0121 10:57:23.494332 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:23 crc kubenswrapper[4881]: I0121 10:57:23.494388 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:23 crc kubenswrapper[4881]: I0121 10:57:23.494415 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:23 crc kubenswrapper[4881]: I0121 10:57:23.494449 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:23 crc kubenswrapper[4881]: I0121 10:57:23.494472 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:23Z","lastTransitionTime":"2026-01-21T10:57:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:23 crc kubenswrapper[4881]: I0121 10:57:23.507850 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3687b313-1df2-4274-80db-8c758b51bf2d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5b27143ea6007376f5989ccc3a11947ede80be1b5fac1b738ffbf27fa05c6d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hml99\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"container
ID\\\":\\\"cri-o://7172ca109a870f3c1c8d3af0117700b7b23e7a65c3e841f3a4ef3f445e85270d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hml99\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fb4fr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:23Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:23 crc kubenswrapper[4881]: I0121 10:57:23.551550 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8bb6d97-b3b8-4e31-b704-8e565385ab26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db879e2b1a43a0ae350c4777ee6355bf5b97b1d5977e909ee0968c77852519dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db879e2b1a43a0ae350c4777ee6355bf5b97b1d5977e909ee0968c77852519dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bx64f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:23Z 
is after 2025-08-24T17:21:41Z" Jan 21 10:57:23 crc kubenswrapper[4881]: I0121 10:57:23.565937 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14b423e6f144bf9e1e30c3f0a074ca9edd3c1e2cfa1674ea5b047a6a8fb01d92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c179ea565cc19d51f2becaf302623c94acbd70749ace24c754af51e3f0101033\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:23Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:23 crc kubenswrapper[4881]: I0121 10:57:23.580206 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:23Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:23 crc kubenswrapper[4881]: I0121 10:57:23.589698 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8sptw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f19f480e-331f-42f5-a3b6-fd0c6847b157\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21b25675a85765cfd4abe361687eedfdf5dfffc1c9b83069f14986821acb12d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hjr7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8sptw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:23Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:23 crc kubenswrapper[4881]: I0121 10:57:23.596753 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:23 crc kubenswrapper[4881]: I0121 10:57:23.596801 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:23 crc kubenswrapper[4881]: I0121 10:57:23.596813 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:23 crc kubenswrapper[4881]: I0121 10:57:23.596831 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:23 crc kubenswrapper[4881]: I0121 10:57:23.596842 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:23Z","lastTransitionTime":"2026-01-21T10:57:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:23 crc kubenswrapper[4881]: I0121 10:57:23.602716 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f8092488edbd3b7f307819f2cad06e6d1a6721d8c2fd8fa05a8f7e652949ca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:23Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:23 crc kubenswrapper[4881]: I0121 10:57:23.698645 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:23 crc kubenswrapper[4881]: I0121 10:57:23.698688 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:23 crc kubenswrapper[4881]: I0121 10:57:23.698699 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:23 crc kubenswrapper[4881]: I0121 10:57:23.698721 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:23 crc kubenswrapper[4881]: I0121 10:57:23.698731 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:23Z","lastTransitionTime":"2026-01-21T10:57:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:23 crc kubenswrapper[4881]: I0121 10:57:23.801900 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:23 crc kubenswrapper[4881]: I0121 10:57:23.801943 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:23 crc kubenswrapper[4881]: I0121 10:57:23.801953 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:23 crc kubenswrapper[4881]: I0121 10:57:23.801968 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:23 crc kubenswrapper[4881]: I0121 10:57:23.801977 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:23Z","lastTransitionTime":"2026-01-21T10:57:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:23 crc kubenswrapper[4881]: I0121 10:57:23.822098 4881 generic.go:334] "Generic (PLEG): container finished" podID="c14980d7-1b3b-463b-8f57-f1e1afbd258c" containerID="0ceabfac4703edf7fe55ec5dd41e3fc1736424533c54ae12adcdb1f077e1f756" exitCode=0 Jan 21 10:57:23 crc kubenswrapper[4881]: I0121 10:57:23.822178 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-v4wxp" event={"ID":"c14980d7-1b3b-463b-8f57-f1e1afbd258c","Type":"ContainerDied","Data":"0ceabfac4703edf7fe55ec5dd41e3fc1736424533c54ae12adcdb1f077e1f756"} Jan 21 10:57:23 crc kubenswrapper[4881]: I0121 10:57:23.826807 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-tjwf8" event={"ID":"cf4f6fc0-ed4c-47b7-b2bc-8033980781a3","Type":"ContainerStarted","Data":"9164acd9a91836c25b02986f204dd0302964f6bff6fe077971d37eebd4b560ac"} Jan 21 10:57:23 crc kubenswrapper[4881]: I0121 10:57:23.838013 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" event={"ID":"e8bb6d97-b3b8-4e31-b704-8e565385ab26","Type":"ContainerStarted","Data":"9454db8c9af3187ee06e017ac192668036b1f565305a37de61c3cffe4d94e7db"} Jan 21 10:57:23 crc kubenswrapper[4881]: I0121 10:57:23.838062 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" event={"ID":"e8bb6d97-b3b8-4e31-b704-8e565385ab26","Type":"ContainerStarted","Data":"f7fa3136ee876ebb14cd5164f9176c91c60cc04fd7882a3ad0957ce3a36184d6"} Jan 21 10:57:23 crc kubenswrapper[4881]: I0121 10:57:23.847282 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14b423e6f144bf9e1e30c3f0a074ca9edd3c1e2cfa1674ea5b047a6a8fb01d92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c179ea565cc19d51f2becaf302623c94acbd70749ace24c754af51e3f0101033\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:23Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:23 crc kubenswrapper[4881]: I0121 10:57:23.866417 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:23Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:23 crc kubenswrapper[4881]: I0121 10:57:23.873327 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 21 10:57:23 crc kubenswrapper[4881]: I0121 10:57:23.895405 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-v4wxp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c14980d7-1b3b-463b-8f57-f1e1afbd258c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7dffa6cfe62b953df5f9734726e4b93967d3fce7fa5743b1adef693e4f75e48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7dffa6cfe62b953df5f9734726e4b93967d3fce7fa5743b1adef693e4f75e48\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ceabfac4703edf7fe55ec5dd41e3fc1736424533c54ae12adcdb1f077e1f756\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ceabfac4703edf7fe55ec5dd41e3fc1736424533c54ae12adcdb1f077e1f756\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/r
un/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-v4wxp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:23Z is after 2025-08-24T17:21:41Z"
Jan 21 10:57:23 crc kubenswrapper[4881]: I0121 10:57:23.904473 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 10:57:23 crc kubenswrapper[4881]: I0121 10:57:23.904561 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 10:57:23 crc kubenswrapper[4881]: I0121 10:57:23.904589 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 10:57:23 crc kubenswrapper[4881]: I0121 10:57:23.904628 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 10:57:23 crc kubenswrapper[4881]: I0121 10:57:23.904652 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:23Z","lastTransitionTime":"2026-01-21T10:57:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:57:23 crc kubenswrapper[4881]: I0121 10:57:23.911209 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3687b313-1df2-4274-80db-8c758b51bf2d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5b27143ea6007376f5989ccc3a11947ede80be1b5fac1b738ffbf27fa05c6d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hml99\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7172ca109a870f3c1c8d3af0117700b7b23e7a65c3e841f3a4ef3f445e85270d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hml99\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fb4fr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:23Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:23 crc kubenswrapper[4881]: I0121 10:57:23.937933 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8bb6d97-b3b8-4e31-b704-8e565385ab26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"n
ame\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIP
s\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db879e2b1a43a0ae350c4777ee6355bf5b97b1d5977e909ee0968c77852519dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db879e2b1a43a0ae350c4777ee6355bf5b97b1d5977e909ee0968c77852519dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bx64f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:23Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:23 crc kubenswrapper[4881]: I0121 10:57:23.956323 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f8092488edbd3b7f307819f2cad06e6d1a6721d8c2fd8fa05a8f7e652949ca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:23Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:23 crc kubenswrapper[4881]: I0121 10:57:23.963577 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 21 10:57:23 crc kubenswrapper[4881]: I0121 10:57:23.966526 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8sptw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f19f480e-331f-42f5-a3b6-fd0c6847b157\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21b25675a85765cfd4abe361687eedfdf5dfffc1c9b83069f14986821acb12d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hjr7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8sptw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:23Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:23 crc kubenswrapper[4881]: I0121 10:57:23.982195 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5da31bf1-60a6-4d73-a425-97fe36cd40ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://945977e9e86c32b08a633de6b962033c71982d3a129ac3e5b2705e19b50d6534\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f32dd767af23c59f021d583a23309b2180db86ae71b4ca4a5f436ac1c77d6e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c47dfa90d18e3e30053b0250c7b986bc877ab6bc3a553060f65468416c8105d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://676e764186e37083591f2c779b05dcbf5bc065bde85efade1c25a27a9bf74570\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://676e764186e37083591f2c779b05dcbf5bc065bde85efade1c25a27a9bf74570\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 10:57:08.940903 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 10:57:08.942374 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2558080694/tls.crt::/tmp/serving-cert-2558080694/tls.key\\\\\\\"\\\\nI0121 10:57:14.590178 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 10:57:14.594387 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 10:57:14.594447 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 10:57:14.594564 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 10:57:14.594575 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 10:57:14.615981 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 10:57:14.616035 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 10:57:14.616045 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 10:57:14.616060 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 10:57:14.616065 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 10:57:14.616071 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 10:57:14.616077 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 10:57:14.616503 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 10:57:14.623960 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T10:56:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b00c4c20b4212a307993f164d3f2d23b9b8bd823111bef7a385ae8811e147f3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://164282bb15c33a383e2fc11a6617ca4008eec1b59e1c5577f7b34b0fcbddc99f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://164282bb15c33a383e2fc11a6617ca4008eec1b59e1c5577f7b34b0fcbddc99f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:56:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:23Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:23 crc kubenswrapper[4881]: I0121 10:57:23.997534 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:23Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:24 crc kubenswrapper[4881]: I0121 10:57:24.007345 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:24 crc kubenswrapper[4881]: I0121 10:57:24.007371 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:24 crc kubenswrapper[4881]: I0121 10:57:24.007379 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:24 crc kubenswrapper[4881]: I0121 10:57:24.007393 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:24 crc kubenswrapper[4881]: I0121 10:57:24.007402 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:24Z","lastTransitionTime":"2026-01-21T10:57:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:57:24 crc kubenswrapper[4881]: I0121 10:57:24.014754 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ddaa3aae50a05142125742b3cc13e65433fcadbef5b1e273232cffb660cc700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:24Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:24 crc kubenswrapper[4881]: I0121 10:57:24.027683 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tjwf8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf4f6fc0-ed4c-47b7-b2bc-8033980781a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:22Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:22Z\\\",\\\"message\\\":\\\"containers with 
unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57d55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:22Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tjwf8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:24Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:24 crc kubenswrapper[4881]: I0121 10:57:24.042821 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"adf42b80-04ba-4164-b847-3b7f8b94816b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5f503293f457017fbc87cd1525eaaf06d41044a37f87127e1398bd50a228ea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d93b0b0bb02293efe7c01a9dbc64ff302a7b9fab07a2fe9bef5d0c2b5e8ac30e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/oc
p-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05a4a29ae6a2b0d4acf843ae40de1e262c2149dac28e15697dbcd6d237235f29\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfff7b0c4eae65a4ed6e28b94feef5a5b0c4fb224f9cde2cfd9072727968e754\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:56:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:24Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:24 crc kubenswrapper[4881]: I0121 10:57:24.055102 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:24Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:24 crc kubenswrapper[4881]: I0121 10:57:24.071287 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fs42r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"09da9e14-f6d5-4346-a4a0-c17711e3b603\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://821c7c539a796f03cdff04f5f89e6adab4d088c53f5c1ad85851862e5409e7eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kt6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fs42r\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:24Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:24 crc kubenswrapper[4881]: I0121 10:57:24.073524 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 21 10:57:24 crc kubenswrapper[4881]: I0121 10:57:24.090254 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5da31bf1-60a6-4d73-a425-97fe36cd40ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://945977e9e86c32b08a633de6b962033c71982d3a129ac3e5b2705e19b50d6534\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f32dd767af23c59f021d583a23309b2180db86ae71b4ca4a5f436ac1c77d6e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c47dfa90d18e3e30053b0250c7b986bc877ab6bc3a553060f65468416c8105
d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://676e764186e37083591f2c779b05dcbf5bc065bde85efade1c25a27a9bf74570\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://676e764186e37083591f2c779b05dcbf5bc065bde85efade1c25a27a9bf74570\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 10:57:08.940903 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 10:57:08.942374 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2558080694/tls.crt::/tmp/serving-cert-2558080694/tls.key\\\\\\\"\\\\nI0121 10:57:14.590178 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 10:57:14.594387 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 10:57:14.594447 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 10:57:14.594564 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 10:57:14.594575 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 10:57:14.615981 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 10:57:14.616035 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 10:57:14.616045 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 10:57:14.616060 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 10:57:14.616065 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 10:57:14.616071 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 10:57:14.616077 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 10:57:14.616503 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 10:57:14.623960 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T10:56:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b00c4c20b4212a307993f164d3f2d23b9b8bd823111bef7a385ae8811e147f3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://164282bb15c33a383e2fc11a6617ca4008eec1b59e1c5577f7b34b0fcbddc99f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://164282bb15c33a383e2fc11a6617ca4008eec1b59e1c5577f7b34b0fcbddc99f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:56:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:24Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:24 crc kubenswrapper[4881]: I0121 10:57:24.098642 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 21 10:57:24 crc kubenswrapper[4881]: I0121 10:57:24.109733 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:24Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:24 crc kubenswrapper[4881]: I0121 10:57:24.110721 4881 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 22:19:48.074328417 +0000 UTC Jan 21 10:57:24 crc kubenswrapper[4881]: I0121 10:57:24.111318 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:24 crc kubenswrapper[4881]: I0121 10:57:24.111362 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:24 crc kubenswrapper[4881]: I0121 10:57:24.111381 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:24 crc kubenswrapper[4881]: I0121 10:57:24.111405 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:24 crc kubenswrapper[4881]: I0121 10:57:24.111422 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:24Z","lastTransitionTime":"2026-01-21T10:57:24Z","reason":"KubeletNotReady","message":"container runtime network not 
ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:24 crc kubenswrapper[4881]: I0121 10:57:24.134062 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ddaa3aae50a05142125742b3cc13e65433fcadbef5b1e273232cffb660cc700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:24Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:24 crc kubenswrapper[4881]: I0121 10:57:24.155474 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tjwf8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf4f6fc0-ed4c-47b7-b2bc-8033980781a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9164acd9a91836c25b02986f204dd0302964f6bff6fe077971d37eebd4b560ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57d55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:22Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tjwf8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:24Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:24 crc kubenswrapper[4881]: I0121 10:57:24.172778 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"adf42b80-04ba-4164-b847-3b7f8b94816b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5f503293f457017fbc87cd1525eaaf06d41044a37f87127e1398bd50a228ea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d93b0b0bb02293efe7c01a9dbc64ff302a7b9fab07a2fe9bef5d0c2b5e8ac30e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05a4a29ae6a2b0d4acf843ae40de1e262c2149dac28e15697dbcd6d237235f29\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfff7b0c4eae65a4ed6e28b94feef5a5b0c4fb224f9cde2cfd9072727968e754\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:56:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:24Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:24 crc kubenswrapper[4881]: I0121 10:57:24.187444 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:24Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:24 crc kubenswrapper[4881]: I0121 10:57:24.201388 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fs42r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09da9e14-f6d5-4346-a4a0-c17711e3b603\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://821c7c539a796f03cdff04f5f89e6adab4d088c53f5c1ad85851862e5409e7eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountP
ath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kt6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fs42r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:24Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:24 crc kubenswrapper[4881]: I0121 10:57:24.214341 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:24 crc kubenswrapper[4881]: I0121 10:57:24.214384 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:24 crc kubenswrapper[4881]: I0121 10:57:24.214396 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:24 crc kubenswrapper[4881]: I0121 10:57:24.214464 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:24 crc kubenswrapper[4881]: I0121 10:57:24.214475 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:24Z","lastTransitionTime":"2026-01-21T10:57:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:57:24 crc kubenswrapper[4881]: I0121 10:57:24.216304 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14b423e6f144bf9e1e30c3f0a074ca9edd3c1e2cfa1674ea5b047a6a8fb01d92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c179ea565cc19d51f2becaf302623c94acbd70749ace24c754af51e3f0101033\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:24Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:24 crc kubenswrapper[4881]: I0121 10:57:24.229269 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:24Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:24 crc kubenswrapper[4881]: I0121 10:57:24.241884 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-v4wxp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c14980d7-1b3b-463b-8f57-f1e1afbd258c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7dffa6cfe62b953df5f9734726e4b93967d3fce7fa5743b1adef693e4f75e48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7dffa6cfe62b953df5f9734726e4b93967d3fce7fa5743b1adef693e4f75e48\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ceabfac4703edf7fe55ec5dd41e3fc1736424533c54ae12adcdb1f077e1f756\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ceabfac4703edf7fe55ec5dd41e3fc1736424533c54ae12adcdb1f077e1f756\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/r
un/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-v4wxp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:24Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:24 crc kubenswrapper[4881]: I0121 10:57:24.253665 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3687b313-1df2-4274-80db-8c758b51bf2d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5b27143ea6007376f5989ccc3a11947ede80be1b5fac1b738ffbf27fa05c6d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hml99\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7172ca109a870f3c1c8d3af0117700b7b23e7a65c3e841f3a4ef3f445e85270d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hml99\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fb4fr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:24Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:24 crc kubenswrapper[4881]: I0121 10:57:24.273267 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8bb6d97-b3b8-4e31-b704-8e565385ab26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db879e2b1a43a0ae350c4777ee6355bf5b97b1d5977e909ee0968c77852519dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db879e2b1a43a0ae350c4777ee6355bf5b97b1d5977e909ee0968c77852519dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bx64f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:24Z 
is after 2025-08-24T17:21:41Z" Jan 21 10:57:24 crc kubenswrapper[4881]: I0121 10:57:24.292103 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f8092488edbd3b7f307819f2cad06e6d1a6721d8c2fd8fa05a8f7e652949ca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:24Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:24 crc kubenswrapper[4881]: I0121 10:57:24.305010 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8sptw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f19f480e-331f-42f5-a3b6-fd0c6847b157\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21b25675a85765cfd4abe361687eedfdf5dfffc1c9b83069f14986821acb12d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hjr7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8sptw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:24Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:24 crc kubenswrapper[4881]: I0121 10:57:24.310209 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 10:57:24 crc kubenswrapper[4881]: I0121 10:57:24.310217 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 10:57:24 crc kubenswrapper[4881]: E0121 10:57:24.310325 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 10:57:24 crc kubenswrapper[4881]: E0121 10:57:24.310427 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 10:57:24 crc kubenswrapper[4881]: I0121 10:57:24.317468 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:24 crc kubenswrapper[4881]: I0121 10:57:24.317520 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:24 crc kubenswrapper[4881]: I0121 10:57:24.317537 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:24 crc kubenswrapper[4881]: I0121 10:57:24.317556 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:24 crc kubenswrapper[4881]: I0121 10:57:24.317573 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:24Z","lastTransitionTime":"2026-01-21T10:57:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:24 crc kubenswrapper[4881]: I0121 10:57:24.420624 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:24 crc kubenswrapper[4881]: I0121 10:57:24.420702 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:24 crc kubenswrapper[4881]: I0121 10:57:24.420721 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:24 crc kubenswrapper[4881]: I0121 10:57:24.420746 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:24 crc kubenswrapper[4881]: I0121 10:57:24.420766 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:24Z","lastTransitionTime":"2026-01-21T10:57:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:57:24 crc kubenswrapper[4881]: I0121 10:57:24.525831 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:24 crc kubenswrapper[4881]: I0121 10:57:24.525902 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:24 crc kubenswrapper[4881]: I0121 10:57:24.525924 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:24 crc kubenswrapper[4881]: I0121 10:57:24.525953 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:24 crc kubenswrapper[4881]: I0121 10:57:24.525975 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:24Z","lastTransitionTime":"2026-01-21T10:57:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:24 crc kubenswrapper[4881]: I0121 10:57:24.629826 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:24 crc kubenswrapper[4881]: I0121 10:57:24.629893 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:24 crc kubenswrapper[4881]: I0121 10:57:24.629917 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:24 crc kubenswrapper[4881]: I0121 10:57:24.629949 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:24 crc kubenswrapper[4881]: I0121 10:57:24.629972 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:24Z","lastTransitionTime":"2026-01-21T10:57:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:24 crc kubenswrapper[4881]: I0121 10:57:24.698761 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:24 crc kubenswrapper[4881]: I0121 10:57:24.698873 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:24 crc kubenswrapper[4881]: I0121 10:57:24.698898 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:24 crc kubenswrapper[4881]: I0121 10:57:24.698927 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:24 crc kubenswrapper[4881]: I0121 10:57:24.698947 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:24Z","lastTransitionTime":"2026-01-21T10:57:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:57:24 crc kubenswrapper[4881]: E0121 10:57:24.729902 4881 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:57:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:57:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:57:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:57:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"26a8a75a-20da-43b0-891d-353287c7b817\\\",\\\"systemUUID\\\":\\\"5fb73d3d-5879-4958-af84-1cb776cbe5bd\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:24Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:24 crc kubenswrapper[4881]: I0121 10:57:24.736220 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:24 crc kubenswrapper[4881]: I0121 10:57:24.736492 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 10:57:24 crc kubenswrapper[4881]: I0121 10:57:24.736995 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:24 crc kubenswrapper[4881]: I0121 10:57:24.737313 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:24 crc kubenswrapper[4881]: I0121 10:57:24.737382 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:24Z","lastTransitionTime":"2026-01-21T10:57:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:24 crc kubenswrapper[4881]: E0121 10:57:24.756927 4881 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:57:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:57:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:57:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:57:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"26a8a75a-20da-43b0-891d-353287c7b817\\\",\\\"systemUUID\\\":\\\"5fb73d3d-5879-4958-af84-1cb776cbe5bd\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:24Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:24 crc kubenswrapper[4881]: I0121 10:57:24.762079 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:24 crc kubenswrapper[4881]: I0121 10:57:24.762252 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 10:57:24 crc kubenswrapper[4881]: I0121 10:57:24.762338 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:24 crc kubenswrapper[4881]: I0121 10:57:24.762551 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:24 crc kubenswrapper[4881]: I0121 10:57:24.762660 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:24Z","lastTransitionTime":"2026-01-21T10:57:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:24 crc kubenswrapper[4881]: E0121 10:57:24.782937 4881 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:57:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:57:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:57:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:57:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"26a8a75a-20da-43b0-891d-353287c7b817\\\",\\\"systemUUID\\\":\\\"5fb73d3d-5879-4958-af84-1cb776cbe5bd\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:24Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:24 crc kubenswrapper[4881]: I0121 10:57:24.793566 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:24 crc kubenswrapper[4881]: I0121 10:57:24.793618 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 10:57:24 crc kubenswrapper[4881]: I0121 10:57:24.793637 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:24 crc kubenswrapper[4881]: I0121 10:57:24.793662 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:24 crc kubenswrapper[4881]: I0121 10:57:24.793680 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:24Z","lastTransitionTime":"2026-01-21T10:57:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:24 crc kubenswrapper[4881]: E0121 10:57:24.809551 4881 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:57:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:57:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:57:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:57:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"26a8a75a-20da-43b0-891d-353287c7b817\\\",\\\"systemUUID\\\":\\\"5fb73d3d-5879-4958-af84-1cb776cbe5bd\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:24Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:24 crc kubenswrapper[4881]: I0121 10:57:24.813556 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:24 crc kubenswrapper[4881]: I0121 10:57:24.813597 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 10:57:24 crc kubenswrapper[4881]: I0121 10:57:24.813612 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:24 crc kubenswrapper[4881]: I0121 10:57:24.813630 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:24 crc kubenswrapper[4881]: I0121 10:57:24.813643 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:24Z","lastTransitionTime":"2026-01-21T10:57:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:24 crc kubenswrapper[4881]: E0121 10:57:24.828261 4881 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:57:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:57:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:57:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:57:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"26a8a75a-20da-43b0-891d-353287c7b817\\\",\\\"systemUUID\\\":\\\"5fb73d3d-5879-4958-af84-1cb776cbe5bd\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:24Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:24 crc kubenswrapper[4881]: E0121 10:57:24.828466 4881 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 21 10:57:24 crc kubenswrapper[4881]: I0121 10:57:24.830660 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 21 10:57:24 crc kubenswrapper[4881]: I0121 10:57:24.830699 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:24 crc kubenswrapper[4881]: I0121 10:57:24.830711 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:24 crc kubenswrapper[4881]: I0121 10:57:24.830728 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:24 crc kubenswrapper[4881]: I0121 10:57:24.830739 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:24Z","lastTransitionTime":"2026-01-21T10:57:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:24 crc kubenswrapper[4881]: I0121 10:57:24.844403 4881 generic.go:334] "Generic (PLEG): container finished" podID="c14980d7-1b3b-463b-8f57-f1e1afbd258c" containerID="7a2fff9245b8148b515dd4af52db2ec776cca710b28074e8955162220448d248" exitCode=0 Jan 21 10:57:24 crc kubenswrapper[4881]: I0121 10:57:24.844478 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-v4wxp" event={"ID":"c14980d7-1b3b-463b-8f57-f1e1afbd258c","Type":"ContainerDied","Data":"7a2fff9245b8148b515dd4af52db2ec776cca710b28074e8955162220448d248"} Jan 21 10:57:24 crc kubenswrapper[4881]: I0121 10:57:24.858412 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tjwf8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf4f6fc0-ed4c-47b7-b2bc-8033980781a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9164acd9a91836c25b02986f204dd0302964f6bff6fe077971d37eebd4b560ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57d55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:22Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tjwf8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:24Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:24 crc kubenswrapper[4881]: I0121 10:57:24.877006 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5da31bf1-60a6-4d73-a425-97fe36cd40ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"message\\\":\\\"containers with 
unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://945977e9e86c32b08a633de6b962033c71982d3a129ac3e5b2705e19b50d6534\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f32dd767af23c59f021d583a23309b2180db86ae71b4ca4a5f436ac1c77d6e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c47dfa90d18e3e30053b0250c7b986bc877ab6bc3a553060f65468416c8105d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://676e764186e37083591f2c779b05dcbf5bc065bde85efade1c25a27a9bf74570\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://676e764186e37083591f2c779b05dcbf5bc065bde85efade1c25a27a9bf74570\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 10:57:08.940903 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 10:57:08.942374 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2558080694/tls.crt::/tmp/serving-cert-2558080694/tls.key\\\\\\\"\\\\nI0121 10:57:14.590178 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 10:57:14.594387 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 10:57:14.594447 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 10:57:14.594564 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 10:57:14.594575 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 10:57:14.615981 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 10:57:14.616035 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 10:57:14.616045 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 10:57:14.616060 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 10:57:14.616065 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 10:57:14.616071 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 10:57:14.616077 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 10:57:14.616503 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 10:57:14.623960 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T10:56:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b00c4c20b4212a307993f164d3f2d23b9b8bd823111bef7a385ae8811e147f3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://164282bb15c33a383e2fc11a6617ca4008eec1b59e1c5577f7b34b0fcbddc99f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://164282bb15c33a383e2fc11a6617ca4008eec1b59e1c5577f7b34b0fcbddc99f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:56:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:24Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:24 crc kubenswrapper[4881]: I0121 10:57:24.892807 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:24Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:24 crc kubenswrapper[4881]: I0121 10:57:24.907295 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ddaa3aae50a05142125742b3cc13e65433fcadbef5b1e273232cffb660cc700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:24Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:24 crc kubenswrapper[4881]: I0121 10:57:24.925942 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"adf42b80-04ba-4164-b847-3b7f8b94816b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5f503293f457017fbc87cd1525eaaf06d41044a37f87127e1398bd50a228ea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d93b0b0bb02293efe7c01a9dbc64ff302a7b9fab07a2fe9bef5d0c2b5e8ac30e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05a4a29ae6a2b0d4acf843ae40de1e262c2149dac28e15697dbcd6d237235f29\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",
\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfff7b0c4eae65a4ed6e28b94feef5a5b0c4fb224f9cde2cfd9072727968e754\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:56:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:24Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:24 crc kubenswrapper[4881]: I0121 10:57:24.933743 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:24 crc kubenswrapper[4881]: I0121 10:57:24.933778 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:24 crc kubenswrapper[4881]: I0121 10:57:24.933813 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:24 crc kubenswrapper[4881]: I0121 10:57:24.934010 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:24 crc kubenswrapper[4881]: I0121 10:57:24.934022 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:24Z","lastTransitionTime":"2026-01-21T10:57:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:57:24 crc kubenswrapper[4881]: I0121 10:57:24.943137 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:24Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:24 crc kubenswrapper[4881]: I0121 10:57:24.958310 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fs42r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"09da9e14-f6d5-4346-a4a0-c17711e3b603\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://821c7c539a796f03cdff04f5f89e6adab4d088c53f5c1ad85851862e5409e7eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kt6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fs42r\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:24Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:24 crc kubenswrapper[4881]: I0121 10:57:24.979738 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8bb6d97-b3b8-4e31-b704-8e565385ab26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release
-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\"
,\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db879e2b1a43a0ae350c4777ee6355bf5b97b1d5977e909ee0968c77852519dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db879e2b1a43a0ae350c4777ee6355bf5b97b1d5977e909ee0968c77852519dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bx64f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:24Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:24 crc kubenswrapper[4881]: I0121 10:57:24.993635 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14b423e6f144bf9e1e30c3f0a074ca9edd3c1e2cfa1674ea5b047a6a8fb01d92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c179ea565cc19d51f2becaf302623c94acbd70749ace24c754af51e3f0101033\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:24Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:25 crc kubenswrapper[4881]: I0121 10:57:25.007260 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:25Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:25 crc kubenswrapper[4881]: I0121 10:57:25.033406 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-v4wxp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c14980d7-1b3b-463b-8f57-f1e1afbd258c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7dffa6cfe62b953df5f9734726e4b93967d3fce7fa5743b1adef693e4f75e48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7dffa6cfe62b953df5f9734726e4b93967d3fce7fa5743b1adef693e4f75e48\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ceabfac4703edf7fe55ec5dd41e3fc1736424533c54ae12adcdb1f077e1f756\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ceabfac4703edf7fe55ec5dd41e3fc1736424533c54ae12adcdb1f077e1f756\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a2fff9245b8148b515dd4af52db2ec776cca710b28074e8955162220448d248\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a2fff9245b8148b515dd4af52db2ec776cca710b28074e8955162220448d248\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",
\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-v4wxp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:25Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:25 crc kubenswrapper[4881]: I0121 10:57:25.035663 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:25 crc kubenswrapper[4881]: I0121 10:57:25.035694 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:25 crc kubenswrapper[4881]: I0121 10:57:25.035702 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:25 crc kubenswrapper[4881]: I0121 10:57:25.035716 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:25 crc kubenswrapper[4881]: I0121 10:57:25.035725 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:25Z","lastTransitionTime":"2026-01-21T10:57:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:57:25 crc kubenswrapper[4881]: I0121 10:57:25.048852 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3687b313-1df2-4274-80db-8c758b51bf2d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5b27143ea6007376f5989ccc3a11947ede80be1b5fac1b738ffbf27fa05c6d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hml99\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7172ca109a870f3c1c8d3af0117700b7b23e7a65c3e841f3a4ef3f445e85270d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hml99\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fb4fr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:25Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:25 crc kubenswrapper[4881]: I0121 10:57:25.074830 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f8092488edbd3b7f307819f2cad06e6d1a6721d8c2fd8fa05a8f7e652949ca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:25Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:25 crc kubenswrapper[4881]: I0121 10:57:25.088600 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8sptw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f19f480e-331f-42f5-a3b6-fd0c6847b157\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21b25675a85765cfd4abe361687eedfdf5dfffc1c9b83069f14986821acb12d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hjr7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8sptw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:25Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:25 crc kubenswrapper[4881]: I0121 10:57:25.111951 4881 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 13:09:46.212864965 +0000 UTC Jan 21 10:57:25 crc kubenswrapper[4881]: I0121 10:57:25.138718 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:25 crc kubenswrapper[4881]: I0121 10:57:25.138761 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:25 crc kubenswrapper[4881]: I0121 10:57:25.138773 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:25 crc kubenswrapper[4881]: I0121 10:57:25.138802 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:25 crc kubenswrapper[4881]: I0121 10:57:25.138812 4881 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:25Z","lastTransitionTime":"2026-01-21T10:57:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:25 crc kubenswrapper[4881]: I0121 10:57:25.240760 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:25 crc kubenswrapper[4881]: I0121 10:57:25.240858 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:25 crc kubenswrapper[4881]: I0121 10:57:25.240877 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:25 crc kubenswrapper[4881]: I0121 10:57:25.240899 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:25 crc kubenswrapper[4881]: I0121 10:57:25.240917 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:25Z","lastTransitionTime":"2026-01-21T10:57:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:25 crc kubenswrapper[4881]: I0121 10:57:25.311131 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 10:57:25 crc kubenswrapper[4881]: E0121 10:57:25.311242 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 10:57:25 crc kubenswrapper[4881]: I0121 10:57:25.342915 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:25 crc kubenswrapper[4881]: I0121 10:57:25.342958 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:25 crc kubenswrapper[4881]: I0121 10:57:25.342970 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:25 crc kubenswrapper[4881]: I0121 10:57:25.343009 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:25 crc kubenswrapper[4881]: I0121 10:57:25.343023 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:25Z","lastTransitionTime":"2026-01-21T10:57:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:57:25 crc kubenswrapper[4881]: I0121 10:57:25.445480 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:25 crc kubenswrapper[4881]: I0121 10:57:25.445517 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:25 crc kubenswrapper[4881]: I0121 10:57:25.445527 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:25 crc kubenswrapper[4881]: I0121 10:57:25.445543 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:25 crc kubenswrapper[4881]: I0121 10:57:25.445559 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:25Z","lastTransitionTime":"2026-01-21T10:57:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:25 crc kubenswrapper[4881]: I0121 10:57:25.548096 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:25 crc kubenswrapper[4881]: I0121 10:57:25.548170 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:25 crc kubenswrapper[4881]: I0121 10:57:25.548185 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:25 crc kubenswrapper[4881]: I0121 10:57:25.548249 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:25 crc kubenswrapper[4881]: I0121 10:57:25.548264 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:25Z","lastTransitionTime":"2026-01-21T10:57:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:25 crc kubenswrapper[4881]: I0121 10:57:25.650001 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:25 crc kubenswrapper[4881]: I0121 10:57:25.650039 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:25 crc kubenswrapper[4881]: I0121 10:57:25.650048 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:25 crc kubenswrapper[4881]: I0121 10:57:25.650061 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:25 crc kubenswrapper[4881]: I0121 10:57:25.650071 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:25Z","lastTransitionTime":"2026-01-21T10:57:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:57:25 crc kubenswrapper[4881]: I0121 10:57:25.752921 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:25 crc kubenswrapper[4881]: I0121 10:57:25.752978 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:25 crc kubenswrapper[4881]: I0121 10:57:25.752990 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:25 crc kubenswrapper[4881]: I0121 10:57:25.753008 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:25 crc kubenswrapper[4881]: I0121 10:57:25.753029 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:25Z","lastTransitionTime":"2026-01-21T10:57:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:25 crc kubenswrapper[4881]: I0121 10:57:25.852696 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" event={"ID":"e8bb6d97-b3b8-4e31-b704-8e565385ab26","Type":"ContainerStarted","Data":"47c176d51b5558e85897ac8c72c02bb6f152a972d8e941aeb114333f777e22ef"} Jan 21 10:57:25 crc kubenswrapper[4881]: I0121 10:57:25.854754 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:25 crc kubenswrapper[4881]: I0121 10:57:25.854801 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:25 crc kubenswrapper[4881]: I0121 10:57:25.854812 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:25 crc kubenswrapper[4881]: I0121 10:57:25.854825 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:25 crc kubenswrapper[4881]: I0121 10:57:25.854835 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:25Z","lastTransitionTime":"2026-01-21T10:57:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:57:25 crc kubenswrapper[4881]: I0121 10:57:25.856555 4881 generic.go:334] "Generic (PLEG): container finished" podID="c14980d7-1b3b-463b-8f57-f1e1afbd258c" containerID="3f6276883835d2a1fbab11df2fd15684b8ee850b6f264f3f194f6de68cc27cf9" exitCode=0 Jan 21 10:57:25 crc kubenswrapper[4881]: I0121 10:57:25.856622 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-v4wxp" event={"ID":"c14980d7-1b3b-463b-8f57-f1e1afbd258c","Type":"ContainerDied","Data":"3f6276883835d2a1fbab11df2fd15684b8ee850b6f264f3f194f6de68cc27cf9"} Jan 21 10:57:25 crc kubenswrapper[4881]: I0121 10:57:25.876540 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5da31bf1-60a6-4d73-a425-97fe36cd40ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://945977e9e86c32b08a633de6b962033c71982d3a129ac3e5b2705e19b50d6534\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f32dd767af23c59f021d583a23309b2180db86ae71b4ca4a5f436ac1c77d6e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kuber
netes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c47dfa90d18e3e30053b0250c7b986bc877ab6bc3a553060f65468416c8105d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://676e764186e37083591f2c779b05dcbf5bc065bde85efade1c25a27a9bf74570\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://676e764186e37083591f2c779b05dcbf5bc065bde85efade1c25a27a9bf74570\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 10:57:08.940903 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 10:57:08.942374 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2558080694/tls.crt::/tmp/serving-cert-2558080694/tls.key\\\\\\\"\\\\nI0121 10:57:14.590178 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 10:57:14.594387 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 10:57:14.594447 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 10:57:14.594564 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 10:57:14.594575 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 10:57:14.615981 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 10:57:14.616035 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 10:57:14.616045 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 10:57:14.616060 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 10:57:14.616065 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 10:57:14.616071 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 10:57:14.616077 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 10:57:14.616503 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered 
and discovery information is complete\\\\nF0121 10:57:14.623960 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T10:56:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b00c4c20b4212a307993f164d3f2d23b9b8bd823111bef7a385ae8811e147f3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://164282bb15c33a383e2fc11a6617ca4008eec1b59e1c5577f7b34b0fcbddc99f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://164282bb15c33a383e2fc11a6617ca4008eec1b59e1c5577f7b34b0fcbddc99f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:56:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:25Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:25 crc kubenswrapper[4881]: I0121 10:57:25.898077 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:25Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:25 crc kubenswrapper[4881]: I0121 10:57:25.912834 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ddaa3aae50a05142125742b3cc13e65433fcadbef5b1e273232cffb660cc700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:25Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:25 crc kubenswrapper[4881]: I0121 10:57:25.927895 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tjwf8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf4f6fc0-ed4c-47b7-b2bc-8033980781a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9164acd9a91836c25b02986f204dd0302964f6bff6fe077971d37eebd4b560ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57d55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:22Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tjwf8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:25Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:25 crc kubenswrapper[4881]: I0121 10:57:25.942303 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"adf42b80-04ba-4164-b847-3b7f8b94816b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5f503293f457017fbc87cd1525eaaf06d41044a37f87127e1398bd50a228ea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d93b0b0bb02293efe7c01a9dbc64ff302a7b9fab07a2fe9bef5d0c2b5e8ac30e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05a4a29ae6a2b0d4acf843ae40de1e262c2149dac28e15697dbcd6d237235f29\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfff7b0c4eae65a4ed6e28b94feef5a5b0c4fb224f9cde2cfd9072727968e754\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:56:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:25Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:25 crc kubenswrapper[4881]: I0121 10:57:25.956150 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:25Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:25 crc kubenswrapper[4881]: I0121 10:57:25.957230 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:25 crc kubenswrapper[4881]: I0121 10:57:25.957301 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:25 crc kubenswrapper[4881]: I0121 10:57:25.957311 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:25 crc kubenswrapper[4881]: I0121 10:57:25.957347 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:25 crc kubenswrapper[4881]: I0121 10:57:25.957358 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:25Z","lastTransitionTime":"2026-01-21T10:57:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:57:25 crc kubenswrapper[4881]: I0121 10:57:25.975668 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fs42r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09da9e14-f6d5-4346-a4a0-c17711e3b603\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://821c7c539a796f03cdff04f5f89e6adab4d088c53f5c1ad85851862e5409e7eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kt6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fs42r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:25Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:25 crc kubenswrapper[4881]: I0121 10:57:25.989645 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:25Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:26 crc kubenswrapper[4881]: I0121 10:57:26.006542 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-v4wxp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c14980d7-1b3b-463b-8f57-f1e1afbd258c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7dffa6cfe62b953df5f9734726e4b93967d3fce7fa5743b1adef693e4f75e48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7dffa6cfe62b953df5f9734726e4b93967d3fce7fa5743b1adef693e4f75e48\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ceabfac4703edf7fe55ec5dd41e3fc1736424533c54ae12adcdb1f077e1f756\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ceabfac4703edf7fe55ec5dd41e3fc1736424533c54ae12adcdb1f077e1f756\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a2fff9245b8148b515dd4af52db2ec776cca710b28074e8955162220448d248\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a2fff9245b8148b515dd4af52db2ec776cca710b28074e8955162220448d248\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f6276883835d2a1fbab11df2fd15684b8ee850b6f264f3f194f6de68cc27cf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f6276883835d2a1fbab11df2fd15684b8ee850b6f264f3f194f6de68cc27cf9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disa
bled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-v4wxp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:26Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:26 crc kubenswrapper[4881]: I0121 10:57:26.021103 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3687b313-1df2-4274-80db-8c758b51bf2d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5b27143ea6007376f5989ccc3a11947ede80be1b5fac1b738ffbf27fa05c6d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hml99\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7172ca109a870f3c1c8d3af0117700b7b23e7a65c3e841f3a4ef3f445e85270d\\\",\\\"image\\\":\\\"quay.io/openshift-re
lease-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hml99\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fb4fr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:26Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:26 crc kubenswrapper[4881]: I0121 10:57:26.042844 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8bb6d97-b3b8-4e31-b704-8e565385ab26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db879e2b1a43a0ae350c4777ee6355bf5b97b1d5977e909ee0968c77852519dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db879e2b1a43a0ae350c4777ee6355bf5b97b1d5977e909ee0968c77852519dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bx64f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:26Z 
is after 2025-08-24T17:21:41Z" Jan 21 10:57:26 crc kubenswrapper[4881]: I0121 10:57:26.059928 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:26 crc kubenswrapper[4881]: I0121 10:57:26.059974 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:26 crc kubenswrapper[4881]: I0121 10:57:26.059984 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:26 crc kubenswrapper[4881]: I0121 10:57:26.060001 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:26 crc kubenswrapper[4881]: I0121 10:57:26.060017 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:26Z","lastTransitionTime":"2026-01-21T10:57:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:26 crc kubenswrapper[4881]: I0121 10:57:26.060150 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14b423e6f144bf9e1e30c3f0a074ca9edd3c1e2cfa1674ea5b047a6a8fb01d92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c179ea565cc19d51f2becaf302623c94acbd70749ace24c754af51e3f0101033\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:26Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:26 crc kubenswrapper[4881]: I0121 10:57:26.073367 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f8092488edbd3b7f307819f2cad06e6d1a6721d8c2fd8fa05a8f7e652949ca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:26Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:26 crc kubenswrapper[4881]: I0121 10:57:26.085638 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8sptw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f19f480e-331f-42f5-a3b6-fd0c6847b157\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21b25675a85765cfd4abe361687eedfdf5dfffc1c9b83069f14986821acb12d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hjr7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8sptw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:26Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:26 crc kubenswrapper[4881]: I0121 10:57:26.112162 4881 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 16:47:07.34577816 +0000 UTC Jan 21 10:57:26 crc kubenswrapper[4881]: I0121 10:57:26.162376 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:26 crc kubenswrapper[4881]: I0121 10:57:26.162407 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:26 crc kubenswrapper[4881]: I0121 10:57:26.162417 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:26 crc kubenswrapper[4881]: I0121 10:57:26.162431 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:26 crc kubenswrapper[4881]: I0121 10:57:26.162440 4881 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:26Z","lastTransitionTime":"2026-01-21T10:57:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:26 crc kubenswrapper[4881]: I0121 10:57:26.264926 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:26 crc kubenswrapper[4881]: I0121 10:57:26.264979 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:26 crc kubenswrapper[4881]: I0121 10:57:26.264988 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:26 crc kubenswrapper[4881]: I0121 10:57:26.265004 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:26 crc kubenswrapper[4881]: I0121 10:57:26.265013 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:26Z","lastTransitionTime":"2026-01-21T10:57:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:26 crc kubenswrapper[4881]: I0121 10:57:26.309818 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 10:57:26 crc kubenswrapper[4881]: I0121 10:57:26.309884 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 10:57:26 crc kubenswrapper[4881]: E0121 10:57:26.310100 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 10:57:26 crc kubenswrapper[4881]: E0121 10:57:26.310231 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 10:57:26 crc kubenswrapper[4881]: I0121 10:57:26.367526 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:26 crc kubenswrapper[4881]: I0121 10:57:26.367570 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:26 crc kubenswrapper[4881]: I0121 10:57:26.367579 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:26 crc kubenswrapper[4881]: I0121 10:57:26.367595 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:26 crc kubenswrapper[4881]: I0121 10:57:26.367604 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:26Z","lastTransitionTime":"2026-01-21T10:57:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:26 crc kubenswrapper[4881]: I0121 10:57:26.471964 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:26 crc kubenswrapper[4881]: I0121 10:57:26.472034 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:26 crc kubenswrapper[4881]: I0121 10:57:26.472053 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:26 crc kubenswrapper[4881]: I0121 10:57:26.472080 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:26 crc kubenswrapper[4881]: I0121 10:57:26.472098 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:26Z","lastTransitionTime":"2026-01-21T10:57:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 21 10:57:26 crc kubenswrapper[4881]: I0121 10:57:26.574358 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 10:57:26 crc kubenswrapper[4881]: I0121 10:57:26.574502 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 10:57:26 crc kubenswrapper[4881]: I0121 10:57:26.574598 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 10:57:26 crc kubenswrapper[4881]: I0121 10:57:26.574670 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 10:57:26 crc kubenswrapper[4881]: I0121 10:57:26.574745 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:26Z","lastTransitionTime":"2026-01-21T10:57:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 10:57:26 crc kubenswrapper[4881]: I0121 10:57:26.677227 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 10:57:26 crc kubenswrapper[4881]: I0121 10:57:26.677263 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 10:57:26 crc kubenswrapper[4881]: I0121 10:57:26.677270 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 10:57:26 crc kubenswrapper[4881]: I0121 10:57:26.677285 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 10:57:26 crc kubenswrapper[4881]: I0121 10:57:26.677294 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:26Z","lastTransitionTime":"2026-01-21T10:57:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 10:57:26 crc kubenswrapper[4881]: I0121 10:57:26.779853 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 10:57:26 crc kubenswrapper[4881]: I0121 10:57:26.779903 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 10:57:26 crc kubenswrapper[4881]: I0121 10:57:26.779918 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 10:57:26 crc kubenswrapper[4881]: I0121 10:57:26.779946 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 10:57:26 crc kubenswrapper[4881]: I0121 10:57:26.779961 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:26Z","lastTransitionTime":"2026-01-21T10:57:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"}
Jan 21 10:57:26 crc kubenswrapper[4881]: I0121 10:57:26.872942 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-v4wxp" event={"ID":"c14980d7-1b3b-463b-8f57-f1e1afbd258c","Type":"ContainerStarted","Data":"c51b231b861d87536aaa44cd0e1018fc15464809b5a3bf30c1d2a25dd9a12a8b"}
Jan 21 10:57:26 crc kubenswrapper[4881]: I0121 10:57:26.883114 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 10:57:26 crc kubenswrapper[4881]: I0121 10:57:26.883162 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 10:57:26 crc kubenswrapper[4881]: I0121 10:57:26.883175 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 10:57:26 crc kubenswrapper[4881]: I0121 10:57:26.883194 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 10:57:26 crc kubenswrapper[4881]: I0121 10:57:26.883209 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:26Z","lastTransitionTime":"2026-01-21T10:57:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 10:57:26 crc kubenswrapper[4881]: I0121 10:57:26.889425 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f8092488edbd3b7f307819f2cad06e6d1a6721d8c2fd8fa05a8f7e652949ca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:26Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:26 crc kubenswrapper[4881]: I0121 10:57:26.903500 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8sptw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f19f480e-331f-42f5-a3b6-fd0c6847b157\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21b25675a85765cfd4abe361687eedfdf5dfffc1c9b83069f14986821acb12d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hjr7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8sptw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:26Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:26 crc kubenswrapper[4881]: I0121 10:57:26.921321 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5da31bf1-60a6-4d73-a425-97fe36cd40ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://945977e9e86c32b08a633de6b962033c71982d3a129ac3e5b2705e19b50d6534\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f32dd767af23c59f021d583a23309b2180db86ae71b4ca4a5f436ac1c77d6e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c47dfa90d18e3e30053b0250c7b986bc877ab6bc3a553060f65468416c8105d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://676e764186e37083591f2c779b05dcbf5bc065bde85efade1c25a27a9bf74570\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://676e764186e37083591f2c779b05dcbf5bc065bde85efade1c25a27a9bf74570\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 10:57:08.940903 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 10:57:08.942374 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2558080694/tls.crt::/tmp/serving-cert-2558080694/tls.key\\\\\\\"\\\\nI0121 10:57:14.590178 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 10:57:14.594387 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 10:57:14.594447 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 10:57:14.594564 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 10:57:14.594575 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 10:57:14.615981 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 10:57:14.616035 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 10:57:14.616045 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 10:57:14.616060 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 10:57:14.616065 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 10:57:14.616071 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 10:57:14.616077 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 10:57:14.616503 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 10:57:14.623960 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T10:56:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b00c4c20b4212a307993f164d3f2d23b9b8bd823111bef7a385ae8811e147f3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://164282bb15c33a383e2fc11a6617ca4008eec1b59e1c5577f7b34b0fcbddc99f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://164282bb15c33a383e2fc11a6617ca4008eec1b59e1c5577f7b34b0fcbddc99f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:56:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:26Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:26 crc kubenswrapper[4881]: I0121 10:57:26.939962 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:26Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:26 crc kubenswrapper[4881]: I0121 10:57:26.955846 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ddaa3aae50a05142125742b3cc13e65433fcadbef5b1e273232cffb660cc700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:26Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:26 crc kubenswrapper[4881]: I0121 10:57:26.968655 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tjwf8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf4f6fc0-ed4c-47b7-b2bc-8033980781a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9164acd9a91836c25b02986f204dd0302964f6bff6fe077971d37eebd4b560ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57d55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:22Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tjwf8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:26Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:26 crc kubenswrapper[4881]: I0121 10:57:26.984555 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"adf42b80-04ba-4164-b847-3b7f8b94816b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5f503293f457017fbc87cd1525eaaf06d41044a37f87127e1398bd50a228ea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d93b0b0bb02293efe7c01a9dbc64ff302a7b9fab07a2fe9bef5d0c2b5e8ac30e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05a4a29ae6a2b0d4acf843ae40de1e262c2149dac28e15697dbcd6d237235f29\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfff7b0c4eae65a4ed6e28b94feef5a5b0c4fb224f9cde2cfd9072727968e754\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:56:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:26Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:26 crc kubenswrapper[4881]: I0121 10:57:26.986425 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:26 crc kubenswrapper[4881]: I0121 10:57:26.986465 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:26 crc kubenswrapper[4881]: I0121 10:57:26.986477 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:26 crc kubenswrapper[4881]: I0121 10:57:26.986497 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:26 crc kubenswrapper[4881]: I0121 10:57:26.986512 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:26Z","lastTransitionTime":"2026-01-21T10:57:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:57:27 crc kubenswrapper[4881]: I0121 10:57:27.001908 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:26Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:27 crc kubenswrapper[4881]: I0121 10:57:27.017510 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fs42r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"09da9e14-f6d5-4346-a4a0-c17711e3b603\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://821c7c539a796f03cdff04f5f89e6adab4d088c53f5c1ad85851862e5409e7eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kt6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fs42r\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:27Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:27 crc kubenswrapper[4881]: I0121 10:57:27.031872 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:27Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:27 crc kubenswrapper[4881]: I0121 10:57:27.049065 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-v4wxp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c14980d7-1b3b-463b-8f57-f1e1afbd258c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7dffa6cfe62b953df5f9734726e4b93967d3fce7fa5743b1adef693e4f75e48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7dffa6cfe62b953df5f9734726e4b93967d3fce7fa5743b1adef693e4f75e48\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ceabfac4703edf7fe55ec5dd41e3fc1736424533c54ae12adcdb1f077e1f756\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ceabfac4703edf7fe55ec5dd41e3fc1736424533c54ae12adcdb1f077e1f756\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\
"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a2fff9245b8148b515dd4af52db2ec776cca710b28074e8955162220448d248\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a2fff9245b8148b515dd4af52db2ec776cca710b28074e8955162220448d248\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f6276883835d2a1fbab11df2fd15684b8ee850b6f264f3f194f6de68cc27cf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f6276883835d2a1fbab11df2fd15684b8ee850b6f264f3f194f6de68cc27cf9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c51b231b861d87536aaa44cd0e1018fc15464809b5a3bf30c1d2a25dd9a12a8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9
8100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-v4wxp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:27Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:27 crc kubenswrapper[4881]: I0121 10:57:27.060897 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3687b313-1df2-4274-80db-8c758b51bf2d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5b27143ea6007376f5989ccc3a11947ede80be1b5fac1b738ffbf27fa05c6d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hml99\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7172ca109a870f3c1c8d3af0117700b7b23e7a65c3e841f3a4ef3f445e85270d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hml99\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fb4fr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:27Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:27 crc kubenswrapper[4881]: I0121 10:57:27.078801 4881 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8bb6d97-b3b8-4e31-b704-8e565385ab26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",
\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47e
f0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db879e2b1a43a0ae350c4777ee6355bf5b97b1d5977e909ee0968c77852519dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17
b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db879e2b1a43a0ae350c4777ee6355bf5b97b1d5977e909ee0968c77852519dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bx64f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:27Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:27 crc kubenswrapper[4881]: I0121 10:57:27.089776 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:27 crc kubenswrapper[4881]: I0121 10:57:27.089831 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:27 crc kubenswrapper[4881]: I0121 10:57:27.089841 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:27 crc kubenswrapper[4881]: I0121 10:57:27.089854 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:27 crc kubenswrapper[4881]: I0121 10:57:27.089864 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:27Z","lastTransitionTime":"2026-01-21T10:57:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:57:27 crc kubenswrapper[4881]: I0121 10:57:27.093304 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14b423e6f144bf9e1e30c3f0a074ca9edd3c1e2cfa1674ea5b047a6a8fb01d92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c179ea565cc19d51f2becaf302623c94acbd70749ace24c754af51e3f0101033\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:27Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:27 crc kubenswrapper[4881]: I0121 10:57:27.112705 4881 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 
11:59:26.875524956 +0000 UTC
Jan 21 10:57:27 crc kubenswrapper[4881]: I0121 10:57:27.193009 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 10:57:27 crc kubenswrapper[4881]: I0121 10:57:27.193066 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 10:57:27 crc kubenswrapper[4881]: I0121 10:57:27.193091 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 10:57:27 crc kubenswrapper[4881]: I0121 10:57:27.193116 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 10:57:27 crc kubenswrapper[4881]: I0121 10:57:27.193132 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:27Z","lastTransitionTime":"2026-01-21T10:57:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 10:57:27 crc kubenswrapper[4881]: I0121 10:57:27.296286 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 10:57:27 crc kubenswrapper[4881]: I0121 10:57:27.296374 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 10:57:27 crc kubenswrapper[4881]: I0121 10:57:27.296399 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 10:57:27 crc kubenswrapper[4881]: I0121 10:57:27.296430 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 10:57:27 crc kubenswrapper[4881]: I0121 10:57:27.296447 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:27Z","lastTransitionTime":"2026-01-21T10:57:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 10:57:27 crc kubenswrapper[4881]: I0121 10:57:27.309843 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 21 10:57:27 crc kubenswrapper[4881]: E0121 10:57:27.310025 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 21 10:57:27 crc kubenswrapper[4881]: I0121 10:57:27.398917 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 10:57:27 crc kubenswrapper[4881]: I0121 10:57:27.398959 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 10:57:27 crc kubenswrapper[4881]: I0121 10:57:27.398970 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 10:57:27 crc kubenswrapper[4881]: I0121 10:57:27.398988 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 10:57:27 crc kubenswrapper[4881]: I0121 10:57:27.399000 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:27Z","lastTransitionTime":"2026-01-21T10:57:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 10:57:27 crc kubenswrapper[4881]: I0121 10:57:27.501330 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 10:57:27 crc kubenswrapper[4881]: I0121 10:57:27.501733 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 10:57:27 crc kubenswrapper[4881]: I0121 10:57:27.501746 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 10:57:27 crc kubenswrapper[4881]: I0121 10:57:27.501764 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 10:57:27 crc kubenswrapper[4881]: I0121 10:57:27.501780 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:27Z","lastTransitionTime":"2026-01-21T10:57:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 10:57:27 crc kubenswrapper[4881]: I0121 10:57:27.608376 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 10:57:27 crc kubenswrapper[4881]: I0121 10:57:27.608445 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 10:57:27 crc kubenswrapper[4881]: I0121 10:57:27.608470 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 10:57:27 crc kubenswrapper[4881]: I0121 10:57:27.608502 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 10:57:27 crc kubenswrapper[4881]: I0121 10:57:27.608546 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:27Z","lastTransitionTime":"2026-01-21T10:57:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 10:57:27 crc kubenswrapper[4881]: I0121 10:57:27.711895 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 10:57:27 crc kubenswrapper[4881]: I0121 10:57:27.711948 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 10:57:27 crc kubenswrapper[4881]: I0121 10:57:27.711966 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 10:57:27 crc kubenswrapper[4881]: I0121 10:57:27.711992 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 10:57:27 crc kubenswrapper[4881]: I0121 10:57:27.712011 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:27Z","lastTransitionTime":"2026-01-21T10:57:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 10:57:27 crc kubenswrapper[4881]: I0121 10:57:27.815532 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 10:57:27 crc kubenswrapper[4881]: I0121 10:57:27.815573 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 10:57:27 crc kubenswrapper[4881]: I0121 10:57:27.815587 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 10:57:27 crc kubenswrapper[4881]: I0121 10:57:27.815606 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 10:57:27 crc kubenswrapper[4881]: I0121 10:57:27.815622 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:27Z","lastTransitionTime":"2026-01-21T10:57:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 10:57:27 crc kubenswrapper[4881]: I0121 10:57:27.881373 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" event={"ID":"e8bb6d97-b3b8-4e31-b704-8e565385ab26","Type":"ContainerStarted","Data":"58a840f0217d0e057d132d7debeba49b9c541f7f69f33178abee1a44909c83c5"}
Jan 21 10:57:27 crc kubenswrapper[4881]: I0121 10:57:27.882257 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-bx64f"
Jan 21 10:57:27 crc kubenswrapper[4881]: I0121 10:57:27.882384 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-bx64f"
Jan 21 10:57:27 crc kubenswrapper[4881]: I0121 10:57:27.886881 4881 generic.go:334] "Generic (PLEG): container finished" podID="c14980d7-1b3b-463b-8f57-f1e1afbd258c" containerID="c51b231b861d87536aaa44cd0e1018fc15464809b5a3bf30c1d2a25dd9a12a8b" exitCode=0
Jan 21 10:57:27 crc kubenswrapper[4881]: I0121 10:57:27.886944 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-v4wxp" event={"ID":"c14980d7-1b3b-463b-8f57-f1e1afbd258c","Type":"ContainerDied","Data":"c51b231b861d87536aaa44cd0e1018fc15464809b5a3bf30c1d2a25dd9a12a8b"}
Jan 21 10:57:27 crc kubenswrapper[4881]: I0121 10:57:27.897852 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3687b313-1df2-4274-80db-8c758b51bf2d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5b27143ea6007376f5989ccc3a11947ede80be1b5fac1b738ffbf27fa05c6d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hml99\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7172ca109a870f3c1c8d3af0117700b7b23e7a65c3e841f3a4ef3f445e85270d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hml99\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fb4fr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:27Z is after 2025-08-24T17:21:41Z"
Jan 21 10:57:27 crc kubenswrapper[4881]: I0121 10:57:27.919431 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 10:57:27 crc kubenswrapper[4881]: I0121 10:57:27.919490 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 10:57:27 crc kubenswrapper[4881]: I0121 10:57:27.919502 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 10:57:27 crc kubenswrapper[4881]: I0121 10:57:27.919527 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 10:57:27 crc kubenswrapper[4881]: I0121 10:57:27.919544 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:27Z","lastTransitionTime":"2026-01-21T10:57:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 10:57:27 crc kubenswrapper[4881]: I0121 10:57:27.920183 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8bb6d97-b3b8-4e31-b704-8e565385ab26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e45d7cde8c26fc00bb1b5ad5cc88c41aff02acf0860856e86f3c3bfdc01e7045\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://599bfaf6bfc412f779f4732cf9f4273382c1f0af20576581264ea4fc63b2ec38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9454db8c9af3187ee06e017ac192668036b1f565305a37de61c3cffe4d94e7db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7fa3136ee876ebb14cd5164f9176c91c60cc04fd7882a3ad0957ce3a36184d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2c1d1496a7f7e5fad6a3d4ade4104e8e0c40fd0ca4722ca5e1ca1fbf0ebf3cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d29c170fdf57f1f4c678e2902443d9ef467ac3157370e997fd5596b4eecfb54e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58a840f0217d0e057d132d7debeba49b9c541f7f69f33178abee1a44909c83c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47c176d51b5558e85897ac8c72c02bb6f152a972d8e941aeb114333f777e22ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db879e2b1a43a0ae350c4777ee6355bf5b97b1d5977e909ee0968c77852519dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db879e2b1a43a0ae350c4777ee6355bf5b97b1d5977e909ee0968c77852519dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bx64f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:27Z is after 2025-08-24T17:21:41Z"
Jan 21 10:57:27 crc kubenswrapper[4881]: I0121 10:57:27.935302 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14b423e6f144bf9e1e30c3f0a074ca9edd3c1e2cfa1674ea5b047a6a8fb01d92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c179ea565cc19d51f2becaf302623c94acbd70749ace24c754af51e3f0101033\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:27Z is after 2025-08-24T17:21:41Z"
Jan 21 10:57:27 crc kubenswrapper[4881]: I0121 10:57:27.955911 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-bx64f"
Jan 21 10:57:27 crc kubenswrapper[4881]: I0121 10:57:27.955992 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-bx64f"
Jan 21 10:57:27 crc kubenswrapper[4881]: I0121 10:57:27.964178 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:27Z is after 2025-08-24T17:21:41Z"
Jan 21 10:57:27 crc kubenswrapper[4881]: I0121 10:57:27.980637 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-v4wxp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c14980d7-1b3b-463b-8f57-f1e1afbd258c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7dffa6cfe62b953df5f9734726e4b93967d3fce7fa5743b1adef693e4f75e48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7dffa6cfe62b953df5f9734726e4b93967d3fce7fa5743b1adef693e4f75e48\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ceabfac4703edf7fe55ec5dd41e3fc1736424533c54ae12adcdb1f077e1f756\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ceabfac4703edf7fe55ec5dd41e3fc1736424533c54ae12adcdb1f077e1f756\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a2fff9245b8148b515dd4af52db2ec776cca710b28074e8955162220448d248\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a2fff9245b8148b515dd4af52db2ec776cca710b28074e8955162220448d248\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f6276883835d2a1fbab11df2fd15684b8ee850b6f264f3f194f6de68cc27cf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f6276883835d2a1fbab11df2fd15684b8ee850b6f264f3f194f6de68cc27cf9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c51b231b861d87536aaa44cd0e1018fc15464809b5a3bf30c1d2a25dd9a12a8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-v4wxp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:27Z is after 2025-08-24T17:21:41Z"
Jan 21 10:57:27 crc kubenswrapper[4881]: I0121 10:57:27.996057 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f8092488edbd3b7f307819f2cad06e6d1a6721d8c2fd8fa05a8f7e652949ca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:27Z is after 2025-08-24T17:21:41Z"
Jan 21 10:57:28 crc kubenswrapper[4881]: I0121 10:57:28.007221 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8sptw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f19f480e-331f-42f5-a3b6-fd0c6847b157\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21b25675a85765cfd4abe361687eedfdf5dfffc1c9b83069f14986821acb12d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hjr7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8sptw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:28Z is after 2025-08-24T17:21:41Z"
Jan 21 10:57:28 crc kubenswrapper[4881]: I0121 10:57:28.023311 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ddaa3aae50a05142125742b3cc13e65433fcadbef5b1e273232cffb660cc700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:28Z is after 2025-08-24T17:21:41Z"
Jan 21 10:57:28 crc kubenswrapper[4881]: I0121 10:57:28.024531 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 10:57:28 crc kubenswrapper[4881]: I0121 10:57:28.024564 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 10:57:28 crc kubenswrapper[4881]: I0121 10:57:28.024599 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 10:57:28 crc kubenswrapper[4881]: I0121 10:57:28.024616 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 10:57:28 crc kubenswrapper[4881]: I0121 10:57:28.024627 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:28Z","lastTransitionTime":"2026-01-21T10:57:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 10:57:28 crc kubenswrapper[4881]: I0121 10:57:28.035395 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tjwf8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf4f6fc0-ed4c-47b7-b2bc-8033980781a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9164acd9a91836c25b02986f204dd0302964f6bff6fe077971d37eebd4b560ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57d55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:22Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tjwf8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:28Z is after 2025-08-24T17:21:41Z"
Jan 21 10:57:28 crc kubenswrapper[4881]: I0121 10:57:28.050425 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5da31bf1-60a6-4d73-a425-97fe36cd40ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://945977e9e86c32b08a633de6b962033c71982d3a129ac3e5b2705e19b50d6534\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f32dd767af23c59f021d583a23309b2180db86ae71b4ca4a5f436ac1c77d6e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c47dfa90d18e3e30053b0250c7b986bc877ab6bc3a553060f65468416c8105d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://676e764186e37083591f2c779b05dcbf5bc065bde85efade1c25a27a9bf74570\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://676e764186e37083591f2c779b05dcbf5bc065bde85efade1c25a27a9bf74570\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 10:57:08.940903 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 10:57:08.942374 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2558080694/tls.crt::/tmp/serving-cert-2558080694/tls.key\\\\\\\"\\\\nI0121 10:57:14.590178 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 10:57:14.594387 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 10:57:14.594447 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 10:57:14.594564 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 10:57:14.594575 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 10:57:14.615981 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 10:57:14.616035 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 10:57:14.616045 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 10:57:14.616060 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 10:57:14.616065 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 10:57:14.616071 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 10:57:14.616077 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 10:57:14.616503 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 10:57:14.623960 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T10:56:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b00c4c20b4212a307993f164d3f2d23b9b8bd823111bef7a385ae8811e147f3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://164282bb15c33a383e2fc11a6617ca4008eec1b59e1c5577f7b34b0fcbddc99f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://164282bb15c33a383e2fc11a6617ca4008eec1b59e1c5577f7b34b0fcbddc99f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:56:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:28Z is after 2025-08-24T17:21:41Z"
Jan 21 10:57:28 crc kubenswrapper[4881]: I0121 10:57:28.064679 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:28Z is after 2025-08-24T17:21:41Z"
Jan 21 10:57:28 crc kubenswrapper[4881]: I0121 10:57:28.078682 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fs42r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09da9e14-f6d5-4346-a4a0-c17711e3b603\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://821c7c539a796f03cdff04f5f89e6adab4d088c53f5c1ad85851862e5409e7eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"syste
m-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kt6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fs42r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:28Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:28 crc kubenswrapper[4881]: I0121 10:57:28.091706 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"adf42b80-04ba-4164-b847-3b7f8b94816b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5f503293f457017fbc87cd1525eaaf06d41044a37f87127e1398bd50a228ea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d93b0b0bb02293efe7c01a9dbc64ff302a7b9fab07a2fe9bef5d0c2b5e8ac30e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05a4a29ae6a2b0d4acf843ae40de1e262c2149dac28e15697dbcd6d237235f29\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfff7b0c4eae65a4ed6e28b94feef5a5b0c4fb224f9cde2cfd9072727968e754\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:56:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:28Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:28 crc kubenswrapper[4881]: I0121 10:57:28.106547 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:28Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:28 crc kubenswrapper[4881]: I0121 10:57:28.117510 4881 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 03:49:55.041160393 +0000 UTC Jan 21 10:57:28 crc kubenswrapper[4881]: I0121 10:57:28.120626 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f8092488edbd3b7f307819f2cad06e6d1a6721d8c2fd8fa05a8f7e652949ca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:28Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:28 crc kubenswrapper[4881]: 
I0121 10:57:28.130623 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:28 crc kubenswrapper[4881]: I0121 10:57:28.130650 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:28 crc kubenswrapper[4881]: I0121 10:57:28.130662 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:28 crc kubenswrapper[4881]: I0121 10:57:28.130677 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:28 crc kubenswrapper[4881]: I0121 10:57:28.130687 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:28Z","lastTransitionTime":"2026-01-21T10:57:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:28 crc kubenswrapper[4881]: I0121 10:57:28.131767 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8sptw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f19f480e-331f-42f5-a3b6-fd0c6847b157\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21b25675a85765cfd4abe361687eedfdf5dfffc1c9b83069f14986821acb12d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hjr7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8sptw\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:28Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:28 crc kubenswrapper[4881]: I0121 10:57:28.145483 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5da31bf1-60a6-4d73-a425-97fe36cd40ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://945977e9e86c32b08a633de6b962033c71982d3a129ac3e5b2705e19b50d6534\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f32dd767af23c59f021d583a23309b2180db86ae71b4ca4a5f436ac1c77d6e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c47dfa90d18e3e30053b0250c7b986bc877ab6bc3a553060f65468416c8105d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.i
o/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://676e764186e37083591f2c779b05dcbf5bc065bde85efade1c25a27a9bf74570\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://676e764186e37083591f2c779b05dcbf5bc065bde85efade1c25a27a9bf74570\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 10:57:08.940903 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 10:57:08.942374 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2558080694/tls.crt::/tmp/serving-cert-2558080694/tls.key\\\\\\\"\\\\nI0121 10:57:14.590178 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 10:57:14.594387 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 10:57:14.594447 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 10:57:14.594564 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 10:57:14.594575 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 10:57:14.615981 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 10:57:14.616035 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 10:57:14.616045 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 10:57:14.616060 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 10:57:14.616065 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 10:57:14.616071 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 10:57:14.616077 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 10:57:14.616503 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 10:57:14.623960 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T10:56:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b00c4c20b4212a307993f164d3f2d23b9b8bd823111bef7a385ae8811e147f3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://164282bb15c33a383e2fc11a6617ca4008eec1b59e1c5577f7b34b0fcbddc99f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://164282bb15c33a383e2fc11a6617ca4008eec1b59e1c5577f7b34b0fcbddc99f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:56:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:28Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:28 crc kubenswrapper[4881]: I0121 10:57:28.166447 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:28Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:28 crc kubenswrapper[4881]: I0121 10:57:28.179597 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ddaa3aae50a05142125742b3cc13e65433fcadbef5b1e273232cffb660cc700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:28Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:28 crc kubenswrapper[4881]: I0121 10:57:28.204218 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tjwf8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf4f6fc0-ed4c-47b7-b2bc-8033980781a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9164acd9a91836c25b02986f204dd0302964f6bff6fe077971d37eebd4b560ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57d55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:22Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tjwf8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:28Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:28 crc kubenswrapper[4881]: I0121 10:57:28.216430 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"adf42b80-04ba-4164-b847-3b7f8b94816b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5f503293f457017fbc87cd1525eaaf06d41044a37f87127e1398bd50a228ea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d93b0b0bb02293efe7c01a9dbc64ff302a7b9fab07a2fe9bef5d0c2b5e8ac30e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05a4a29ae6a2b0d4acf843ae40de1e262c2149dac28e15697dbcd6d237235f29\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfff7b0c4eae65a4ed6e28b94feef5a5b0c4fb224f9cde2cfd9072727968e754\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:56:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:28Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:28 crc kubenswrapper[4881]: I0121 10:57:28.231151 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:28Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:28 crc kubenswrapper[4881]: I0121 10:57:28.232894 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:28 crc kubenswrapper[4881]: I0121 10:57:28.232925 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:28 crc kubenswrapper[4881]: I0121 10:57:28.232935 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:28 crc kubenswrapper[4881]: I0121 10:57:28.232952 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:28 crc kubenswrapper[4881]: I0121 10:57:28.232965 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:28Z","lastTransitionTime":"2026-01-21T10:57:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:57:28 crc kubenswrapper[4881]: I0121 10:57:28.246946 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fs42r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09da9e14-f6d5-4346-a4a0-c17711e3b603\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://821c7c539a796f03cdff04f5f89e6adab4d088c53f5c1ad85851862e5409e7eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kt6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fs42r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:28Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:28 crc kubenswrapper[4881]: I0121 10:57:28.279061 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14b423e6f144bf9e1e30c3f0a074ca9edd3c1e2cfa1674ea5b047a6a8fb01d92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c179ea565cc19d51f2becaf302623c94acbd70749ace24c754af51e3f0101033\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to 
call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:28Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:28 crc kubenswrapper[4881]: I0121 10:57:28.289836 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:28Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:28 crc kubenswrapper[4881]: I0121 10:57:28.301392 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-v4wxp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c14980d7-1b3b-463b-8f57-f1e1afbd258c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7dffa6cfe62b953df5f9734726e4b93967d3fce7fa5743b1adef693e4f75e48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7dffa6cfe62b953df5f9734726e4b93967d3fce7fa5743b1adef693e4f75e48\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ceabfac4703edf7fe55ec5dd41e3fc1736424533c54ae12adcdb1f077e1f756\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ceabfac4703edf7fe55ec5dd41e3fc1736424533c54ae12adcdb1f077e1f756\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\
\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a2fff9245b8148b515dd4af52db2ec776cca710b28074e8955162220448d248\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a2fff9245b8148b515dd4af52db2ec776cca710b28074e8955162220448d248\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f6276883835d2a1fbab11df2fd15684b8ee850b6f264f3f194f6de68cc27cf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f6276883835d2a1fbab11df2fd15684b8ee850b6f264f3f194f6de68cc27cf9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c51b231b861d87536aaa44cd0e1018fc15464809b5a3bf30c1d2a25dd9a12a8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:
98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c51b231b861d87536aaa44cd0e1018fc15464809b5a3bf30c1d2a25dd9a12a8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-v4wxp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:28Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:28 crc kubenswrapper[4881]: I0121 10:57:28.310244 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 10:57:28 crc kubenswrapper[4881]: I0121 10:57:28.310301 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 10:57:28 crc kubenswrapper[4881]: E0121 10:57:28.310401 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 10:57:28 crc kubenswrapper[4881]: E0121 10:57:28.310493 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 10:57:28 crc kubenswrapper[4881]: I0121 10:57:28.316986 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3687b313-1df2-4274-80db-8c758b51bf2d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5b27143ea6007376f5989ccc3a11947ede80be1b5fac1b738ffbf27fa05c6d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hml99\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7172ca109a870f3c1c8d3af0117700b7b23e7a65c3e841f3a4ef3f445e85270d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hml99\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fb4fr\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:28Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:28 crc kubenswrapper[4881]: I0121 10:57:28.336654 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:28 crc kubenswrapper[4881]: I0121 10:57:28.336712 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:28 crc kubenswrapper[4881]: I0121 10:57:28.336726 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:28 crc kubenswrapper[4881]: I0121 10:57:28.336746 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:28 crc kubenswrapper[4881]: I0121 10:57:28.336761 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:28Z","lastTransitionTime":"2026-01-21T10:57:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:28 crc kubenswrapper[4881]: I0121 10:57:28.336941 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8bb6d97-b3b8-4e31-b704-8e565385ab26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e45d7cde8c26fc00bb1b5ad5cc88c41aff02acf0860856e86f3c3bfdc01e7045\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://599bfaf6bfc412f779f4732cf9f4273382c1f0af20576581264ea4fc63b2ec38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9454db8c9af3187ee06e017ac192668036b1f565305a37de61c3cffe4d94e7db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7fa3136ee876ebb14cd5164f9176c91c60cc04fd7882a3ad0957ce3a36184d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2c1d1496a7f7e5fad6a3d4ade4104e8e0c40fd0ca4722ca5e1ca1fbf0ebf3cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d29c170fdf57f1f4c678e2902443d9ef467ac3157370e997fd5596b4eecfb54e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58a840f0217d0e057d132d7debeba49b9c541f7f
69f33178abee1a44909c83c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47c176d51b5558e85897ac8c72c02bb6f152a972d8e941aeb114333f777e22ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db879e2b1a43a0ae350c4777ee6355bf5b97b1d5977e909ee0968c77852519dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db879e2b1a43a0ae350c4777ee6355bf5b97b1d5977e909ee0968c77852519dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bx64f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:28Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:28 crc kubenswrapper[4881]: I0121 10:57:28.439301 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:28 crc kubenswrapper[4881]: I0121 10:57:28.439336 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:28 crc kubenswrapper[4881]: I0121 10:57:28.439347 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:28 crc kubenswrapper[4881]: I0121 10:57:28.439362 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:28 crc kubenswrapper[4881]: I0121 10:57:28.439372 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:28Z","lastTransitionTime":"2026-01-21T10:57:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:57:28 crc kubenswrapper[4881]: I0121 10:57:28.542327 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:28 crc kubenswrapper[4881]: I0121 10:57:28.542673 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:28 crc kubenswrapper[4881]: I0121 10:57:28.542813 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:28 crc kubenswrapper[4881]: I0121 10:57:28.542951 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:28 crc kubenswrapper[4881]: I0121 10:57:28.543064 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:28Z","lastTransitionTime":"2026-01-21T10:57:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:28 crc kubenswrapper[4881]: I0121 10:57:28.646026 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:28 crc kubenswrapper[4881]: I0121 10:57:28.646074 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:28 crc kubenswrapper[4881]: I0121 10:57:28.646085 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:28 crc kubenswrapper[4881]: I0121 10:57:28.646104 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:28 crc kubenswrapper[4881]: I0121 10:57:28.646121 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:28Z","lastTransitionTime":"2026-01-21T10:57:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:28 crc kubenswrapper[4881]: I0121 10:57:28.749433 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:28 crc kubenswrapper[4881]: I0121 10:57:28.749497 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:28 crc kubenswrapper[4881]: I0121 10:57:28.749513 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:28 crc kubenswrapper[4881]: I0121 10:57:28.749552 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:28 crc kubenswrapper[4881]: I0121 10:57:28.749570 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:28Z","lastTransitionTime":"2026-01-21T10:57:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:57:28 crc kubenswrapper[4881]: I0121 10:57:28.852538 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:28 crc kubenswrapper[4881]: I0121 10:57:28.852591 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:28 crc kubenswrapper[4881]: I0121 10:57:28.852607 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:28 crc kubenswrapper[4881]: I0121 10:57:28.852631 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:28 crc kubenswrapper[4881]: I0121 10:57:28.852648 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:28Z","lastTransitionTime":"2026-01-21T10:57:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:28 crc kubenswrapper[4881]: I0121 10:57:28.892288 4881 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 21 10:57:28 crc kubenswrapper[4881]: I0121 10:57:28.956915 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:28 crc kubenswrapper[4881]: I0121 10:57:28.956956 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:28 crc kubenswrapper[4881]: I0121 10:57:28.956966 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:28 crc kubenswrapper[4881]: I0121 10:57:28.956981 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:28 crc kubenswrapper[4881]: I0121 10:57:28.956994 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:28Z","lastTransitionTime":"2026-01-21T10:57:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:57:29 crc kubenswrapper[4881]: I0121 10:57:29.061978 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:29 crc kubenswrapper[4881]: I0121 10:57:29.062024 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:29 crc kubenswrapper[4881]: I0121 10:57:29.062035 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:29 crc kubenswrapper[4881]: I0121 10:57:29.062053 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:29 crc kubenswrapper[4881]: I0121 10:57:29.062064 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:29Z","lastTransitionTime":"2026-01-21T10:57:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:29 crc kubenswrapper[4881]: I0121 10:57:29.117693 4881 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 02:27:17.853793355 +0000 UTC Jan 21 10:57:29 crc kubenswrapper[4881]: I0121 10:57:29.165143 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:29 crc kubenswrapper[4881]: I0121 10:57:29.165196 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:29 crc kubenswrapper[4881]: I0121 10:57:29.165233 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:29 crc kubenswrapper[4881]: I0121 10:57:29.165265 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:29 crc kubenswrapper[4881]: I0121 10:57:29.165286 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:29Z","lastTransitionTime":"2026-01-21T10:57:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:57:29 crc kubenswrapper[4881]: I0121 10:57:29.270168 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:29 crc kubenswrapper[4881]: I0121 10:57:29.270233 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:29 crc kubenswrapper[4881]: I0121 10:57:29.270250 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:29 crc kubenswrapper[4881]: I0121 10:57:29.270273 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:29 crc kubenswrapper[4881]: I0121 10:57:29.270290 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:29Z","lastTransitionTime":"2026-01-21T10:57:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:29 crc kubenswrapper[4881]: I0121 10:57:29.310373 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 10:57:29 crc kubenswrapper[4881]: E0121 10:57:29.310636 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 10:57:29 crc kubenswrapper[4881]: I0121 10:57:29.374026 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:29 crc kubenswrapper[4881]: I0121 10:57:29.374439 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:29 crc kubenswrapper[4881]: I0121 10:57:29.374648 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:29 crc kubenswrapper[4881]: I0121 10:57:29.374894 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:29 crc kubenswrapper[4881]: I0121 10:57:29.375103 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:29Z","lastTransitionTime":"2026-01-21T10:57:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:57:29 crc kubenswrapper[4881]: I0121 10:57:29.478647 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:29 crc kubenswrapper[4881]: I0121 10:57:29.478689 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:29 crc kubenswrapper[4881]: I0121 10:57:29.478700 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:29 crc kubenswrapper[4881]: I0121 10:57:29.478717 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:29 crc kubenswrapper[4881]: I0121 10:57:29.478729 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:29Z","lastTransitionTime":"2026-01-21T10:57:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:29 crc kubenswrapper[4881]: I0121 10:57:29.489898 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qgrth"] Jan 21 10:57:29 crc kubenswrapper[4881]: I0121 10:57:29.490690 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qgrth" Jan 21 10:57:29 crc kubenswrapper[4881]: I0121 10:57:29.493008 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 21 10:57:29 crc kubenswrapper[4881]: I0121 10:57:29.493436 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 21 10:57:29 crc kubenswrapper[4881]: I0121 10:57:29.511096 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"adf42b80-04ba-4164-b847-3b7f8b94816b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5f503293f457017fbc87cd1525eaaf06d41044a37f87127e1398bd50a228ea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d93b0b0bb02293efe7c01a9dbc64ff302a7b9fab07a2fe9bef5d0c2b5e8ac30e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05a4a29ae6a2b0d4acf843ae40de1e262c2149dac28e15697dbcd6d237235f29\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfff7b0c4eae65a4ed6e28b94feef5a5b0c4fb224f9cde2cfd9072727968e754\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:56:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:29Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:29 crc kubenswrapper[4881]: I0121 10:57:29.525631 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:29Z is after 2025-08-24T17:21:41Z"
Jan 21 10:57:29 crc kubenswrapper[4881]: I0121 10:57:29.539982 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fs42r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09da9e14-f6d5-4346-a4a0-c17711e3b603\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://821c7c539a796f03cdff04f5f89e6adab4d088c53f5c1ad85851862e5409e7eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kt6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fs42r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:29Z is after 2025-08-24T17:21:41Z"
Jan 21 10:57:29 crc kubenswrapper[4881]: I0121 10:57:29.551161 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qgrth" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d379505c-c658-4dd5-b841-40c8443012c6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57krk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57krk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qgrth\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:29Z is after 2025-08-24T17:21:41Z"
Jan 21 10:57:29 crc kubenswrapper[4881]: I0121 10:57:29.565227 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14b423e6f144bf9e1e30c3f0a074ca9edd3c1e2cfa1674ea5b047a6a8fb01d92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c179ea565cc19d51f2becaf302623c94acbd70749ace24c754af51e3f0101033\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:29Z is after 2025-08-24T17:21:41Z"
Jan 21 10:57:29 crc kubenswrapper[4881]: I0121 10:57:29.578036 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d379505c-c658-4dd5-b841-40c8443012c6-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-qgrth\" (UID: \"d379505c-c658-4dd5-b841-40c8443012c6\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qgrth"
Jan 21 10:57:29 crc kubenswrapper[4881]: I0121 10:57:29.578087 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d379505c-c658-4dd5-b841-40c8443012c6-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-qgrth\" (UID: \"d379505c-c658-4dd5-b841-40c8443012c6\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qgrth"
Jan 21 10:57:29 crc kubenswrapper[4881]: I0121 10:57:29.578106 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d379505c-c658-4dd5-b841-40c8443012c6-env-overrides\") pod \"ovnkube-control-plane-749d76644c-qgrth\" (UID: \"d379505c-c658-4dd5-b841-40c8443012c6\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qgrth"
Jan 21 10:57:29 crc kubenswrapper[4881]: I0121 10:57:29.578132 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-57krk\" (UniqueName: \"kubernetes.io/projected/d379505c-c658-4dd5-b841-40c8443012c6-kube-api-access-57krk\") pod \"ovnkube-control-plane-749d76644c-qgrth\" (UID: \"d379505c-c658-4dd5-b841-40c8443012c6\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qgrth"
Jan 21 10:57:29 crc kubenswrapper[4881]: I0121 10:57:29.581563 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 10:57:29 crc kubenswrapper[4881]: I0121 10:57:29.581616 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 10:57:29 crc kubenswrapper[4881]: I0121 10:57:29.581630 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 10:57:29 crc kubenswrapper[4881]: I0121 10:57:29.581650 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 10:57:29 crc kubenswrapper[4881]: I0121 10:57:29.581665 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:29Z","lastTransitionTime":"2026-01-21T10:57:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 10:57:29 crc kubenswrapper[4881]: I0121 10:57:29.581766 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:29Z is after 2025-08-24T17:21:41Z"
Jan 21 10:57:29 crc kubenswrapper[4881]: I0121 10:57:29.598212 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-v4wxp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c14980d7-1b3b-463b-8f57-f1e1afbd258c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7dffa6cfe62b953df5f9734726e4b93967d3fce7fa5743b1adef693e4f75e48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7dffa6cfe62b953df5f9734726e4b93967d3fce7fa5743b1adef693e4f75e48\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ceabfac4703edf7fe55ec5dd41e3fc1736424533c54ae12adcdb1f077e1f756\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ceabfac4703edf7fe55ec5dd41e3fc1736424533c54ae12adcdb1f077e1f756\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a2fff9245b8148b515dd4af52db2ec776cca710b28074e8955162220448d248\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a2fff9245b8148b515dd4af52db2ec776cca710b28074e8955162220448d248\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f6276883835d2a1fbab11df2fd15684b8ee850b6f264f3f194f6de68cc27cf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f6276883835d2a1fbab11df2fd15684b8ee850b6f264f3f194f6de68cc27cf9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c51b231b861d87536aaa44cd0e1018fc15464809b5a3bf30c1d2a25dd9a12a8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c51b231b861d87536aaa44cd0e1018fc15464809b5a3bf30c1d2a25dd9a12a8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-v4wxp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:29Z is after 2025-08-24T17:21:41Z"
Jan 21 10:57:29 crc kubenswrapper[4881]: I0121 10:57:29.611281 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3687b313-1df2-4274-80db-8c758b51bf2d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5b27143ea6007376f5989ccc3a11947ede80be1b5fac1b738ffbf27fa05c6d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hml99\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7172ca109a870f3c1c8d3af0117700b7b23e7a65c3e841f3a4ef3f445e85270d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hml99\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fb4fr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:29Z is after 2025-08-24T17:21:41Z"
Jan 21 10:57:29 crc kubenswrapper[4881]: I0121 10:57:29.628965 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8bb6d97-b3b8-4e31-b704-8e565385ab26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e45d7cde8c26fc00bb1b5ad5cc88c41aff02acf0860856e86f3c3bfdc01e7045\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://599bfaf6bfc412f779f4732cf9f4273382c1f0af20576581264ea4fc63b2ec38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9454db8c9af3187ee06e017ac192668036b1f565305a37de61c3cffe4d94e7db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7fa3136ee876ebb14cd5164f9176c91c60cc04fd7882a3ad0957ce3a36184d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2c1d1496a7f7e5fad6a3d4ade4104e8e0c40fd0ca4722ca5e1ca1fbf0ebf3cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d29c170fdf57f1f4c678e2902443d9ef467ac3157370e997fd5596b4eecfb54e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58a840f0217d0e057d132d7debeba49b9c541f7f69f33178abee1a44909c83c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47c176d51b5558e85897ac8c72c02bb6f152a972d8e941aeb114333f777e22ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db879e2b1a43a0ae350c4777ee6355bf5b97b1d5977e909ee0968c77852519dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db879e2b1a43a0ae350c4777ee6355bf5b97b1d5977e909ee0968c77852519dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bx64f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:29Z is after 2025-08-24T17:21:41Z"
Jan 21 10:57:29 crc kubenswrapper[4881]: I0121 10:57:29.642349 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f8092488edbd3b7f307819f2cad06e6d1a6721d8c2fd8fa05a8f7e652949ca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:29Z is after 2025-08-24T17:21:41Z"
Jan 21 10:57:29 crc kubenswrapper[4881]: I0121 10:57:29.652920 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8sptw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f19f480e-331f-42f5-a3b6-fd0c6847b157\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21b25675a85765cfd4abe361687eedfdf5dfffc1c9b83069f14986821acb12d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hjr7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8sptw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:29Z is after 2025-08-24T17:21:41Z"
Jan 21 10:57:29 crc kubenswrapper[4881]: I0121 10:57:29.666639 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5da31bf1-60a6-4d73-a425-97fe36cd40ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://945977e9e86c32b08a633de6b962033c71982d3a129ac3e5b2705e19b50d6534\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f32dd767af23c59f021d583a23309b2180db86ae71b4ca4a5f436ac1c77d6e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c47dfa90d18e3e30053b0250c7b986bc877ab6bc3a553060f65468416c8105d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://676e764186e37083591f2c779b05dcbf5bc065bde85efade1c25a27a9bf74570\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://676e764186e37083591f2c779b05dcbf5bc065bde85efade1c25a27a9bf74570\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 10:57:08.940903 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 10:57:08.942374 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2558080694/tls.crt::/tmp/serving-cert-2558080694/tls.key\\\\\\\"\\\\nI0121 10:57:14.590178 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 10:57:14.594387 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 10:57:14.594447 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 10:57:14.594564 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 10:57:14.594575 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 10:57:14.615981 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 10:57:14.616035 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 10:57:14.616045 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 10:57:14.616060 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 10:57:14.616065 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 10:57:14.616071 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 10:57:14.616077 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 10:57:14.616503 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 10:57:14.623960 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T10:56:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b00c4c20b4212a307993f164d3f2d23b9b8bd823111bef7a385ae8811e147f3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://164282bb15c33a383e2fc11a6617ca4008eec1b59e1c5577f7b34b0fcbddc99f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://164282bb15c33a383e2fc11a6617ca4008eec1b59e1c5577f7b34b0fcbddc99f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:56:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:29Z is after 2025-08-24T17:21:41Z"
Jan 21 10:57:29 crc kubenswrapper[4881]: I0121 10:57:29.679490 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-57krk\" (UniqueName: \"kubernetes.io/projected/d379505c-c658-4dd5-b841-40c8443012c6-kube-api-access-57krk\") pod \"ovnkube-control-plane-749d76644c-qgrth\" (UID: \"d379505c-c658-4dd5-b841-40c8443012c6\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qgrth"
Jan 21 10:57:29 crc kubenswrapper[4881]: I0121 10:57:29.679647 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d379505c-c658-4dd5-b841-40c8443012c6-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-qgrth\" (UID: \"d379505c-c658-4dd5-b841-40c8443012c6\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qgrth"
Jan 21 10:57:29 crc kubenswrapper[4881]: I0121 10:57:29.679750 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d379505c-c658-4dd5-b841-40c8443012c6-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-qgrth\" (UID: \"d379505c-c658-4dd5-b841-40c8443012c6\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qgrth"
Jan 21 10:57:29 crc kubenswrapper[4881]: I0121 10:57:29.679807 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d379505c-c658-4dd5-b841-40c8443012c6-env-overrides\") pod \"ovnkube-control-plane-749d76644c-qgrth\" (UID: \"d379505c-c658-4dd5-b841-40c8443012c6\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qgrth"
Jan 21 10:57:29 crc kubenswrapper[4881]: I0121 10:57:29.680773 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d379505c-c658-4dd5-b841-40c8443012c6-env-overrides\") pod \"ovnkube-control-plane-749d76644c-qgrth\" (UID: \"d379505c-c658-4dd5-b841-40c8443012c6\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qgrth"
Jan 21 10:57:29 crc kubenswrapper[4881]: I0121 10:57:29.680877 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d379505c-c658-4dd5-b841-40c8443012c6-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-qgrth\" (UID: \"d379505c-c658-4dd5-b841-40c8443012c6\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qgrth"
Jan 21 10:57:29 crc kubenswrapper[4881]: I0121 10:57:29.681442 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:29Z is after 2025-08-24T17:21:41Z"
Jan 21 10:57:29 crc kubenswrapper[4881]: I0121 10:57:29.685197 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 10:57:29 crc kubenswrapper[4881]: I0121 10:57:29.685573 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 10:57:29 crc kubenswrapper[4881]: I0121 10:57:29.685540 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d379505c-c658-4dd5-b841-40c8443012c6-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-qgrth\" (UID: \"d379505c-c658-4dd5-b841-40c8443012c6\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qgrth"
Jan 21 10:57:29 crc kubenswrapper[4881]: I0121 10:57:29.685585 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 10:57:29 crc kubenswrapper[4881]: I0121 10:57:29.685662 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 10:57:29 crc kubenswrapper[4881]: I0121 10:57:29.685675 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:29Z","lastTransitionTime":"2026-01-21T10:57:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 10:57:29 crc kubenswrapper[4881]: I0121 10:57:29.698917 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ddaa3aae50a05142125742b3cc13e65433fcadbef5b1e273232cffb660cc700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:29Z is after 2025-08-24T17:21:41Z"
Jan 21 10:57:29 crc kubenswrapper[4881]: I0121 10:57:29.702611 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-57krk\" (UniqueName: \"kubernetes.io/projected/d379505c-c658-4dd5-b841-40c8443012c6-kube-api-access-57krk\") pod \"ovnkube-control-plane-749d76644c-qgrth\" (UID: \"d379505c-c658-4dd5-b841-40c8443012c6\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qgrth"
Jan 21 10:57:29 crc kubenswrapper[4881]: I0121 10:57:29.711238 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tjwf8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf4f6fc0-ed4c-47b7-b2bc-8033980781a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9164acd9a91836c25b02986f204dd0302964f6bff6fe077971d37eebd4b560ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57d55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:22Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tjwf8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:29Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:29 crc kubenswrapper[4881]: I0121 10:57:29.788808 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:29 crc kubenswrapper[4881]: I0121 10:57:29.788844 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:29 crc kubenswrapper[4881]: I0121 10:57:29.788853 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:29 crc kubenswrapper[4881]: I0121 10:57:29.788869 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:29 crc kubenswrapper[4881]: I0121 10:57:29.788878 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:29Z","lastTransitionTime":"2026-01-21T10:57:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:29 crc kubenswrapper[4881]: I0121 10:57:29.808369 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qgrth" Jan 21 10:57:29 crc kubenswrapper[4881]: I0121 10:57:29.892020 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:29 crc kubenswrapper[4881]: I0121 10:57:29.892501 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:29 crc kubenswrapper[4881]: I0121 10:57:29.892675 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:29 crc kubenswrapper[4881]: I0121 10:57:29.892953 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:29 crc kubenswrapper[4881]: I0121 10:57:29.893117 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:29Z","lastTransitionTime":"2026-01-21T10:57:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:29 crc kubenswrapper[4881]: I0121 10:57:29.908164 4881 generic.go:334] "Generic (PLEG): container finished" podID="c14980d7-1b3b-463b-8f57-f1e1afbd258c" containerID="13503428fec90dad2c9ce1086fe7c71c7910f145a6e7a8d546791e11002f36f9" exitCode=0 Jan 21 10:57:29 crc kubenswrapper[4881]: I0121 10:57:29.908347 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-v4wxp" event={"ID":"c14980d7-1b3b-463b-8f57-f1e1afbd258c","Type":"ContainerDied","Data":"13503428fec90dad2c9ce1086fe7c71c7910f145a6e7a8d546791e11002f36f9"} Jan 21 10:57:29 crc kubenswrapper[4881]: I0121 10:57:29.910489 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qgrth" event={"ID":"d379505c-c658-4dd5-b841-40c8443012c6","Type":"ContainerStarted","Data":"940c09b091f4d8b17833fc9e9f36c4d8ff8768d518f48994774a58ed142f85da"} Jan 21 10:57:29 crc kubenswrapper[4881]: I0121 10:57:29.910513 4881 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 21 10:57:29 crc kubenswrapper[4881]: I0121 10:57:29.925474 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:29Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:29 crc kubenswrapper[4881]: I0121 10:57:29.942920 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-v4wxp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c14980d7-1b3b-463b-8f57-f1e1afbd258c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7dffa6cfe62b953df5f9734726e4b93967d3fce7fa5743b1adef693e4f75e48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7dffa6cfe62b953df5f9734726e4b93967d3fce7fa5743b1adef693e4f75e48\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ceabfac4703edf7fe55ec5dd41e3fc1736424533c54ae12adcdb1f077e1f756\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ceabfac4703edf7fe55ec5dd41e3fc1736424533c54ae12adcdb1f077e1f756\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a2fff9245b8148b515dd4af52db2ec776cca710b28074e8955162220448d248\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a2fff9245b8148b515dd4af52db2ec776cca710b28074e8955162220448d248\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f6276883835d2a1fbab11df2fd15684b8ee850b6f264f3f194f6de68cc27cf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f6276883835d2a1fbab11df2fd15684b8ee850b6f264f3f194f6de68cc27cf9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c51b231b861d87536aaa44cd0e1018fc15464809b5a3bf30c1d2a25dd9a12a8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c51b231b861d87536aaa44cd0e1018fc15464809b5a3bf30c1d2a25dd9a12a8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13503428fec90dad2c9ce1086fe7c71c7910f145a6e7a8d546791e11002f36f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13503428fec90dad2c9ce1086fe7c71c7910f145a6e7a8d546791e11002f36f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-v4wxp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:29Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:29 crc kubenswrapper[4881]: I0121 10:57:29.954526 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3687b313-1df2-4274-80db-8c758b51bf2d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5b27143ea6007376f5989ccc3a11947ede80be1b5fac1b738ffbf27fa05c6d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hml99\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7172ca109a870f3c1c8d3af0117700b7b23e7a65c3e841f3a4ef3f445e85270d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hml99\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fb4fr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:29Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:29 crc kubenswrapper[4881]: I0121 10:57:29.982695 4881 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8bb6d97-b3b8-4e31-b704-8e565385ab26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e45d7cde8c26fc00bb1b5ad5cc88c41aff02acf0860856e86f3c3bfdc01e7045\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://599bfaf6bfc412f779f4732cf9f4273382c1f0af20576581264ea4fc63b2ec38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9454db8c9af3187ee06e017ac192668036b1f565305a37de61c3cffe4d94e7db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7fa3136ee876ebb14cd5164f9176c91c60cc04fd7882a3ad0957ce3a36184d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2c1d1496a7f7e5fad6a3d4ade4104e8e0c40fd0ca4722ca5e1ca1fbf0ebf3cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d29c170fdf57f1f4c678e2902443d9ef467ac3157370e997fd5596b4eecfb54e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58a840f0217d0e057d132d7debeba49b9c541f7f69f33178abee1a44909c83c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount
\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47c176d51b5558e85897ac8c72c02bb6f152a972d8e941aeb114333f777e22ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db879e2b1a43a0ae350c4777ee6355bf5b97b1d5977e909ee0968c77852519dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db879e2b1a43a0ae350c4777ee6355bf5b97b1d5977e909ee0968c77852519dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bx64f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:29Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:29 crc kubenswrapper[4881]: I0121 10:57:29.998467 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:29 crc kubenswrapper[4881]: I0121 10:57:29.998542 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:29 crc kubenswrapper[4881]: I0121 10:57:29.998554 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:29 crc 
kubenswrapper[4881]: I0121 10:57:29.998582 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:29 crc kubenswrapper[4881]: I0121 10:57:29.998600 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:29Z","lastTransitionTime":"2026-01-21T10:57:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:29 crc kubenswrapper[4881]: I0121 10:57:29.998634 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14b423e6f144bf9e1e30c3f0a074ca9edd3c1e2cfa1674ea5b047a6a8fb01d92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c179ea565cc19d51f2becaf302623c94acbd70749ace24c754af51e3f0101033\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:29Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:30 crc kubenswrapper[4881]: I0121 10:57:30.016753 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f8092488edbd3b7f307819f2cad06e6d1a6721d8c2fd8fa05a8f7e652949ca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:30Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:30 crc kubenswrapper[4881]: I0121 10:57:30.032764 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8sptw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f19f480e-331f-42f5-a3b6-fd0c6847b157\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21b25675a85765cfd4abe361687eedfdf5dfffc1c9b83069f14986821acb12d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hjr7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8sptw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:30Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:30 crc kubenswrapper[4881]: I0121 10:57:30.051729 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5da31bf1-60a6-4d73-a425-97fe36cd40ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://945977e9e86c32b08a633de6b962033c71982d3a129ac3e5b2705e19b50d6534\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f32dd767af23c59f021d583a23309b2180db86ae71b4ca4a5f436ac1c77d6e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c47dfa90d18e3e30053b0250c7b986bc877ab6bc3a553060f65468416c8105d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://676e764186e37083591f2c779b05dcbf5bc065bde85efade1c25a27a9bf74570\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://676e764186e37083591f2c779b05dcbf5bc065bde85efade1c25a27a9bf74570\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 10:57:08.940903 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 10:57:08.942374 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2558080694/tls.crt::/tmp/serving-cert-2558080694/tls.key\\\\\\\"\\\\nI0121 10:57:14.590178 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 10:57:14.594387 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 10:57:14.594447 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 10:57:14.594564 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 10:57:14.594575 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 10:57:14.615981 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 10:57:14.616035 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 10:57:14.616045 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 10:57:14.616060 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 10:57:14.616065 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 10:57:14.616071 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 10:57:14.616077 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 10:57:14.616503 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 10:57:14.623960 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T10:56:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b00c4c20b4212a307993f164d3f2d23b9b8bd823111bef7a385ae8811e147f3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://164282bb15c33a383e2fc11a6617ca4008eec1b59e1c5577f7b34b0fcbddc99f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://164282bb15c33a383e2fc11a6617ca4008eec1b59e1c5577f7b34b0fcbddc99f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:56:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:30Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:30 crc kubenswrapper[4881]: I0121 10:57:30.068348 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:30Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:30 crc kubenswrapper[4881]: I0121 10:57:30.083657 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ddaa3aae50a05142125742b3cc13e65433fcadbef5b1e273232cffb660cc700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:30Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:30 crc kubenswrapper[4881]: I0121 10:57:30.096875 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tjwf8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf4f6fc0-ed4c-47b7-b2bc-8033980781a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9164acd9a91836c25b02986f204dd0302964f6bff6fe077971d37eebd4b560ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57d55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:22Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tjwf8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:30Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:30 crc kubenswrapper[4881]: I0121 10:57:30.101283 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:30 crc kubenswrapper[4881]: I0121 10:57:30.101321 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:30 crc kubenswrapper[4881]: I0121 10:57:30.101331 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Jan 21 10:57:30 crc kubenswrapper[4881]: I0121 10:57:30.101348 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:30 crc kubenswrapper[4881]: I0121 10:57:30.101360 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:30Z","lastTransitionTime":"2026-01-21T10:57:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:30 crc kubenswrapper[4881]: I0121 10:57:30.109892 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"adf42b80-04ba-4164-b847-3b7f8b94816b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5f503293f457017fbc87cd1525eaaf06d41044a37f87127e1398bd50a228ea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d93b0b0bb02293efe7c01a9dbc64ff302a7b9fab07a2fe9bef5d0c2b5e8ac30e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05a4a29ae6a2b0d4acf843ae40de1e262c2149dac28e15697dbcd6d237235f29\\\",\\\"image\\\":\\\"quay.io/cr
cont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfff7b0c4eae65a4ed6e28b94feef5a5b0c4fb224f9cde2cfd9072727968e754\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:56:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:30Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:30 crc kubenswrapper[4881]: I0121 10:57:30.117964 4881 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 12:32:01.827891641 +0000 UTC Jan 21 10:57:30 crc kubenswrapper[4881]: I0121 10:57:30.128250 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:30Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:30 crc kubenswrapper[4881]: I0121 10:57:30.145979 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fs42r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09da9e14-f6d5-4346-a4a0-c17711e3b603\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://821c7c539a796f03cdff04f5f89e6adab4d088c53f5c1ad85851862e5409e7eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cn
i/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kt6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fs42r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:30Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:30 crc kubenswrapper[4881]: I0121 10:57:30.158954 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qgrth" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d379505c-c658-4dd5-b841-40c8443012c6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57krk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57krk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qgrth\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:30Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:30 crc kubenswrapper[4881]: I0121 10:57:30.204030 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:30 crc kubenswrapper[4881]: I0121 10:57:30.204074 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:30 crc kubenswrapper[4881]: I0121 10:57:30.204086 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:30 crc kubenswrapper[4881]: I0121 10:57:30.204101 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:30 crc kubenswrapper[4881]: I0121 10:57:30.204113 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:30Z","lastTransitionTime":"2026-01-21T10:57:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:57:30 crc kubenswrapper[4881]: I0121 10:57:30.310266 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 10:57:30 crc kubenswrapper[4881]: I0121 10:57:30.310356 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 10:57:30 crc kubenswrapper[4881]: E0121 10:57:30.310499 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 10:57:30 crc kubenswrapper[4881]: E0121 10:57:30.310637 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 10:57:30 crc kubenswrapper[4881]: I0121 10:57:30.318546 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:30 crc kubenswrapper[4881]: I0121 10:57:30.318575 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:30 crc kubenswrapper[4881]: I0121 10:57:30.318585 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:30 crc kubenswrapper[4881]: I0121 10:57:30.318600 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:30 crc kubenswrapper[4881]: I0121 10:57:30.318610 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:30Z","lastTransitionTime":"2026-01-21T10:57:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:57:30 crc kubenswrapper[4881]: I0121 10:57:30.421988 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:30 crc kubenswrapper[4881]: I0121 10:57:30.422056 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:30 crc kubenswrapper[4881]: I0121 10:57:30.422074 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:30 crc kubenswrapper[4881]: I0121 10:57:30.422100 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:30 crc kubenswrapper[4881]: I0121 10:57:30.422120 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:30Z","lastTransitionTime":"2026-01-21T10:57:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:30 crc kubenswrapper[4881]: I0121 10:57:30.524622 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:30 crc kubenswrapper[4881]: I0121 10:57:30.524651 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:30 crc kubenswrapper[4881]: I0121 10:57:30.524659 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:30 crc kubenswrapper[4881]: I0121 10:57:30.524680 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:30 crc kubenswrapper[4881]: I0121 10:57:30.524688 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:30Z","lastTransitionTime":"2026-01-21T10:57:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:30 crc kubenswrapper[4881]: I0121 10:57:30.627932 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:30 crc kubenswrapper[4881]: I0121 10:57:30.627990 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:30 crc kubenswrapper[4881]: I0121 10:57:30.628010 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:30 crc kubenswrapper[4881]: I0121 10:57:30.628038 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:30 crc kubenswrapper[4881]: I0121 10:57:30.628056 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:30Z","lastTransitionTime":"2026-01-21T10:57:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:57:30 crc kubenswrapper[4881]: I0121 10:57:30.731852 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:30 crc kubenswrapper[4881]: I0121 10:57:30.731937 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:30 crc kubenswrapper[4881]: I0121 10:57:30.731953 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:30 crc kubenswrapper[4881]: I0121 10:57:30.731975 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:30 crc kubenswrapper[4881]: I0121 10:57:30.731988 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:30Z","lastTransitionTime":"2026-01-21T10:57:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:30 crc kubenswrapper[4881]: I0121 10:57:30.867154 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:30 crc kubenswrapper[4881]: I0121 10:57:30.867189 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:30 crc kubenswrapper[4881]: I0121 10:57:30.867198 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:30 crc kubenswrapper[4881]: I0121 10:57:30.867213 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:30 crc kubenswrapper[4881]: I0121 10:57:30.867221 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:30Z","lastTransitionTime":"2026-01-21T10:57:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:57:30 crc kubenswrapper[4881]: I0121 10:57:30.894518 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 10:57:30 crc kubenswrapper[4881]: I0121 10:57:30.894684 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 10:57:30 crc kubenswrapper[4881]: I0121 10:57:30.894765 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 10:57:30 crc kubenswrapper[4881]: E0121 10:57:30.894961 4881 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 10:57:30 crc kubenswrapper[4881]: E0121 10:57:30.895042 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 10:57:46.89501955 +0000 UTC m=+54.154976029 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 10:57:30 crc kubenswrapper[4881]: E0121 10:57:30.895312 4881 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 10:57:30 crc kubenswrapper[4881]: E0121 10:57:30.895367 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 10:57:46.895353188 +0000 UTC m=+54.155309657 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 10:57:30 crc kubenswrapper[4881]: E0121 10:57:30.895437 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-21 10:57:46.89542912 +0000 UTC m=+54.155385589 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:57:30 crc kubenswrapper[4881]: I0121 10:57:30.938699 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-v4wxp" event={"ID":"c14980d7-1b3b-463b-8f57-f1e1afbd258c","Type":"ContainerStarted","Data":"fd18bd57e9f0f878f56164dee92c18a4fff62c83f518a96d7db735dcd488e052"} Jan 21 10:57:30 crc kubenswrapper[4881]: I0121 10:57:30.940378 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qgrth" event={"ID":"d379505c-c658-4dd5-b841-40c8443012c6","Type":"ContainerStarted","Data":"d634cee9f543d3322f8cdc8bc62252096e789383c55d5d448cc53ab990ac9b52"} Jan 21 10:57:30 crc kubenswrapper[4881]: I0121 10:57:30.940400 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qgrth" event={"ID":"d379505c-c658-4dd5-b841-40c8443012c6","Type":"ContainerStarted","Data":"51a2ec789636052b12e0fdb4e647d7e4f92d1e4b7436933f1529561ffc2021d9"} Jan 21 10:57:30 crc kubenswrapper[4881]: I0121 10:57:30.953162 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f8092488edbd3b7f307819f2cad06e6d1a6721d8c2fd8fa05a8f7e652949ca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:30Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:30 crc kubenswrapper[4881]: I0121 10:57:30.963992 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8sptw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f19f480e-331f-42f5-a3b6-fd0c6847b157\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21b25675a85765cfd4abe361687eedfdf5dfffc1c9b83069f14986821acb12d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hjr7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8sptw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:30Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:30 crc kubenswrapper[4881]: I0121 10:57:30.970095 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:30 crc kubenswrapper[4881]: I0121 10:57:30.970119 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:30 crc kubenswrapper[4881]: I0121 10:57:30.970129 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Jan 21 10:57:30 crc kubenswrapper[4881]: I0121 10:57:30.970145 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:30 crc kubenswrapper[4881]: I0121 10:57:30.970156 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:30Z","lastTransitionTime":"2026-01-21T10:57:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:30 crc kubenswrapper[4881]: I0121 10:57:30.986049 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tjwf8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf4f6fc0-ed4c-47b7-b2bc-8033980781a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9164acd9a91836c25b02986f204dd0302964f6bff6fe077971d37eebd4b560ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57d55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:22Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tjwf8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:30Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:30 crc kubenswrapper[4881]: I0121 10:57:30.996001 4881 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 10:57:30 crc kubenswrapper[4881]: I0121 10:57:30.996080 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 10:57:30 crc kubenswrapper[4881]: E0121 10:57:30.996188 4881 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 10:57:30 crc kubenswrapper[4881]: E0121 10:57:30.996203 4881 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 10:57:30 crc kubenswrapper[4881]: E0121 10:57:30.996213 4881 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 10:57:30 crc kubenswrapper[4881]: E0121 10:57:30.996252 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-21 10:57:46.996239702 +0000 UTC m=+54.256196171 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 10:57:30 crc kubenswrapper[4881]: E0121 10:57:30.997531 4881 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 10:57:30 crc kubenswrapper[4881]: E0121 10:57:30.997553 4881 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 10:57:30 crc kubenswrapper[4881]: E0121 10:57:30.997562 4881 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 10:57:30 crc kubenswrapper[4881]: E0121 10:57:30.997586 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. 
No retries permitted until 2026-01-21 10:57:46.997577165 +0000 UTC m=+54.257533634 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 10:57:31 crc kubenswrapper[4881]: I0121 10:57:31.005836 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5da31bf1-60a6-4d73-a425-97fe36cd40ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://945977e9e86c32b08a633de6b962033c71982d3a129ac3e5b2705e19b50d6534\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f32dd767af23c59f021d583a23309b2180db86ae71b4ca4a5f436ac1c77d6e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://7c47dfa90d18e3e30053b0250c7b986bc877ab6bc3a553060f65468416c8105d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://676e764186e37083591f2c779b05dcbf5bc065bde85efade1c25a27a9bf74570\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://676e764186e37083591f2c779b05dcbf5bc065bde85efade1c25a27a9bf74570\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 10:57:08.940903 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 10:57:08.942374 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2558080694/tls.crt::/tmp/serving-cert-2558080694/tls.key\\\\\\\"\\\\nI0121 10:57:14.590178 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 10:57:14.594387 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 10:57:14.594447 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 10:57:14.594564 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 10:57:14.594575 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 10:57:14.615981 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 10:57:14.616035 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 10:57:14.616045 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 10:57:14.616060 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 10:57:14.616065 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 10:57:14.616071 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 10:57:14.616077 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 10:57:14.616503 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 10:57:14.623960 1 cmd.go:182] pods 
\\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T10:56:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b00c4c20b4212a307993f164d3f2d23b9b8bd823111bef7a385ae8811e147f3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://164282bb15c33a383e2fc11a6617ca4008eec1b59e1c5577f7b34b0fcbddc99f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://164282bb15c33a383e2fc11a6617ca4008eec1b59e1c5577f7b34b0fcbddc99f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:56:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:31Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:31 crc kubenswrapper[4881]: I0121 10:57:31.028549 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:31Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:31 crc kubenswrapper[4881]: I0121 10:57:31.059113 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ddaa3aae50a05142125742b3cc13e65433fcadbef5b1e273232cffb660cc700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:31Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:31 crc kubenswrapper[4881]: I0121 10:57:31.073030 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:31 crc kubenswrapper[4881]: I0121 10:57:31.073078 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:31 crc kubenswrapper[4881]: I0121 10:57:31.073090 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:31 crc kubenswrapper[4881]: I0121 10:57:31.073108 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:31 crc kubenswrapper[4881]: I0121 10:57:31.073118 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:31Z","lastTransitionTime":"2026-01-21T10:57:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:57:31 crc kubenswrapper[4881]: I0121 10:57:31.073566 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qgrth" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d379505c-c658-4dd5-b841-40c8443012c6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57krk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57krk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qgrth\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: 
current time 2026-01-21T10:57:31Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:31 crc kubenswrapper[4881]: I0121 10:57:31.086474 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"adf42b80-04ba-4164-b847-3b7f8b94816b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5f503293f457017fbc87cd1525eaaf06d41044a37f87127e1398bd50a228ea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d93b0b0bb02293efe7c01a9dbc64ff302a7b9fab07a2fe9bef5d0c2b5e8ac30e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05a4a29ae6a2b0d4acf843ae40de1e262c2149dac28e15697dbcd6d237235f29\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"reso
urce-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfff7b0c4eae65a4ed6e28b94feef5a5b0c4fb224f9cde2cfd9072727968e754\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:56:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:31Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:31 crc kubenswrapper[4881]: I0121 10:57:31.100854 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:31Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:31 crc kubenswrapper[4881]: I0121 10:57:31.119914 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fs42r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09da9e14-f6d5-4346-a4a0-c17711e3b603\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://821c7c539a796f03cdff04f5f89e6adab4d088c53f5c1ad85851862e5409e7eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountP
ath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kt6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fs42r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:31Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:31 crc kubenswrapper[4881]: I0121 10:57:31.132413 4881 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 03:59:51.57649681 +0000 UTC Jan 21 10:57:31 crc kubenswrapper[4881]: I0121 10:57:31.141432 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8bb6d97-b3b8-4e31-b704-8e565385ab26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e45d7cde8c26fc00bb1b5ad5cc88c41aff02acf0860856e86f3c3bfdc01e7045\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://599bfaf6bfc412f779f4732cf9f4273382c1f0af20576581264ea4fc63b2ec38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9454db8c9af3187ee06e017ac192668036b1f565305a37de61c3cffe4d94e7db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7fa3136ee876ebb14cd5164f9176c91c60cc04fd7882a3ad0957ce3a36184d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2c1d1496a7f7e5fad6a3d4ade4104e8e0c40fd0ca4722ca5e1ca1fbf0ebf3cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d29c170fdf57f1f4c678e2902443d9ef467ac3157370e997fd5596b4eecfb54e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58a840f0217d0e057d132d7debeba49b9c541f7f
69f33178abee1a44909c83c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47c176d51b5558e85897ac8c72c02bb6f152a972d8e941aeb114333f777e22ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db879e2b1a43a0ae350c4777ee6355bf5b97b1d5977e909ee0968c77852519dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db879e2b1a43a0ae350c4777ee6355bf5b97b1d5977e909ee0968c77852519dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bx64f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:31Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:31 crc kubenswrapper[4881]: I0121 10:57:31.156718 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14b423e6f144bf9e1e30c3f0a074ca9edd3c1e2cfa1674ea5b047a6a8fb01d92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c179ea565cc19d51f2becaf302623c94acbd70749ace24c754af51e3f0101033\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:31Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:31 crc kubenswrapper[4881]: I0121 10:57:31.215477 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:31Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:31 crc kubenswrapper[4881]: I0121 10:57:31.234812 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-v4wxp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c14980d7-1b3b-463b-8f57-f1e1afbd258c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd18bd57e9f0f878f56164dee92c18a4fff62c83f518a96d7db735dcd488e052\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7dffa6cfe62b953df5f9734726e4b93967d3fce7fa5743b1adef693e4f75e48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7dffa6cfe62b953df5f9734726e4b93967d3fce7fa5743b1adef693e4f75e48\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ceabfac4703edf7fe55ec5dd41e3fc1736424533c54ae12adcdb1f077e1f756\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ceabfac4703edf7fe55ec5dd41e3fc1736424533c54ae12adcdb1f077e1f756\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a2fff9245b8148b515dd4af52db2ec776cca710b28074e8955162220448d248\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a2fff9245b8148b515dd4af52db2ec776cca710b28074e8955162220448d248\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f6276883835d2a1fbab11df2fd15684b8ee850b6f264f3f194f6de68cc27cf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f6276883835d2a1fbab11df2fd15684b8ee850b6f264f3f194f6de68cc27cf9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c51b231b861d87536aaa44cd0e1018fc15464809b5a3bf30c1d2a25dd9a12a8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c51b231b861d87536aaa44cd0e1018fc15464809b5a3bf30c1d2a25dd9a12a8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13503428fec90dad2c9ce1086fe7c71c7910f145a6e7a8d546791e11002f36f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13503428fec90dad2c9ce1086fe7c71c7910f145a6e7a8d546791e11002f36f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-v4wxp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:31Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:31 crc kubenswrapper[4881]: I0121 10:57:31.249962 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3687b313-1df2-4274-80db-8c758b51bf2d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5b27143ea6007376f5989ccc3a11947ede80be1b5fac1b738ffbf27fa05c6d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hml99\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7172ca109a870f3c1c8d3af0117700b7b23e7a65c3e841f3a4ef3f445e85270d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hml99\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fb4fr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:31Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:31 crc kubenswrapper[4881]: I0121 10:57:31.261722 4881 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14b423e6f144bf9e1e30c3f0a074ca9edd3c1e2cfa1674ea5b047a6a8fb01d92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c179ea565cc19d51f2becaf302623c94acbd70749ace24c754af51e3f0101033\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:31Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:31 crc kubenswrapper[4881]: I0121 10:57:31.272797 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:31Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:31 crc kubenswrapper[4881]: I0121 10:57:31.288445 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:31 crc kubenswrapper[4881]: I0121 10:57:31.288496 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:31 crc kubenswrapper[4881]: I0121 10:57:31.288506 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:31 crc kubenswrapper[4881]: I0121 10:57:31.288526 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:31 crc kubenswrapper[4881]: I0121 10:57:31.288537 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:31Z","lastTransitionTime":"2026-01-21T10:57:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:31 crc kubenswrapper[4881]: I0121 10:57:31.312353 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 10:57:31 crc kubenswrapper[4881]: E0121 10:57:31.312466 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 10:57:31 crc kubenswrapper[4881]: I0121 10:57:31.358902 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-v4wxp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c14980d7-1b3b-463b-8f57-f1e1afbd258c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd18bd57e9f0f878f56164dee92c18a4fff62c83f518a96d7db735dcd488e052\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7dffa6cfe62b953df5f9734726e4b93967d3fce7fa5743b1adef693e4f75e48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7dffa6cfe62b953df5f9734726e4b93967d3fce7fa5743b1adef693e4f75e48\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoin
t\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ceabfac4703edf7fe55ec5dd41e3fc1736424533c54ae12adcdb1f077e1f756\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ceabfac4703edf7fe55ec5dd41e3fc1736424533c54ae12adcdb1f077e1f756\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a2fff9245b8148b515dd4af52db2ec776cca710b28074e8955162220448d248\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a2fff9245b8148b515dd4af52db2ec776cca710b28074e8955162220448d248\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f6276883835d2a1fbab11df2fd15684b8ee850b6f264f3f194f6de68cc27cf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e
28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f6276883835d2a1fbab11df2fd15684b8ee850b6f264f3f194f6de68cc27cf9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c51b231b861d87536aaa44cd0e1018fc15464809b5a3bf30c1d2a25dd9a12a8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c51b231b861d87536aaa44cd0e1018fc15464809b5a3bf30c1d2a25dd9a12a8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13503428fec90dad2c9ce1086fe7c71c7910f145a6e7a8d546791e11002f36f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13503428fec90dad2c9ce1086fe7c71c7910f145a6e7a8d546791e11002f36f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-0
1-21T10:57:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-v4wxp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:31Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:31 crc kubenswrapper[4881]: I0121 10:57:31.371585 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3687b313-1df2-4274-80db-8c758b51bf2d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5b27143ea6007376f5989ccc3a11947ede80be1b5fac1b738ffbf27fa05c6d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hml99\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7172ca109a870f3c1c8d3af0117700b7b23e7a65c3e841f3a4ef3f445e85270d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hml99\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\
\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fb4fr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:31Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:31 crc kubenswrapper[4881]: I0121 10:57:31.390404 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8bb6d97-b3b8-4e31-b704-8e565385ab26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e45d7cde8c26fc00bb1b5ad5cc88c41aff02acf0860856e86f3c3bfdc01e7045\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://599bfaf6bfc412f779f4732cf9f4273382c1f0af20576581264ea4fc63b2ec38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01
-21T10:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9454db8c9af3187ee06e017ac192668036b1f565305a37de61c3cffe4d94e7db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7fa3136ee876ebb14cd5164f9176c91c60cc04fd7882a3ad0957ce3a36184d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2c1d1496a7f7e5fad6a3d4ade4104e8e0c40fd0ca4722ca5e1ca1fbf0ebf3cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/
\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d29c170fdf57f1f4c678e2902443d9ef467ac3157370e997fd5596b4eecfb54e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58a840f0217d0e057d132d7debeba49b9c541f7f69f33178abee1a44909c83c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswi
tch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47c176d51b5558e85897ac8c72c02bb6f152a972d8e941aeb114333f777e22ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db879e2b1a43a0ae350c4777ee6355bf5b97b1d5977e909ee0968c77852519dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db879e2b1a43a0ae350c4777ee6355bf5b97b1d5977e909ee0968c77852519dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bx64f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is 
not yet valid: current time 2026-01-21T10:57:31Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:31 crc kubenswrapper[4881]: I0121 10:57:31.391886 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:31 crc kubenswrapper[4881]: I0121 10:57:31.391910 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:31 crc kubenswrapper[4881]: I0121 10:57:31.391919 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:31 crc kubenswrapper[4881]: I0121 10:57:31.391934 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:31 crc kubenswrapper[4881]: I0121 10:57:31.391947 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:31Z","lastTransitionTime":"2026-01-21T10:57:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:31 crc kubenswrapper[4881]: I0121 10:57:31.405707 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f8092488edbd3b7f307819f2cad06e6d1a6721d8c2fd8fa05a8f7e652949ca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-21T10:57:31Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:31 crc kubenswrapper[4881]: I0121 10:57:31.418454 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8sptw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f19f480e-331f-42f5-a3b6-fd0c6847b157\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21b25675a85765cfd4abe361687eedfdf5dfffc1c9b83069f14986821acb12d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hjr7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8sptw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:31Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:31 crc kubenswrapper[4881]: I0121 10:57:31.432337 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5da31bf1-60a6-4d73-a425-97fe36cd40ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://945977e9e86c32b08a633de6b962033c71982d3a129ac3e5b2705e19b50d6534\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f32dd767af23c59f021d583a23309b2180db86ae71b4ca4a5f436ac1c77d6e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c47dfa90d18e3e30053b0250c7b986bc877ab6bc3a553060f65468416c8105d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://676e764186e37083591f2c779b05dcbf5bc065bde85efade1c25a27a9bf74570\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://676e764186e37083591f2c779b05dcbf5bc065bde85efade1c25a27a9bf74570\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 10:57:08.940903 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 10:57:08.942374 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2558080694/tls.crt::/tmp/serving-cert-2558080694/tls.key\\\\\\\"\\\\nI0121 10:57:14.590178 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 10:57:14.594387 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 10:57:14.594447 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 10:57:14.594564 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 10:57:14.594575 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 10:57:14.615981 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 10:57:14.616035 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 10:57:14.616045 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 10:57:14.616060 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 10:57:14.616065 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 10:57:14.616071 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 10:57:14.616077 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 10:57:14.616503 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 10:57:14.623960 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T10:56:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b00c4c20b4212a307993f164d3f2d23b9b8bd823111bef7a385ae8811e147f3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://164282bb15c33a383e2fc11a6617ca4008eec1b59e1c5577f7b34b0fcbddc99f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://164282bb15c33a383e2fc11a6617ca4008eec1b59e1c5577f7b34b0fcbddc99f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:56:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:31Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:31 crc kubenswrapper[4881]: I0121 10:57:31.446720 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:31Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:31 crc kubenswrapper[4881]: I0121 10:57:31.460380 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ddaa3aae50a05142125742b3cc13e65433fcadbef5b1e273232cffb660cc700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:31Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:31 crc kubenswrapper[4881]: I0121 10:57:31.473461 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tjwf8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf4f6fc0-ed4c-47b7-b2bc-8033980781a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9164acd9a91836c25b02986f204dd0302964f6bff6fe077971d37eebd4b560ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57d55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:22Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tjwf8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:31Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:31 crc kubenswrapper[4881]: I0121 10:57:31.488735 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"adf42b80-04ba-4164-b847-3b7f8b94816b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5f503293f457017fbc87cd1525eaaf06d41044a37f87127e1398bd50a228ea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d93b0b0bb02293efe7c01a9dbc64ff302a7b9fab07a2fe9bef5d0c2b5e8ac30e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05a4a29ae6a2b0d4acf843ae40de1e262c2149dac28e15697dbcd6d237235f29\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfff7b0c4eae65a4ed6e28b94feef5a5b0c4fb224f9cde2cfd9072727968e754\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:56:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:31Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:31 crc kubenswrapper[4881]: I0121 10:57:31.502655 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:31Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:31 crc kubenswrapper[4881]: I0121 10:57:31.518140 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fs42r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09da9e14-f6d5-4346-a4a0-c17711e3b603\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://821c7c539a796f03cdff04f5f89e6adab4d088c53f5c1ad85851862e5409e7eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountP
ath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kt6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fs42r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:31Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:31 crc kubenswrapper[4881]: I0121 10:57:31.533596 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qgrth" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d379505c-c658-4dd5-b841-40c8443012c6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51a2ec789636052b12e0fdb4e647d7e4f92d1e4b7436933f1529561ffc2021d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57krk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri
-o://d634cee9f543d3322f8cdc8bc62252096e789383c55d5d448cc53ab990ac9b52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57krk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qgrth\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:31Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:31 crc kubenswrapper[4881]: I0121 10:57:31.612588 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:31 crc kubenswrapper[4881]: I0121 10:57:31.612613 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:31 crc kubenswrapper[4881]: I0121 10:57:31.612620 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:31 crc kubenswrapper[4881]: I0121 10:57:31.612635 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:31 crc kubenswrapper[4881]: I0121 10:57:31.612643 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:31Z","lastTransitionTime":"2026-01-21T10:57:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:57:31 crc kubenswrapper[4881]: I0121 10:57:31.715813 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:31 crc kubenswrapper[4881]: I0121 10:57:31.715868 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:31 crc kubenswrapper[4881]: I0121 10:57:31.715887 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:31 crc kubenswrapper[4881]: I0121 10:57:31.715915 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:31 crc kubenswrapper[4881]: I0121 10:57:31.715933 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:31Z","lastTransitionTime":"2026-01-21T10:57:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:31 crc kubenswrapper[4881]: I0121 10:57:31.894677 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-dtv4t"] Jan 21 10:57:31 crc kubenswrapper[4881]: I0121 10:57:31.895200 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:31 crc kubenswrapper[4881]: I0121 10:57:31.895237 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:31 crc kubenswrapper[4881]: I0121 10:57:31.895248 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:31 crc kubenswrapper[4881]: I0121 10:57:31.895268 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:31 crc kubenswrapper[4881]: I0121 10:57:31.895280 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:31Z","lastTransitionTime":"2026-01-21T10:57:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:31 crc kubenswrapper[4881]: I0121 10:57:31.895347 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dtv4t" Jan 21 10:57:31 crc kubenswrapper[4881]: E0121 10:57:31.895427 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-dtv4t" podUID="3552adbd-011f-4552-9e69-233b92c554c8" Jan 21 10:57:32 crc kubenswrapper[4881]: I0121 10:57:31.913425 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5da31bf1-60a6-4d73-a425-97fe36cd40ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://945977e9e86c32b08a633de6b962033c71982d3a129ac3e5b2705e19b50d6534\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f32dd767af23c59f021d583a23309b2180db86ae71b4ca4a5f436ac1c77d6e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c47dfa90d18e3e30053b0250c7b986bc877ab6bc3a553060f65468416c8105d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"k
ube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://676e764186e37083591f2c779b05dcbf5bc065bde85efade1c25a27a9bf74570\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://676e764186e37083591f2c779b05dcbf5bc065bde85efade1c25a27a9bf74570\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 10:57:08.940903 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 10:57:08.942374 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2558080694/tls.crt::/tmp/serving-cert-2558080694/tls.key\\\\\\\"\\\\nI0121 10:57:14.590178 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 10:57:14.594387 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 10:57:14.594447 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 10:57:14.594564 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 10:57:14.594575 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 10:57:14.615981 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 10:57:14.616035 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 10:57:14.616045 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 10:57:14.616060 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 10:57:14.616065 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 10:57:14.616071 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 10:57:14.616077 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 10:57:14.616503 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 10:57:14.623960 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T10:56:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b00c4c20b4212a307993f164d3f2d23b9b8bd823111bef7a385ae8811e147f3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://164282bb15c33a383e2fc11a6617ca4008eec1b59e1c5577f7b34b0fcbddc99f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://164282bb15c33a383e2fc11a6617ca4008eec1b59e1c5577f7b34b0fcbddc99f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:56:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:31Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:32 crc kubenswrapper[4881]: I0121 10:57:31.932921 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:31Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:32 crc kubenswrapper[4881]: I0121 10:57:31.953358 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ddaa3aae50a05142125742b3cc13e65433fcadbef5b1e273232cffb660cc700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:31Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:32 crc kubenswrapper[4881]: I0121 10:57:31.967423 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tjwf8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf4f6fc0-ed4c-47b7-b2bc-8033980781a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9164acd9a91836c25b02986f204dd0302964f6bff6fe077971d37eebd4b560ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57d55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:22Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tjwf8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:31Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:32 crc kubenswrapper[4881]: I0121 10:57:31.981023 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-dtv4t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3552adbd-011f-4552-9e69-233b92c554c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqlps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqlps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-dtv4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:31Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:32 crc kubenswrapper[4881]: I0121 10:57:31.995697 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"adf42b80-04ba-4164-b847-3b7f8b94816b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5f503293f457017fbc87cd1525eaaf06d41044a37f87127e1398bd50a228ea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d93b0b0bb02293efe7c01a9dbc64ff302a7b9fab07a2fe9bef5d0c2b5e8ac30e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05a4a29ae6a2b0d4acf843ae40de1e262c2149dac28e15697dbcd6d237235f29\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfff7b0c4eae65a4ed6e28b94feef5a5b0c4fb224f9cde2cfd9072727968e754\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:56:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:31Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:32 crc kubenswrapper[4881]: I0121 10:57:32.008213 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:32Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:32 crc kubenswrapper[4881]: I0121 10:57:32.021813 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fs42r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09da9e14-f6d5-4346-a4a0-c17711e3b603\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://821c7c539a796f03cdff04f5f89e6adab4d088c53f5c1ad85851862e5409e7eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountP
ath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kt6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fs42r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:32Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:32 crc kubenswrapper[4881]: I0121 10:57:32.034323 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qgrth" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d379505c-c658-4dd5-b841-40c8443012c6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51a2ec789636052b12e0fdb4e647d7e4f92d1e4b7436933f1529561ffc2021d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57krk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri
-o://d634cee9f543d3322f8cdc8bc62252096e789383c55d5d448cc53ab990ac9b52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57krk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qgrth\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:32Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:32 crc kubenswrapper[4881]: I0121 10:57:32.047230 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14b423e6f144bf9e1e30c3f0a074ca9edd3c1e2cfa1674ea5b047a6a8fb01d92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c179ea565cc19d51f2becaf302623c94acbd70749ace24c754af51e3f0101033\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:32Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:32 crc kubenswrapper[4881]: I0121 10:57:32.059547 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:32Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:32 crc kubenswrapper[4881]: I0121 10:57:32.075756 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-v4wxp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c14980d7-1b3b-463b-8f57-f1e1afbd258c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd18bd57e9f0f878f56164dee92c18a4fff62c83f518a96d7db735dcd488e052\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7dffa6cfe62b953df5f9734726e4b93967d3fce7fa5743b1adef693e4f75e48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7dffa6cfe62b953df5f9734726e4b93967d3fce7fa5743b1adef693e4f75e48\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ceabfac4703edf7fe55ec5dd41e3fc1736424533c54ae12adcdb1f077e1f756\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ceabfac4703edf7fe55ec5dd41e3fc1736424533c54ae12adcdb1f077e1f756\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a2fff9245b8148b515dd4af52db2ec776cca710b28074e8955162220448d248\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a2fff9245b8148b515dd4af52db2ec776cca710b28074e8955162220448d248\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f6276883835d2a1fbab11df2fd15684b8ee850b6f264f3f194f6de68cc27cf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f6276883835d2a1fbab11df2fd15684b8ee850b6f264f3f194f6de68cc27cf9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c51b231b861d87536aaa44cd0e1018fc15464809b5a3bf30c1d2a25dd9a12a8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c51b231b861d87536aaa44cd0e1018fc15464809b5a3bf30c1d2a25dd9a12a8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13503428fec90dad2c9ce1086fe7c71c7910f145a6e7a8d546791e11002f36f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13503428fec90dad2c9ce1086fe7c71c7910f145a6e7a8d546791e11002f36f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-v4wxp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:32Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:32 crc kubenswrapper[4881]: I0121 10:57:32.089515 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3687b313-1df2-4274-80db-8c758b51bf2d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5b27143ea6007376f5989ccc3a11947ede80be1b5fac1b738ffbf27fa05c6d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hml99\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7172ca109a870f3c1c8d3af0117700b7b23e7a65c3e841f3a4ef3f445e85270d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hml99\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fb4fr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:32Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:32 crc kubenswrapper[4881]: I0121 10:57:32.110928 4881 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8bb6d97-b3b8-4e31-b704-8e565385ab26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e45d7cde8c26fc00bb1b5ad5cc88c41aff02acf0860856e86f3c3bfdc01e7045\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://599bfaf6bfc412f779f4732cf9f4273382c1f0af20576581264ea4fc63b2ec38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9454db8c9af3187ee06e017ac192668036b1f565305a37de61c3cffe4d94e7db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7fa3136ee876ebb14cd5164f9176c91c60cc04fd7882a3ad0957ce3a36184d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2c1d1496a7f7e5fad6a3d4ade4104e8e0c40fd0ca4722ca5e1ca1fbf0ebf3cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d29c170fdf57f1f4c678e2902443d9ef467ac3157370e997fd5596b4eecfb54e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58a840f0217d0e057d132d7debeba49b9c541f7f69f33178abee1a44909c83c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount
\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47c176d51b5558e85897ac8c72c02bb6f152a972d8e941aeb114333f777e22ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db879e2b1a43a0ae350c4777ee6355bf5b97b1d5977e909ee0968c77852519dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db879e2b1a43a0ae350c4777ee6355bf5b97b1d5977e909ee0968c77852519dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bx64f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:32Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:32 crc kubenswrapper[4881]: I0121 10:57:32.130123 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f8092488edbd3b7f307819f2cad06e6d1a6721d8c2fd8fa05a8f7e652949ca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:32Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:32 crc kubenswrapper[4881]: I0121 10:57:32.140165 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8sptw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f19f480e-331f-42f5-a3b6-fd0c6847b157\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21b25675a85765cfd4abe361687eedfdf5dfffc1c9b83069f14986821acb12d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hjr7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8sptw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:32Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:32 crc kubenswrapper[4881]: I0121 10:57:32.162424 4881 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 19:38:09.531043099 +0000 UTC Jan 21 10:57:32 crc kubenswrapper[4881]: I0121 10:57:32.164459 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cqlps\" (UniqueName: \"kubernetes.io/projected/3552adbd-011f-4552-9e69-233b92c554c8-kube-api-access-cqlps\") pod \"network-metrics-daemon-dtv4t\" (UID: \"3552adbd-011f-4552-9e69-233b92c554c8\") " pod="openshift-multus/network-metrics-daemon-dtv4t" Jan 21 10:57:32 crc kubenswrapper[4881]: I0121 10:57:32.164529 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3552adbd-011f-4552-9e69-233b92c554c8-metrics-certs\") pod \"network-metrics-daemon-dtv4t\" (UID: \"3552adbd-011f-4552-9e69-233b92c554c8\") " pod="openshift-multus/network-metrics-daemon-dtv4t" Jan 21 
10:57:32 crc kubenswrapper[4881]: I0121 10:57:32.166769 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:32 crc kubenswrapper[4881]: I0121 10:57:32.166904 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:32 crc kubenswrapper[4881]: I0121 10:57:32.166979 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:32 crc kubenswrapper[4881]: I0121 10:57:32.167072 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:32 crc kubenswrapper[4881]: I0121 10:57:32.167150 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:32Z","lastTransitionTime":"2026-01-21T10:57:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:32 crc kubenswrapper[4881]: I0121 10:57:32.265930 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqlps\" (UniqueName: \"kubernetes.io/projected/3552adbd-011f-4552-9e69-233b92c554c8-kube-api-access-cqlps\") pod \"network-metrics-daemon-dtv4t\" (UID: \"3552adbd-011f-4552-9e69-233b92c554c8\") " pod="openshift-multus/network-metrics-daemon-dtv4t" Jan 21 10:57:32 crc kubenswrapper[4881]: I0121 10:57:32.266387 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3552adbd-011f-4552-9e69-233b92c554c8-metrics-certs\") pod \"network-metrics-daemon-dtv4t\" (UID: \"3552adbd-011f-4552-9e69-233b92c554c8\") " pod="openshift-multus/network-metrics-daemon-dtv4t" Jan 21 10:57:32 crc kubenswrapper[4881]: E0121 10:57:32.266754 4881 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 10:57:32 crc kubenswrapper[4881]: E0121 10:57:32.266958 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3552adbd-011f-4552-9e69-233b92c554c8-metrics-certs podName:3552adbd-011f-4552-9e69-233b92c554c8 nodeName:}" failed. No retries permitted until 2026-01-21 10:57:32.766932189 +0000 UTC m=+40.026888678 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/3552adbd-011f-4552-9e69-233b92c554c8-metrics-certs") pod "network-metrics-daemon-dtv4t" (UID: "3552adbd-011f-4552-9e69-233b92c554c8") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 10:57:32 crc kubenswrapper[4881]: I0121 10:57:32.274072 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:32 crc kubenswrapper[4881]: I0121 10:57:32.274563 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:32 crc kubenswrapper[4881]: I0121 10:57:32.275198 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:32 crc kubenswrapper[4881]: I0121 10:57:32.275352 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:32 crc kubenswrapper[4881]: I0121 10:57:32.275479 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:32Z","lastTransitionTime":"2026-01-21T10:57:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:32 crc kubenswrapper[4881]: I0121 10:57:32.298347 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqlps\" (UniqueName: \"kubernetes.io/projected/3552adbd-011f-4552-9e69-233b92c554c8-kube-api-access-cqlps\") pod \"network-metrics-daemon-dtv4t\" (UID: \"3552adbd-011f-4552-9e69-233b92c554c8\") " pod="openshift-multus/network-metrics-daemon-dtv4t" Jan 21 10:57:32 crc kubenswrapper[4881]: I0121 10:57:32.312398 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 10:57:32 crc kubenswrapper[4881]: E0121 10:57:32.312976 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 10:57:32 crc kubenswrapper[4881]: I0121 10:57:32.313661 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 10:57:32 crc kubenswrapper[4881]: E0121 10:57:32.313877 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 10:57:32 crc kubenswrapper[4881]: I0121 10:57:32.434042 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:32 crc kubenswrapper[4881]: I0121 10:57:32.434083 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:32 crc kubenswrapper[4881]: I0121 10:57:32.434094 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:32 crc kubenswrapper[4881]: I0121 10:57:32.434111 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:32 crc kubenswrapper[4881]: I0121 10:57:32.434124 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:32Z","lastTransitionTime":"2026-01-21T10:57:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:32 crc kubenswrapper[4881]: I0121 10:57:32.548858 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:32 crc kubenswrapper[4881]: I0121 10:57:32.549664 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:32 crc kubenswrapper[4881]: I0121 10:57:32.549745 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:32 crc kubenswrapper[4881]: I0121 10:57:32.549841 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:32 crc kubenswrapper[4881]: I0121 10:57:32.549918 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:32Z","lastTransitionTime":"2026-01-21T10:57:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:57:32 crc kubenswrapper[4881]: I0121 10:57:32.653024 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:32 crc kubenswrapper[4881]: I0121 10:57:32.653073 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:32 crc kubenswrapper[4881]: I0121 10:57:32.653085 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:32 crc kubenswrapper[4881]: I0121 10:57:32.653107 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:32 crc kubenswrapper[4881]: I0121 10:57:32.653119 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:32Z","lastTransitionTime":"2026-01-21T10:57:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:32 crc kubenswrapper[4881]: I0121 10:57:32.756507 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:32 crc kubenswrapper[4881]: I0121 10:57:32.756552 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:32 crc kubenswrapper[4881]: I0121 10:57:32.756563 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:32 crc kubenswrapper[4881]: I0121 10:57:32.756577 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:32 crc kubenswrapper[4881]: I0121 10:57:32.756587 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:32Z","lastTransitionTime":"2026-01-21T10:57:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:32 crc kubenswrapper[4881]: I0121 10:57:32.828686 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3552adbd-011f-4552-9e69-233b92c554c8-metrics-certs\") pod \"network-metrics-daemon-dtv4t\" (UID: \"3552adbd-011f-4552-9e69-233b92c554c8\") " pod="openshift-multus/network-metrics-daemon-dtv4t" Jan 21 10:57:32 crc kubenswrapper[4881]: E0121 10:57:32.828944 4881 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 10:57:32 crc kubenswrapper[4881]: E0121 10:57:32.829042 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3552adbd-011f-4552-9e69-233b92c554c8-metrics-certs podName:3552adbd-011f-4552-9e69-233b92c554c8 nodeName:}" failed. No retries permitted until 2026-01-21 10:57:33.829011254 +0000 UTC m=+41.088967723 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/3552adbd-011f-4552-9e69-233b92c554c8-metrics-certs") pod "network-metrics-daemon-dtv4t" (UID: "3552adbd-011f-4552-9e69-233b92c554c8") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 10:57:32 crc kubenswrapper[4881]: I0121 10:57:32.859272 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:32 crc kubenswrapper[4881]: I0121 10:57:32.859318 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:32 crc kubenswrapper[4881]: I0121 10:57:32.859329 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:32 crc kubenswrapper[4881]: I0121 10:57:32.859347 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:32 crc kubenswrapper[4881]: I0121 10:57:32.859363 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:32Z","lastTransitionTime":"2026-01-21T10:57:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:32 crc kubenswrapper[4881]: I0121 10:57:32.962303 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:32 crc kubenswrapper[4881]: I0121 10:57:32.962345 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:32 crc kubenswrapper[4881]: I0121 10:57:32.962354 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:32 crc kubenswrapper[4881]: I0121 10:57:32.962372 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:32 crc kubenswrapper[4881]: I0121 10:57:32.962385 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:32Z","lastTransitionTime":"2026-01-21T10:57:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:57:33 crc kubenswrapper[4881]: I0121 10:57:33.065452 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:33 crc kubenswrapper[4881]: I0121 10:57:33.065498 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:33 crc kubenswrapper[4881]: I0121 10:57:33.065525 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:33 crc kubenswrapper[4881]: I0121 10:57:33.065546 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:33 crc kubenswrapper[4881]: I0121 10:57:33.065562 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:33Z","lastTransitionTime":"2026-01-21T10:57:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:33 crc kubenswrapper[4881]: I0121 10:57:33.163612 4881 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 11:54:31.547330546 +0000 UTC Jan 21 10:57:33 crc kubenswrapper[4881]: I0121 10:57:33.169229 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:33 crc kubenswrapper[4881]: I0121 10:57:33.169271 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:33 crc kubenswrapper[4881]: I0121 10:57:33.169287 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:33 crc kubenswrapper[4881]: I0121 10:57:33.169353 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:33 crc kubenswrapper[4881]: I0121 10:57:33.169371 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:33Z","lastTransitionTime":"2026-01-21T10:57:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:57:33 crc kubenswrapper[4881]: I0121 10:57:33.272795 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:33 crc kubenswrapper[4881]: I0121 10:57:33.272835 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:33 crc kubenswrapper[4881]: I0121 10:57:33.272843 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:33 crc kubenswrapper[4881]: I0121 10:57:33.272859 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:33 crc kubenswrapper[4881]: I0121 10:57:33.272869 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:33Z","lastTransitionTime":"2026-01-21T10:57:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:33 crc kubenswrapper[4881]: I0121 10:57:33.310488 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 10:57:33 crc kubenswrapper[4881]: I0121 10:57:33.310997 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dtv4t" Jan 21 10:57:33 crc kubenswrapper[4881]: E0121 10:57:33.311117 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 10:57:33 crc kubenswrapper[4881]: I0121 10:57:33.311319 4881 scope.go:117] "RemoveContainer" containerID="676e764186e37083591f2c779b05dcbf5bc065bde85efade1c25a27a9bf74570" Jan 21 10:57:33 crc kubenswrapper[4881]: E0121 10:57:33.311429 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-dtv4t" podUID="3552adbd-011f-4552-9e69-233b92c554c8" Jan 21 10:57:33 crc kubenswrapper[4881]: I0121 10:57:33.326022 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14b423e6f144bf9e1e30c3f0a074ca9edd3c1e2cfa1674ea5b047a6a8fb01d92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c179ea565cc19d51f2becaf302623c94acbd70749ace24c754af51e3f0101033\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:33Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:33 crc kubenswrapper[4881]: I0121 10:57:33.338098 4881 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:33Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:33 crc kubenswrapper[4881]: I0121 10:57:33.357235 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-v4wxp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c14980d7-1b3b-463b-8f57-f1e1afbd258c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd18bd57e9f0f878f56164dee92c18a4fff62c83f518a96d7db735dcd488e052\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7dffa6cfe62b953df5f9734726e4b93967d3fce7fa5743b1adef693e4f75e48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7dffa6cfe62b953df5f9734726e4b93967d3fce7fa5743b1adef693e4f75e48\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ceabfac4703edf7fe55ec5dd41e3fc1736424533c54ae12adcdb1f077e1f756\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ceabfac4703edf7fe55ec5dd41e3fc1736424533c54ae12adcdb1f077e1f756\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a2fff9245b8148b515dd4af52db2ec776cca710b28074e8955162220448d248\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a2fff9245b8148b515dd4af52db2ec776cca710b28074e8955162220448d248\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f6276883835d2a1fbab11df2fd15684b8ee850b6f264f3f194f6de68cc27cf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f6276883835d2a1fbab11df2fd15684b8ee850b6f264f3f194f6de68cc27cf9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c51b231b861d87536aaa44cd0e1018fc15464809b5a3bf30c1d2a25dd9a12a8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c51b231b861d87536aaa44cd0e1018fc15464809b5a3bf30c1d2a25dd9a12a8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13503428fec90dad2c9ce1086fe7c71c7910f145a6e7a8d546791e11002f36f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13503428fec90dad2c9ce1086fe7c71c7910f145a6e7a8d546791e11002f36f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-v4wxp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:33Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:33 crc kubenswrapper[4881]: I0121 10:57:33.369263 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3687b313-1df2-4274-80db-8c758b51bf2d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5b27143ea6007376f5989ccc3a11947ede80be1b5fac1b738ffbf27fa05c6d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hml99\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7172ca109a870f3c1c8d3af0117700b7b23e7a65c3e841f3a4ef3f445e85270d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hml99\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fb4fr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:33Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:33 crc kubenswrapper[4881]: I0121 10:57:33.375467 4881 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:33 crc kubenswrapper[4881]: I0121 10:57:33.375779 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:33 crc kubenswrapper[4881]: I0121 10:57:33.375956 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:33 crc kubenswrapper[4881]: I0121 10:57:33.376064 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:33 crc kubenswrapper[4881]: I0121 10:57:33.376174 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:33Z","lastTransitionTime":"2026-01-21T10:57:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:33 crc kubenswrapper[4881]: I0121 10:57:33.390546 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8bb6d97-b3b8-4e31-b704-8e565385ab26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e45d7cde8c26fc00bb1b5ad5cc88c41aff02acf0860856e86f3c3bfdc01e7045\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://599bfaf6bfc412f779f4732cf9f4273382c1f0af20576581264ea4fc63b2ec38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9454db8c9af3187ee06e017ac192668036b1f565305a37de61c3cffe4d94e7db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7fa3136ee876ebb14cd5164f9176c91c60cc04fd7882a3ad0957ce3a36184d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2c1d1496a7f7e5fad6a3d4ade4104e8e0c40fd0ca4722ca5e1ca1fbf0ebf3cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d29c170fdf57f1f4c678e2902443d9ef467ac3157370e997fd5596b4eecfb54e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58a840f0217d0e057d132d7debeba49b9c541f7f
69f33178abee1a44909c83c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47c176d51b5558e85897ac8c72c02bb6f152a972d8e941aeb114333f777e22ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db879e2b1a43a0ae350c4777ee6355bf5b97b1d5977e909ee0968c77852519dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db879e2b1a43a0ae350c4777ee6355bf5b97b1d5977e909ee0968c77852519dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bx64f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:33Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:33 crc kubenswrapper[4881]: I0121 10:57:33.404828 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f8092488edbd3b7f307819f2cad06e6d1a6721d8c2fd8fa05a8f7e652949ca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:33Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:33 crc kubenswrapper[4881]: I0121 10:57:33.415495 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8sptw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f19f480e-331f-42f5-a3b6-fd0c6847b157\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21b25675a85765cfd4abe361687eedfdf5dfffc1c9b83069f14986821acb12d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hjr7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8sptw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:33Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:33 crc kubenswrapper[4881]: I0121 10:57:33.428620 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5da31bf1-60a6-4d73-a425-97fe36cd40ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://945977e9e86c32b08a633de6b962033c71982d3a129ac3e5b2705e19b50d6534\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f32dd767af23c59f021d583a23309b2180db86ae71b4ca4a5f436ac1c77d6e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c47dfa90d18e3e30053b0250c7b986bc877ab6bc3a553060f65468416c8105d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://676e764186e37083591f2c779b05dcbf5bc065bde85efade1c25a27a9bf74570\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://676e764186e37083591f2c779b05dcbf5bc065bde85efade1c25a27a9bf74570\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 10:57:08.940903 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 10:57:08.942374 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2558080694/tls.crt::/tmp/serving-cert-2558080694/tls.key\\\\\\\"\\\\nI0121 10:57:14.590178 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 10:57:14.594387 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 10:57:14.594447 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 10:57:14.594564 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 10:57:14.594575 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 10:57:14.615981 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 10:57:14.616035 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 10:57:14.616045 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 10:57:14.616060 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 10:57:14.616065 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 10:57:14.616071 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 10:57:14.616077 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 10:57:14.616503 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 10:57:14.623960 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T10:56:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b00c4c20b4212a307993f164d3f2d23b9b8bd823111bef7a385ae8811e147f3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://164282bb15c33a383e2fc11a6617ca4008eec1b59e1c5577f7b34b0fcbddc99f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://164282bb15c33a383e2fc11a6617ca4008eec1b59e1c5577f7b34b0fcbddc99f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:56:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:33Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:33 crc kubenswrapper[4881]: I0121 10:57:33.440766 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:33Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:33 crc kubenswrapper[4881]: I0121 10:57:33.451895 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ddaa3aae50a05142125742b3cc13e65433fcadbef5b1e273232cffb660cc700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:33Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:33 crc kubenswrapper[4881]: I0121 10:57:33.466132 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tjwf8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf4f6fc0-ed4c-47b7-b2bc-8033980781a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9164acd9a91836c25b02986f204dd0302964f6bff6fe077971d37eebd4b560ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57d55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:22Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tjwf8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:33Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:33 crc kubenswrapper[4881]: I0121 10:57:33.477778 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"adf42b80-04ba-4164-b847-3b7f8b94816b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5f503293f457017fbc87cd1525eaaf06d41044a37f87127e1398bd50a228ea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d93b0b0bb02293efe7c01a9dbc64ff302a7b9fab07a2fe9bef5d0c2b5e8ac30e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05a4a29ae6a2b0d4acf843ae40de1e262c2149dac28e15697dbcd6d237235f29\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfff7b0c4eae65a4ed6e28b94feef5a5b0c4fb224f9cde2cfd9072727968e754\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:56:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:33Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:33 crc kubenswrapper[4881]: I0121 10:57:33.478589 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:33 crc kubenswrapper[4881]: I0121 10:57:33.478621 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:33 crc kubenswrapper[4881]: I0121 10:57:33.478632 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:33 crc kubenswrapper[4881]: I0121 10:57:33.478651 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:33 crc kubenswrapper[4881]: I0121 10:57:33.478663 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:33Z","lastTransitionTime":"2026-01-21T10:57:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:57:33 crc kubenswrapper[4881]: I0121 10:57:33.489619 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:33Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:33 crc kubenswrapper[4881]: I0121 10:57:33.503122 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fs42r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"09da9e14-f6d5-4346-a4a0-c17711e3b603\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://821c7c539a796f03cdff04f5f89e6adab4d088c53f5c1ad85851862e5409e7eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kt6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fs42r\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:33Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:33 crc kubenswrapper[4881]: I0121 10:57:33.515333 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qgrth" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d379505c-c658-4dd5-b841-40c8443012c6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51a2ec789636052b12e0fdb4e647d7e4f92d1e4b7436933f1529561ffc2021d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57krk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d634cee9f543d3322f8cdc8bc62252096e789383c55d5d448cc53ab990ac9b52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57krk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\
\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qgrth\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:33Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:33 crc kubenswrapper[4881]: I0121 10:57:33.527194 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-dtv4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3552adbd-011f-4552-9e69-233b92c554c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqlps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqlps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-dtv4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:33Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:33 crc kubenswrapper[4881]: I0121 10:57:33.585509 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:33 crc kubenswrapper[4881]: I0121 10:57:33.585599 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:33 crc kubenswrapper[4881]: I0121 10:57:33.585622 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:33 crc kubenswrapper[4881]: I0121 10:57:33.585654 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:33 crc kubenswrapper[4881]: I0121 10:57:33.585693 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:33Z","lastTransitionTime":"2026-01-21T10:57:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:33 crc kubenswrapper[4881]: I0121 10:57:33.687528 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:33 crc kubenswrapper[4881]: I0121 10:57:33.688016 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:33 crc kubenswrapper[4881]: I0121 10:57:33.688094 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:33 crc kubenswrapper[4881]: I0121 10:57:33.688156 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:33 crc kubenswrapper[4881]: I0121 10:57:33.688217 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:33Z","lastTransitionTime":"2026-01-21T10:57:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:57:33 crc kubenswrapper[4881]: I0121 10:57:33.790995 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:33 crc kubenswrapper[4881]: I0121 10:57:33.791286 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:33 crc kubenswrapper[4881]: I0121 10:57:33.791370 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:33 crc kubenswrapper[4881]: I0121 10:57:33.791448 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:33 crc kubenswrapper[4881]: I0121 10:57:33.791510 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:33Z","lastTransitionTime":"2026-01-21T10:57:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:33 crc kubenswrapper[4881]: I0121 10:57:33.869122 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3552adbd-011f-4552-9e69-233b92c554c8-metrics-certs\") pod \"network-metrics-daemon-dtv4t\" (UID: \"3552adbd-011f-4552-9e69-233b92c554c8\") " pod="openshift-multus/network-metrics-daemon-dtv4t" Jan 21 10:57:33 crc kubenswrapper[4881]: E0121 10:57:33.869257 4881 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 10:57:33 crc kubenswrapper[4881]: E0121 10:57:33.869312 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3552adbd-011f-4552-9e69-233b92c554c8-metrics-certs podName:3552adbd-011f-4552-9e69-233b92c554c8 nodeName:}" failed. No retries permitted until 2026-01-21 10:57:35.869295708 +0000 UTC m=+43.129252187 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/3552adbd-011f-4552-9e69-233b92c554c8-metrics-certs") pod "network-metrics-daemon-dtv4t" (UID: "3552adbd-011f-4552-9e69-233b92c554c8") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 10:57:33 crc kubenswrapper[4881]: I0121 10:57:33.893804 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:33 crc kubenswrapper[4881]: I0121 10:57:33.893839 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:33 crc kubenswrapper[4881]: I0121 10:57:33.893851 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:33 crc kubenswrapper[4881]: I0121 10:57:33.893868 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:33 crc kubenswrapper[4881]: I0121 10:57:33.893885 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:33Z","lastTransitionTime":"2026-01-21T10:57:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:33 crc kubenswrapper[4881]: I0121 10:57:33.953958 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Jan 21 10:57:33 crc kubenswrapper[4881]: I0121 10:57:33.955732 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"0e507b4c3c536bdc63360b1386748657584f739e09973ec33c998ac267ca2766"} Jan 21 10:57:33 crc kubenswrapper[4881]: I0121 10:57:33.956069 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 10:57:33 crc kubenswrapper[4881]: I0121 10:57:33.968328 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f8092488edbd3b7f307819f2cad06e6d1a6721d8c2fd8fa05a8f7e652949ca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:33Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:33 crc kubenswrapper[4881]: I0121 10:57:33.977215 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8sptw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f19f480e-331f-42f5-a3b6-fd0c6847b157\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21b25675a85765cfd4abe361687eedfdf5dfffc1c9b83069f14986821acb12d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hjr7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8sptw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:33Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:33 crc kubenswrapper[4881]: I0121 10:57:33.992877 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5da31bf1-60a6-4d73-a425-97fe36cd40ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://945977e9e86c32b08a633de6b962033c71982d3a129ac3e5b2705e19b50d6534\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f32dd767af23c59f021d583a23309b2180db86ae71b4ca4a5f436ac1c77d6e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c47dfa90d18e3e30053b0250c7b986bc877ab6bc3a553060f65468416c8105d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e507b4c3c536bdc63360b1386748657584f739e09973ec33c998ac267ca2766\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://676e764186e37083591f2c779b05dcbf5bc065bde85efade1c25a27a9bf74570\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 10:57:08.940903 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 10:57:08.942374 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2558080694/tls.crt::/tmp/serving-cert-2558080694/tls.key\\\\\\\"\\\\nI0121 10:57:14.590178 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 10:57:14.594387 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 10:57:14.594447 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 10:57:14.594564 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 10:57:14.594575 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 10:57:14.615981 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 10:57:14.616035 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 10:57:14.616045 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 10:57:14.616060 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 10:57:14.616065 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 10:57:14.616071 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 10:57:14.616077 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 10:57:14.616503 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 10:57:14.623960 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T10:56:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b00c4c20b4212a307993f164d3f2d23b9b8bd823111bef7a385ae8811e147f3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://164282bb15c33a383e2fc11a6617ca4008eec1b59e1c5577f7b34b0fcbddc99f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://164282bb15c33a383e2fc11a6617ca4008eec1b59e1c5577f7b34b0fcbddc99f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:56:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:33Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:34 crc kubenswrapper[4881]: I0121 10:57:33.996698 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:34 crc kubenswrapper[4881]: I0121 10:57:33.996743 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:34 crc kubenswrapper[4881]: I0121 10:57:33.996755 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:34 crc kubenswrapper[4881]: I0121 10:57:33.996770 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:34 crc kubenswrapper[4881]: I0121 10:57:33.996800 4881 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:33Z","lastTransitionTime":"2026-01-21T10:57:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:34 crc kubenswrapper[4881]: I0121 10:57:34.007797 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:34Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:34 crc kubenswrapper[4881]: I0121 10:57:34.019265 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ddaa3aae50a05142125742b3cc13e65433fcadbef5b1e273232cffb660cc700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:34Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:34 crc kubenswrapper[4881]: I0121 10:57:34.028912 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tjwf8" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf4f6fc0-ed4c-47b7-b2bc-8033980781a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9164acd9a91836c25b02986f204dd0302964f6bff6fe077971d37eebd4b560ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57d55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:22Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tjwf8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:34Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:34 crc kubenswrapper[4881]: I0121 10:57:34.042944 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"adf42b80-04ba-4164-b847-3b7f8b94816b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5f503293f457017fbc87cd1525eaaf06d41044a37f87127e1398bd50a228ea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d93b0b0bb02293efe7c01a9dbc64ff302a7b9fab07a2fe9bef5d0c2b5e8ac30e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05a4a29ae6a2b0d4acf843ae40de1e262c2149dac28e15697dbcd6d237235f29\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfff7b0c4eae65a4ed6e28b94feef5a5b0c4fb224f9cde2cfd9072727968e754\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:56:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:34Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:34 crc kubenswrapper[4881]: I0121 10:57:34.053605 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:34Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:34 crc kubenswrapper[4881]: I0121 10:57:34.065112 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fs42r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09da9e14-f6d5-4346-a4a0-c17711e3b603\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://821c7c539a796f03cdff04f5f89e6adab4d088c53f5c1ad85851862e5409e7eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountP
ath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kt6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fs42r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:34Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:34 crc kubenswrapper[4881]: I0121 10:57:34.075056 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qgrth" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d379505c-c658-4dd5-b841-40c8443012c6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51a2ec789636052b12e0fdb4e647d7e4f92d1e4b7436933f1529561ffc2021d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57krk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri
-o://d634cee9f543d3322f8cdc8bc62252096e789383c55d5d448cc53ab990ac9b52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57krk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qgrth\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:34Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:34 crc kubenswrapper[4881]: I0121 10:57:34.086229 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-dtv4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3552adbd-011f-4552-9e69-233b92c554c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqlps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqlps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-dtv4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:34Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:34 crc kubenswrapper[4881]: I0121 10:57:34.099664 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:34 crc kubenswrapper[4881]: I0121 10:57:34.099709 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:34 crc kubenswrapper[4881]: I0121 10:57:34.099720 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:34 crc kubenswrapper[4881]: I0121 10:57:34.099741 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:34 crc kubenswrapper[4881]: I0121 10:57:34.099753 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:34Z","lastTransitionTime":"2026-01-21T10:57:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:57:34 crc kubenswrapper[4881]: I0121 10:57:34.100856 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:34Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:34 crc kubenswrapper[4881]: I0121 10:57:34.115121 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-v4wxp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c14980d7-1b3b-463b-8f57-f1e1afbd258c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd18bd57e9f0f878f56164dee92c18a4fff62c83f518a96d7db735dcd488e052\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7dffa6cfe62b953df5f9734726e4b93967d3fce7fa5743b1adef693e4f75e48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7dffa6cfe62b953df5f9734726e4b93967d3fce7fa5743b1adef693e4f75e48\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ceabfac4703edf7fe55ec5dd41e3fc1736424533c54ae12adcdb1f077e1f756\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ceabfac4703edf7fe55ec5dd41e3fc1736424533c54ae12adcdb1f077e1f756\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a2fff9245b8148b515dd4af52db2ec776cca710b28074e8955162220448d248\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a2fff9245b8148b515dd4af52db2ec776cca710b28074e8955162220448d248\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f6276883835d2a1fbab11df2fd15684b8ee850b6f264f3f194f6de68cc27cf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f6276883835d2a1fbab11df2fd15684b8ee850b6f264f3f194f6de68cc27cf9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c51b231b861d87536aaa44cd0e1018fc15464809b5a3bf30c1d2a25dd9a12a8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c51b231b861d87536aaa44cd0e1018fc15464809b5a3bf30c1d2a25dd9a12a8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13503428fec90dad2c9ce1086fe7c71c7910f145a6e7a8d546791e11002f36f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13503428fec90dad2c9ce1086fe7c71c7910f145a6e7a8d546791e11002f36f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-v4wxp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:34Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:34 crc kubenswrapper[4881]: I0121 10:57:34.128945 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3687b313-1df2-4274-80db-8c758b51bf2d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5b27143ea6007376f5989ccc3a11947ede80be1b5fac1b738ffbf27fa05c6d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hml99\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7172ca109a870f3c1c8d3af0117700b7b23e7a65c3e841f3a4ef3f445e85270d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hml99\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fb4fr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:34Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:34 crc kubenswrapper[4881]: I0121 10:57:34.148090 4881 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8bb6d97-b3b8-4e31-b704-8e565385ab26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e45d7cde8c26fc00bb1b5ad5cc88c41aff02acf0860856e86f3c3bfdc01e7045\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://599bfaf6bfc412f779f4732cf9f4273382c1f0af20576581264ea4fc63b2ec38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9454db8c9af3187ee06e017ac192668036b1f565305a37de61c3cffe4d94e7db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7fa3136ee876ebb14cd5164f9176c91c60cc04fd7882a3ad0957ce3a36184d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2c1d1496a7f7e5fad6a3d4ade4104e8e0c40fd0ca4722ca5e1ca1fbf0ebf3cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d29c170fdf57f1f4c678e2902443d9ef467ac3157370e997fd5596b4eecfb54e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58a840f0217d0e057d132d7debeba49b9c541f7f69f33178abee1a44909c83c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount
\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47c176d51b5558e85897ac8c72c02bb6f152a972d8e941aeb114333f777e22ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db879e2b1a43a0ae350c4777ee6355bf5b97b1d5977e909ee0968c77852519dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db879e2b1a43a0ae350c4777ee6355bf5b97b1d5977e909ee0968c77852519dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bx64f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:34Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:34 crc kubenswrapper[4881]: I0121 10:57:34.160613 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14b423e6f144bf9e1e30c3f0a074ca9edd3c1e2cfa1674ea5b047a6a8fb01d92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c179ea565cc19d51f2becaf302623c94acbd70749ace24c754af51e3f0101033\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:34Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:34 crc kubenswrapper[4881]: I0121 10:57:34.164729 4881 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 21:32:04.944078768 +0000 UTC Jan 21 10:57:34 crc kubenswrapper[4881]: I0121 10:57:34.201764 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:34 crc kubenswrapper[4881]: I0121 
10:57:34.201865 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 10:57:34 crc kubenswrapper[4881]: I0121 10:57:34.201878 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 10:57:34 crc kubenswrapper[4881]: I0121 10:57:34.201896 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 10:57:34 crc kubenswrapper[4881]: I0121 10:57:34.201910 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:34Z","lastTransitionTime":"2026-01-21T10:57:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 10:57:34 crc kubenswrapper[4881]: I0121 10:57:34.304696 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 10:57:34 crc kubenswrapper[4881]: I0121 10:57:34.304734 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 10:57:34 crc kubenswrapper[4881]: I0121 10:57:34.304753 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 10:57:34 crc kubenswrapper[4881]: I0121 10:57:34.304770 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 10:57:34 crc kubenswrapper[4881]: I0121 10:57:34.304799 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:34Z","lastTransitionTime":"2026-01-21T10:57:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 10:57:34 crc kubenswrapper[4881]: I0121 10:57:34.309566 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 21 10:57:34 crc kubenswrapper[4881]: E0121 10:57:34.309702 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 21 10:57:34 crc kubenswrapper[4881]: I0121 10:57:34.309566 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 21 10:57:34 crc kubenswrapper[4881]: E0121 10:57:34.310122 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 21 10:57:34 crc kubenswrapper[4881]: I0121 10:57:34.406847 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 10:57:34 crc kubenswrapper[4881]: I0121 10:57:34.406889 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 10:57:34 crc kubenswrapper[4881]: I0121 10:57:34.406898 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 10:57:34 crc kubenswrapper[4881]: I0121 10:57:34.406912 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 10:57:34 crc kubenswrapper[4881]: I0121 10:57:34.406926 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:34Z","lastTransitionTime":"2026-01-21T10:57:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 10:57:34 crc kubenswrapper[4881]: I0121 10:57:34.508843 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 10:57:34 crc kubenswrapper[4881]: I0121 10:57:34.508889 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 10:57:34 crc kubenswrapper[4881]: I0121 10:57:34.508901 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 10:57:34 crc kubenswrapper[4881]: I0121 10:57:34.508916 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 10:57:34 crc kubenswrapper[4881]: I0121 10:57:34.508927 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:34Z","lastTransitionTime":"2026-01-21T10:57:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 10:57:34 crc kubenswrapper[4881]: I0121 10:57:34.611934 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 10:57:34 crc kubenswrapper[4881]: I0121 10:57:34.612005 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 10:57:34 crc kubenswrapper[4881]: I0121 10:57:34.612020 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 10:57:34 crc kubenswrapper[4881]: I0121 10:57:34.612037 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 10:57:34 crc kubenswrapper[4881]: I0121 10:57:34.612048 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:34Z","lastTransitionTime":"2026-01-21T10:57:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 10:57:34 crc kubenswrapper[4881]: I0121 10:57:34.714204 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 10:57:34 crc kubenswrapper[4881]: I0121 10:57:34.714251 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 10:57:34 crc kubenswrapper[4881]: I0121 10:57:34.714266 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 10:57:34 crc kubenswrapper[4881]: I0121 10:57:34.714288 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 10:57:34 crc kubenswrapper[4881]: I0121 10:57:34.714304 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:34Z","lastTransitionTime":"2026-01-21T10:57:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 10:57:34 crc kubenswrapper[4881]: I0121 10:57:34.816408 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 10:57:34 crc kubenswrapper[4881]: I0121 10:57:34.816461 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 10:57:34 crc kubenswrapper[4881]: I0121 10:57:34.816473 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 10:57:34 crc kubenswrapper[4881]: I0121 10:57:34.816494 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 10:57:34 crc kubenswrapper[4881]: I0121 10:57:34.816521 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:34Z","lastTransitionTime":"2026-01-21T10:57:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 10:57:34 crc kubenswrapper[4881]: I0121 10:57:34.868737 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 10:57:34 crc kubenswrapper[4881]: I0121 10:57:34.868797 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 10:57:34 crc kubenswrapper[4881]: I0121 10:57:34.868807 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 10:57:34 crc kubenswrapper[4881]: I0121 10:57:34.868821 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 10:57:34 crc kubenswrapper[4881]: I0121 10:57:34.868830 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:34Z","lastTransitionTime":"2026-01-21T10:57:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 10:57:34 crc kubenswrapper[4881]: E0121 10:57:34.883287 4881 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:57:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:57:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:57:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:57:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"26a8a75a-20da-43b0-891d-353287c7b817\\\",\\\"systemUUID\\\":\\\"5fb73d3d-5879-4958-af84-1cb776cbe5bd\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:34Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:34 crc kubenswrapper[4881]: I0121 10:57:34.887393 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:34 crc kubenswrapper[4881]: I0121 10:57:34.887441 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 10:57:34 crc kubenswrapper[4881]: I0121 10:57:34.887453 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:34 crc kubenswrapper[4881]: I0121 10:57:34.887470 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:34 crc kubenswrapper[4881]: I0121 10:57:34.887482 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:34Z","lastTransitionTime":"2026-01-21T10:57:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:34 crc kubenswrapper[4881]: E0121 10:57:34.906122 4881 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:57:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:57:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:57:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:57:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"26a8a75a-20da-43b0-891d-353287c7b817\\\",\\\"systemUUID\\\":\\\"5fb73d3d-5879-4958-af84-1cb776cbe5bd\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:34Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:34 crc kubenswrapper[4881]: I0121 10:57:34.909536 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:34 crc kubenswrapper[4881]: I0121 10:57:34.909588 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 10:57:34 crc kubenswrapper[4881]: I0121 10:57:34.909598 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:34 crc kubenswrapper[4881]: I0121 10:57:34.909613 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:34 crc kubenswrapper[4881]: I0121 10:57:34.909623 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:34Z","lastTransitionTime":"2026-01-21T10:57:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:34 crc kubenswrapper[4881]: E0121 10:57:34.921037 4881 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:57:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:57:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:57:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:57:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"26a8a75a-20da-43b0-891d-353287c7b817\\\",\\\"systemUUID\\\":\\\"5fb73d3d-5879-4958-af84-1cb776cbe5bd\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:34Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:34 crc kubenswrapper[4881]: I0121 10:57:34.925426 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:34 crc kubenswrapper[4881]: I0121 10:57:34.925475 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
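Every cycle above records the same Ready=False condition: the runtime network is down because no CNI configuration file exists in /etc/kubernetes/cni/net.d/ (the ovnkube-controller container that would provide it is exiting; see the ContainerDied event at the end of this excerpt). What follows is a minimal Go sketch of the directory check implied by that message, assuming the conventional CNI config extensions; it is illustrative, not the kubelet's actual code.

    // Sketch: look for CNI network config files the way a runtime would.
    // The directory comes from the log message; extensions are an assumption.
    package main

    import (
    	"fmt"
    	"os"
    	"path/filepath"
    	"strings"
    )

    func main() {
    	const cniConfDir = "/etc/kubernetes/cni/net.d" // directory named in the log
    	entries, err := os.ReadDir(cniConfDir)
    	if err != nil {
    		fmt.Println("cannot read CNI conf dir:", err)
    		return
    	}
    	found := false
    	for _, e := range entries {
    		switch strings.ToLower(filepath.Ext(e.Name())) {
    		case ".conf", ".conflist", ".json":
    			fmt.Println("CNI config candidate:", filepath.Join(cniConfDir, e.Name()))
    			found = true
    		}
    	}
    	if !found {
    		// This is the state the kubelet is reporting: node stays NotReady.
    		fmt.Println("no CNI configuration file in", cniConfDir)
    	}
    }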
event="NodeHasNoDiskPressure" Jan 21 10:57:34 crc kubenswrapper[4881]: I0121 10:57:34.925488 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:34 crc kubenswrapper[4881]: I0121 10:57:34.925507 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:34 crc kubenswrapper[4881]: I0121 10:57:34.925518 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:34Z","lastTransitionTime":"2026-01-21T10:57:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:34 crc kubenswrapper[4881]: E0121 10:57:34.938390 4881 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:57:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:57:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:57:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:57:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"26a8a75a-20da-43b0-891d-353287c7b817\\\",\\\"systemUUID\\\":\\\"5fb73d3d-5879-4958-af84-1cb776cbe5bd\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:34Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:34 crc kubenswrapper[4881]: I0121 10:57:34.942301 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:34 crc kubenswrapper[4881]: I0121 10:57:34.942336 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 10:57:34 crc kubenswrapper[4881]: I0121 10:57:34.942347 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:34 crc kubenswrapper[4881]: I0121 10:57:34.942362 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:34 crc kubenswrapper[4881]: I0121 10:57:34.942373 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:34Z","lastTransitionTime":"2026-01-21T10:57:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:34 crc kubenswrapper[4881]: I0121 10:57:34.961703 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bx64f_e8bb6d97-b3b8-4e31-b704-8e565385ab26/ovnkube-controller/0.log" Jan 21 10:57:34 crc kubenswrapper[4881]: E0121 10:57:34.961941 4881 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:57:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:57:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:57:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:57:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"26a8a75a-20da-43b0-891d-353287c7b817\\\",\\\"systemUUID\\\":\\\"5fb73d3d-5879-4958-af84-1cb776cbe5bd\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:34Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:34 crc kubenswrapper[4881]: E0121 10:57:34.962175 4881 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 21 10:57:34 crc kubenswrapper[4881]: I0121 10:57:34.965737 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 21 10:57:34 crc kubenswrapper[4881]: I0121 10:57:34.965769 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:34 crc kubenswrapper[4881]: I0121 10:57:34.965779 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:34 crc kubenswrapper[4881]: I0121 10:57:34.965809 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:34 crc kubenswrapper[4881]: I0121 10:57:34.965818 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:34Z","lastTransitionTime":"2026-01-21T10:57:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:34 crc kubenswrapper[4881]: I0121 10:57:34.967699 4881 generic.go:334] "Generic (PLEG): container finished" podID="e8bb6d97-b3b8-4e31-b704-8e565385ab26" containerID="58a840f0217d0e057d132d7debeba49b9c541f7f69f33178abee1a44909c83c5" exitCode=1 Jan 21 10:57:34 crc kubenswrapper[4881]: I0121 10:57:34.967775 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" event={"ID":"e8bb6d97-b3b8-4e31-b704-8e565385ab26","Type":"ContainerDied","Data":"58a840f0217d0e057d132d7debeba49b9c541f7f69f33178abee1a44909c83c5"} Jan 21 10:57:34 crc kubenswrapper[4881]: I0121 10:57:34.968573 4881 scope.go:117] "RemoveContainer" containerID="58a840f0217d0e057d132d7debeba49b9c541f7f69f33178abee1a44909c83c5" Jan 21 10:57:34 crc kubenswrapper[4881]: I0121 10:57:34.981570 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5da31bf1-60a6-4d73-a425-97fe36cd40ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://945977e9e86c32b08a633de6b962033c71982d3a129ac3e5b2705e19b50d6534\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f32dd767af23c59f021d583a23309b2180db86ae71b4ca4a5f436ac1c77d6e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c47dfa90d18e3e30053b0250c7b986bc877ab6bc3a553060f65468416c8105d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e507b4c3c536bdc63360b1386748657584f739e09973ec33c998ac267ca2766\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://676e764186e37083591f2c779b05dcbf5bc065bde85efade1c25a27a9bf74570\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 10:57:08.940903 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 10:57:08.942374 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2558080694/tls.crt::/tmp/serving-cert-2558080694/tls.key\\\\\\\"\\\\nI0121 10:57:14.590178 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 10:57:14.594387 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 10:57:14.594447 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 10:57:14.594564 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 10:57:14.594575 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 10:57:14.615981 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 10:57:14.616035 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 10:57:14.616045 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 10:57:14.616060 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 10:57:14.616065 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 10:57:14.616071 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 10:57:14.616077 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 10:57:14.616503 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 10:57:14.623960 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T10:56:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b00c4c20b4212a307993f164d3f2d23b9b8bd823111bef7a385ae8811e147f3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://164282bb15c33a383e2fc11a6617ca4008eec1b59e1c5577f7b34b0fcbddc99f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://164282bb15c33a383e2fc11a6617ca4008eec1b59e1c5577f7b34b0fcbddc99f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:56:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:34Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:34 crc kubenswrapper[4881]: I0121 10:57:34.997975 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:34Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:35 crc kubenswrapper[4881]: I0121 10:57:35.009109 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ddaa3aae50a05142125742b3cc13e65433fcadbef5b1e273232cffb660cc700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:35Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:35 crc kubenswrapper[4881]: I0121 10:57:35.018427 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tjwf8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf4f6fc0-ed4c-47b7-b2bc-8033980781a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9164acd9a91836c25b02986f204dd0302964f6bff6fe077971d37eebd4b560ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57d55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:22Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tjwf8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:35Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:35 crc kubenswrapper[4881]: I0121 10:57:35.030596 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"adf42b80-04ba-4164-b847-3b7f8b94816b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5f503293f457017fbc87cd1525eaaf06d41044a37f87127e1398bd50a228ea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d93b0b0bb02293efe7c01a9dbc64ff302a7b9fab07a2fe9bef5d0c2b5e8ac30e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05a4a29ae6a2b0d4acf843ae40de1e262c2149dac28e15697dbcd6d237235f29\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfff7b0c4eae65a4ed6e28b94feef5a5b0c4fb224f9cde2cfd9072727968e754\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:56:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:35Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:35 crc kubenswrapper[4881]: I0121 10:57:35.040896 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:35Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:35 crc kubenswrapper[4881]: I0121 10:57:35.051369 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fs42r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09da9e14-f6d5-4346-a4a0-c17711e3b603\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://821c7c539a796f03cdff04f5f89e6adab4d088c53f5c1ad85851862e5409e7eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountP
ath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kt6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fs42r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:35Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:35 crc kubenswrapper[4881]: I0121 10:57:35.062587 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qgrth" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d379505c-c658-4dd5-b841-40c8443012c6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51a2ec789636052b12e0fdb4e647d7e4f92d1e4b7436933f1529561ffc2021d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57krk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri
-o://d634cee9f543d3322f8cdc8bc62252096e789383c55d5d448cc53ab990ac9b52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57krk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qgrth\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:35Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:35 crc kubenswrapper[4881]: I0121 10:57:35.068021 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:35 crc kubenswrapper[4881]: I0121 10:57:35.068066 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:35 crc kubenswrapper[4881]: I0121 10:57:35.068082 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:35 crc kubenswrapper[4881]: I0121 10:57:35.068105 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:35 crc kubenswrapper[4881]: I0121 10:57:35.068120 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:35Z","lastTransitionTime":"2026-01-21T10:57:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:57:35 crc kubenswrapper[4881]: I0121 10:57:35.072920 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-dtv4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3552adbd-011f-4552-9e69-233b92c554c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqlps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqlps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-dtv4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:35Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:35 crc kubenswrapper[4881]: I0121 10:57:35.086341 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14b423e6f144bf9e1e30c3f0a074ca9edd3c1e2cfa1674ea5b047a6a8fb01d92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c179ea565cc19d51f2becaf302623c94acbd70749ace24c754af51e3f0101033\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:35Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:35 crc kubenswrapper[4881]: I0121 10:57:35.101969 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:35Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:35 crc kubenswrapper[4881]: I0121 10:57:35.120369 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-v4wxp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c14980d7-1b3b-463b-8f57-f1e1afbd258c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd18bd57e9f0f878f56164dee92c18a4fff62c83f518a96d7db735dcd488e052\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7dffa6cfe62b953df5f9734726e4b93967d3fce7fa5743b1adef693e4f75e48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7dffa6cfe62b953df5f9734726e4b93967d3fce7fa5743b1adef693e4f75e48\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ceabfac4703edf7fe55ec5dd41e3fc1736424533c54ae12adcdb1f077e1f756\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ceabfac4703edf7fe55ec5dd41e3fc1736424533c54ae12adcdb1f077e1f756\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a2fff9245b8148b515dd4af52db2ec776cca710b28074e8955162220448d248\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a2fff9245b8148b515dd4af52db2ec776cca710b28074e8955162220448d248\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f6276883835d2a1fbab11df2fd15684b8ee850b6f264f3f194f6de68cc27cf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f6276883835d2a1fbab11df2fd15684b8ee850b6f264f3f194f6de68cc27cf9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c51b231b861d87536aaa44cd0e1018fc15464809b5a3bf30c1d2a25dd9a12a8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c51b231b861d87536aaa44cd0e1018fc15464809b5a3bf30c1d2a25dd9a12a8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13503428fec90dad2c9ce1086fe7c71c7910f145a6e7a8d546791e11002f36f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13503428fec90dad2c9ce1086fe7c71c7910f145a6e7a8d546791e11002f36f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-v4wxp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:35Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:35 crc kubenswrapper[4881]: I0121 10:57:35.133636 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3687b313-1df2-4274-80db-8c758b51bf2d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5b27143ea6007376f5989ccc3a11947ede80be1b5fac1b738ffbf27fa05c6d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hml99\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7172ca109a870f3c1c8d3af0117700b7b23e7a65c3e841f3a4ef3f445e85270d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hml99\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fb4fr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:35Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:35 crc kubenswrapper[4881]: I0121 10:57:35.150363 4881 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8bb6d97-b3b8-4e31-b704-8e565385ab26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e45d7cde8c26fc00bb1b5ad5cc88c41aff02acf0860856e86f3c3bfdc01e7045\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://599bfaf6bfc412f779f4732cf9f4273382c1f0af20576581264ea4fc63b2ec38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9454db8c9af3187ee06e017ac192668036b1f565305a37de61c3cffe4d94e7db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7fa3136ee876ebb14cd5164f9176c91c60cc04fd7882a3ad0957ce3a36184d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2c1d1496a7f7e5fad6a3d4ade4104e8e0c40fd0ca4722ca5e1ca1fbf0ebf3cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d29c170fdf57f1f4c678e2902443d9ef467ac3157370e997fd5596b4eecfb54e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58a840f0217d0e057d132d7debeba49b9c541f7f69f33178abee1a44909c83c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58a840f0217d0e057d132d7debeba49b9c541f7f69f33178abee1a44909c83c5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T10:57:34Z\\\",\\\"message\\\":\\\"rk-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0121 10:57:34.006073 6119 handler.go:208] Removed *v1.Node event handler 7\\\\nI0121 10:57:34.006081 6119 handler.go:208] Removed *v1.Node event handler 2\\\\nI0121 10:57:34.006288 6119 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 10:57:34.006459 6119 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 10:57:34.006687 6119 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 10:57:34.007039 6119 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 10:57:34.007082 6119 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0121 10:57:34.007124 6119 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 10:57:34.007122 6119 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0121 10:57:34.007524 6119 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0121 10:57:34.007540 6119 factory.go:656] Stopping watch factory\\\\nI0121 10:57:34.007557 6119 ovnkube.go:599] Stopped ovnkube\\\\nI0121 10:57:34.007612 6119 metrics.go:553] Stopping metrics server at 
address\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47c176d51b5558e85897ac8c72c02bb6f152a972d8e941aeb114333f777e22ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db879e2b1a43a0ae350c4777ee6355bf5b97b1d5977e909ee0968c77852519dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0
d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db879e2b1a43a0ae350c4777ee6355bf5b97b1d5977e909ee0968c77852519dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bx64f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:35Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:35 crc kubenswrapper[4881]: I0121 10:57:35.161436 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f8092488edbd3b7f307819f2cad06e6d1a6721d8c2fd8fa05a8f7e652949ca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:35Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:35 crc kubenswrapper[4881]: I0121 10:57:35.165701 4881 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 01:35:01.503553354 +0000 UTC Jan 21 10:57:35 crc kubenswrapper[4881]: I0121 10:57:35.171125 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:35 crc kubenswrapper[4881]: I0121 10:57:35.171170 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:35 crc kubenswrapper[4881]: I0121 10:57:35.171185 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:35 crc kubenswrapper[4881]: I0121 10:57:35.171206 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:35 crc kubenswrapper[4881]: I0121 10:57:35.171220 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:35Z","lastTransitionTime":"2026-01-21T10:57:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:35 crc kubenswrapper[4881]: I0121 10:57:35.172299 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8sptw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f19f480e-331f-42f5-a3b6-fd0c6847b157\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21b25675a85765cfd4abe361687eedfdf5dfffc1c9b83069f14986821acb12d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hjr7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8sptw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:35Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:35 crc kubenswrapper[4881]: I0121 10:57:35.273410 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:35 crc kubenswrapper[4881]: I0121 10:57:35.273446 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:35 crc kubenswrapper[4881]: I0121 10:57:35.273459 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:35 crc kubenswrapper[4881]: I0121 10:57:35.273473 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:35 crc kubenswrapper[4881]: I0121 10:57:35.273482 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:35Z","lastTransitionTime":"2026-01-21T10:57:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:35 crc kubenswrapper[4881]: I0121 10:57:35.309843 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dtv4t" Jan 21 10:57:35 crc kubenswrapper[4881]: I0121 10:57:35.309928 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 10:57:35 crc kubenswrapper[4881]: E0121 10:57:35.310040 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dtv4t" podUID="3552adbd-011f-4552-9e69-233b92c554c8" Jan 21 10:57:35 crc kubenswrapper[4881]: E0121 10:57:35.310181 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 10:57:35 crc kubenswrapper[4881]: I0121 10:57:35.375854 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:35 crc kubenswrapper[4881]: I0121 10:57:35.375907 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:35 crc kubenswrapper[4881]: I0121 10:57:35.375923 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:35 crc kubenswrapper[4881]: I0121 10:57:35.375943 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:35 crc kubenswrapper[4881]: I0121 10:57:35.375955 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:35Z","lastTransitionTime":"2026-01-21T10:57:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:57:35 crc kubenswrapper[4881]: I0121 10:57:35.478915 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:35 crc kubenswrapper[4881]: I0121 10:57:35.478965 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:35 crc kubenswrapper[4881]: I0121 10:57:35.478982 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:35 crc kubenswrapper[4881]: I0121 10:57:35.479005 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:35 crc kubenswrapper[4881]: I0121 10:57:35.479023 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:35Z","lastTransitionTime":"2026-01-21T10:57:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:35 crc kubenswrapper[4881]: I0121 10:57:35.581182 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:35 crc kubenswrapper[4881]: I0121 10:57:35.581227 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:35 crc kubenswrapper[4881]: I0121 10:57:35.581241 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:35 crc kubenswrapper[4881]: I0121 10:57:35.581257 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:35 crc kubenswrapper[4881]: I0121 10:57:35.581269 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:35Z","lastTransitionTime":"2026-01-21T10:57:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:35 crc kubenswrapper[4881]: I0121 10:57:35.683839 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:35 crc kubenswrapper[4881]: I0121 10:57:35.683887 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:35 crc kubenswrapper[4881]: I0121 10:57:35.683896 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:35 crc kubenswrapper[4881]: I0121 10:57:35.683911 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:35 crc kubenswrapper[4881]: I0121 10:57:35.683921 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:35Z","lastTransitionTime":"2026-01-21T10:57:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:57:35 crc kubenswrapper[4881]: I0121 10:57:35.786422 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:35 crc kubenswrapper[4881]: I0121 10:57:35.786477 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:35 crc kubenswrapper[4881]: I0121 10:57:35.786487 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:35 crc kubenswrapper[4881]: I0121 10:57:35.786503 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:35 crc kubenswrapper[4881]: I0121 10:57:35.786513 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:35Z","lastTransitionTime":"2026-01-21T10:57:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:35 crc kubenswrapper[4881]: I0121 10:57:35.886761 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3552adbd-011f-4552-9e69-233b92c554c8-metrics-certs\") pod \"network-metrics-daemon-dtv4t\" (UID: \"3552adbd-011f-4552-9e69-233b92c554c8\") " pod="openshift-multus/network-metrics-daemon-dtv4t" Jan 21 10:57:35 crc kubenswrapper[4881]: E0121 10:57:35.886951 4881 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 10:57:35 crc kubenswrapper[4881]: E0121 10:57:35.887009 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3552adbd-011f-4552-9e69-233b92c554c8-metrics-certs podName:3552adbd-011f-4552-9e69-233b92c554c8 nodeName:}" failed. No retries permitted until 2026-01-21 10:57:39.886991696 +0000 UTC m=+47.146948165 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/3552adbd-011f-4552-9e69-233b92c554c8-metrics-certs") pod "network-metrics-daemon-dtv4t" (UID: "3552adbd-011f-4552-9e69-233b92c554c8") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 10:57:35 crc kubenswrapper[4881]: I0121 10:57:35.888924 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:35 crc kubenswrapper[4881]: I0121 10:57:35.888965 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:35 crc kubenswrapper[4881]: I0121 10:57:35.888982 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:35 crc kubenswrapper[4881]: I0121 10:57:35.889015 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:35 crc kubenswrapper[4881]: I0121 10:57:35.889036 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:35Z","lastTransitionTime":"2026-01-21T10:57:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:35 crc kubenswrapper[4881]: I0121 10:57:35.981512 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bx64f_e8bb6d97-b3b8-4e31-b704-8e565385ab26/ovnkube-controller/0.log" Jan 21 10:57:35 crc kubenswrapper[4881]: I0121 10:57:35.984863 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" event={"ID":"e8bb6d97-b3b8-4e31-b704-8e565385ab26","Type":"ContainerStarted","Data":"5897125d6a1004cb4f0527359e8fc0328bff6bcc5ac563fdc3d85b094414c563"} Jan 21 10:57:35 crc kubenswrapper[4881]: I0121 10:57:35.985073 4881 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 21 10:57:35 crc kubenswrapper[4881]: I0121 10:57:35.991080 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:35 crc kubenswrapper[4881]: I0121 10:57:35.991165 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:35 crc kubenswrapper[4881]: I0121 10:57:35.991191 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:35 crc kubenswrapper[4881]: I0121 10:57:35.991243 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:35 crc kubenswrapper[4881]: I0121 10:57:35.991259 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:35Z","lastTransitionTime":"2026-01-21T10:57:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:57:36 crc kubenswrapper[4881]: I0121 10:57:36.004749 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5da31bf1-60a6-4d73-a425-97fe36cd40ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://945977e9e86c32b08a633de6b962033c71982d3a129ac3e5b2705e19b50d6534\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f32dd767af23c59f021d583a23309b2180db86ae71b4ca4a5f436ac1c77d6e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c47dfa90d18e3e30053b0250c7b986bc877ab6bc3a553060f65468416c8105d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e507b4c3c536bdc63360b1386748657584f739e09973ec33c998ac267ca2766\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://676e764186e37083591f2c779b05dcbf5bc065bde85efade1c25a27a9bf74570\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 10:57:08.940903 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 10:57:08.942374 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2558080694/tls.crt::/tmp/serving-cert-2558080694/tls.key\\\\\\\"\\\\nI0121 10:57:14.590178 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 10:57:14.594387 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 10:57:14.594447 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 10:57:14.594564 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 10:57:14.594575 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 10:57:14.615981 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 10:57:14.616035 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 10:57:14.616045 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 10:57:14.616060 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 10:57:14.616065 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 10:57:14.616071 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 10:57:14.616077 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 10:57:14.616503 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 10:57:14.623960 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T10:56:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b00c4c20b4212a307993f164d3f2d23b9b8bd823111bef7a385ae8811e147f3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://164282bb15c33a383e2fc11a6617ca4008eec1b59e1c5577f7b34b0fcbddc99f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://164282bb15c33a383e2fc11a6617ca4008eec1b59e1c5577f7b34b0fcbddc99f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:56:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:36Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:36 crc kubenswrapper[4881]: I0121 10:57:36.020881 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:36Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:36 crc kubenswrapper[4881]: I0121 10:57:36.034242 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ddaa3aae50a05142125742b3cc13e65433fcadbef5b1e273232cffb660cc700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:36Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:36 crc kubenswrapper[4881]: I0121 10:57:36.043137 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tjwf8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf4f6fc0-ed4c-47b7-b2bc-8033980781a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9164acd9a91836c25b02986f204dd0302964f6bff6fe077971d37eebd4b560ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57d55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:22Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tjwf8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:36Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:36 crc kubenswrapper[4881]: I0121 10:57:36.055357 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"adf42b80-04ba-4164-b847-3b7f8b94816b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5f503293f457017fbc87cd1525eaaf06d41044a37f87127e1398bd50a228ea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d93b0b0bb02293efe7c01a9dbc64ff302a7b9fab07a2fe9bef5d0c2b5e8ac30e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05a4a29ae6a2b0d4acf843ae40de1e262c2149dac28e15697dbcd6d237235f29\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfff7b0c4eae65a4ed6e28b94feef5a5b0c4fb224f9cde2cfd9072727968e754\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:56:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:36Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:36 crc kubenswrapper[4881]: I0121 10:57:36.065806 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:36Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:36 crc kubenswrapper[4881]: I0121 10:57:36.081388 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fs42r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09da9e14-f6d5-4346-a4a0-c17711e3b603\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://821c7c539a796f03cdff04f5f89e6adab4d088c53f5c1ad85851862e5409e7eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountP
ath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kt6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fs42r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:36Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:36 crc kubenswrapper[4881]: I0121 10:57:36.093109 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qgrth" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d379505c-c658-4dd5-b841-40c8443012c6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51a2ec789636052b12e0fdb4e647d7e4f92d1e4b7436933f1529561ffc2021d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57krk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri
-o://d634cee9f543d3322f8cdc8bc62252096e789383c55d5d448cc53ab990ac9b52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57krk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qgrth\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:36Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:36 crc kubenswrapper[4881]: I0121 10:57:36.094052 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:36 crc kubenswrapper[4881]: I0121 10:57:36.094086 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:36 crc kubenswrapper[4881]: I0121 10:57:36.094098 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:36 crc kubenswrapper[4881]: I0121 10:57:36.094113 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:36 crc kubenswrapper[4881]: I0121 10:57:36.094122 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:36Z","lastTransitionTime":"2026-01-21T10:57:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:57:36 crc kubenswrapper[4881]: I0121 10:57:36.106910 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-dtv4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3552adbd-011f-4552-9e69-233b92c554c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqlps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqlps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-dtv4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:36Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:36 crc kubenswrapper[4881]: I0121 10:57:36.122850 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:36Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:36 crc kubenswrapper[4881]: I0121 10:57:36.137431 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-v4wxp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c14980d7-1b3b-463b-8f57-f1e1afbd258c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd18bd57e9f0f878f56164dee92c18a4fff62c83f518a96d7db735dcd488e052\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7dffa6cfe62b953df5f9734726e4b93967d3fce7fa5743b1adef693e4f75e48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7dffa6cfe62b953df5f9734726e4b93967d3fce7fa5743b1adef693e4f75e48\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ceabfac4703edf7fe55ec5dd41e3fc1736424533c54ae12adcdb1f077e1f756\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ceabfac4703edf7fe55ec5dd41e3fc1736424533c54ae12adcdb1f077e1f756\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a2fff9245b8148b515dd4af52db2ec776cca710b28074e8955162220448d248\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a2fff9245b8148b515dd4af52db2ec776cca710b28074e8955162220448d248\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f6276883835d2a1fbab11df2fd15684b8ee850b6f264f3f194f6de68cc27cf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f6276883835d2a1fbab11df2fd15684b8ee850b6f264f3f194f6de68cc27cf9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c51b231b861d87536aaa44cd0e1018fc15464809b5a3bf30c1d2a25dd9a12a8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c51b231b861d87536aaa44cd0e1018fc15464809b5a3bf30c1d2a25dd9a12a8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13503428fec90dad2c9ce1086fe7c71c7910f145a6e7a8d546791e11002f36f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13503428fec90dad2c9ce1086fe7c71c7910f145a6e7a8d546791e11002f36f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-v4wxp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:36Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:36 crc kubenswrapper[4881]: I0121 10:57:36.149979 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3687b313-1df2-4274-80db-8c758b51bf2d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5b27143ea6007376f5989ccc3a11947ede80be1b5fac1b738ffbf27fa05c6d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hml99\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7172ca109a870f3c1c8d3af0117700b7b23e7a65c3e841f3a4ef3f445e85270d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hml99\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fb4fr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:36Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:36 crc kubenswrapper[4881]: I0121 10:57:36.166405 4881 certificate_manager.go:356] 
kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 13:00:06.753174748 +0000 UTC Jan 21 10:57:36 crc kubenswrapper[4881]: I0121 10:57:36.169592 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8bb6d97-b3b8-4e31-b704-8e565385ab26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e45d7cde8c26fc00bb1b5ad5cc88c41aff02acf0860856e86f3c3bfdc01e7045\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://599bfaf6bfc412f779f4732cf9f4273382c1f0af20576581264ea4fc63b2ec38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"k
ube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9454db8c9af3187ee06e017ac192668036b1f565305a37de61c3cffe4d94e7db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7fa3136ee876ebb14cd5164f9176c91c60cc04fd7882a3ad0957ce3a36184d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2c1d1496a7f7e5fad6a3d4ade4104e8e0c40fd0ca4722ca5e1ca1fbf0ebf3cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d29c170fdf57f1f4c678e2902443d9ef467ac3157370e997fd5596b4eecfb54e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5897125d6a1004cb4f0527359e8fc0328bff6bcc5ac563fdc3d85b094414c563\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58a840f0217d0e057d132d7debeba49b9c541f7f69f33178abee1a44909c83c5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T10:57:34Z\\\",\\\"message\\\":\\\"rk-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0121 10:57:34.006073 6119 handler.go:208] Removed *v1.Node event handler 7\\\\nI0121 10:57:34.006081 6119 handler.go:208] Removed *v1.Node event handler 2\\\\nI0121 10:57:34.006288 6119 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 10:57:34.006459 6119 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 10:57:34.006687 6119 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 10:57:34.007039 6119 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 10:57:34.007082 6119 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0121 10:57:34.007124 6119 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 10:57:34.007122 6119 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0121 10:57:34.007524 6119 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0121 10:57:34.007540 6119 factory.go:656] Stopping watch factory\\\\nI0121 10:57:34.007557 6119 ovnkube.go:599] Stopped ovnkube\\\\nI0121 10:57:34.007612 6119 metrics.go:553] Stopping metrics server at 
address\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:27Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47c176d51b5558e85897ac8c72c02bb6f152a972d8e941aeb114333f777e22ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\
\"containerID\\\":\\\"cri-o://db879e2b1a43a0ae350c4777ee6355bf5b97b1d5977e909ee0968c77852519dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db879e2b1a43a0ae350c4777ee6355bf5b97b1d5977e909ee0968c77852519dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bx64f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:36Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:36 crc kubenswrapper[4881]: I0121 10:57:36.185169 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14b423e6f144bf9e1e30c3f0a074ca9edd3c1e2cfa1674ea5b047a6a8fb01d92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c179ea565cc19d51f2becaf302623c94acbd70749ace24c754af51e3f0101033\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:36Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:36 crc kubenswrapper[4881]: I0121 10:57:36.196263 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:36 crc kubenswrapper[4881]: I0121 10:57:36.196320 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:36 crc kubenswrapper[4881]: I0121 10:57:36.196345 4881 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 21 10:57:36 crc kubenswrapper[4881]: I0121 10:57:36.196369 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:36 crc kubenswrapper[4881]: I0121 10:57:36.196399 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:36Z","lastTransitionTime":"2026-01-21T10:57:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:36 crc kubenswrapper[4881]: I0121 10:57:36.197305 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f8092488edbd3b7f307819f2cad06e6d1a6721d8c2fd8fa05a8f7e652949ca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:36Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:36 crc kubenswrapper[4881]: I0121 10:57:36.210548 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8sptw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f19f480e-331f-42f5-a3b6-fd0c6847b157\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21b25675a85765cfd4abe361687eedfdf5dfffc1c9b83069f14986821acb12d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hjr7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8sptw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:36Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:36 crc kubenswrapper[4881]: I0121 10:57:36.298669 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:36 crc kubenswrapper[4881]: I0121 10:57:36.298709 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:36 crc kubenswrapper[4881]: I0121 10:57:36.298722 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:36 crc kubenswrapper[4881]: I0121 10:57:36.298739 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:36 crc kubenswrapper[4881]: I0121 10:57:36.298750 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:36Z","lastTransitionTime":"2026-01-21T10:57:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:36 crc kubenswrapper[4881]: I0121 10:57:36.309934 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 10:57:36 crc kubenswrapper[4881]: E0121 10:57:36.310027 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 10:57:36 crc kubenswrapper[4881]: I0121 10:57:36.309942 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 10:57:36 crc kubenswrapper[4881]: E0121 10:57:36.310136 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 10:57:36 crc kubenswrapper[4881]: I0121 10:57:36.401577 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:36 crc kubenswrapper[4881]: I0121 10:57:36.401636 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:36 crc kubenswrapper[4881]: I0121 10:57:36.401656 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:36 crc kubenswrapper[4881]: I0121 10:57:36.401684 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:36 crc kubenswrapper[4881]: I0121 10:57:36.401701 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:36Z","lastTransitionTime":"2026-01-21T10:57:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:57:36 crc kubenswrapper[4881]: I0121 10:57:36.503970 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:36 crc kubenswrapper[4881]: I0121 10:57:36.504028 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:36 crc kubenswrapper[4881]: I0121 10:57:36.504044 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:36 crc kubenswrapper[4881]: I0121 10:57:36.504063 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:36 crc kubenswrapper[4881]: I0121 10:57:36.504075 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:36Z","lastTransitionTime":"2026-01-21T10:57:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:36 crc kubenswrapper[4881]: I0121 10:57:36.606683 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:36 crc kubenswrapper[4881]: I0121 10:57:36.606741 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:36 crc kubenswrapper[4881]: I0121 10:57:36.606757 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:36 crc kubenswrapper[4881]: I0121 10:57:36.606808 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:36 crc kubenswrapper[4881]: I0121 10:57:36.606822 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:36Z","lastTransitionTime":"2026-01-21T10:57:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:36 crc kubenswrapper[4881]: I0121 10:57:36.709042 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:36 crc kubenswrapper[4881]: I0121 10:57:36.709094 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:36 crc kubenswrapper[4881]: I0121 10:57:36.709104 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:36 crc kubenswrapper[4881]: I0121 10:57:36.709121 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:36 crc kubenswrapper[4881]: I0121 10:57:36.709131 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:36Z","lastTransitionTime":"2026-01-21T10:57:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:57:36 crc kubenswrapper[4881]: I0121 10:57:36.811843 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:36 crc kubenswrapper[4881]: I0121 10:57:36.811886 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:36 crc kubenswrapper[4881]: I0121 10:57:36.811897 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:36 crc kubenswrapper[4881]: I0121 10:57:36.811911 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:36 crc kubenswrapper[4881]: I0121 10:57:36.811919 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:36Z","lastTransitionTime":"2026-01-21T10:57:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:36 crc kubenswrapper[4881]: I0121 10:57:36.914292 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:36 crc kubenswrapper[4881]: I0121 10:57:36.914345 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:36 crc kubenswrapper[4881]: I0121 10:57:36.914363 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:36 crc kubenswrapper[4881]: I0121 10:57:36.914384 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:36 crc kubenswrapper[4881]: I0121 10:57:36.914400 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:36Z","lastTransitionTime":"2026-01-21T10:57:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:57:36 crc kubenswrapper[4881]: I0121 10:57:36.990451 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bx64f_e8bb6d97-b3b8-4e31-b704-8e565385ab26/ovnkube-controller/1.log" Jan 21 10:57:36 crc kubenswrapper[4881]: I0121 10:57:36.991477 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bx64f_e8bb6d97-b3b8-4e31-b704-8e565385ab26/ovnkube-controller/0.log" Jan 21 10:57:36 crc kubenswrapper[4881]: I0121 10:57:36.996345 4881 generic.go:334] "Generic (PLEG): container finished" podID="e8bb6d97-b3b8-4e31-b704-8e565385ab26" containerID="5897125d6a1004cb4f0527359e8fc0328bff6bcc5ac563fdc3d85b094414c563" exitCode=1 Jan 21 10:57:36 crc kubenswrapper[4881]: I0121 10:57:36.996413 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" event={"ID":"e8bb6d97-b3b8-4e31-b704-8e565385ab26","Type":"ContainerDied","Data":"5897125d6a1004cb4f0527359e8fc0328bff6bcc5ac563fdc3d85b094414c563"} Jan 21 10:57:36 crc kubenswrapper[4881]: I0121 10:57:36.996477 4881 scope.go:117] "RemoveContainer" containerID="58a840f0217d0e057d132d7debeba49b9c541f7f69f33178abee1a44909c83c5" Jan 21 10:57:36 crc kubenswrapper[4881]: I0121 10:57:36.997891 4881 scope.go:117] "RemoveContainer" containerID="5897125d6a1004cb4f0527359e8fc0328bff6bcc5ac563fdc3d85b094414c563" Jan 21 10:57:36 crc kubenswrapper[4881]: E0121 10:57:36.998219 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-bx64f_openshift-ovn-kubernetes(e8bb6d97-b3b8-4e31-b704-8e565385ab26)\"" pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" podUID="e8bb6d97-b3b8-4e31-b704-8e565385ab26" Jan 21 10:57:37 crc kubenswrapper[4881]: I0121 10:57:37.017207 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:37 crc kubenswrapper[4881]: I0121 10:57:37.017266 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:37 crc kubenswrapper[4881]: I0121 10:57:37.017284 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:37 crc kubenswrapper[4881]: I0121 10:57:37.017310 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:37 crc kubenswrapper[4881]: I0121 10:57:37.017325 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:37Z","lastTransitionTime":"2026-01-21T10:57:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:57:37 crc kubenswrapper[4881]: I0121 10:57:37.018988 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f8092488edbd3b7f307819f2cad06e6d1a6721d8c2fd8fa05a8f7e652949ca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:37Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:37 crc kubenswrapper[4881]: I0121 10:57:37.034623 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8sptw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f19f480e-331f-42f5-a3b6-fd0c6847b157\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21b25675a85765cfd4abe361687eedfdf5dfffc1c9b83069f14986821acb12d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hjr7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8sptw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:37Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:37 crc kubenswrapper[4881]: I0121 10:57:37.054915 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ddaa3aae50a05142125742b3cc13e65433fcadbef5b1e273232cffb660cc700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:37Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:37 crc kubenswrapper[4881]: I0121 10:57:37.068151 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tjwf8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf4f6fc0-ed4c-47b7-b2bc-8033980781a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9164acd9a91836c25b02986f204dd0302964f6bff6fe077971d37eebd4b560ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57d55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:22Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tjwf8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:37Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:37 crc kubenswrapper[4881]: I0121 10:57:37.086931 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5da31bf1-60a6-4d73-a425-97fe36cd40ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"message\\\":\\\"containers with 
unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://945977e9e86c32b08a633de6b962033c71982d3a129ac3e5b2705e19b50d6534\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f32dd767af23c59f021d583a23309b2180db86ae71b4ca4a5f436ac1c77d6e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c47dfa90d18e3e30053b0250c7b986bc877ab6bc3a553060f65468416c8105d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e507b4c3c536bdc63360b1386748657584f739e09973ec33c998ac267ca2766\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://676e764186e37083591f2c779b05dcbf5bc065bde85efade1c25a27a9bf74570\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 10:57:08.940903 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 10:57:08.942374 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2558080694/tls.crt::/tmp/serving-cert-2558080694/tls.key\\\\\\\"\\\\nI0121 10:57:14.590178 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 10:57:14.594387 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 10:57:14.594447 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 10:57:14.594564 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 10:57:14.594575 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 10:57:14.615981 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 10:57:14.616035 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 10:57:14.616045 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 10:57:14.616060 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 10:57:14.616065 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 10:57:14.616071 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 10:57:14.616077 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 10:57:14.616503 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 10:57:14.623960 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T10:56:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b00c4c20b4212a307993f164d3f2d23b9b8bd823111bef7a385ae8811e147f3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://164282bb15c33a383e2fc11a6617ca4008eec1b59e1c5577f7b34b0fcbddc99f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://164282bb15c33a383e2fc11a6617ca4008eec1b59e1c5577f7b34b0fcbddc99f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:56:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:37Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:37 crc kubenswrapper[4881]: I0121 10:57:37.101168 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:37Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:37 crc kubenswrapper[4881]: I0121 10:57:37.113860 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fs42r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"09da9e14-f6d5-4346-a4a0-c17711e3b603\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://821c7c539a796f03cdff04f5f89e6adab4d088c53f5c1ad85851862e5409e7eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kt6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fs42r\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:37Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:37 crc kubenswrapper[4881]: I0121 10:57:37.121708 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:37 crc kubenswrapper[4881]: I0121 10:57:37.121738 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:37 crc kubenswrapper[4881]: I0121 10:57:37.121750 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:37 crc kubenswrapper[4881]: I0121 10:57:37.121768 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:37 crc kubenswrapper[4881]: I0121 10:57:37.121780 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:37Z","lastTransitionTime":"2026-01-21T10:57:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:37 crc kubenswrapper[4881]: I0121 10:57:37.127082 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qgrth" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d379505c-c658-4dd5-b841-40c8443012c6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51a2ec789636052b12e0fdb4e647d7e4f92d1e4b7436933f1529561ffc2021d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57krk\\\",\\\"readOnly\\\":t
rue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d634cee9f543d3322f8cdc8bc62252096e789383c55d5d448cc53ab990ac9b52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57krk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qgrth\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:37Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:37 crc kubenswrapper[4881]: I0121 10:57:37.139305 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-dtv4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3552adbd-011f-4552-9e69-233b92c554c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqlps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqlps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-dtv4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:37Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:37 crc kubenswrapper[4881]: I0121 10:57:37.153911 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"adf42b80-04ba-4164-b847-3b7f8b94816b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5f503293f457017fbc87cd1525eaaf06d41044a37f87127e1398bd50a228ea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d93b0b0bb02293efe7c01a9dbc64ff302a7b9fab07a2fe9bef5d0c2b5e8ac30e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05a4a29ae6a2b0d4acf843ae40de1e262c2149dac28e15697dbcd6d237235f29\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfff7b0c4eae65a4ed6e28b94feef5a5b0c4fb224f9cde2cfd9072727968e754\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:56:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:37Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:37 crc kubenswrapper[4881]: I0121 10:57:37.167599 4881 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 04:13:36.635350859 +0000 UTC Jan 21 10:57:37 crc kubenswrapper[4881]: I0121 10:57:37.167766 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:37Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:37 crc kubenswrapper[4881]: I0121 10:57:37.187462 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3687b313-1df2-4274-80db-8c758b51bf2d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5b27143ea6007376f5989ccc3a11947ede80be1b5fac1b738ffbf27fa05c6d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hml99\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7172ca109a870f3c1c8d3af0117700b7b23e7a65c3e841f3a4ef3f445e85270d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hml99\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fb4fr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:37Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:37 crc kubenswrapper[4881]: I0121 10:57:37.206926 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8bb6d97-b3b8-4e31-b704-8e565385ab26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e45d7cde8c26fc00bb1b5ad5cc88c41aff02acf0860856e86f3c3bfdc01e7045\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://599bfaf6bfc412f779f4732cf9f4273382c1f0af20576581264ea4fc63b2ec38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9454db8c9af3187ee06e017ac192668036b1f565305a37de61c3cffe4d94e7db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7fa3136ee876ebb14cd5164f9176c91c60cc04fd7882a3ad0957ce3a36184d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2c1d1496a7f7e5fad6a3d4ade4104e8e0c40fd0ca4722ca5e1ca1fbf0ebf3cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d29c170fdf57f1f4c678e2902443d9ef467ac3157370e997fd5596b4eecfb54e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5897125d6a1004cb4f0527359e8fc0328bff6bcc
5ac563fdc3d85b094414c563\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58a840f0217d0e057d132d7debeba49b9c541f7f69f33178abee1a44909c83c5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T10:57:34Z\\\",\\\"message\\\":\\\"rk-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0121 10:57:34.006073 6119 handler.go:208] Removed *v1.Node event handler 7\\\\nI0121 10:57:34.006081 6119 handler.go:208] Removed *v1.Node event handler 2\\\\nI0121 10:57:34.006288 6119 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 10:57:34.006459 6119 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 10:57:34.006687 6119 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 10:57:34.007039 6119 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 10:57:34.007082 6119 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0121 10:57:34.007124 6119 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 10:57:34.007122 6119 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0121 10:57:34.007524 6119 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0121 10:57:34.007540 6119 factory.go:656] Stopping watch factory\\\\nI0121 10:57:34.007557 6119 ovnkube.go:599] Stopped ovnkube\\\\nI0121 10:57:34.007612 6119 metrics.go:553] Stopping metrics server at address\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:27Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5897125d6a1004cb4f0527359e8fc0328bff6bcc5ac563fdc3d85b094414c563\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T10:57:36Z\\\",\\\"message\\\":\\\"-machine-config-operator/machine-config-operator]} name:Service_openshift-machine-config-operator/machine-config-operator_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.183:9001:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {5b85277d-d9b7-4a68-8e4e-2b80594d9347}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0121 10:57:35.987756 6363 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0121 10:57:35.987768 6363 handler.go:208] Removed *v1.Pod event handler 3\\\\nF0121 10:57:35.988839 6363 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network 
controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47c176d51b5558e85897ac8c72c02bb6f152a972d8e941aeb114333f777e22ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\
"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db879e2b1a43a0ae350c4777ee6355bf5b97b1d5977e909ee0968c77852519dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db879e2b1a43a0ae350c4777ee6355bf5b97b1d5977e909ee0968c77852519dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bx64f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:37Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:37 crc kubenswrapper[4881]: I0121 10:57:37.224301 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:37 crc kubenswrapper[4881]: I0121 10:57:37.224329 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:37 crc kubenswrapper[4881]: I0121 10:57:37.224338 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:37 crc kubenswrapper[4881]: I0121 10:57:37.224353 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:37 crc kubenswrapper[4881]: I0121 10:57:37.224362 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:37Z","lastTransitionTime":"2026-01-21T10:57:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:57:37 crc kubenswrapper[4881]: I0121 10:57:37.225778 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14b423e6f144bf9e1e30c3f0a074ca9edd3c1e2cfa1674ea5b047a6a8fb01d92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c179ea565cc19d51f2becaf302623c94acbd70749ace24c754af51e3f0101033\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:37Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:37 crc kubenswrapper[4881]: I0121 10:57:37.238562 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:37Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:37 crc kubenswrapper[4881]: I0121 10:57:37.254027 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-v4wxp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c14980d7-1b3b-463b-8f57-f1e1afbd258c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd18bd57e9f0f878f56164dee92c18a4fff62c83f518a96d7db735dcd488e052\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7dffa6cfe62b953df5f9734726e4b93967d3fce7fa5743b1adef693e4f75e48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7dffa6cfe62b953df5f9734726e4b93967d3fce7fa5743b1adef693e4f75e48\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ceabfac4703edf7fe55ec5dd41e3fc1736424533c54ae12adcdb1f077e1f756\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ceabfac4703edf7fe55ec5dd41e3fc1736424533c54ae12adcdb1f077e1f756\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a2fff9245b8148b515dd4af52db2ec776cca710b28074e8955162220448d248\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a2fff9245b8148b515dd4af52db2ec776cca710b28074e8955162220448d248\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f6276883835d2a1fbab11df2fd15684b8ee850b6f264f3f194f6de68cc27cf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f6276883835d2a1fbab11df2fd15684b8ee850b6f264f3f194f6de68cc27cf9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c51b231b861d87536aaa44cd0e1018fc15464809b5a3bf30c1d2a25dd9a12a8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c51b231b861d87536aaa44cd0e1018fc15464809b5a3bf30c1d2a25dd9a12a8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13503428fec90dad2c9ce1086fe7c71c7910f145a6e7a8d546791e11002f36f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13503428fec90dad2c9ce1086fe7c71c7910f145a6e7a8d546791e11002f36f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-v4wxp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:37Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:37 crc kubenswrapper[4881]: I0121 10:57:37.310642 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 10:57:37 crc kubenswrapper[4881]: E0121 10:57:37.310774 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 10:57:37 crc kubenswrapper[4881]: I0121 10:57:37.310643 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dtv4t" Jan 21 10:57:37 crc kubenswrapper[4881]: E0121 10:57:37.311384 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dtv4t" podUID="3552adbd-011f-4552-9e69-233b92c554c8" Jan 21 10:57:37 crc kubenswrapper[4881]: I0121 10:57:37.341866 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:37 crc kubenswrapper[4881]: I0121 10:57:37.341903 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:37 crc kubenswrapper[4881]: I0121 10:57:37.341918 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:37 crc kubenswrapper[4881]: I0121 10:57:37.341934 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:37 crc kubenswrapper[4881]: I0121 10:57:37.341946 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:37Z","lastTransitionTime":"2026-01-21T10:57:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:57:37 crc kubenswrapper[4881]: I0121 10:57:37.444893 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:37 crc kubenswrapper[4881]: I0121 10:57:37.444950 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:37 crc kubenswrapper[4881]: I0121 10:57:37.444961 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:37 crc kubenswrapper[4881]: I0121 10:57:37.444982 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:37 crc kubenswrapper[4881]: I0121 10:57:37.444998 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:37Z","lastTransitionTime":"2026-01-21T10:57:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:37 crc kubenswrapper[4881]: I0121 10:57:37.548278 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:37 crc kubenswrapper[4881]: I0121 10:57:37.548323 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:37 crc kubenswrapper[4881]: I0121 10:57:37.548338 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:37 crc kubenswrapper[4881]: I0121 10:57:37.548362 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:37 crc kubenswrapper[4881]: I0121 10:57:37.548379 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:37Z","lastTransitionTime":"2026-01-21T10:57:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:37 crc kubenswrapper[4881]: I0121 10:57:37.651685 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:37 crc kubenswrapper[4881]: I0121 10:57:37.651742 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:37 crc kubenswrapper[4881]: I0121 10:57:37.651759 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:37 crc kubenswrapper[4881]: I0121 10:57:37.651819 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:37 crc kubenswrapper[4881]: I0121 10:57:37.651837 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:37Z","lastTransitionTime":"2026-01-21T10:57:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:57:37 crc kubenswrapper[4881]: I0121 10:57:37.754447 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:37 crc kubenswrapper[4881]: I0121 10:57:37.754527 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:37 crc kubenswrapper[4881]: I0121 10:57:37.754570 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:37 crc kubenswrapper[4881]: I0121 10:57:37.754596 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:37 crc kubenswrapper[4881]: I0121 10:57:37.754615 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:37Z","lastTransitionTime":"2026-01-21T10:57:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:37 crc kubenswrapper[4881]: I0121 10:57:37.858144 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:37 crc kubenswrapper[4881]: I0121 10:57:37.858192 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:37 crc kubenswrapper[4881]: I0121 10:57:37.858203 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:37 crc kubenswrapper[4881]: I0121 10:57:37.858220 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:37 crc kubenswrapper[4881]: I0121 10:57:37.858233 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:37Z","lastTransitionTime":"2026-01-21T10:57:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:37 crc kubenswrapper[4881]: I0121 10:57:37.960255 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:37 crc kubenswrapper[4881]: I0121 10:57:37.960293 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:37 crc kubenswrapper[4881]: I0121 10:57:37.960306 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:37 crc kubenswrapper[4881]: I0121 10:57:37.960327 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:37 crc kubenswrapper[4881]: I0121 10:57:37.960345 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:37Z","lastTransitionTime":"2026-01-21T10:57:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:57:38 crc kubenswrapper[4881]: I0121 10:57:38.002169 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bx64f_e8bb6d97-b3b8-4e31-b704-8e565385ab26/ovnkube-controller/1.log" Jan 21 10:57:38 crc kubenswrapper[4881]: I0121 10:57:38.062517 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:38 crc kubenswrapper[4881]: I0121 10:57:38.062565 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:38 crc kubenswrapper[4881]: I0121 10:57:38.062576 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:38 crc kubenswrapper[4881]: I0121 10:57:38.062591 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:38 crc kubenswrapper[4881]: I0121 10:57:38.062602 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:38Z","lastTransitionTime":"2026-01-21T10:57:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:38 crc kubenswrapper[4881]: I0121 10:57:38.165830 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:38 crc kubenswrapper[4881]: I0121 10:57:38.165918 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:38 crc kubenswrapper[4881]: I0121 10:57:38.166010 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:38 crc kubenswrapper[4881]: I0121 10:57:38.166037 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:38 crc kubenswrapper[4881]: I0121 10:57:38.166055 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:38Z","lastTransitionTime":"2026-01-21T10:57:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:57:38 crc kubenswrapper[4881]: I0121 10:57:38.168218 4881 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 05:29:32.182314304 +0000 UTC Jan 21 10:57:38 crc kubenswrapper[4881]: I0121 10:57:38.268973 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:38 crc kubenswrapper[4881]: I0121 10:57:38.269015 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:38 crc kubenswrapper[4881]: I0121 10:57:38.269026 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:38 crc kubenswrapper[4881]: I0121 10:57:38.269043 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:38 crc kubenswrapper[4881]: I0121 10:57:38.269053 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:38Z","lastTransitionTime":"2026-01-21T10:57:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:38 crc kubenswrapper[4881]: I0121 10:57:38.309852 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 10:57:38 crc kubenswrapper[4881]: I0121 10:57:38.309877 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 10:57:38 crc kubenswrapper[4881]: E0121 10:57:38.310007 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 10:57:38 crc kubenswrapper[4881]: E0121 10:57:38.310169 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 10:57:38 crc kubenswrapper[4881]: I0121 10:57:38.372382 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:38 crc kubenswrapper[4881]: I0121 10:57:38.372433 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:38 crc kubenswrapper[4881]: I0121 10:57:38.372444 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:38 crc kubenswrapper[4881]: I0121 10:57:38.372462 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:38 crc kubenswrapper[4881]: I0121 10:57:38.372476 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:38Z","lastTransitionTime":"2026-01-21T10:57:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:38 crc kubenswrapper[4881]: I0121 10:57:38.475407 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:38 crc kubenswrapper[4881]: I0121 10:57:38.475487 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:38 crc kubenswrapper[4881]: I0121 10:57:38.475505 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:38 crc kubenswrapper[4881]: I0121 10:57:38.475532 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:38 crc kubenswrapper[4881]: I0121 10:57:38.475550 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:38Z","lastTransitionTime":"2026-01-21T10:57:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:57:38 crc kubenswrapper[4881]: I0121 10:57:38.578442 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:38 crc kubenswrapper[4881]: I0121 10:57:38.578518 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:38 crc kubenswrapper[4881]: I0121 10:57:38.578541 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:38 crc kubenswrapper[4881]: I0121 10:57:38.578572 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:38 crc kubenswrapper[4881]: I0121 10:57:38.578590 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:38Z","lastTransitionTime":"2026-01-21T10:57:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:38 crc kubenswrapper[4881]: I0121 10:57:38.681779 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:38 crc kubenswrapper[4881]: I0121 10:57:38.681927 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:38 crc kubenswrapper[4881]: I0121 10:57:38.681946 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:38 crc kubenswrapper[4881]: I0121 10:57:38.681970 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:38 crc kubenswrapper[4881]: I0121 10:57:38.681987 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:38Z","lastTransitionTime":"2026-01-21T10:57:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:38 crc kubenswrapper[4881]: I0121 10:57:38.785212 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:38 crc kubenswrapper[4881]: I0121 10:57:38.785263 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:38 crc kubenswrapper[4881]: I0121 10:57:38.785279 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:38 crc kubenswrapper[4881]: I0121 10:57:38.785301 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:38 crc kubenswrapper[4881]: I0121 10:57:38.785318 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:38Z","lastTransitionTime":"2026-01-21T10:57:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:57:38 crc kubenswrapper[4881]: I0121 10:57:38.888733 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:38 crc kubenswrapper[4881]: I0121 10:57:38.888867 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:38 crc kubenswrapper[4881]: I0121 10:57:38.888892 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:38 crc kubenswrapper[4881]: I0121 10:57:38.889368 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:38 crc kubenswrapper[4881]: I0121 10:57:38.889644 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:38Z","lastTransitionTime":"2026-01-21T10:57:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:38 crc kubenswrapper[4881]: I0121 10:57:38.992903 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:38 crc kubenswrapper[4881]: I0121 10:57:38.992956 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:38 crc kubenswrapper[4881]: I0121 10:57:38.992984 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:38 crc kubenswrapper[4881]: I0121 10:57:38.993007 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:38 crc kubenswrapper[4881]: I0121 10:57:38.993022 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:38Z","lastTransitionTime":"2026-01-21T10:57:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:39 crc kubenswrapper[4881]: I0121 10:57:39.095744 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:39 crc kubenswrapper[4881]: I0121 10:57:39.095825 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:39 crc kubenswrapper[4881]: I0121 10:57:39.095838 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:39 crc kubenswrapper[4881]: I0121 10:57:39.095861 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:39 crc kubenswrapper[4881]: I0121 10:57:39.095877 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:39Z","lastTransitionTime":"2026-01-21T10:57:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:57:39 crc kubenswrapper[4881]: I0121 10:57:39.168335 4881 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 12:47:24.708743692 +0000 UTC Jan 21 10:57:39 crc kubenswrapper[4881]: I0121 10:57:39.198357 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:39 crc kubenswrapper[4881]: I0121 10:57:39.198424 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:39 crc kubenswrapper[4881]: I0121 10:57:39.198443 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:39 crc kubenswrapper[4881]: I0121 10:57:39.198467 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:39 crc kubenswrapper[4881]: I0121 10:57:39.198484 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:39Z","lastTransitionTime":"2026-01-21T10:57:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:39 crc kubenswrapper[4881]: I0121 10:57:39.300589 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:39 crc kubenswrapper[4881]: I0121 10:57:39.300640 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:39 crc kubenswrapper[4881]: I0121 10:57:39.300651 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:39 crc kubenswrapper[4881]: I0121 10:57:39.300670 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:39 crc kubenswrapper[4881]: I0121 10:57:39.300683 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:39Z","lastTransitionTime":"2026-01-21T10:57:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:39 crc kubenswrapper[4881]: I0121 10:57:39.310176 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dtv4t" Jan 21 10:57:39 crc kubenswrapper[4881]: E0121 10:57:39.310330 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dtv4t" podUID="3552adbd-011f-4552-9e69-233b92c554c8" Jan 21 10:57:39 crc kubenswrapper[4881]: I0121 10:57:39.310178 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 10:57:39 crc kubenswrapper[4881]: E0121 10:57:39.310505 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 10:57:39 crc kubenswrapper[4881]: I0121 10:57:39.404729 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:39 crc kubenswrapper[4881]: I0121 10:57:39.404809 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:39 crc kubenswrapper[4881]: I0121 10:57:39.404821 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:39 crc kubenswrapper[4881]: I0121 10:57:39.404845 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:39 crc kubenswrapper[4881]: I0121 10:57:39.404858 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:39Z","lastTransitionTime":"2026-01-21T10:57:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:39 crc kubenswrapper[4881]: I0121 10:57:39.507359 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:39 crc kubenswrapper[4881]: I0121 10:57:39.507397 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:39 crc kubenswrapper[4881]: I0121 10:57:39.507406 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:39 crc kubenswrapper[4881]: I0121 10:57:39.507420 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:39 crc kubenswrapper[4881]: I0121 10:57:39.507429 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:39Z","lastTransitionTime":"2026-01-21T10:57:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 21 10:57:39 crc kubenswrapper[4881]: I0121 10:57:39.610261 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 10:57:39 crc kubenswrapper[4881]: I0121 10:57:39.610312 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 10:57:39 crc kubenswrapper[4881]: I0121 10:57:39.610340 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 10:57:39 crc kubenswrapper[4881]: I0121 10:57:39.610358 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 10:57:39 crc kubenswrapper[4881]: I0121 10:57:39.610367 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:39Z","lastTransitionTime":"2026-01-21T10:57:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 10:57:39 crc kubenswrapper[4881]: I0121 10:57:39.712913 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 10:57:39 crc kubenswrapper[4881]: I0121 10:57:39.712945 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 10:57:39 crc kubenswrapper[4881]: I0121 10:57:39.712953 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 10:57:39 crc kubenswrapper[4881]: I0121 10:57:39.712968 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 10:57:39 crc kubenswrapper[4881]: I0121 10:57:39.712977 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:39Z","lastTransitionTime":"2026-01-21T10:57:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 10:57:39 crc kubenswrapper[4881]: I0121 10:57:39.816066 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 10:57:39 crc kubenswrapper[4881]: I0121 10:57:39.816117 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 10:57:39 crc kubenswrapper[4881]: I0121 10:57:39.816127 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 10:57:39 crc kubenswrapper[4881]: I0121 10:57:39.816143 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 10:57:39 crc kubenswrapper[4881]: I0121 10:57:39.816155 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:39Z","lastTransitionTime":"2026-01-21T10:57:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
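Every "Node became not ready" condition above carries the same root cause: the network plugin reports NetworkReady=false because nothing has yet written a CNI configuration into /etc/kubernetes/cni/net.d/, so the kubelet holds the node's Ready condition at False. A minimal sketch of what that readiness test amounts to (illustrative only; networkReady and confDir are invented names, not kubelet code):

// cnicheck.go - illustrative sketch: reports whether a CNI network
// configuration is present, mirroring the readiness message in this log.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// confDir is the directory the kubelet complains about in the log.
const confDir = "/etc/kubernetes/cni/net.d"

// networkReady returns true once at least one CNI config file exists.
func networkReady() (bool, error) {
	for _, pat := range []string{"*.conf", "*.conflist", "*.json"} {
		matches, err := filepath.Glob(filepath.Join(confDir, pat))
		if err != nil {
			return false, err
		}
		if len(matches) > 0 {
			return true, nil
		}
	}
	return false, nil
}

func main() {
	ready, err := networkReady()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// While this prints NetworkReady=false, the node's Ready condition
	// stays False, exactly as the setters.go:603 entries record.
	fmt.Printf("NetworkReady=%v\n", ready)
}

Once the network provider drops its config file into the directory, the check flips and the repeated NodeNotReady events stop.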
Jan 21 10:57:39 crc kubenswrapper[4881]: I0121 10:57:39.919846 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 10:57:39 crc kubenswrapper[4881]: I0121 10:57:39.919899 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 10:57:39 crc kubenswrapper[4881]: I0121 10:57:39.919909 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 10:57:39 crc kubenswrapper[4881]: I0121 10:57:39.919929 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 10:57:39 crc kubenswrapper[4881]: I0121 10:57:39.919939 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:39Z","lastTransitionTime":"2026-01-21T10:57:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 10:57:39 crc kubenswrapper[4881]: I0121 10:57:39.924293 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3552adbd-011f-4552-9e69-233b92c554c8-metrics-certs\") pod \"network-metrics-daemon-dtv4t\" (UID: \"3552adbd-011f-4552-9e69-233b92c554c8\") " pod="openshift-multus/network-metrics-daemon-dtv4t"
Jan 21 10:57:39 crc kubenswrapper[4881]: E0121 10:57:39.924428 4881 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Jan 21 10:57:39 crc kubenswrapper[4881]: E0121 10:57:39.924481 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3552adbd-011f-4552-9e69-233b92c554c8-metrics-certs podName:3552adbd-011f-4552-9e69-233b92c554c8 nodeName:}" failed. No retries permitted until 2026-01-21 10:57:47.9244675 +0000 UTC m=+55.184423969 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/3552adbd-011f-4552-9e69-233b92c554c8-metrics-certs") pod "network-metrics-daemon-dtv4t" (UID: "3552adbd-011f-4552-9e69-233b92c554c8") : object "openshift-multus"/"metrics-daemon-secret" not registered
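The nestedpendingoperations.go:348 entry above declines to retry the failed metrics-certs mount for 8 seconds (durationBeforeRetry 8s), the signature of a per-operation exponential backoff that doubles after each failure up to a cap. A sketch of that policy under assumed constants (the kubelet's actual initial delay and cap are not shown in this log):

// backoff.go - illustrative sketch of a doubling retry backoff like the
// "durationBeforeRetry 8s" in the nestedpendingoperations entry above.
package main

import (
	"fmt"
	"time"
)

// Assumed values for illustration; the kubelet's own initial delay and
// cap may differ.
const (
	initialBackoff = 500 * time.Millisecond
	maxBackoff     = 2 * time.Minute
)

// nextBackoff doubles the previous delay and clamps it at the cap.
func nextBackoff(prev time.Duration) time.Duration {
	if prev <= 0 {
		return initialBackoff
	}
	next := prev * 2
	if next > maxBackoff {
		return maxBackoff
	}
	return next
}

func main() {
	d := time.Duration(0)
	for i := 0; i < 6; i++ {
		d = nextBackoff(d)
		// After a few consecutive failures the delay reaches 8s, matching
		// "No retries permitted until ... (durationBeforeRetry 8s)".
		fmt.Printf("attempt %d: wait %v\n", i+1, d)
	}
}

Under these assumed constants the fifth consecutive failure yields exactly the 8s wait recorded here; the mount will keep being retried with growing delays until the metrics-daemon-secret object is registered.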
Jan 21 10:57:40 crc kubenswrapper[4881]: I0121 10:57:40.022822 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 10:57:40 crc kubenswrapper[4881]: I0121 10:57:40.022885 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 10:57:40 crc kubenswrapper[4881]: I0121 10:57:40.022902 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 10:57:40 crc kubenswrapper[4881]: I0121 10:57:40.022924 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 10:57:40 crc kubenswrapper[4881]: I0121 10:57:40.022941 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:40Z","lastTransitionTime":"2026-01-21T10:57:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 10:57:40 crc kubenswrapper[4881]: I0121 10:57:40.126272 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 10:57:40 crc kubenswrapper[4881]: I0121 10:57:40.126332 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 10:57:40 crc kubenswrapper[4881]: I0121 10:57:40.126344 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 10:57:40 crc kubenswrapper[4881]: I0121 10:57:40.126365 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 10:57:40 crc kubenswrapper[4881]: I0121 10:57:40.126378 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:40Z","lastTransitionTime":"2026-01-21T10:57:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 10:57:40 crc kubenswrapper[4881]: I0121 10:57:40.169500 4881 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 05:36:48.868631141 +0000 UTC
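Each certificate_manager.go:356 pass prints the same fixed expiration (2026-02-24 05:53:03 UTC) but a different rotation deadline, which indicates the deadline is re-drawn at random inside the certificate's validity window on every evaluation. A sketch of that jitter, assuming the commonly cited 70-90% window and an invented notBefore date (neither is confirmed by this log):

// rotation.go - illustrative sketch: pick a jittered rotation deadline inside
// a certificate's validity window, which would explain why each
// certificate_manager.go:356 line above prints a different deadline.
package main

import (
	"fmt"
	"math/rand"
	"time"
)

// rotationDeadline picks a random point between 70% and 90% of the
// certificate lifetime (assumed window; the real fraction may differ).
func rotationDeadline(notBefore, notAfter time.Time) time.Time {
	lifetime := notAfter.Sub(notBefore)
	frac := 0.7 + 0.2*rand.Float64()
	return notBefore.Add(time.Duration(float64(lifetime) * frac))
}

func main() {
	// notBefore is a hypothetical issue date; only the expiry below
	// actually appears in the log.
	notBefore := time.Date(2025, 2, 24, 5, 53, 3, 0, time.UTC)
	notAfter := time.Date(2026, 2, 24, 5, 53, 3, 0, time.UTC)
	for i := 0; i < 3; i++ {
		// Each evaluation yields a different deadline, like the log lines.
		fmt.Println("rotation deadline:", rotationDeadline(notBefore, notAfter))
	}
}

The varying deadlines in the log (2025-11-26 through 2026-01-17) all fall inside such a window, consistent with per-pass randomization rather than a stored deadline.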
Jan 21 10:57:40 crc kubenswrapper[4881]: I0121 10:57:40.229198 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 10:57:40 crc kubenswrapper[4881]: I0121 10:57:40.229249 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 10:57:40 crc kubenswrapper[4881]: I0121 10:57:40.229259 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 10:57:40 crc kubenswrapper[4881]: I0121 10:57:40.229273 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 10:57:40 crc kubenswrapper[4881]: I0121 10:57:40.229287 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:40Z","lastTransitionTime":"2026-01-21T10:57:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 10:57:40 crc kubenswrapper[4881]: I0121 10:57:40.309731 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 21 10:57:40 crc kubenswrapper[4881]: I0121 10:57:40.309819 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 21 10:57:40 crc kubenswrapper[4881]: E0121 10:57:40.309886 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 21 10:57:40 crc kubenswrapper[4881]: E0121 10:57:40.309965 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 10:57:40 crc kubenswrapper[4881]: I0121 10:57:40.332010 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:40 crc kubenswrapper[4881]: I0121 10:57:40.332054 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:40 crc kubenswrapper[4881]: I0121 10:57:40.332066 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:40 crc kubenswrapper[4881]: I0121 10:57:40.332082 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:40 crc kubenswrapper[4881]: I0121 10:57:40.332094 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:40Z","lastTransitionTime":"2026-01-21T10:57:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:40 crc kubenswrapper[4881]: I0121 10:57:40.434731 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:40 crc kubenswrapper[4881]: I0121 10:57:40.434813 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:40 crc kubenswrapper[4881]: I0121 10:57:40.434831 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:40 crc kubenswrapper[4881]: I0121 10:57:40.434849 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:40 crc kubenswrapper[4881]: I0121 10:57:40.434860 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:40Z","lastTransitionTime":"2026-01-21T10:57:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:57:40 crc kubenswrapper[4881]: I0121 10:57:40.537775 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:40 crc kubenswrapper[4881]: I0121 10:57:40.537849 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:40 crc kubenswrapper[4881]: I0121 10:57:40.537864 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:40 crc kubenswrapper[4881]: I0121 10:57:40.537883 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:40 crc kubenswrapper[4881]: I0121 10:57:40.537894 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:40Z","lastTransitionTime":"2026-01-21T10:57:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:40 crc kubenswrapper[4881]: I0121 10:57:40.641071 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:40 crc kubenswrapper[4881]: I0121 10:57:40.641136 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:40 crc kubenswrapper[4881]: I0121 10:57:40.641154 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:40 crc kubenswrapper[4881]: I0121 10:57:40.641179 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:40 crc kubenswrapper[4881]: I0121 10:57:40.641201 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:40Z","lastTransitionTime":"2026-01-21T10:57:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:40 crc kubenswrapper[4881]: I0121 10:57:40.745078 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:40 crc kubenswrapper[4881]: I0121 10:57:40.745144 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:40 crc kubenswrapper[4881]: I0121 10:57:40.745156 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:40 crc kubenswrapper[4881]: I0121 10:57:40.745178 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:40 crc kubenswrapper[4881]: I0121 10:57:40.745194 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:40Z","lastTransitionTime":"2026-01-21T10:57:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:57:40 crc kubenswrapper[4881]: I0121 10:57:40.848765 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:40 crc kubenswrapper[4881]: I0121 10:57:40.848855 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:40 crc kubenswrapper[4881]: I0121 10:57:40.848866 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:40 crc kubenswrapper[4881]: I0121 10:57:40.848883 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:40 crc kubenswrapper[4881]: I0121 10:57:40.848894 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:40Z","lastTransitionTime":"2026-01-21T10:57:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:40 crc kubenswrapper[4881]: I0121 10:57:40.952187 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:40 crc kubenswrapper[4881]: I0121 10:57:40.952252 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:40 crc kubenswrapper[4881]: I0121 10:57:40.952266 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:40 crc kubenswrapper[4881]: I0121 10:57:40.952287 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:40 crc kubenswrapper[4881]: I0121 10:57:40.952301 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:40Z","lastTransitionTime":"2026-01-21T10:57:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:41 crc kubenswrapper[4881]: I0121 10:57:41.054894 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:41 crc kubenswrapper[4881]: I0121 10:57:41.054950 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:41 crc kubenswrapper[4881]: I0121 10:57:41.054967 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:41 crc kubenswrapper[4881]: I0121 10:57:41.054988 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:41 crc kubenswrapper[4881]: I0121 10:57:41.055005 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:41Z","lastTransitionTime":"2026-01-21T10:57:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:57:41 crc kubenswrapper[4881]: I0121 10:57:41.158166 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:41 crc kubenswrapper[4881]: I0121 10:57:41.158241 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:41 crc kubenswrapper[4881]: I0121 10:57:41.158265 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:41 crc kubenswrapper[4881]: I0121 10:57:41.158298 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:41 crc kubenswrapper[4881]: I0121 10:57:41.158321 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:41Z","lastTransitionTime":"2026-01-21T10:57:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:41 crc kubenswrapper[4881]: I0121 10:57:41.170358 4881 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 22:00:57.017517171 +0000 UTC Jan 21 10:57:41 crc kubenswrapper[4881]: I0121 10:57:41.261408 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:41 crc kubenswrapper[4881]: I0121 10:57:41.261647 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:41 crc kubenswrapper[4881]: I0121 10:57:41.261657 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:41 crc kubenswrapper[4881]: I0121 10:57:41.261677 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:41 crc kubenswrapper[4881]: I0121 10:57:41.261688 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:41Z","lastTransitionTime":"2026-01-21T10:57:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:41 crc kubenswrapper[4881]: I0121 10:57:41.309900 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 10:57:41 crc kubenswrapper[4881]: E0121 10:57:41.310046 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 10:57:41 crc kubenswrapper[4881]: I0121 10:57:41.309901 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-dtv4t" Jan 21 10:57:41 crc kubenswrapper[4881]: E0121 10:57:41.310301 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dtv4t" podUID="3552adbd-011f-4552-9e69-233b92c554c8" Jan 21 10:57:41 crc kubenswrapper[4881]: I0121 10:57:41.364142 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:41 crc kubenswrapper[4881]: I0121 10:57:41.364190 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:41 crc kubenswrapper[4881]: I0121 10:57:41.364203 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:41 crc kubenswrapper[4881]: I0121 10:57:41.364219 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:41 crc kubenswrapper[4881]: I0121 10:57:41.364230 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:41Z","lastTransitionTime":"2026-01-21T10:57:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:41 crc kubenswrapper[4881]: I0121 10:57:41.468203 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:41 crc kubenswrapper[4881]: I0121 10:57:41.468286 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:41 crc kubenswrapper[4881]: I0121 10:57:41.468301 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:41 crc kubenswrapper[4881]: I0121 10:57:41.468326 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:41 crc kubenswrapper[4881]: I0121 10:57:41.468338 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:41Z","lastTransitionTime":"2026-01-21T10:57:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:57:41 crc kubenswrapper[4881]: I0121 10:57:41.572096 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:41 crc kubenswrapper[4881]: I0121 10:57:41.572159 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:41 crc kubenswrapper[4881]: I0121 10:57:41.572172 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:41 crc kubenswrapper[4881]: I0121 10:57:41.572194 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:41 crc kubenswrapper[4881]: I0121 10:57:41.572212 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:41Z","lastTransitionTime":"2026-01-21T10:57:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:41 crc kubenswrapper[4881]: I0121 10:57:41.676255 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:41 crc kubenswrapper[4881]: I0121 10:57:41.676328 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:41 crc kubenswrapper[4881]: I0121 10:57:41.676363 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:41 crc kubenswrapper[4881]: I0121 10:57:41.676400 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:41 crc kubenswrapper[4881]: I0121 10:57:41.676422 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:41Z","lastTransitionTime":"2026-01-21T10:57:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:41 crc kubenswrapper[4881]: I0121 10:57:41.779837 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:41 crc kubenswrapper[4881]: I0121 10:57:41.779896 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:41 crc kubenswrapper[4881]: I0121 10:57:41.779913 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:41 crc kubenswrapper[4881]: I0121 10:57:41.779947 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:41 crc kubenswrapper[4881]: I0121 10:57:41.779963 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:41Z","lastTransitionTime":"2026-01-21T10:57:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:57:41 crc kubenswrapper[4881]: I0121 10:57:41.882572 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:41 crc kubenswrapper[4881]: I0121 10:57:41.882609 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:41 crc kubenswrapper[4881]: I0121 10:57:41.882619 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:41 crc kubenswrapper[4881]: I0121 10:57:41.882633 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:41 crc kubenswrapper[4881]: I0121 10:57:41.882642 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:41Z","lastTransitionTime":"2026-01-21T10:57:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:41 crc kubenswrapper[4881]: I0121 10:57:41.985967 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:41 crc kubenswrapper[4881]: I0121 10:57:41.986071 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:41 crc kubenswrapper[4881]: I0121 10:57:41.986091 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:41 crc kubenswrapper[4881]: I0121 10:57:41.986122 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:41 crc kubenswrapper[4881]: I0121 10:57:41.986142 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:41Z","lastTransitionTime":"2026-01-21T10:57:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:42 crc kubenswrapper[4881]: I0121 10:57:42.089755 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:42 crc kubenswrapper[4881]: I0121 10:57:42.089861 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:42 crc kubenswrapper[4881]: I0121 10:57:42.089886 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:42 crc kubenswrapper[4881]: I0121 10:57:42.089917 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:42 crc kubenswrapper[4881]: I0121 10:57:42.089939 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:42Z","lastTransitionTime":"2026-01-21T10:57:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:57:42 crc kubenswrapper[4881]: I0121 10:57:42.171314 4881 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 08:28:12.597517463 +0000 UTC Jan 21 10:57:42 crc kubenswrapper[4881]: I0121 10:57:42.193540 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:42 crc kubenswrapper[4881]: I0121 10:57:42.193624 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:42 crc kubenswrapper[4881]: I0121 10:57:42.193641 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:42 crc kubenswrapper[4881]: I0121 10:57:42.193667 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:42 crc kubenswrapper[4881]: I0121 10:57:42.193687 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:42Z","lastTransitionTime":"2026-01-21T10:57:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:42 crc kubenswrapper[4881]: I0121 10:57:42.296603 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:42 crc kubenswrapper[4881]: I0121 10:57:42.296665 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:42 crc kubenswrapper[4881]: I0121 10:57:42.296683 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:42 crc kubenswrapper[4881]: I0121 10:57:42.296706 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:42 crc kubenswrapper[4881]: I0121 10:57:42.296723 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:42Z","lastTransitionTime":"2026-01-21T10:57:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:42 crc kubenswrapper[4881]: I0121 10:57:42.310091 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 10:57:42 crc kubenswrapper[4881]: I0121 10:57:42.310120 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 10:57:42 crc kubenswrapper[4881]: E0121 10:57:42.310330 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 10:57:42 crc kubenswrapper[4881]: E0121 10:57:42.310453 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 10:57:42 crc kubenswrapper[4881]: I0121 10:57:42.399901 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:42 crc kubenswrapper[4881]: I0121 10:57:42.399959 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:42 crc kubenswrapper[4881]: I0121 10:57:42.399996 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:42 crc kubenswrapper[4881]: I0121 10:57:42.400025 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:42 crc kubenswrapper[4881]: I0121 10:57:42.400049 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:42Z","lastTransitionTime":"2026-01-21T10:57:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:42 crc kubenswrapper[4881]: I0121 10:57:42.502872 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:42 crc kubenswrapper[4881]: I0121 10:57:42.502934 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:42 crc kubenswrapper[4881]: I0121 10:57:42.502952 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:42 crc kubenswrapper[4881]: I0121 10:57:42.502978 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:42 crc kubenswrapper[4881]: I0121 10:57:42.503002 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:42Z","lastTransitionTime":"2026-01-21T10:57:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:57:42 crc kubenswrapper[4881]: I0121 10:57:42.606327 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:42 crc kubenswrapper[4881]: I0121 10:57:42.606404 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:42 crc kubenswrapper[4881]: I0121 10:57:42.606424 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:42 crc kubenswrapper[4881]: I0121 10:57:42.606451 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:42 crc kubenswrapper[4881]: I0121 10:57:42.606468 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:42Z","lastTransitionTime":"2026-01-21T10:57:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:42 crc kubenswrapper[4881]: I0121 10:57:42.709267 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:42 crc kubenswrapper[4881]: I0121 10:57:42.709312 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:42 crc kubenswrapper[4881]: I0121 10:57:42.709323 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:42 crc kubenswrapper[4881]: I0121 10:57:42.709340 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:42 crc kubenswrapper[4881]: I0121 10:57:42.709351 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:42Z","lastTransitionTime":"2026-01-21T10:57:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:42 crc kubenswrapper[4881]: I0121 10:57:42.811171 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:42 crc kubenswrapper[4881]: I0121 10:57:42.811212 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:42 crc kubenswrapper[4881]: I0121 10:57:42.811221 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:42 crc kubenswrapper[4881]: I0121 10:57:42.811235 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:42 crc kubenswrapper[4881]: I0121 10:57:42.811244 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:42Z","lastTransitionTime":"2026-01-21T10:57:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:57:42 crc kubenswrapper[4881]: I0121 10:57:42.914864 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:42 crc kubenswrapper[4881]: I0121 10:57:42.914942 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:42 crc kubenswrapper[4881]: I0121 10:57:42.914964 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:42 crc kubenswrapper[4881]: I0121 10:57:42.914994 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:42 crc kubenswrapper[4881]: I0121 10:57:42.915014 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:42Z","lastTransitionTime":"2026-01-21T10:57:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:43 crc kubenswrapper[4881]: I0121 10:57:43.018319 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:43 crc kubenswrapper[4881]: I0121 10:57:43.018402 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:43 crc kubenswrapper[4881]: I0121 10:57:43.018439 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:43 crc kubenswrapper[4881]: I0121 10:57:43.018467 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:43 crc kubenswrapper[4881]: I0121 10:57:43.018482 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:43Z","lastTransitionTime":"2026-01-21T10:57:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:43 crc kubenswrapper[4881]: I0121 10:57:43.121854 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:43 crc kubenswrapper[4881]: I0121 10:57:43.121902 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:43 crc kubenswrapper[4881]: I0121 10:57:43.121912 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:43 crc kubenswrapper[4881]: I0121 10:57:43.121931 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:43 crc kubenswrapper[4881]: I0121 10:57:43.121941 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:43Z","lastTransitionTime":"2026-01-21T10:57:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:57:43 crc kubenswrapper[4881]: I0121 10:57:43.172066 4881 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 09:54:26.802642859 +0000 UTC Jan 21 10:57:43 crc kubenswrapper[4881]: I0121 10:57:43.224611 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:43 crc kubenswrapper[4881]: I0121 10:57:43.224700 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:43 crc kubenswrapper[4881]: I0121 10:57:43.224716 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:43 crc kubenswrapper[4881]: I0121 10:57:43.224861 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:43 crc kubenswrapper[4881]: I0121 10:57:43.224884 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:43Z","lastTransitionTime":"2026-01-21T10:57:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:43 crc kubenswrapper[4881]: I0121 10:57:43.310610 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dtv4t" Jan 21 10:57:43 crc kubenswrapper[4881]: I0121 10:57:43.310609 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 10:57:43 crc kubenswrapper[4881]: E0121 10:57:43.310849 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dtv4t" podUID="3552adbd-011f-4552-9e69-233b92c554c8" Jan 21 10:57:43 crc kubenswrapper[4881]: E0121 10:57:43.311161 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 10:57:43 crc kubenswrapper[4881]: I0121 10:57:43.333938 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:43 crc kubenswrapper[4881]: I0121 10:57:43.333986 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:43 crc kubenswrapper[4881]: I0121 10:57:43.334001 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:43 crc kubenswrapper[4881]: I0121 10:57:43.334020 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:43 crc kubenswrapper[4881]: I0121 10:57:43.334032 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:43Z","lastTransitionTime":"2026-01-21T10:57:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:43 crc kubenswrapper[4881]: I0121 10:57:43.338565 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5da31bf1-60a6-4d73-a425-97fe36cd40ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://945977e9e86c32b08a633de6b962033c71982d3a129ac3e5b2705e19b50d6534\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f32dd767af23c59f021d583a23309b2180db86ae71b4ca4a5f436ac1c77d6e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c47dfa90d18e3e30053b0250c7b986bc877ab6bc3a553060f65468416c8105d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e507b4c3c536bdc63360b1386748657584f739e09973ec33c998ac267ca2766\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://676e764186e37083591f2c779b05dcbf5bc065bde85efade1c25a27a9bf74570\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 10:57:08.940903 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 10:57:08.942374 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2558080694/tls.crt::/tmp/serving-cert-2558080694/tls.key\\\\\\\"\\\\nI0121 10:57:14.590178 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 10:57:14.594387 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 10:57:14.594447 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 10:57:14.594564 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 10:57:14.594575 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 10:57:14.615981 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 10:57:14.616035 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 10:57:14.616045 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 10:57:14.616060 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 10:57:14.616065 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 10:57:14.616071 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 10:57:14.616077 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 10:57:14.616503 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 10:57:14.623960 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T10:56:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b00c4c20b4212a307993f164d3f2d23b9b8bd823111bef7a385ae8811e147f3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://164282bb15c33a383e2fc11a6617ca4008eec1b59e1c5577f7b34b0fcbddc99f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://164282bb15c33a383e2fc11a6617ca4008eec1b59e1c5577f7b34b0fcbddc99f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:56:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:43Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:43 crc kubenswrapper[4881]: I0121 10:57:43.355527 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:43Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:43 crc kubenswrapper[4881]: I0121 10:57:43.372141 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ddaa3aae50a05142125742b3cc13e65433fcadbef5b1e273232cffb660cc700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:43Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:43 crc kubenswrapper[4881]: I0121 10:57:43.385323 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tjwf8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf4f6fc0-ed4c-47b7-b2bc-8033980781a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9164acd9a91836c25b02986f204dd0302964f6bff6fe077971d37eebd4b560ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57d55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:22Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tjwf8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:43Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:43 crc kubenswrapper[4881]: I0121 10:57:43.404271 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"adf42b80-04ba-4164-b847-3b7f8b94816b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5f503293f457017fbc87cd1525eaaf06d41044a37f87127e1398bd50a228ea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d93b0b0bb02293efe7c01a9dbc64ff302a7b9fab07a2fe9bef5d0c2b5e8ac30e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05a4a29ae6a2b0d4acf843ae40de1e262c2149dac28e15697dbcd6d237235f29\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfff7b0c4eae65a4ed6e28b94feef5a5b0c4fb224f9cde2cfd9072727968e754\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:56:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:43Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:43 crc kubenswrapper[4881]: I0121 10:57:43.419705 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:43Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:43 crc kubenswrapper[4881]: I0121 10:57:43.436370 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:43 crc kubenswrapper[4881]: I0121 10:57:43.436444 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:43 crc kubenswrapper[4881]: I0121 10:57:43.436462 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:43 crc kubenswrapper[4881]: I0121 10:57:43.436488 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:43 crc kubenswrapper[4881]: I0121 10:57:43.436508 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:43Z","lastTransitionTime":"2026-01-21T10:57:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:57:43 crc kubenswrapper[4881]: I0121 10:57:43.436936 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fs42r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09da9e14-f6d5-4346-a4a0-c17711e3b603\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://821c7c539a796f03cdff04f5f89e6adab4d088c53f5c1ad85851862e5409e7eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kt6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fs42r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:43Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:43 crc kubenswrapper[4881]: I0121 10:57:43.450889 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qgrth" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d379505c-c658-4dd5-b841-40c8443012c6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51a2ec789636052b12e0fdb4e647d7e4f92d1e4b7436933f1529561ffc2021d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57krk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d634cee9f543d3322f8cdc8bc62252096e789383c55d5d448cc53ab990ac9b52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57krk\\\",\\\"readOnly\\\":true,\\\"recursiveReadO
nly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qgrth\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:43Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:43 crc kubenswrapper[4881]: I0121 10:57:43.463321 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-dtv4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3552adbd-011f-4552-9e69-233b92c554c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqlps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqlps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-dtv4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:43Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:43 crc kubenswrapper[4881]: I0121 10:57:43.477775 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:43Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:43 crc kubenswrapper[4881]: I0121 10:57:43.500221 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-v4wxp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c14980d7-1b3b-463b-8f57-f1e1afbd258c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd18bd57e9f0f878f56164dee92c18a4fff62c83f518a96d7db735dcd488e052\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7dffa6cfe62b953df5f9734726e4b93967d3fce7fa5743b1adef693e4f75e48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"s
tarted\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7dffa6cfe62b953df5f9734726e4b93967d3fce7fa5743b1adef693e4f75e48\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ceabfac4703edf7fe55ec5dd41e3fc1736424533c54ae12adcdb1f077e1f756\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ceabfac4703edf7fe55ec5dd41e3fc1736424533c54ae12adcdb1f077e1f756\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a2fff9245b8148b515dd4af52db2ec776cca710b28074e8955162220448d248\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a2fff9245b8148b515dd4af52db2ec776cca710b28074e8955162220448d248\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"
}]},{\\\"containerID\\\":\\\"cri-o://3f6276883835d2a1fbab11df2fd15684b8ee850b6f264f3f194f6de68cc27cf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f6276883835d2a1fbab11df2fd15684b8ee850b6f264f3f194f6de68cc27cf9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c51b231b861d87536aaa44cd0e1018fc15464809b5a3bf30c1d2a25dd9a12a8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c51b231b861d87536aaa44cd0e1018fc15464809b5a3bf30c1d2a25dd9a12a8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13503428fec90dad2c9ce1086fe7c71c7910f145a6e7a8d546791e11002f36f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13503428fec90dad2c9ce1086fe7c71c7910f145a6e7a8d546791e11002f36f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":
\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-v4wxp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:43Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:43 crc kubenswrapper[4881]: I0121 10:57:43.515333 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3687b313-1df2-4274-80db-8c758b51bf2d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5b27143ea6007376f5989ccc3a11947ede80be1b5fac1b738ffbf27fa05c6d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hml99\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7172ca109a870f3c1c8d3af0117700b7b23e7a65c3e841f3a4ef3f445e85270d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/r
ootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hml99\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fb4fr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:43Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:43 crc kubenswrapper[4881]: I0121 10:57:43.538520 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8bb6d97-b3b8-4e31-b704-8e565385ab26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e45d7cde8c26fc00bb1b5ad5cc88c41aff02acf0860856e86f3c3bfdc01e7045\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://599bfaf6bfc412f779f4732cf9f4273382c1f0af20576581264ea4fc63b2ec38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9454db8c9af3187ee06e017ac192668036b1f565305a37de61c3cffe4d94e7db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7fa3136ee876ebb14cd5164f9176c91c60cc04fd7882a3ad0957ce3a36184d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2c1d1496a7f7e5fad6a3d4ade4104e8e0c40fd0ca4722ca5e1ca1fbf0ebf3cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d29c170fdf57f1f4c678e2902443d9ef467ac3157370e997fd5596b4eecfb54e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5897125d6a1004cb4f0527359e8fc0328bff6bcc
5ac563fdc3d85b094414c563\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58a840f0217d0e057d132d7debeba49b9c541f7f69f33178abee1a44909c83c5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T10:57:34Z\\\",\\\"message\\\":\\\"rk-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0121 10:57:34.006073 6119 handler.go:208] Removed *v1.Node event handler 7\\\\nI0121 10:57:34.006081 6119 handler.go:208] Removed *v1.Node event handler 2\\\\nI0121 10:57:34.006288 6119 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 10:57:34.006459 6119 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 10:57:34.006687 6119 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 10:57:34.007039 6119 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 10:57:34.007082 6119 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0121 10:57:34.007124 6119 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 10:57:34.007122 6119 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0121 10:57:34.007524 6119 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0121 10:57:34.007540 6119 factory.go:656] Stopping watch factory\\\\nI0121 10:57:34.007557 6119 ovnkube.go:599] Stopped ovnkube\\\\nI0121 10:57:34.007612 6119 metrics.go:553] Stopping metrics server at address\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:27Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5897125d6a1004cb4f0527359e8fc0328bff6bcc5ac563fdc3d85b094414c563\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T10:57:36Z\\\",\\\"message\\\":\\\"-machine-config-operator/machine-config-operator]} name:Service_openshift-machine-config-operator/machine-config-operator_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.183:9001:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {5b85277d-d9b7-4a68-8e4e-2b80594d9347}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0121 10:57:35.987756 6363 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0121 10:57:35.987768 6363 handler.go:208] Removed *v1.Pod event handler 3\\\\nF0121 10:57:35.988839 6363 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network 
controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47c176d51b5558e85897ac8c72c02bb6f152a972d8e941aeb114333f777e22ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\
"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db879e2b1a43a0ae350c4777ee6355bf5b97b1d5977e909ee0968c77852519dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db879e2b1a43a0ae350c4777ee6355bf5b97b1d5977e909ee0968c77852519dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bx64f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:43Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:43 crc kubenswrapper[4881]: I0121 10:57:43.540780 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:43 crc kubenswrapper[4881]: I0121 10:57:43.540857 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:43 crc kubenswrapper[4881]: I0121 10:57:43.540870 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:43 crc kubenswrapper[4881]: I0121 10:57:43.540890 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:43 crc kubenswrapper[4881]: I0121 10:57:43.540903 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:43Z","lastTransitionTime":"2026-01-21T10:57:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:57:43 crc kubenswrapper[4881]: I0121 10:57:43.556906 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14b423e6f144bf9e1e30c3f0a074ca9edd3c1e2cfa1674ea5b047a6a8fb01d92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c179ea565cc19d51f2becaf302623c94acbd70749ace24c754af51e3f0101033\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:43Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:43 crc kubenswrapper[4881]: I0121 10:57:43.572975 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f8092488edbd3b7f307819f2cad06e6d1a6721d8c2fd8fa05a8f7e652949ca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:43Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:43 crc kubenswrapper[4881]: I0121 10:57:43.587050 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8sptw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f19f480e-331f-42f5-a3b6-fd0c6847b157\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21b25675a85765cfd4abe361687eedfdf5dfffc1c9b83069f14986821acb12d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hjr7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8sptw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:43Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:43 crc kubenswrapper[4881]: I0121 10:57:43.644381 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:43 crc kubenswrapper[4881]: I0121 10:57:43.644426 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:43 crc kubenswrapper[4881]: I0121 10:57:43.644436 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:43 crc kubenswrapper[4881]: I0121 10:57:43.644454 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:43 crc kubenswrapper[4881]: I0121 10:57:43.644464 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:43Z","lastTransitionTime":"2026-01-21T10:57:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 10:57:43 crc kubenswrapper[4881]: I0121 10:57:43.747632 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 10:57:43 crc kubenswrapper[4881]: I0121 10:57:43.747680 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 10:57:43 crc kubenswrapper[4881]: I0121 10:57:43.747701 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 10:57:43 crc kubenswrapper[4881]: I0121 10:57:43.747731 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 10:57:43 crc kubenswrapper[4881]: I0121 10:57:43.747752 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:43Z","lastTransitionTime":"2026-01-21T10:57:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 10:57:43 crc kubenswrapper[4881]: I0121 10:57:43.850823 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 10:57:43 crc kubenswrapper[4881]: I0121 10:57:43.851638 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 10:57:43 crc kubenswrapper[4881]: I0121 10:57:43.851849 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 10:57:43 crc kubenswrapper[4881]: I0121 10:57:43.852063 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 10:57:43 crc kubenswrapper[4881]: I0121 10:57:43.852201 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:43Z","lastTransitionTime":"2026-01-21T10:57:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 10:57:43 crc kubenswrapper[4881]: I0121 10:57:43.954699 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 10:57:43 crc kubenswrapper[4881]: I0121 10:57:43.954780 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 10:57:43 crc kubenswrapper[4881]: I0121 10:57:43.954806 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 10:57:43 crc kubenswrapper[4881]: I0121 10:57:43.954822 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 10:57:43 crc kubenswrapper[4881]: I0121 10:57:43.954833 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:43Z","lastTransitionTime":"2026-01-21T10:57:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 10:57:44 crc kubenswrapper[4881]: I0121 10:57:44.058133 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 10:57:44 crc kubenswrapper[4881]: I0121 10:57:44.058163 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 10:57:44 crc kubenswrapper[4881]: I0121 10:57:44.058171 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 10:57:44 crc kubenswrapper[4881]: I0121 10:57:44.058183 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 10:57:44 crc kubenswrapper[4881]: I0121 10:57:44.058210 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:44Z","lastTransitionTime":"2026-01-21T10:57:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 10:57:44 crc kubenswrapper[4881]: I0121 10:57:44.161275 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 10:57:44 crc kubenswrapper[4881]: I0121 10:57:44.161422 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 10:57:44 crc kubenswrapper[4881]: I0121 10:57:44.161441 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 10:57:44 crc kubenswrapper[4881]: I0121 10:57:44.161463 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 10:57:44 crc kubenswrapper[4881]: I0121 10:57:44.161479 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:44Z","lastTransitionTime":"2026-01-21T10:57:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 10:57:44 crc kubenswrapper[4881]: I0121 10:57:44.172634 4881 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 19:02:53.702748256 +0000 UTC
Jan 21 10:57:44 crc kubenswrapper[4881]: I0121 10:57:44.264163 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 10:57:44 crc kubenswrapper[4881]: I0121 10:57:44.264395 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 10:57:44 crc kubenswrapper[4881]: I0121 10:57:44.264546 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 10:57:44 crc kubenswrapper[4881]: I0121 10:57:44.264648 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 10:57:44 crc kubenswrapper[4881]: I0121 10:57:44.264736 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:44Z","lastTransitionTime":"2026-01-21T10:57:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 10:57:44 crc kubenswrapper[4881]: I0121 10:57:44.310700 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 21 10:57:44 crc kubenswrapper[4881]: I0121 10:57:44.310735 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 21 10:57:44 crc kubenswrapper[4881]: E0121 10:57:44.310923 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 21 10:57:44 crc kubenswrapper[4881]: E0121 10:57:44.311066 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 21 10:57:44 crc kubenswrapper[4881]: I0121 10:57:44.367681 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 10:57:44 crc kubenswrapper[4881]: I0121 10:57:44.367727 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 10:57:44 crc kubenswrapper[4881]: I0121 10:57:44.367738 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 10:57:44 crc kubenswrapper[4881]: I0121 10:57:44.367755 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 10:57:44 crc kubenswrapper[4881]: I0121 10:57:44.367767 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:44Z","lastTransitionTime":"2026-01-21T10:57:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 10:57:44 crc kubenswrapper[4881]: I0121 10:57:44.470895 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 10:57:44 crc kubenswrapper[4881]: I0121 10:57:44.470968 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 10:57:44 crc kubenswrapper[4881]: I0121 10:57:44.470991 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 10:57:44 crc kubenswrapper[4881]: I0121 10:57:44.471029 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 10:57:44 crc kubenswrapper[4881]: I0121 10:57:44.471055 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:44Z","lastTransitionTime":"2026-01-21T10:57:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 10:57:44 crc kubenswrapper[4881]: I0121 10:57:44.573325 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 10:57:44 crc kubenswrapper[4881]: I0121 10:57:44.573620 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 10:57:44 crc kubenswrapper[4881]: I0121 10:57:44.573761 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 10:57:44 crc kubenswrapper[4881]: I0121 10:57:44.573906 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 10:57:44 crc kubenswrapper[4881]: I0121 10:57:44.573992 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:44Z","lastTransitionTime":"2026-01-21T10:57:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 10:57:44 crc kubenswrapper[4881]: I0121 10:57:44.676034 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 10:57:44 crc kubenswrapper[4881]: I0121 10:57:44.676087 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 10:57:44 crc kubenswrapper[4881]: I0121 10:57:44.676105 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 10:57:44 crc kubenswrapper[4881]: I0121 10:57:44.676127 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 10:57:44 crc kubenswrapper[4881]: I0121 10:57:44.676144 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:44Z","lastTransitionTime":"2026-01-21T10:57:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 10:57:44 crc kubenswrapper[4881]: I0121 10:57:44.779184 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 10:57:44 crc kubenswrapper[4881]: I0121 10:57:44.779234 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 10:57:44 crc kubenswrapper[4881]: I0121 10:57:44.779244 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 10:57:44 crc kubenswrapper[4881]: I0121 10:57:44.779331 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 10:57:44 crc kubenswrapper[4881]: I0121 10:57:44.779341 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:44Z","lastTransitionTime":"2026-01-21T10:57:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 10:57:44 crc kubenswrapper[4881]: I0121 10:57:44.882804 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 10:57:44 crc kubenswrapper[4881]: I0121 10:57:44.882852 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 10:57:44 crc kubenswrapper[4881]: I0121 10:57:44.882865 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 10:57:44 crc kubenswrapper[4881]: I0121 10:57:44.882884 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 10:57:44 crc kubenswrapper[4881]: I0121 10:57:44.882898 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:44Z","lastTransitionTime":"2026-01-21T10:57:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 10:57:44 crc kubenswrapper[4881]: I0121 10:57:44.986976 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 10:57:44 crc kubenswrapper[4881]: I0121 10:57:44.987033 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 10:57:44 crc kubenswrapper[4881]: I0121 10:57:44.987045 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 10:57:44 crc kubenswrapper[4881]: I0121 10:57:44.987062 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 10:57:44 crc kubenswrapper[4881]: I0121 10:57:44.987072 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:44Z","lastTransitionTime":"2026-01-21T10:57:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 10:57:45 crc kubenswrapper[4881]: I0121 10:57:45.057580 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 10:57:45 crc kubenswrapper[4881]: I0121 10:57:45.057644 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 10:57:45 crc kubenswrapper[4881]: I0121 10:57:45.057662 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 10:57:45 crc kubenswrapper[4881]: I0121 10:57:45.057686 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 10:57:45 crc kubenswrapper[4881]: I0121 10:57:45.057705 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:45Z","lastTransitionTime":"2026-01-21T10:57:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:57:45 crc kubenswrapper[4881]: E0121 10:57:45.083297 4881 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:57:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:57:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:57:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:57:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"26a8a75a-20da-43b0-891d-353287c7b817\\\",\\\"systemUUID\\\":\\\"5fb73d3d-5879-4958-af84-1cb776cbe5bd\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:45Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:45 crc kubenswrapper[4881]: I0121 10:57:45.090192 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:45 crc kubenswrapper[4881]: I0121 10:57:45.090237 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 10:57:45 crc kubenswrapper[4881]: I0121 10:57:45.090254 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:45 crc kubenswrapper[4881]: I0121 10:57:45.090277 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:45 crc kubenswrapper[4881]: I0121 10:57:45.090294 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:45Z","lastTransitionTime":"2026-01-21T10:57:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:45 crc kubenswrapper[4881]: E0121 10:57:45.109510 4881 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:57:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:57:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:57:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:57:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"26a8a75a-20da-43b0-891d-353287c7b817\\\",\\\"systemUUID\\\":\\\"5fb73d3d-5879-4958-af84-1cb776cbe5bd\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:45Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:45 crc kubenswrapper[4881]: I0121 10:57:45.115409 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:45 crc kubenswrapper[4881]: I0121 10:57:45.115469 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 10:57:45 crc kubenswrapper[4881]: I0121 10:57:45.115495 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:45 crc kubenswrapper[4881]: I0121 10:57:45.115524 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:45 crc kubenswrapper[4881]: I0121 10:57:45.115547 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:45Z","lastTransitionTime":"2026-01-21T10:57:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:45 crc kubenswrapper[4881]: E0121 10:57:45.137205 4881 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:57:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:57:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:57:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:57:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"26a8a75a-20da-43b0-891d-353287c7b817\\\",\\\"systemUUID\\\":\\\"5fb73d3d-5879-4958-af84-1cb776cbe5bd\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:45Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:45 crc kubenswrapper[4881]: I0121 10:57:45.143383 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:45 crc kubenswrapper[4881]: I0121 10:57:45.143448 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
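
The failed patch above is a Kubernetes strategic merge patch: only changed fields are sent, and the $setElementOrder/conditions directive pins the ordering of the conditions list. A minimal sketch of that shape follows, marshaled by hand for illustration; kubelet actually builds it through the strategic-merge-patch machinery (k8s.io/apimachinery's strategicpatch package), not like this.

package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// Hand-built stand-in for the inner JSON visible (re-escaped) in the log.
	patch := map[string]any{
		"status": map[string]any{
			"$setElementOrder/conditions": []map[string]string{
				{"type": "MemoryPressure"},
				{"type": "DiskPressure"},
				{"type": "PIDPressure"},
				{"type": "Ready"},
			},
			// Only the fields being changed appear in a merge patch.
			"conditions": []map[string]string{{
				"type":   "Ready",
				"status": "False",
				"reason": "KubeletNotReady",
			}},
		},
	}
	b, err := json.Marshal(patch)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(b))
}
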
event="NodeHasNoDiskPressure" Jan 21 10:57:45 crc kubenswrapper[4881]: I0121 10:57:45.143472 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:45 crc kubenswrapper[4881]: I0121 10:57:45.143501 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:45 crc kubenswrapper[4881]: I0121 10:57:45.143524 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:45Z","lastTransitionTime":"2026-01-21T10:57:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:45 crc kubenswrapper[4881]: E0121 10:57:45.159327 4881 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:57:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:57:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:57:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:57:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"26a8a75a-20da-43b0-891d-353287c7b817\\\",\\\"systemUUID\\\":\\\"5fb73d3d-5879-4958-af84-1cb776cbe5bd\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:45Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:45 crc kubenswrapper[4881]: I0121 10:57:45.164571 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:45 crc kubenswrapper[4881]: I0121 10:57:45.164654 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
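
Every status patch above dies on the same terminal cause: the webhook's serving certificate expired on 2025-08-24T17:21:41Z and the current time is 2026-01-21. The sketch below reproduces the validity-window comparison that crypto/x509 performs during verification; it is a standalone illustration, not kubelet or webhook code, and the NotBefore date is an assumed issue date (the log only shows NotAfter).

package main

import (
	"crypto/x509"
	"fmt"
	"time"
)

// checkValidity mirrors the window test applied during chain verification.
func checkValidity(cert *x509.Certificate, now time.Time) error {
	if now.Before(cert.NotBefore) {
		return fmt.Errorf("certificate is not yet valid: current time %s is before %s",
			now.Format(time.RFC3339), cert.NotBefore.Format(time.RFC3339))
	}
	if now.After(cert.NotAfter) {
		return fmt.Errorf("certificate has expired: current time %s is after %s",
			now.Format(time.RFC3339), cert.NotAfter.Format(time.RFC3339))
	}
	return nil
}

func main() {
	cert := &x509.Certificate{
		NotBefore: time.Date(2024, 8, 24, 17, 21, 41, 0, time.UTC), // assumed
		NotAfter:  time.Date(2025, 8, 24, 17, 21, 41, 0, time.UTC), // from the log
	}
	now := time.Date(2026, 1, 21, 10, 57, 45, 0, time.UTC)
	fmt.Println(checkValidity(cert, now)) // certificate has expired: ...
}
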
event="NodeHasNoDiskPressure" Jan 21 10:57:45 crc kubenswrapper[4881]: I0121 10:57:45.164672 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:45 crc kubenswrapper[4881]: I0121 10:57:45.164695 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:45 crc kubenswrapper[4881]: I0121 10:57:45.164711 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:45Z","lastTransitionTime":"2026-01-21T10:57:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:45 crc kubenswrapper[4881]: I0121 10:57:45.172943 4881 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 14:17:40.62071874 +0000 UTC Jan 21 10:57:45 crc kubenswrapper[4881]: E0121 10:57:45.179169 4881 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:57:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:57:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:57:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:57:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"26a8a75a-20da-43b0-891d-353287c7b817\\\",\\\"systemUUID\\\":\\\"5fb73d3d-5879-4958-af84-1cb776cbe5bd\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:45Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:45 crc kubenswrapper[4881]: E0121 10:57:45.179388 4881 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 21 10:57:45 crc kubenswrapper[4881]: I0121 10:57:45.182102 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
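
"update node status exceeds retry count" means kubelet burned through its small fixed retry budget for the status PATCH, with each attempt logging "Error updating node status, will retry". A sketch of that loop, assuming the upstream constant nodeStatusUpdateRetry = 5; the function shape and names here are illustrative, not the exact kubelet source.

package main

import (
	"errors"
	"fmt"
)

const nodeStatusUpdateRetry = 5 // assumed; matches upstream kubelet's constant

// updateNodeStatus retries the patch a fixed number of times, then gives up
// with the exact error seen in the log.
func updateNodeStatus(patch func() error) error {
	for i := 0; i < nodeStatusUpdateRetry; i++ {
		err := patch()
		if err == nil {
			return nil
		}
		fmt.Printf("Error updating node status, will retry: %v\n", err)
	}
	return errors.New("update node status exceeds retry count")
}

func main() {
	webhookErr := errors.New(`failed calling webhook "node.network-node-identity.openshift.io": certificate has expired`)
	fmt.Println(updateNodeStatus(func() error { return webhookErr }))
}
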
event="NodeHasSufficientMemory" Jan 21 10:57:45 crc kubenswrapper[4881]: I0121 10:57:45.182165 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:45 crc kubenswrapper[4881]: I0121 10:57:45.182188 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:45 crc kubenswrapper[4881]: I0121 10:57:45.182219 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:45 crc kubenswrapper[4881]: I0121 10:57:45.182243 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:45Z","lastTransitionTime":"2026-01-21T10:57:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:45 crc kubenswrapper[4881]: I0121 10:57:45.285038 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:45 crc kubenswrapper[4881]: I0121 10:57:45.285096 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:45 crc kubenswrapper[4881]: I0121 10:57:45.285113 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:45 crc kubenswrapper[4881]: I0121 10:57:45.285136 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:45 crc kubenswrapper[4881]: I0121 10:57:45.285155 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:45Z","lastTransitionTime":"2026-01-21T10:57:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:45 crc kubenswrapper[4881]: I0121 10:57:45.310496 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 10:57:45 crc kubenswrapper[4881]: I0121 10:57:45.310626 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dtv4t" Jan 21 10:57:45 crc kubenswrapper[4881]: E0121 10:57:45.310823 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 10:57:45 crc kubenswrapper[4881]: E0121 10:57:45.311045 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-dtv4t" podUID="3552adbd-011f-4552-9e69-233b92c554c8" Jan 21 10:57:45 crc kubenswrapper[4881]: I0121 10:57:45.388620 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:45 crc kubenswrapper[4881]: I0121 10:57:45.388669 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:45 crc kubenswrapper[4881]: I0121 10:57:45.388680 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:45 crc kubenswrapper[4881]: I0121 10:57:45.388699 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:45 crc kubenswrapper[4881]: I0121 10:57:45.388711 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:45Z","lastTransitionTime":"2026-01-21T10:57:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:45 crc kubenswrapper[4881]: I0121 10:57:45.492086 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:45 crc kubenswrapper[4881]: I0121 10:57:45.492166 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:45 crc kubenswrapper[4881]: I0121 10:57:45.492185 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:45 crc kubenswrapper[4881]: I0121 10:57:45.492213 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:45 crc kubenswrapper[4881]: I0121 10:57:45.492231 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:45Z","lastTransitionTime":"2026-01-21T10:57:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:57:45 crc kubenswrapper[4881]: I0121 10:57:45.595541 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:45 crc kubenswrapper[4881]: I0121 10:57:45.595620 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:45 crc kubenswrapper[4881]: I0121 10:57:45.595634 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:45 crc kubenswrapper[4881]: I0121 10:57:45.595656 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:45 crc kubenswrapper[4881]: I0121 10:57:45.595672 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:45Z","lastTransitionTime":"2026-01-21T10:57:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:45 crc kubenswrapper[4881]: I0121 10:57:45.698932 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:45 crc kubenswrapper[4881]: I0121 10:57:45.698979 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:45 crc kubenswrapper[4881]: I0121 10:57:45.698990 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:45 crc kubenswrapper[4881]: I0121 10:57:45.699011 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:45 crc kubenswrapper[4881]: I0121 10:57:45.699025 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:45Z","lastTransitionTime":"2026-01-21T10:57:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:45 crc kubenswrapper[4881]: I0121 10:57:45.802328 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:45 crc kubenswrapper[4881]: I0121 10:57:45.802383 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:45 crc kubenswrapper[4881]: I0121 10:57:45.802393 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:45 crc kubenswrapper[4881]: I0121 10:57:45.802414 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:45 crc kubenswrapper[4881]: I0121 10:57:45.802427 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:45Z","lastTransitionTime":"2026-01-21T10:57:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:57:45 crc kubenswrapper[4881]: I0121 10:57:45.905133 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:45 crc kubenswrapper[4881]: I0121 10:57:45.905194 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:45 crc kubenswrapper[4881]: I0121 10:57:45.905209 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:45 crc kubenswrapper[4881]: I0121 10:57:45.905231 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:45 crc kubenswrapper[4881]: I0121 10:57:45.905244 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:45Z","lastTransitionTime":"2026-01-21T10:57:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:46 crc kubenswrapper[4881]: I0121 10:57:46.008272 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:46 crc kubenswrapper[4881]: I0121 10:57:46.008311 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:46 crc kubenswrapper[4881]: I0121 10:57:46.008319 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:46 crc kubenswrapper[4881]: I0121 10:57:46.008335 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:46 crc kubenswrapper[4881]: I0121 10:57:46.008346 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:46Z","lastTransitionTime":"2026-01-21T10:57:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:46 crc kubenswrapper[4881]: I0121 10:57:46.111558 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:46 crc kubenswrapper[4881]: I0121 10:57:46.111634 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:46 crc kubenswrapper[4881]: I0121 10:57:46.111648 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:46 crc kubenswrapper[4881]: I0121 10:57:46.111671 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:46 crc kubenswrapper[4881]: I0121 10:57:46.111701 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:46Z","lastTransitionTime":"2026-01-21T10:57:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:57:46 crc kubenswrapper[4881]: I0121 10:57:46.174180 4881 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 04:08:34.821568812 +0000 UTC Jan 21 10:57:46 crc kubenswrapper[4881]: I0121 10:57:46.214891 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:46 crc kubenswrapper[4881]: I0121 10:57:46.214948 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:46 crc kubenswrapper[4881]: I0121 10:57:46.214964 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:46 crc kubenswrapper[4881]: I0121 10:57:46.214982 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:46 crc kubenswrapper[4881]: I0121 10:57:46.214994 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:46Z","lastTransitionTime":"2026-01-21T10:57:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:46 crc kubenswrapper[4881]: I0121 10:57:46.241674 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" Jan 21 10:57:46 crc kubenswrapper[4881]: I0121 10:57:46.243033 4881 scope.go:117] "RemoveContainer" containerID="5897125d6a1004cb4f0527359e8fc0328bff6bcc5ac563fdc3d85b094414c563" Jan 21 10:57:46 crc kubenswrapper[4881]: E0121 10:57:46.243298 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-bx64f_openshift-ovn-kubernetes(e8bb6d97-b3b8-4e31-b704-8e565385ab26)\"" pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" podUID="e8bb6d97-b3b8-4e31-b704-8e565385ab26" Jan 21 10:57:46 crc kubenswrapper[4881]: I0121 10:57:46.264651 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:46Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:46 crc kubenswrapper[4881]: I0121 10:57:46.294011 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-v4wxp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c14980d7-1b3b-463b-8f57-f1e1afbd258c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd18bd57e9f0f878f56164dee92c18a4fff62c83f518a96d7db735dcd488e052\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\
\\":[{\\\"containerID\\\":\\\"cri-o://a7dffa6cfe62b953df5f9734726e4b93967d3fce7fa5743b1adef693e4f75e48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7dffa6cfe62b953df5f9734726e4b93967d3fce7fa5743b1adef693e4f75e48\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ceabfac4703edf7fe55ec5dd41e3fc1736424533c54ae12adcdb1f077e1f756\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ceabfac4703edf7fe55ec5dd41e3fc1736424533c54ae12adcdb1f077e1f756\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a2fff9245b8148b515dd4af52db2ec776cca710b28074e8955162220448d248\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a2fff9245b8148b515dd4af52db2ec776cca710b28074e8955162220448d248\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:24Z
\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f6276883835d2a1fbab11df2fd15684b8ee850b6f264f3f194f6de68cc27cf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f6276883835d2a1fbab11df2fd15684b8ee850b6f264f3f194f6de68cc27cf9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c51b231b861d87536aaa44cd0e1018fc15464809b5a3bf30c1d2a25dd9a12a8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c51b231b861d87536aaa44cd0e1018fc15464809b5a3bf30c1d2a25dd9a12a8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13503428fec90dad2c9ce1086fe7c71c7910f145a6e7a8d546791e11002f36f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13503428fec90dad2c9ce1086fe7c71c7910f145a6e7a8d546791e11002f36f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-v4wxp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:46Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:46 crc kubenswrapper[4881]: I0121 10:57:46.310048 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 10:57:46 crc kubenswrapper[4881]: I0121 10:57:46.310118 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3687b313-1df2-4274-80db-8c758b51bf2d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5b27143ea6007376f5989ccc3a11947ede80be1b5fac1b738ffbf27fa05c6d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-
hml99\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7172ca109a870f3c1c8d3af0117700b7b23e7a65c3e841f3a4ef3f445e85270d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hml99\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fb4fr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:46Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:46 crc kubenswrapper[4881]: E0121 10:57:46.310235 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 10:57:46 crc kubenswrapper[4881]: I0121 10:57:46.310321 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 10:57:46 crc kubenswrapper[4881]: E0121 10:57:46.310368 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 10:57:46 crc kubenswrapper[4881]: I0121 10:57:46.318853 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:46 crc kubenswrapper[4881]: I0121 10:57:46.318896 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:46 crc kubenswrapper[4881]: I0121 10:57:46.318907 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:46 crc kubenswrapper[4881]: I0121 10:57:46.318925 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:46 crc kubenswrapper[4881]: I0121 10:57:46.318937 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:46Z","lastTransitionTime":"2026-01-21T10:57:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:46 crc kubenswrapper[4881]: I0121 10:57:46.347582 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8bb6d97-b3b8-4e31-b704-8e565385ab26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e45d7cde8c26fc00bb1b5ad5cc88c41aff02acf0860856e86f3c3bfdc01e7045\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://599bfaf6bfc412f779f4732cf9f4273382c1f0af20576581264ea4fc63b2ec38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9454db8c9af3187ee06e017ac192668036b1f565305a37de61c3cffe4d94e7db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7fa3136ee876ebb14cd5164f9176c91c60cc04fd7882a3ad0957ce3a36184d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2c1d1496a7f7e5fad6a3d4ade4104e8e0c40fd0ca4722ca5e1ca1fbf0ebf3cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d29c170fdf57f1f4c678e2902443d9ef467ac3157370e997fd5596b4eecfb54e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5897125d6a1004cb4f0527359e8fc0328bff6bcc
5ac563fdc3d85b094414c563\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5897125d6a1004cb4f0527359e8fc0328bff6bcc5ac563fdc3d85b094414c563\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T10:57:36Z\\\",\\\"message\\\":\\\"-machine-config-operator/machine-config-operator]} name:Service_openshift-machine-config-operator/machine-config-operator_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.183:9001:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {5b85277d-d9b7-4a68-8e4e-2b80594d9347}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0121 10:57:35.987756 6363 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0121 10:57:35.987768 6363 handler.go:208] Removed *v1.Pod event handler 3\\\\nF0121 10:57:35.988839 6363 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:35Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-bx64f_openshift-ovn-kubernetes(e8bb6d97-b3b8-4e31-b704-8e565385ab26)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47c176d51b5558e85897ac8c72c02bb6f152a972d8e941aeb114333f777e22ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db879e2b1a43a0ae350c4777ee6355bf5b97b1d5977e909ee0968c77852519dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db879e2b1a43a0ae350c4777ee6355bf5b97b1d5977e909ee0968c77852519dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bx64f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:46Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:46 crc kubenswrapper[4881]: I0121 10:57:46.362399 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14b423e6f144bf9e1e30c3f0a074ca9edd3c1e2cfa1674ea5b047a6a8fb01d92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c179ea565cc19d51f2becaf302623c94acbd70749ace24c754af51e3f0101033\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17
b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:46Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:46 crc kubenswrapper[4881]: I0121 10:57:46.378227 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f8092488edbd3b7f307819f2cad06e6d1a6721d8c2fd8fa05a8f7e652949ca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:46Z is after 2025-08-24T17:21:41Z" Jan 21 
10:57:46 crc kubenswrapper[4881]: I0121 10:57:46.390910 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8sptw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f19f480e-331f-42f5-a3b6-fd0c6847b157\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21b25675a85765cfd4abe361687eedfdf5dfffc1c9b83069f14986821acb12d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hjr7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8sptw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:46Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:46 crc kubenswrapper[4881]: I0121 10:57:46.407490 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5da31bf1-60a6-4d73-a425-97fe36cd40ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://945977e9e86c32b08a633de6b962033c71982d3a129ac3e5b2705e19b50d6534\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f32dd767af23c59f021d583a23309b2180db86ae71b4ca4a5f436ac1c77d6e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c47dfa90d18e3e30053b0250c7b986bc877ab6bc3a553060f65468416c8105d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e507b4c3c536bdc63360b1386748657584f739e09973ec33c998ac267ca2766\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://676e764186e37083591f2c779b05dcbf5bc065bde85efade1c25a27a9bf74570\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 10:57:08.940903 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 10:57:08.942374 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2558080694/tls.crt::/tmp/serving-cert-2558080694/tls.key\\\\\\\"\\\\nI0121 10:57:14.590178 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 10:57:14.594387 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 10:57:14.594447 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 10:57:14.594564 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 10:57:14.594575 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 10:57:14.615981 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 10:57:14.616035 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 10:57:14.616045 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 10:57:14.616060 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 10:57:14.616065 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 10:57:14.616071 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 10:57:14.616077 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 10:57:14.616503 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 10:57:14.623960 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T10:56:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b00c4c20b4212a307993f164d3f2d23b9b8bd823111bef7a385ae8811e147f3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://164282bb15c33a383e2fc11a6617ca4008eec1b59e1c5577f7b34b0fcbddc99f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://164282bb15c33a383e2fc11a6617ca4008eec1b59e1c5577f7b34b0fcbddc99f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:56:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:46Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:46 crc kubenswrapper[4881]: I0121 10:57:46.421833 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:46Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:46 crc kubenswrapper[4881]: I0121 10:57:46.423020 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:46 crc kubenswrapper[4881]: I0121 10:57:46.423069 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:46 crc kubenswrapper[4881]: I0121 10:57:46.423084 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:46 crc kubenswrapper[4881]: I0121 10:57:46.423105 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:46 crc kubenswrapper[4881]: I0121 10:57:46.423119 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:46Z","lastTransitionTime":"2026-01-21T10:57:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:57:46 crc kubenswrapper[4881]: I0121 10:57:46.437538 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ddaa3aae50a05142125742b3cc13e65433fcadbef5b1e273232cffb660cc700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:46Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:46 crc kubenswrapper[4881]: I0121 10:57:46.451487 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tjwf8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf4f6fc0-ed4c-47b7-b2bc-8033980781a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9164acd9a91836c25b02986f204dd0302964f6bff6fe077971d37eebd4b560ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57d55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:22Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tjwf8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:46Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:46 crc kubenswrapper[4881]: I0121 10:57:46.466217 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"adf42b80-04ba-4164-b847-3b7f8b94816b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5f503293f457017fbc87cd1525eaaf06d41044a37f87127e1398bd50a228ea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d93b0b0bb02293efe7c01a9dbc64ff302a7b9fab07a2fe9bef5d0c2b5e8ac30e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05a4a29ae6a2b0d4acf843ae40de1e262c2149dac28e15697dbcd6d237235f29\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfff7b0c4eae65a4ed6e28b94feef5a5b0c4fb224f9cde2cfd9072727968e754\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:56:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:46Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:46 crc kubenswrapper[4881]: I0121 10:57:46.481208 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:46Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:46 crc kubenswrapper[4881]: I0121 10:57:46.499599 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fs42r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09da9e14-f6d5-4346-a4a0-c17711e3b603\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://821c7c539a796f03cdff04f5f89e6adab4d088c53f5c1ad85851862e5409e7eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountP
ath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kt6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fs42r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:46Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:46 crc kubenswrapper[4881]: I0121 10:57:46.512542 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qgrth" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d379505c-c658-4dd5-b841-40c8443012c6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51a2ec789636052b12e0fdb4e647d7e4f92d1e4b7436933f1529561ffc2021d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57krk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri
-o://d634cee9f543d3322f8cdc8bc62252096e789383c55d5d448cc53ab990ac9b52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57krk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qgrth\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:46Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:46 crc kubenswrapper[4881]: I0121 10:57:46.524426 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-dtv4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3552adbd-011f-4552-9e69-233b92c554c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqlps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqlps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-dtv4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:46Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:46 crc kubenswrapper[4881]: I0121 10:57:46.526568 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:46 crc kubenswrapper[4881]: I0121 10:57:46.526625 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:46 crc kubenswrapper[4881]: I0121 10:57:46.526638 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:46 crc kubenswrapper[4881]: I0121 10:57:46.526659 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:46 crc kubenswrapper[4881]: I0121 10:57:46.526669 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:46Z","lastTransitionTime":"2026-01-21T10:57:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:57:46 crc kubenswrapper[4881]: I0121 10:57:46.630119 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:46 crc kubenswrapper[4881]: I0121 10:57:46.630159 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:46 crc kubenswrapper[4881]: I0121 10:57:46.630170 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:46 crc kubenswrapper[4881]: I0121 10:57:46.630186 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:46 crc kubenswrapper[4881]: I0121 10:57:46.630196 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:46Z","lastTransitionTime":"2026-01-21T10:57:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:46 crc kubenswrapper[4881]: I0121 10:57:46.732596 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:46 crc kubenswrapper[4881]: I0121 10:57:46.732643 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:46 crc kubenswrapper[4881]: I0121 10:57:46.732683 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:46 crc kubenswrapper[4881]: I0121 10:57:46.732702 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:46 crc kubenswrapper[4881]: I0121 10:57:46.732712 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:46Z","lastTransitionTime":"2026-01-21T10:57:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:46 crc kubenswrapper[4881]: I0121 10:57:46.836324 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:46 crc kubenswrapper[4881]: I0121 10:57:46.836407 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:46 crc kubenswrapper[4881]: I0121 10:57:46.836427 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:46 crc kubenswrapper[4881]: I0121 10:57:46.836455 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:46 crc kubenswrapper[4881]: I0121 10:57:46.836472 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:46Z","lastTransitionTime":"2026-01-21T10:57:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:57:46 crc kubenswrapper[4881]: I0121 10:57:46.939678 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:46 crc kubenswrapper[4881]: I0121 10:57:46.939728 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:46 crc kubenswrapper[4881]: I0121 10:57:46.939745 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:46 crc kubenswrapper[4881]: I0121 10:57:46.939770 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:46 crc kubenswrapper[4881]: I0121 10:57:46.939829 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:46Z","lastTransitionTime":"2026-01-21T10:57:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:46 crc kubenswrapper[4881]: I0121 10:57:46.991683 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 10:57:46 crc kubenswrapper[4881]: I0121 10:57:46.991859 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 10:57:46 crc kubenswrapper[4881]: E0121 10:57:46.992012 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:58:18.9919588 +0000 UTC m=+86.251915269 (durationBeforeRetry 32s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:57:46 crc kubenswrapper[4881]: E0121 10:57:46.992143 4881 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 10:57:46 crc kubenswrapper[4881]: I0121 10:57:46.992221 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 10:57:46 crc kubenswrapper[4881]: E0121 10:57:46.992254 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 10:58:18.992219356 +0000 UTC m=+86.252176015 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 10:57:46 crc kubenswrapper[4881]: E0121 10:57:46.992364 4881 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 10:57:46 crc kubenswrapper[4881]: E0121 10:57:46.992463 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 10:58:18.992439562 +0000 UTC m=+86.252396071 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 10:57:47 crc kubenswrapper[4881]: I0121 10:57:47.042074 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:47 crc kubenswrapper[4881]: I0121 10:57:47.042140 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:47 crc kubenswrapper[4881]: I0121 10:57:47.042163 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:47 crc kubenswrapper[4881]: I0121 10:57:47.042192 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:47 crc kubenswrapper[4881]: I0121 10:57:47.042213 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:47Z","lastTransitionTime":"2026-01-21T10:57:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:47 crc kubenswrapper[4881]: I0121 10:57:47.093708 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 10:57:47 crc kubenswrapper[4881]: E0121 10:57:47.094032 4881 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 10:57:47 crc kubenswrapper[4881]: E0121 10:57:47.094376 4881 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 10:57:47 crc kubenswrapper[4881]: E0121 10:57:47.094412 4881 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 10:57:47 crc kubenswrapper[4881]: E0121 10:57:47.094491 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-21 10:58:19.094466964 +0000 UTC m=+86.354423463 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 10:57:47 crc kubenswrapper[4881]: E0121 10:57:47.094726 4881 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 10:57:47 crc kubenswrapper[4881]: E0121 10:57:47.094814 4881 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 10:57:47 crc kubenswrapper[4881]: E0121 10:57:47.094835 4881 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 10:57:47 crc kubenswrapper[4881]: E0121 10:57:47.094922 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-21 10:58:19.094894874 +0000 UTC m=+86.354851343 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 10:57:47 crc kubenswrapper[4881]: I0121 10:57:47.095029 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 10:57:47 crc kubenswrapper[4881]: I0121 10:57:47.146069 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:47 crc kubenswrapper[4881]: I0121 10:57:47.146140 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:47 crc kubenswrapper[4881]: I0121 10:57:47.146165 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:47 crc kubenswrapper[4881]: I0121 10:57:47.146192 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:47 crc kubenswrapper[4881]: I0121 10:57:47.146212 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:47Z","lastTransitionTime":"2026-01-21T10:57:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:47 crc kubenswrapper[4881]: I0121 10:57:47.174451 4881 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 22:59:45.550568886 +0000 UTC Jan 21 10:57:47 crc kubenswrapper[4881]: I0121 10:57:47.250188 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:47 crc kubenswrapper[4881]: I0121 10:57:47.250273 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:47 crc kubenswrapper[4881]: I0121 10:57:47.250305 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:47 crc kubenswrapper[4881]: I0121 10:57:47.250340 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:47 crc kubenswrapper[4881]: I0121 10:57:47.250364 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:47Z","lastTransitionTime":"2026-01-21T10:57:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:47 crc kubenswrapper[4881]: I0121 10:57:47.310617 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 10:57:47 crc kubenswrapper[4881]: I0121 10:57:47.310685 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dtv4t" Jan 21 10:57:47 crc kubenswrapper[4881]: E0121 10:57:47.310889 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 10:57:47 crc kubenswrapper[4881]: E0121 10:57:47.311119 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-dtv4t" podUID="3552adbd-011f-4552-9e69-233b92c554c8" Jan 21 10:57:47 crc kubenswrapper[4881]: I0121 10:57:47.354155 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:47 crc kubenswrapper[4881]: I0121 10:57:47.354288 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:47 crc kubenswrapper[4881]: I0121 10:57:47.354316 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:47 crc kubenswrapper[4881]: I0121 10:57:47.354348 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:47 crc kubenswrapper[4881]: I0121 10:57:47.354371 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:47Z","lastTransitionTime":"2026-01-21T10:57:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:47 crc kubenswrapper[4881]: I0121 10:57:47.444210 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 10:57:47 crc kubenswrapper[4881]: I0121 10:57:47.457695 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:47 crc kubenswrapper[4881]: I0121 10:57:47.457771 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:47 crc kubenswrapper[4881]: I0121 10:57:47.457829 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:47 crc kubenswrapper[4881]: I0121 10:57:47.457863 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:47 crc kubenswrapper[4881]: I0121 10:57:47.457886 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:47Z","lastTransitionTime":"2026-01-21T10:57:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:57:47 crc kubenswrapper[4881]: I0121 10:57:47.476706 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8bb6d97-b3b8-4e31-b704-8e565385ab26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e45d7cde8c26fc00bb1b5ad5cc88c41aff02acf0860856e86f3c3bfdc01e7045\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://599bfaf6bfc412f779f4732cf9f4273382c1f0af20576581264ea4fc63b2ec38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://9454db8c9af3187ee06e017ac192668036b1f565305a37de61c3cffe4d94e7db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7fa3136ee876ebb14cd5164f9176c91c60cc04fd7882a3ad0957ce3a36184d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2c1d1496a7f7e5fad6a3d4ade4104e8e0c40fd0ca4722ca5e1ca1fbf0ebf3cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d29c170fdf57f1f4c678e2902443d9ef467ac3157370e997fd5596b4eecfb54e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5897125d6a1004cb4f0527359e8fc0328bff6bcc5ac563fdc3d85b094414c563\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5897125d6a1004cb4f0527359e8fc0328bff6bcc5ac563fdc3d85b094414c563\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T10:57:36Z\\\",\\\"message\\\":\\\"-machine-config-operator/machine-config-operator]} name:Service_openshift-machine-config-operator/machine-config-operator_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.183:9001:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {5b85277d-d9b7-4a68-8e4e-2b80594d9347}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0121 10:57:35.987756 6363 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0121 10:57:35.987768 6363 handler.go:208] Removed *v1.Pod event handler 3\\\\nF0121 10:57:35.988839 6363 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:35Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting 
failed container=ovnkube-controller pod=ovnkube-node-bx64f_openshift-ovn-kubernetes(e8bb6d97-b3b8-4e31-b704-8e565385ab26)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47c176d51b5558e85897ac8c72c02bb6f152a972d8e941aeb114333f777e22ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db879e2b1a43a0ae350c4777ee6355bf5b97b1d5977e909ee0968c77852519dd\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db879e2b1a43a0ae350c4777ee6355bf5b97b1d5977e909ee0968c77852519dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bx64f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:47Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:47 crc kubenswrapper[4881]: I0121 10:57:47.498256 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14b423e6f144bf9e1e30c3f0a074ca9edd3c1e2cfa1674ea5b047a6a8fb01d92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c179ea565cc19d51f2becaf302623c94acbd70749ace24c754af51e3f0101033\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47e
f0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:47Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:47 crc kubenswrapper[4881]: I0121 10:57:47.519958 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:47Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:47 crc kubenswrapper[4881]: I0121 10:57:47.541890 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-v4wxp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c14980d7-1b3b-463b-8f57-f1e1afbd258c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd18bd57e9f0f878f56164dee92c18a4fff62c83f518a96d7db735dcd488e052\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7dffa6cfe62b953df5f9734726e4b93967d3fce7fa5743b1adef693e4f75e48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"s
tarted\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7dffa6cfe62b953df5f9734726e4b93967d3fce7fa5743b1adef693e4f75e48\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ceabfac4703edf7fe55ec5dd41e3fc1736424533c54ae12adcdb1f077e1f756\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ceabfac4703edf7fe55ec5dd41e3fc1736424533c54ae12adcdb1f077e1f756\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a2fff9245b8148b515dd4af52db2ec776cca710b28074e8955162220448d248\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a2fff9245b8148b515dd4af52db2ec776cca710b28074e8955162220448d248\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"
}]},{\\\"containerID\\\":\\\"cri-o://3f6276883835d2a1fbab11df2fd15684b8ee850b6f264f3f194f6de68cc27cf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f6276883835d2a1fbab11df2fd15684b8ee850b6f264f3f194f6de68cc27cf9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c51b231b861d87536aaa44cd0e1018fc15464809b5a3bf30c1d2a25dd9a12a8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c51b231b861d87536aaa44cd0e1018fc15464809b5a3bf30c1d2a25dd9a12a8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13503428fec90dad2c9ce1086fe7c71c7910f145a6e7a8d546791e11002f36f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13503428fec90dad2c9ce1086fe7c71c7910f145a6e7a8d546791e11002f36f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":
\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-v4wxp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:47Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:47 crc kubenswrapper[4881]: I0121 10:57:47.559297 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3687b313-1df2-4274-80db-8c758b51bf2d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5b27143ea6007376f5989ccc3a11947ede80be1b5fac1b738ffbf27fa05c6d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hml99\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7172ca109a870f3c1c8d3af0117700b7b23e7a65c3e841f3a4ef3f445e85270d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/r
ootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hml99\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fb4fr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:47Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:47 crc kubenswrapper[4881]: I0121 10:57:47.560312 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:47 crc kubenswrapper[4881]: I0121 10:57:47.560387 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:47 crc kubenswrapper[4881]: I0121 10:57:47.560411 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:47 crc kubenswrapper[4881]: I0121 10:57:47.560440 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:47 crc kubenswrapper[4881]: I0121 10:57:47.560477 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:47Z","lastTransitionTime":"2026-01-21T10:57:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:57:47 crc kubenswrapper[4881]: I0121 10:57:47.578299 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f8092488edbd3b7f307819f2cad06e6d1a6721d8c2fd8fa05a8f7e652949ca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:47Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:47 crc kubenswrapper[4881]: I0121 10:57:47.600647 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8sptw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f19f480e-331f-42f5-a3b6-fd0c6847b157\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21b25675a85765cfd4abe361687eedfdf5dfffc1c9b83069f14986821acb12d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hjr7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8sptw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:47Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:47 crc kubenswrapper[4881]: I0121 10:57:47.615827 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tjwf8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf4f6fc0-ed4c-47b7-b2bc-8033980781a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9164acd9a91836c25b02986f204dd0302964f6bff6fe077971d37eebd4b560ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57d55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:22Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tjwf8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:47Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:47 crc kubenswrapper[4881]: I0121 10:57:47.633421 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5da31bf1-60a6-4d73-a425-97fe36cd40ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://945977e9e86c32b08a633de6b962033c71982d3a129ac3e5b2705e19b50d6534\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f32dd767af23c59f021d583a23309b2180db86ae71b4ca4a5f436ac1c77d6e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c47dfa90d18e3e30053b0250c7b986bc877ab6bc3a553060f65468416c8105d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e507b4c3c536bdc63360b1386748657584f739e09973ec33c998ac267ca2766\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://676e764186e37083591f2c779b05dcbf5bc065bde85efade1c25a27a9bf74570\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 10:57:08.940903 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 10:57:08.942374 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2558080694/tls.crt::/tmp/serving-cert-2558080694/tls.key\\\\\\\"\\\\nI0121 10:57:14.590178 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 10:57:14.594387 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 10:57:14.594447 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 10:57:14.594564 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 10:57:14.594575 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 10:57:14.615981 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 10:57:14.616035 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 10:57:14.616045 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 10:57:14.616060 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 10:57:14.616065 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 10:57:14.616071 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 10:57:14.616077 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 10:57:14.616503 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 10:57:14.623960 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T10:56:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b00c4c20b4212a307993f164d3f2d23b9b8bd823111bef7a385ae8811e147f3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://164282bb15c33a383e2fc11a6617ca4008eec1b59e1c5577f7b34b0fcbddc99f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://164282bb15c33a383e2fc11a6617ca4008eec1b59e1c5577f7b34b0fcbddc99f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:56:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:47Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:47 crc kubenswrapper[4881]: I0121 10:57:47.654709 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:47Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:47 crc kubenswrapper[4881]: I0121 10:57:47.663603 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:47 crc kubenswrapper[4881]: I0121 10:57:47.663655 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:47 crc kubenswrapper[4881]: I0121 10:57:47.663666 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:47 crc kubenswrapper[4881]: I0121 10:57:47.663686 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:47 crc kubenswrapper[4881]: I0121 10:57:47.663697 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:47Z","lastTransitionTime":"2026-01-21T10:57:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:57:47 crc kubenswrapper[4881]: I0121 10:57:47.674346 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ddaa3aae50a05142125742b3cc13e65433fcadbef5b1e273232cffb660cc700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:47Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:47 crc kubenswrapper[4881]: I0121 10:57:47.691918 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qgrth" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d379505c-c658-4dd5-b841-40c8443012c6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51a2ec789636052b12e0fdb4e647d7e4f92d1e4b7436933f1529561ffc2021d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57krk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d634cee9f543d3322f8cdc8bc62252096e789383c55d5d448cc53ab990ac9b52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57krk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qgrth\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:47Z is after 2025-08-24T17:21:41Z" Jan 21 
10:57:47 crc kubenswrapper[4881]: I0121 10:57:47.708751 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-dtv4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3552adbd-011f-4552-9e69-233b92c554c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqlps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqlps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-dtv4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:47Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:47 crc kubenswrapper[4881]: I0121 10:57:47.726530 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"adf42b80-04ba-4164-b847-3b7f8b94816b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5f503293f457017fbc87cd1525eaaf06d41044a37f87127e1398bd50a228ea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d93b0b0bb02293efe7c01a9dbc64ff302a7b9fab07a2fe9bef5d0c2b5e8ac30e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05a4a29ae6a2b0d4acf843ae40de1e262c2149dac28e15697dbcd6d237235f29\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfff7b0c4eae65a4ed6e28b94feef5a5b0c4fb224f9cde2cfd9072727968e754\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:56:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:47Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:47 crc kubenswrapper[4881]: I0121 10:57:47.741389 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:47Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:47 crc kubenswrapper[4881]: I0121 10:57:47.762708 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fs42r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09da9e14-f6d5-4346-a4a0-c17711e3b603\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://821c7c539a796f03cdff04f5f89e6adab4d088c53f5c1ad85851862e5409e7eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountP
ath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kt6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fs42r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:47Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:47 crc kubenswrapper[4881]: I0121 10:57:47.767351 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:47 crc kubenswrapper[4881]: I0121 10:57:47.767388 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:47 crc kubenswrapper[4881]: I0121 10:57:47.767427 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:47 crc kubenswrapper[4881]: I0121 10:57:47.767449 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:47 crc kubenswrapper[4881]: I0121 10:57:47.767465 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:47Z","lastTransitionTime":"2026-01-21T10:57:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:57:47 crc kubenswrapper[4881]: I0121 10:57:47.870387 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:47 crc kubenswrapper[4881]: I0121 10:57:47.870457 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:47 crc kubenswrapper[4881]: I0121 10:57:47.870469 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:47 crc kubenswrapper[4881]: I0121 10:57:47.870492 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:47 crc kubenswrapper[4881]: I0121 10:57:47.870507 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:47Z","lastTransitionTime":"2026-01-21T10:57:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:47 crc kubenswrapper[4881]: I0121 10:57:47.974040 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:47 crc kubenswrapper[4881]: I0121 10:57:47.974109 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:47 crc kubenswrapper[4881]: I0121 10:57:47.974129 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:47 crc kubenswrapper[4881]: I0121 10:57:47.974153 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:47 crc kubenswrapper[4881]: I0121 10:57:47.974170 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:47Z","lastTransitionTime":"2026-01-21T10:57:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:48 crc kubenswrapper[4881]: I0121 10:57:48.004833 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3552adbd-011f-4552-9e69-233b92c554c8-metrics-certs\") pod \"network-metrics-daemon-dtv4t\" (UID: \"3552adbd-011f-4552-9e69-233b92c554c8\") " pod="openshift-multus/network-metrics-daemon-dtv4t" Jan 21 10:57:48 crc kubenswrapper[4881]: E0121 10:57:48.005098 4881 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 10:57:48 crc kubenswrapper[4881]: E0121 10:57:48.005254 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3552adbd-011f-4552-9e69-233b92c554c8-metrics-certs podName:3552adbd-011f-4552-9e69-233b92c554c8 nodeName:}" failed. No retries permitted until 2026-01-21 10:58:04.005208952 +0000 UTC m=+71.265165461 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/3552adbd-011f-4552-9e69-233b92c554c8-metrics-certs") pod "network-metrics-daemon-dtv4t" (UID: "3552adbd-011f-4552-9e69-233b92c554c8") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 10:57:48 crc kubenswrapper[4881]: I0121 10:57:48.077287 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:48 crc kubenswrapper[4881]: I0121 10:57:48.077330 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:48 crc kubenswrapper[4881]: I0121 10:57:48.077339 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:48 crc kubenswrapper[4881]: I0121 10:57:48.077354 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:48 crc kubenswrapper[4881]: I0121 10:57:48.077363 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:48Z","lastTransitionTime":"2026-01-21T10:57:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:48 crc kubenswrapper[4881]: I0121 10:57:48.175510 4881 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 07:33:59.246286538 +0000 UTC Jan 21 10:57:48 crc kubenswrapper[4881]: I0121 10:57:48.180264 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:48 crc kubenswrapper[4881]: I0121 10:57:48.180303 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:48 crc kubenswrapper[4881]: I0121 10:57:48.180317 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:48 crc kubenswrapper[4881]: I0121 10:57:48.180335 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:48 crc kubenswrapper[4881]: I0121 10:57:48.180347 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:48Z","lastTransitionTime":"2026-01-21T10:57:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:57:48 crc kubenswrapper[4881]: I0121 10:57:48.283349 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:48 crc kubenswrapper[4881]: I0121 10:57:48.283412 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:48 crc kubenswrapper[4881]: I0121 10:57:48.283435 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:48 crc kubenswrapper[4881]: I0121 10:57:48.283461 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:48 crc kubenswrapper[4881]: I0121 10:57:48.283479 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:48Z","lastTransitionTime":"2026-01-21T10:57:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:48 crc kubenswrapper[4881]: I0121 10:57:48.310260 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 10:57:48 crc kubenswrapper[4881]: I0121 10:57:48.310455 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 10:57:48 crc kubenswrapper[4881]: E0121 10:57:48.310614 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 10:57:48 crc kubenswrapper[4881]: E0121 10:57:48.310839 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 10:57:48 crc kubenswrapper[4881]: I0121 10:57:48.412495 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:48 crc kubenswrapper[4881]: I0121 10:57:48.412560 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:48 crc kubenswrapper[4881]: I0121 10:57:48.412577 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:48 crc kubenswrapper[4881]: I0121 10:57:48.412601 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:48 crc kubenswrapper[4881]: I0121 10:57:48.412619 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:48Z","lastTransitionTime":"2026-01-21T10:57:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:48 crc kubenswrapper[4881]: I0121 10:57:48.515586 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:48 crc kubenswrapper[4881]: I0121 10:57:48.515629 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:48 crc kubenswrapper[4881]: I0121 10:57:48.515643 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:48 crc kubenswrapper[4881]: I0121 10:57:48.515663 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:48 crc kubenswrapper[4881]: I0121 10:57:48.515674 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:48Z","lastTransitionTime":"2026-01-21T10:57:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:57:48 crc kubenswrapper[4881]: I0121 10:57:48.618315 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:48 crc kubenswrapper[4881]: I0121 10:57:48.618386 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:48 crc kubenswrapper[4881]: I0121 10:57:48.618407 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:48 crc kubenswrapper[4881]: I0121 10:57:48.618431 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:48 crc kubenswrapper[4881]: I0121 10:57:48.618443 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:48Z","lastTransitionTime":"2026-01-21T10:57:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:48 crc kubenswrapper[4881]: I0121 10:57:48.721228 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:48 crc kubenswrapper[4881]: I0121 10:57:48.721301 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:48 crc kubenswrapper[4881]: I0121 10:57:48.721324 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:48 crc kubenswrapper[4881]: I0121 10:57:48.721349 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:48 crc kubenswrapper[4881]: I0121 10:57:48.721367 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:48Z","lastTransitionTime":"2026-01-21T10:57:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:48 crc kubenswrapper[4881]: I0121 10:57:48.823901 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:48 crc kubenswrapper[4881]: I0121 10:57:48.823961 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:48 crc kubenswrapper[4881]: I0121 10:57:48.823978 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:48 crc kubenswrapper[4881]: I0121 10:57:48.824000 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:48 crc kubenswrapper[4881]: I0121 10:57:48.824017 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:48Z","lastTransitionTime":"2026-01-21T10:57:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:57:48 crc kubenswrapper[4881]: I0121 10:57:48.927235 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:48 crc kubenswrapper[4881]: I0121 10:57:48.927288 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:48 crc kubenswrapper[4881]: I0121 10:57:48.927300 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:48 crc kubenswrapper[4881]: I0121 10:57:48.927323 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:48 crc kubenswrapper[4881]: I0121 10:57:48.927335 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:48Z","lastTransitionTime":"2026-01-21T10:57:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:49 crc kubenswrapper[4881]: I0121 10:57:49.030877 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:49 crc kubenswrapper[4881]: I0121 10:57:49.031203 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:49 crc kubenswrapper[4881]: I0121 10:57:49.031257 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:49 crc kubenswrapper[4881]: I0121 10:57:49.031290 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:49 crc kubenswrapper[4881]: I0121 10:57:49.031312 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:49Z","lastTransitionTime":"2026-01-21T10:57:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:49 crc kubenswrapper[4881]: I0121 10:57:49.133726 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:49 crc kubenswrapper[4881]: I0121 10:57:49.133834 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:49 crc kubenswrapper[4881]: I0121 10:57:49.133858 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:49 crc kubenswrapper[4881]: I0121 10:57:49.133887 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:49 crc kubenswrapper[4881]: I0121 10:57:49.133909 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:49Z","lastTransitionTime":"2026-01-21T10:57:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:57:49 crc kubenswrapper[4881]: I0121 10:57:49.142566 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 21 10:57:49 crc kubenswrapper[4881]: I0121 10:57:49.157501 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Jan 21 10:57:49 crc kubenswrapper[4881]: I0121 10:57:49.160001 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"adf42b80-04ba-4164-b847-3b7f8b94816b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5f503293f457017fbc87cd1525eaaf06d41044a37f87127e1398bd50a228ea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d93b0b0bb02293efe7c01a9dbc64ff302a7b9fab07a2fe9bef5d0c2b5e8ac30e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05a4a29ae6a2b0d4acf843ae40de1e262c2149dac28e15697dbcd6d237235f29\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92e
daf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfff7b0c4eae65a4ed6e28b94feef5a5b0c4fb224f9cde2cfd9072727968e754\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:56:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:49Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:49 crc kubenswrapper[4881]: I0121 10:57:49.174927 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:49Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:49 crc kubenswrapper[4881]: I0121 10:57:49.175938 4881 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 18:29:54.958987606 +0000 UTC Jan 21 10:57:49 crc kubenswrapper[4881]: I0121 10:57:49.190606 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fs42r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09da9e14-f6d5-4346-a4a0-c17711e3b603\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://821c7c539a796f03cdff04f5f89e6adab4d088c53f5c1ad85851862e5409e7eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\
\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kt6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fs42r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:49Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:49 crc kubenswrapper[4881]: I0121 10:57:49.206477 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qgrth" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d379505c-c658-4dd5-b841-40c8443012c6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51a2ec789636052b12e0fdb4e647d7e4f92d1e4b7436933f1529561ffc2021d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57krk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d634cee9f543d3322f8cdc8bc62252096e789383c55d5d448cc53ab990ac9b52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57krk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qgrth\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:49Z is after 2025-08-24T17:21:41Z" Jan 21 
10:57:49 crc kubenswrapper[4881]: I0121 10:57:49.221898 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-dtv4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3552adbd-011f-4552-9e69-233b92c554c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqlps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqlps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-dtv4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:49Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:49 crc kubenswrapper[4881]: I0121 10:57:49.236091 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:49 crc kubenswrapper[4881]: I0121 10:57:49.236146 4881 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:49 crc kubenswrapper[4881]: I0121 10:57:49.236156 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:49 crc kubenswrapper[4881]: I0121 10:57:49.236173 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:49 crc kubenswrapper[4881]: I0121 10:57:49.236185 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:49Z","lastTransitionTime":"2026-01-21T10:57:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:49 crc kubenswrapper[4881]: I0121 10:57:49.236392 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:49Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:49 crc kubenswrapper[4881]: I0121 10:57:49.253164 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-v4wxp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c14980d7-1b3b-463b-8f57-f1e1afbd258c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd18bd57e9f0f878f56164dee92c18a4fff62c83f518a96d7db735dcd488e052\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7dffa6cfe62b953df5f9734726e4b93967d3fce7fa5743b1adef693e4f75e48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"s
tarted\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7dffa6cfe62b953df5f9734726e4b93967d3fce7fa5743b1adef693e4f75e48\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ceabfac4703edf7fe55ec5dd41e3fc1736424533c54ae12adcdb1f077e1f756\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ceabfac4703edf7fe55ec5dd41e3fc1736424533c54ae12adcdb1f077e1f756\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a2fff9245b8148b515dd4af52db2ec776cca710b28074e8955162220448d248\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a2fff9245b8148b515dd4af52db2ec776cca710b28074e8955162220448d248\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"
}]},{\\\"containerID\\\":\\\"cri-o://3f6276883835d2a1fbab11df2fd15684b8ee850b6f264f3f194f6de68cc27cf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f6276883835d2a1fbab11df2fd15684b8ee850b6f264f3f194f6de68cc27cf9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c51b231b861d87536aaa44cd0e1018fc15464809b5a3bf30c1d2a25dd9a12a8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c51b231b861d87536aaa44cd0e1018fc15464809b5a3bf30c1d2a25dd9a12a8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13503428fec90dad2c9ce1086fe7c71c7910f145a6e7a8d546791e11002f36f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13503428fec90dad2c9ce1086fe7c71c7910f145a6e7a8d546791e11002f36f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":
\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-v4wxp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:49Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:49 crc kubenswrapper[4881]: I0121 10:57:49.268923 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3687b313-1df2-4274-80db-8c758b51bf2d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5b27143ea6007376f5989ccc3a11947ede80be1b5fac1b738ffbf27fa05c6d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hml99\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7172ca109a870f3c1c8d3af0117700b7b23e7a65c3e841f3a4ef3f445e85270d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/r
ootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hml99\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fb4fr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:49Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:49 crc kubenswrapper[4881]: I0121 10:57:49.291831 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8bb6d97-b3b8-4e31-b704-8e565385ab26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e45d7cde8c26fc00bb1b5ad5cc88c41aff02acf0860856e86f3c3bfdc01e7045\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://599bfaf6bfc412f779f4732cf9f4273382c1f0af20576581264ea4fc63b2ec38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9454db8c9af3187ee06e017ac192668036b1f565305a37de61c3cffe4d94e7db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7fa3136ee876ebb14cd5164f9176c91c60cc04fd7882a3ad0957ce3a36184d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2c1d1496a7f7e5fad6a3d4ade4104e8e0c40fd0ca4722ca5e1ca1fbf0ebf3cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d29c170fdf57f1f4c678e2902443d9ef467ac3157370e997fd5596b4eecfb54e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5897125d6a1004cb4f0527359e8fc0328bff6bcc
5ac563fdc3d85b094414c563\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5897125d6a1004cb4f0527359e8fc0328bff6bcc5ac563fdc3d85b094414c563\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T10:57:36Z\\\",\\\"message\\\":\\\"-machine-config-operator/machine-config-operator]} name:Service_openshift-machine-config-operator/machine-config-operator_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.183:9001:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {5b85277d-d9b7-4a68-8e4e-2b80594d9347}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0121 10:57:35.987756 6363 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0121 10:57:35.987768 6363 handler.go:208] Removed *v1.Pod event handler 3\\\\nF0121 10:57:35.988839 6363 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:35Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-bx64f_openshift-ovn-kubernetes(e8bb6d97-b3b8-4e31-b704-8e565385ab26)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47c176d51b5558e85897ac8c72c02bb6f152a972d8e941aeb114333f777e22ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db879e2b1a43a0ae350c4777ee6355bf5b97b1d5977e909ee0968c77852519dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db879e2b1a43a0ae350c4777ee6355bf5b97b1d5977e909ee0968c77852519dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bx64f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:49Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:49 crc kubenswrapper[4881]: I0121 10:57:49.308905 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14b423e6f144bf9e1e30c3f0a074ca9edd3c1e2cfa1674ea5b047a6a8fb01d92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c179ea565cc19d51f2becaf302623c94acbd70749ace24c754af51e3f0101033\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17
b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:49Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:49 crc kubenswrapper[4881]: I0121 10:57:49.309946 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 10:57:49 crc kubenswrapper[4881]: I0121 10:57:49.309956 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dtv4t" Jan 21 10:57:49 crc kubenswrapper[4881]: E0121 10:57:49.310123 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 10:57:49 crc kubenswrapper[4881]: E0121 10:57:49.310248 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
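The repeated "NetworkReady=false ... no CNI configuration file in /etc/kubernetes/cni/net.d/" entries above amount to a directory check: kubelet keeps the runtime network status NotReady until the network provider (here ovn-kubernetes/multus) writes a config file into the CNI conf dir. A minimal sketch of that check, assuming libcni's usual extensions (.conf, .conflist, .json); this is illustrative, not kubelet's actual code, and the helper name is hypothetical:

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// cniConfigPresent reports whether dir holds at least one CNI network
// config. An empty or missing directory is exactly the
// "no CNI configuration file" state logged above.
func cniConfigPresent(dir string) bool {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return false // missing directory counts as "not ready"
	}
	for _, e := range entries {
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			return true
		}
	}
	return false
}

func main() {
	dir := "/etc/kubernetes/cni/net.d" // conf dir from the log messages
	fmt.Printf("NetworkReady=%v\n", cniConfigPresent(dir))
}
```

Once a config file appears in that directory, the condition clears on the next sync loop.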
pod="openshift-multus/network-metrics-daemon-dtv4t" podUID="3552adbd-011f-4552-9e69-233b92c554c8" Jan 21 10:57:49 crc kubenswrapper[4881]: I0121 10:57:49.326415 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f8092488edbd3b7f307819f2cad06e6d1a6721d8c2fd8fa05a8f7e652949ca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:49Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:49 crc kubenswrapper[4881]: I0121 10:57:49.338421 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:49 crc kubenswrapper[4881]: I0121 10:57:49.338461 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:49 crc kubenswrapper[4881]: I0121 10:57:49.338470 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:49 crc kubenswrapper[4881]: I0121 10:57:49.338485 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:49 crc kubenswrapper[4881]: I0121 10:57:49.338495 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:49Z","lastTransitionTime":"2026-01-21T10:57:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:49 crc kubenswrapper[4881]: I0121 10:57:49.340088 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8sptw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f19f480e-331f-42f5-a3b6-fd0c6847b157\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21b25675a85765cfd4abe361687eedfdf5dfffc1c9b83069f14986821acb12d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hjr7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8sptw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:49Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:49 crc kubenswrapper[4881]: I0121 10:57:49.356248 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
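Every status patch above fails inside the TLS handshake with the network-node-identity webhook: its serving certificate expired on 2025-08-24T17:21:41Z, months before the node's current clock of 2026-01-21. The "certificate has expired or is not yet valid" message is the standard x509 validity-window test; a small sketch using Go's crypto/x509 (the cert path is a placeholder, since the log does not say where the webhook cert lives on disk):

```go
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	// Placeholder path: assumed for illustration only.
	data, err := os.ReadFile("webhook-serving.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// The same window test the TLS handshake applies before any trust checks.
	now := time.Now()
	if now.Before(cert.NotBefore) || now.After(cert.NotAfter) {
		fmt.Printf("x509: certificate has expired or is not yet valid: current time %s is after %s\n",
			now.UTC().Format(time.RFC3339), cert.NotAfter.UTC().Format(time.RFC3339))
		return
	}
	fmt.Println("certificate is within its validity window")
}
```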
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5da31bf1-60a6-4d73-a425-97fe36cd40ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://945977e9e86c32b08a633de6b962033c71982d3a129ac3e5b2705e19b50d6534\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f32dd767af23c59f021d583a23309b2180db86ae71b4ca4a5f436ac1c77d6e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c47dfa90d18e3e30053b0250c7b986bc877ab6bc3a553060f65468416c8105d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e507b4c3c536bdc63360b1386748657584f739e09973ec33c998ac267ca2766\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://676e764186e37083591f2c779b05dcbf5bc065bde85efade1c25a27a9bf74570\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 10:57:08.940903 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 10:57:08.942374 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2558080694/tls.crt::/tmp/serving-cert-2558080694/tls.key\\\\\\\"\\\\nI0121 10:57:14.590178 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 10:57:14.594387 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 10:57:14.594447 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 10:57:14.594564 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 10:57:14.594575 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 10:57:14.615981 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 10:57:14.616035 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 10:57:14.616045 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 10:57:14.616060 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 10:57:14.616065 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 10:57:14.616071 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 10:57:14.616077 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 10:57:14.616503 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 10:57:14.623960 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T10:56:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b00c4c20b4212a307993f164d3f2d23b9b8bd823111bef7a385ae8811e147f3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://164282bb15c33a383e2fc11a6617ca4008eec1b59e1c5577f7b34b0fcbddc99f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://164282bb15c33a383e2fc11a6617ca4008eec1b59e1c5577f7b34b0fcbddc99f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:56:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:49Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:49 crc kubenswrapper[4881]: I0121 10:57:49.371201 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:49Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:49 crc kubenswrapper[4881]: I0121 10:57:49.383700 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
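Meanwhile the ovnkube-controller entry earlier is in CrashLoopBackOff with "back-off 10s". Kubelet delays each restart of a crashing container with exponential back-off, starting at 10s and doubling per failure up to a 5m cap (upstream kubelet defaults; restartCount 1 in the log corresponds to the first 10s delay). A sketch of that schedule:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// 10s initial delay, doubled per failure, capped at 5m
	// (kubelet defaults; not the kubelet's actual back-off code).
	const (
		initialDelay = 10 * time.Second
		maxDelay     = 5 * time.Minute
	)
	delay := initialDelay
	for restart := 1; restart <= 7; restart++ {
		fmt.Printf("restart %d: back-off %s\n", restart, delay)
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay
		}
	}
}
```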
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ddaa3aae50a05142125742b3cc13e65433fcadbef5b1e273232cffb660cc700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:49Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:49 crc kubenswrapper[4881]: I0121 10:57:49.394076 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tjwf8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf4f6fc0-ed4c-47b7-b2bc-8033980781a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9164acd9a91836c25b02986f204dd0302964f6bff6fe077971d37eebd4b560ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57d55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:22Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tjwf8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:49Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:49 crc kubenswrapper[4881]: I0121 10:57:49.441291 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:49 crc kubenswrapper[4881]: I0121 10:57:49.441326 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:49 crc kubenswrapper[4881]: I0121 10:57:49.441346 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:49 crc kubenswrapper[4881]: I0121 10:57:49.441364 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:49 crc kubenswrapper[4881]: I0121 10:57:49.441375 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:49Z","lastTransitionTime":"2026-01-21T10:57:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:49 crc kubenswrapper[4881]: I0121 10:57:49.568548 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:49 crc kubenswrapper[4881]: I0121 10:57:49.568585 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:49 crc kubenswrapper[4881]: I0121 10:57:49.568594 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:49 crc kubenswrapper[4881]: I0121 10:57:49.568608 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:49 crc kubenswrapper[4881]: I0121 10:57:49.568620 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:49Z","lastTransitionTime":"2026-01-21T10:57:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:49 crc kubenswrapper[4881]: I0121 10:57:49.672441 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:49 crc kubenswrapper[4881]: I0121 10:57:49.672484 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:49 crc kubenswrapper[4881]: I0121 10:57:49.672496 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:49 crc kubenswrapper[4881]: I0121 10:57:49.672518 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:49 crc kubenswrapper[4881]: I0121 10:57:49.672531 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:49Z","lastTransitionTime":"2026-01-21T10:57:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:57:49 crc kubenswrapper[4881]: I0121 10:57:49.774930 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:49 crc kubenswrapper[4881]: I0121 10:57:49.774989 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:49 crc kubenswrapper[4881]: I0121 10:57:49.775005 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:49 crc kubenswrapper[4881]: I0121 10:57:49.775047 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:49 crc kubenswrapper[4881]: I0121 10:57:49.775058 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:49Z","lastTransitionTime":"2026-01-21T10:57:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:49 crc kubenswrapper[4881]: I0121 10:57:49.877479 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:49 crc kubenswrapper[4881]: I0121 10:57:49.877521 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:49 crc kubenswrapper[4881]: I0121 10:57:49.877537 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:49 crc kubenswrapper[4881]: I0121 10:57:49.877558 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:49 crc kubenswrapper[4881]: I0121 10:57:49.877574 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:49Z","lastTransitionTime":"2026-01-21T10:57:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:49 crc kubenswrapper[4881]: I0121 10:57:49.980385 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:49 crc kubenswrapper[4881]: I0121 10:57:49.980444 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:49 crc kubenswrapper[4881]: I0121 10:57:49.980465 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:49 crc kubenswrapper[4881]: I0121 10:57:49.980484 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:49 crc kubenswrapper[4881]: I0121 10:57:49.980498 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:49Z","lastTransitionTime":"2026-01-21T10:57:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:57:50 crc kubenswrapper[4881]: I0121 10:57:50.083185 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:50 crc kubenswrapper[4881]: I0121 10:57:50.083263 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:50 crc kubenswrapper[4881]: I0121 10:57:50.083287 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:50 crc kubenswrapper[4881]: I0121 10:57:50.083322 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:50 crc kubenswrapper[4881]: I0121 10:57:50.083344 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:50Z","lastTransitionTime":"2026-01-21T10:57:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:50 crc kubenswrapper[4881]: I0121 10:57:50.176960 4881 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 05:17:14.858632883 +0000 UTC Jan 21 10:57:50 crc kubenswrapper[4881]: I0121 10:57:50.185840 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:50 crc kubenswrapper[4881]: I0121 10:57:50.185886 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:50 crc kubenswrapper[4881]: I0121 10:57:50.185899 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:50 crc kubenswrapper[4881]: I0121 10:57:50.185919 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:50 crc kubenswrapper[4881]: I0121 10:57:50.185930 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:50Z","lastTransitionTime":"2026-01-21T10:57:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:57:50 crc kubenswrapper[4881]: I0121 10:57:50.288316 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:50 crc kubenswrapper[4881]: I0121 10:57:50.288357 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:50 crc kubenswrapper[4881]: I0121 10:57:50.288370 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:50 crc kubenswrapper[4881]: I0121 10:57:50.288385 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:50 crc kubenswrapper[4881]: I0121 10:57:50.288397 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:50Z","lastTransitionTime":"2026-01-21T10:57:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:50 crc kubenswrapper[4881]: I0121 10:57:50.310623 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 10:57:50 crc kubenswrapper[4881]: E0121 10:57:50.310745 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 10:57:50 crc kubenswrapper[4881]: I0121 10:57:50.310627 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 10:57:50 crc kubenswrapper[4881]: E0121 10:57:50.310883 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
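Amid the failures, the certificate_manager.go line above shows the kubelet-serving certificate itself is healthy: it expires 2026-02-24 and the rotation deadline was drawn at 2025-12-08, i.e. somewhere in the last ~10-30% of the cert's lifetime. A sketch in the spirit of client-go's jittered rotation deadline (the issue time below is assumed, since the log does not include NotBefore):

```go
package main

import (
	"fmt"
	"math/rand"
	"time"
)

// rotationDeadline mimics, in spirit, the client-go certificate manager's
// jitter: rotate at a random point 70-90% through the cert's lifetime so a
// fleet of kubelets does not hit the CA at the same instant.
func rotationDeadline(notBefore, notAfter time.Time) time.Time {
	total := notAfter.Sub(notBefore)
	jittered := time.Duration(float64(total) * (0.7 + 0.2*rand.Float64()))
	return notBefore.Add(jittered)
}

func main() {
	notAfter, _ := time.Parse(time.RFC3339, "2026-02-24T05:53:03Z") // expiry from the log
	notBefore := notAfter.Add(-365 * 24 * time.Hour)                // assumed issue time
	fmt.Println("rotation deadline:", rotationDeadline(notBefore, notAfter))
}
```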
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 10:57:50 crc kubenswrapper[4881]: I0121 10:57:50.391003 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:50 crc kubenswrapper[4881]: I0121 10:57:50.391038 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:50 crc kubenswrapper[4881]: I0121 10:57:50.391047 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:50 crc kubenswrapper[4881]: I0121 10:57:50.391059 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:50 crc kubenswrapper[4881]: I0121 10:57:50.391068 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:50Z","lastTransitionTime":"2026-01-21T10:57:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:50 crc kubenswrapper[4881]: I0121 10:57:50.494063 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:50 crc kubenswrapper[4881]: I0121 10:57:50.494108 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:50 crc kubenswrapper[4881]: I0121 10:57:50.494117 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:50 crc kubenswrapper[4881]: I0121 10:57:50.494132 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:50 crc kubenswrapper[4881]: I0121 10:57:50.494149 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:50Z","lastTransitionTime":"2026-01-21T10:57:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:57:50 crc kubenswrapper[4881]: I0121 10:57:50.596660 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:50 crc kubenswrapper[4881]: I0121 10:57:50.596694 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:50 crc kubenswrapper[4881]: I0121 10:57:50.596703 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:50 crc kubenswrapper[4881]: I0121 10:57:50.596716 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:50 crc kubenswrapper[4881]: I0121 10:57:50.596727 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:50Z","lastTransitionTime":"2026-01-21T10:57:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:50 crc kubenswrapper[4881]: I0121 10:57:50.699228 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:50 crc kubenswrapper[4881]: I0121 10:57:50.699270 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:50 crc kubenswrapper[4881]: I0121 10:57:50.699281 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:50 crc kubenswrapper[4881]: I0121 10:57:50.699299 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:50 crc kubenswrapper[4881]: I0121 10:57:50.699312 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:50Z","lastTransitionTime":"2026-01-21T10:57:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:50 crc kubenswrapper[4881]: I0121 10:57:50.801892 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:50 crc kubenswrapper[4881]: I0121 10:57:50.801947 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:50 crc kubenswrapper[4881]: I0121 10:57:50.801958 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:50 crc kubenswrapper[4881]: I0121 10:57:50.801974 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:50 crc kubenswrapper[4881]: I0121 10:57:50.801985 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:50Z","lastTransitionTime":"2026-01-21T10:57:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:57:50 crc kubenswrapper[4881]: I0121 10:57:50.904441 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:50 crc kubenswrapper[4881]: I0121 10:57:50.904472 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:50 crc kubenswrapper[4881]: I0121 10:57:50.904480 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:50 crc kubenswrapper[4881]: I0121 10:57:50.904492 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:50 crc kubenswrapper[4881]: I0121 10:57:50.904501 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:50Z","lastTransitionTime":"2026-01-21T10:57:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:51 crc kubenswrapper[4881]: I0121 10:57:51.007077 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:51 crc kubenswrapper[4881]: I0121 10:57:51.007125 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:51 crc kubenswrapper[4881]: I0121 10:57:51.007138 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:51 crc kubenswrapper[4881]: I0121 10:57:51.007156 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:51 crc kubenswrapper[4881]: I0121 10:57:51.007194 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:51Z","lastTransitionTime":"2026-01-21T10:57:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:51 crc kubenswrapper[4881]: I0121 10:57:51.109924 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:51 crc kubenswrapper[4881]: I0121 10:57:51.109975 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:51 crc kubenswrapper[4881]: I0121 10:57:51.109988 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:51 crc kubenswrapper[4881]: I0121 10:57:51.110011 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:51 crc kubenswrapper[4881]: I0121 10:57:51.110025 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:51Z","lastTransitionTime":"2026-01-21T10:57:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:57:51 crc kubenswrapper[4881]: I0121 10:57:51.177766 4881 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 11:45:34.335053704 +0000 UTC Jan 21 10:57:51 crc kubenswrapper[4881]: I0121 10:57:51.214055 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:51 crc kubenswrapper[4881]: I0121 10:57:51.214092 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:51 crc kubenswrapper[4881]: I0121 10:57:51.214113 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:51 crc kubenswrapper[4881]: I0121 10:57:51.214126 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:51 crc kubenswrapper[4881]: I0121 10:57:51.214135 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:51Z","lastTransitionTime":"2026-01-21T10:57:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:51 crc kubenswrapper[4881]: I0121 10:57:51.310562 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dtv4t" Jan 21 10:57:51 crc kubenswrapper[4881]: I0121 10:57:51.310723 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 10:57:51 crc kubenswrapper[4881]: E0121 10:57:51.310829 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dtv4t" podUID="3552adbd-011f-4552-9e69-233b92c554c8" Jan 21 10:57:51 crc kubenswrapper[4881]: E0121 10:57:51.310951 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 10:57:51 crc kubenswrapper[4881]: I0121 10:57:51.320175 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:51 crc kubenswrapper[4881]: I0121 10:57:51.320244 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:51 crc kubenswrapper[4881]: I0121 10:57:51.320258 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:51 crc kubenswrapper[4881]: I0121 10:57:51.320296 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:51 crc kubenswrapper[4881]: I0121 10:57:51.320311 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:51Z","lastTransitionTime":"2026-01-21T10:57:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:51 crc kubenswrapper[4881]: I0121 10:57:51.422758 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:51 crc kubenswrapper[4881]: I0121 10:57:51.422829 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:51 crc kubenswrapper[4881]: I0121 10:57:51.422839 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:51 crc kubenswrapper[4881]: I0121 10:57:51.422854 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:51 crc kubenswrapper[4881]: I0121 10:57:51.422862 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:51Z","lastTransitionTime":"2026-01-21T10:57:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:57:51 crc kubenswrapper[4881]: I0121 10:57:51.525605 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:51 crc kubenswrapper[4881]: I0121 10:57:51.525642 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:51 crc kubenswrapper[4881]: I0121 10:57:51.525675 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:51 crc kubenswrapper[4881]: I0121 10:57:51.525688 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:51 crc kubenswrapper[4881]: I0121 10:57:51.525696 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:51Z","lastTransitionTime":"2026-01-21T10:57:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:51 crc kubenswrapper[4881]: I0121 10:57:51.629138 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:51 crc kubenswrapper[4881]: I0121 10:57:51.629171 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:51 crc kubenswrapper[4881]: I0121 10:57:51.629180 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:51 crc kubenswrapper[4881]: I0121 10:57:51.629193 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:51 crc kubenswrapper[4881]: I0121 10:57:51.629202 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:51Z","lastTransitionTime":"2026-01-21T10:57:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:51 crc kubenswrapper[4881]: I0121 10:57:51.731664 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:51 crc kubenswrapper[4881]: I0121 10:57:51.731699 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:51 crc kubenswrapper[4881]: I0121 10:57:51.731710 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:51 crc kubenswrapper[4881]: I0121 10:57:51.731725 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:51 crc kubenswrapper[4881]: I0121 10:57:51.731736 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:51Z","lastTransitionTime":"2026-01-21T10:57:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:57:51 crc kubenswrapper[4881]: I0121 10:57:51.840182 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:51 crc kubenswrapper[4881]: I0121 10:57:51.840219 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:51 crc kubenswrapper[4881]: I0121 10:57:51.840232 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:51 crc kubenswrapper[4881]: I0121 10:57:51.840250 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:51 crc kubenswrapper[4881]: I0121 10:57:51.840262 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:51Z","lastTransitionTime":"2026-01-21T10:57:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:51 crc kubenswrapper[4881]: I0121 10:57:51.942748 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:51 crc kubenswrapper[4881]: I0121 10:57:51.942804 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:51 crc kubenswrapper[4881]: I0121 10:57:51.942814 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:51 crc kubenswrapper[4881]: I0121 10:57:51.942827 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:51 crc kubenswrapper[4881]: I0121 10:57:51.942836 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:51Z","lastTransitionTime":"2026-01-21T10:57:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:52 crc kubenswrapper[4881]: I0121 10:57:52.045366 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:52 crc kubenswrapper[4881]: I0121 10:57:52.045404 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:52 crc kubenswrapper[4881]: I0121 10:57:52.045412 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:52 crc kubenswrapper[4881]: I0121 10:57:52.045425 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:52 crc kubenswrapper[4881]: I0121 10:57:52.045433 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:52Z","lastTransitionTime":"2026-01-21T10:57:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:57:52 crc kubenswrapper[4881]: I0121 10:57:52.148098 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:52 crc kubenswrapper[4881]: I0121 10:57:52.148157 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:52 crc kubenswrapper[4881]: I0121 10:57:52.148169 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:52 crc kubenswrapper[4881]: I0121 10:57:52.148189 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:52 crc kubenswrapper[4881]: I0121 10:57:52.148203 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:52Z","lastTransitionTime":"2026-01-21T10:57:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:52 crc kubenswrapper[4881]: I0121 10:57:52.178538 4881 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 19:14:12.839094096 +0000 UTC Jan 21 10:57:52 crc kubenswrapper[4881]: I0121 10:57:52.252286 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:52 crc kubenswrapper[4881]: I0121 10:57:52.252344 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:52 crc kubenswrapper[4881]: I0121 10:57:52.252355 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:52 crc kubenswrapper[4881]: I0121 10:57:52.252375 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:52 crc kubenswrapper[4881]: I0121 10:57:52.252388 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:52Z","lastTransitionTime":"2026-01-21T10:57:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:52 crc kubenswrapper[4881]: I0121 10:57:52.310646 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 10:57:52 crc kubenswrapper[4881]: I0121 10:57:52.310733 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 10:57:52 crc kubenswrapper[4881]: E0121 10:57:52.310827 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 10:57:52 crc kubenswrapper[4881]: E0121 10:57:52.310914 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 10:57:52 crc kubenswrapper[4881]: I0121 10:57:52.355298 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:52 crc kubenswrapper[4881]: I0121 10:57:52.355381 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:52 crc kubenswrapper[4881]: I0121 10:57:52.355406 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:52 crc kubenswrapper[4881]: I0121 10:57:52.355431 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:52 crc kubenswrapper[4881]: I0121 10:57:52.355451 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:52Z","lastTransitionTime":"2026-01-21T10:57:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:52 crc kubenswrapper[4881]: I0121 10:57:52.457431 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:52 crc kubenswrapper[4881]: I0121 10:57:52.457489 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:52 crc kubenswrapper[4881]: I0121 10:57:52.457498 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:52 crc kubenswrapper[4881]: I0121 10:57:52.457514 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:52 crc kubenswrapper[4881]: I0121 10:57:52.457525 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:52Z","lastTransitionTime":"2026-01-21T10:57:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:57:52 crc kubenswrapper[4881]: I0121 10:57:52.560555 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:52 crc kubenswrapper[4881]: I0121 10:57:52.560622 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:52 crc kubenswrapper[4881]: I0121 10:57:52.560655 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:52 crc kubenswrapper[4881]: I0121 10:57:52.560684 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:52 crc kubenswrapper[4881]: I0121 10:57:52.560708 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:52Z","lastTransitionTime":"2026-01-21T10:57:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:52 crc kubenswrapper[4881]: I0121 10:57:52.663821 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:52 crc kubenswrapper[4881]: I0121 10:57:52.663873 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:52 crc kubenswrapper[4881]: I0121 10:57:52.663893 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:52 crc kubenswrapper[4881]: I0121 10:57:52.663912 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:52 crc kubenswrapper[4881]: I0121 10:57:52.663924 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:52Z","lastTransitionTime":"2026-01-21T10:57:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:52 crc kubenswrapper[4881]: I0121 10:57:52.765709 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:52 crc kubenswrapper[4881]: I0121 10:57:52.765751 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:52 crc kubenswrapper[4881]: I0121 10:57:52.765762 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:52 crc kubenswrapper[4881]: I0121 10:57:52.765804 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:52 crc kubenswrapper[4881]: I0121 10:57:52.765817 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:52Z","lastTransitionTime":"2026-01-21T10:57:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:57:52 crc kubenswrapper[4881]: I0121 10:57:52.867962 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:52 crc kubenswrapper[4881]: I0121 10:57:52.868024 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:52 crc kubenswrapper[4881]: I0121 10:57:52.868035 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:52 crc kubenswrapper[4881]: I0121 10:57:52.868051 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:52 crc kubenswrapper[4881]: I0121 10:57:52.868062 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:52Z","lastTransitionTime":"2026-01-21T10:57:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:52 crc kubenswrapper[4881]: I0121 10:57:52.970318 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:52 crc kubenswrapper[4881]: I0121 10:57:52.970375 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:52 crc kubenswrapper[4881]: I0121 10:57:52.970388 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:52 crc kubenswrapper[4881]: I0121 10:57:52.970415 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:52 crc kubenswrapper[4881]: I0121 10:57:52.970447 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:52Z","lastTransitionTime":"2026-01-21T10:57:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:53 crc kubenswrapper[4881]: I0121 10:57:53.071958 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:53 crc kubenswrapper[4881]: I0121 10:57:53.071990 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:53 crc kubenswrapper[4881]: I0121 10:57:53.072000 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:53 crc kubenswrapper[4881]: I0121 10:57:53.072014 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:53 crc kubenswrapper[4881]: I0121 10:57:53.072025 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:53Z","lastTransitionTime":"2026-01-21T10:57:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:57:53 crc kubenswrapper[4881]: I0121 10:57:53.173774 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:53 crc kubenswrapper[4881]: I0121 10:57:53.173831 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:53 crc kubenswrapper[4881]: I0121 10:57:53.173841 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:53 crc kubenswrapper[4881]: I0121 10:57:53.173856 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:53 crc kubenswrapper[4881]: I0121 10:57:53.173866 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:53Z","lastTransitionTime":"2026-01-21T10:57:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:53 crc kubenswrapper[4881]: I0121 10:57:53.178970 4881 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 13:39:00.616304803 +0000 UTC Jan 21 10:57:53 crc kubenswrapper[4881]: I0121 10:57:53.275605 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:53 crc kubenswrapper[4881]: I0121 10:57:53.275650 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:53 crc kubenswrapper[4881]: I0121 10:57:53.275666 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:53 crc kubenswrapper[4881]: I0121 10:57:53.275683 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:53 crc kubenswrapper[4881]: I0121 10:57:53.275700 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:53Z","lastTransitionTime":"2026-01-21T10:57:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:53 crc kubenswrapper[4881]: I0121 10:57:53.310390 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 10:57:53 crc kubenswrapper[4881]: I0121 10:57:53.310484 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dtv4t" Jan 21 10:57:53 crc kubenswrapper[4881]: E0121 10:57:53.310689 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 10:57:53 crc kubenswrapper[4881]: E0121 10:57:53.310842 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dtv4t" podUID="3552adbd-011f-4552-9e69-233b92c554c8" Jan 21 10:57:53 crc kubenswrapper[4881]: I0121 10:57:53.329985 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13d0f0c4-fa31-44ba-bc94-c0a80fc1b2df\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17ef83fedf9cc77cf73fdd00486ec9b0b04712a60a5448402754a44ad46da439\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36430b9d5b01b4a6f3b9e7b58bfbec0c258f34847b321cb45bc3b23f84cf09fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3eba9cbb70fbd88687c81b18ad50f8386f836bf2fa2c8f9e1c503a20af985416\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b7d6b79713c6f4718939d3679f1ba6e237045d653762b6de122ebecdfabbe35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b7d6b79713c6f4718939d3679f1ba6e237045d653762b6de122ebecdfabbe35\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:56:54Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:53Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:53 crc kubenswrapper[4881]: I0121 10:57:53.348956 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f8092488edbd3b7f307819f2cad06e6d1a6721d8c2fd8fa05a8f7e652949ca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:53Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:53 crc kubenswrapper[4881]: I0121 10:57:53.365241 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8sptw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f19f480e-331f-42f5-a3b6-fd0c6847b157\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21b25675a85765cfd4abe361687eedfdf5dfffc1c9b83069f14986821acb12d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hjr7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8sptw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:53Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:53 crc kubenswrapper[4881]: I0121 10:57:53.379457 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:53 crc kubenswrapper[4881]: I0121 10:57:53.379512 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:53 crc kubenswrapper[4881]: I0121 10:57:53.379492 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tjwf8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf4f6fc0-ed4c-47b7-b2bc-8033980781a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9164acd9a91836c25b02986f204dd0302964f6bff6fe077971d37eebd4b560ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57d55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:22Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tjwf8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:53Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:53 crc kubenswrapper[4881]: I0121 10:57:53.379525 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:53 crc kubenswrapper[4881]: I0121 10:57:53.379664 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:53 crc kubenswrapper[4881]: I0121 10:57:53.379681 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:53Z","lastTransitionTime":"2026-01-21T10:57:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:57:53 crc kubenswrapper[4881]: I0121 10:57:53.396865 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5da31bf1-60a6-4d73-a425-97fe36cd40ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://945977e9e86c32b08a633de6b962033c71982d3a129ac3e5b2705e19b50d6534\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f32dd767af23c59f021d583a23309b2180db86ae71b4ca4a5f436ac1c77d6e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c47dfa90d18e3e30053b0250c7b986bc877ab6bc3a553060f65468416c8105d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e507b4c3c536bdc63360b1386748657584f739e09973ec33c998ac267ca2766\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://676e764186e37083591f2c779b05dcbf5bc065bde85efade1c25a27a9bf74570\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 10:57:08.940903 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 10:57:08.942374 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2558080694/tls.crt::/tmp/serving-cert-2558080694/tls.key\\\\\\\"\\\\nI0121 10:57:14.590178 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 10:57:14.594387 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 10:57:14.594447 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 10:57:14.594564 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 10:57:14.594575 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 10:57:14.615981 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 10:57:14.616035 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 10:57:14.616045 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 10:57:14.616060 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 10:57:14.616065 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 10:57:14.616071 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 10:57:14.616077 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 10:57:14.616503 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 10:57:14.623960 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T10:56:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b00c4c20b4212a307993f164d3f2d23b9b8bd823111bef7a385ae8811e147f3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://164282bb15c33a383e2fc11a6617ca4008eec1b59e1c5577f7b34b0fcbddc99f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://164282bb15c33a383e2fc11a6617ca4008eec1b59e1c5577f7b34b0fcbddc99f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:56:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:53Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:53 crc kubenswrapper[4881]: I0121 10:57:53.413763 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:53Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:53 crc kubenswrapper[4881]: I0121 10:57:53.430576 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ddaa3aae50a05142125742b3cc13e65433fcadbef5b1e273232cffb660cc700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:53Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:53 crc kubenswrapper[4881]: I0121 10:57:53.445719 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qgrth" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d379505c-c658-4dd5-b841-40c8443012c6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51a2ec789636052b12e0fdb4e647d7e4f92d1e4b7436933f1529561ffc2021d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57krk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d634cee9f543d3322f8cdc8bc62252096e789383c55d5d448cc53ab990ac9b52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57krk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qgrth\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:53Z is after 2025-08-24T17:21:41Z" Jan 21 
10:57:53 crc kubenswrapper[4881]: I0121 10:57:53.460329 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-dtv4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3552adbd-011f-4552-9e69-233b92c554c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqlps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqlps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-dtv4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:53Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:53 crc kubenswrapper[4881]: I0121 10:57:53.476526 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"adf42b80-04ba-4164-b847-3b7f8b94816b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5f503293f457017fbc87cd1525eaaf06d41044a37f87127e1398bd50a228ea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d93b0b0bb02293efe7c01a9dbc64ff302a7b9fab07a2fe9bef5d0c2b5e8ac30e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05a4a29ae6a2b0d4acf843ae40de1e262c2149dac28e15697dbcd6d237235f29\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfff7b0c4eae65a4ed6e28b94feef5a5b0c4fb224f9cde2cfd9072727968e754\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:56:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:53Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:53 crc kubenswrapper[4881]: I0121 10:57:53.482817 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:53 crc kubenswrapper[4881]: I0121 10:57:53.482877 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:53 crc kubenswrapper[4881]: I0121 10:57:53.482887 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:53 crc kubenswrapper[4881]: I0121 10:57:53.482906 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:53 crc kubenswrapper[4881]: I0121 10:57:53.482921 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:53Z","lastTransitionTime":"2026-01-21T10:57:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:57:53 crc kubenswrapper[4881]: I0121 10:57:53.491874 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:53Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:53 crc kubenswrapper[4881]: I0121 10:57:53.508343 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fs42r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"09da9e14-f6d5-4346-a4a0-c17711e3b603\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://821c7c539a796f03cdff04f5f89e6adab4d088c53f5c1ad85851862e5409e7eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kt6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fs42r\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:53Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:53 crc kubenswrapper[4881]: I0121 10:57:53.533504 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8bb6d97-b3b8-4e31-b704-8e565385ab26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e45d7cde8c26fc00bb1b5ad5cc88c41aff02acf0860856e86f3c3bfdc01e7045\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://599bfaf6bfc412f779f4732cf9f4273382c1f0af20576581264ea4fc63b2ec38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9454db8c9af3187ee06e017ac192668036b1f565305a37de61c3cffe4d94e7db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7fa3136ee876ebb14cd5164f9176c91c60cc04fd7882a3ad0957ce3a36184d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2c1d1496a7f7e5fad6a3d4ade4104e8e0c40fd0ca4722ca5e1ca1fbf0ebf3cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d29c1
70fdf57f1f4c678e2902443d9ef467ac3157370e997fd5596b4eecfb54e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5897125d6a1004cb4f0527359e8fc0328bff6bcc5ac563fdc3d85b094414c563\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5897125d6a1004cb4f0527359e8fc0328bff6bcc5ac563fdc3d85b094414c563\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T10:57:36Z\\\",\\\"message\\\":\\\"-machine-config-operator/machine-config-operator]} name:Service_openshift-machine-config-operator/machine-config-operator_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.183:9001:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {5b85277d-d9b7-4a68-8e4e-2b80594d9347}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0121 10:57:35.987756 6363 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0121 10:57:35.987768 6363 handler.go:208] Removed *v1.Pod event handler 3\\\\nF0121 10:57:35.988839 6363 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post 
\\\\\\\"https://1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:35Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-bx64f_openshift-ovn-kubernetes(e8bb6d97-b3b8-4e31-b704-8e565385ab26)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47c176d51b5558e85897ac8c72c02bb6f152a972d8e941aeb114333f777e22ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true
,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db879e2b1a43a0ae350c4777ee6355bf5b97b1d5977e909ee0968c77852519dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db879e2b1a43a0ae350c4777ee6355bf5b97b1d5977e909ee0968c77852519dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bx64f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:53Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:53 crc kubenswrapper[4881]: I0121 10:57:53.550877 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14b423e6f144bf9e1e30c3f0a074ca9edd3c1e2cfa1674ea5b047a6a8fb01d92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c179ea565cc19d51f2becaf302623c94acbd70749ace24c754af51e3f0101033\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:53Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:53 crc kubenswrapper[4881]: I0121 10:57:53.568081 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:53Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:53 crc kubenswrapper[4881]: I0121 10:57:53.586689 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-v4wxp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c14980d7-1b3b-463b-8f57-f1e1afbd258c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd18bd57e9f0f878f56164dee92c18a4fff62c83f518a96d7db735dcd488e052\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7dffa6cfe62b953df5f9734726e4b93967d3fce7fa5743b1adef693e4f75e48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7dffa6cfe62b953df5f9734726e4b93967d3fce7fa5743b1adef693e4f75e48\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ceabfac4703edf7fe55ec5dd41e3fc1736424533c54ae12adcdb1f077e1f756\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ceabfac4703edf7fe55ec5dd41e3fc1736424533c54ae12adcdb1f077e1f756\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a2fff9245b8148b515dd4af52db2ec776cca710b28074e8955162220448d248\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a2fff9245b8148b515dd4af52db2ec776cca710b28074e8955162220448d248\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f6276883835d2a1fbab11df2fd15684b8ee850b6f264f3f194f6de68cc27cf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f6276883835d2a1fbab11df2fd15684b8ee850b6f264f3f194f6de68cc27cf9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c51b231b861d87536aaa44cd0e1018fc15464809b5a3bf30c1d2a25dd9a12a8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c51b231b861d87536aaa44cd0e1018fc15464809b5a3bf30c1d2a25dd9a12a8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13503428fec90dad2c9ce1086fe7c71c7910f145a6e7a8d546791e11002f36f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13503428fec90dad2c9ce1086fe7c71c7910f145a6e7a8d546791e11002f36f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-v4wxp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:53Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:53 crc kubenswrapper[4881]: I0121 10:57:53.587012 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:53 crc kubenswrapper[4881]: I0121 10:57:53.587056 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:53 crc 
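The expired certificate can be confirmed directly from the node by handshaking with the same endpoint the kubelet is failing to reach. A small diagnostic sketch, assuming the listener from the log (127.0.0.1:9743) is reachable from wherever it runs; InsecureSkipVerify is deliberate here so the expired chain can still be inspected rather than rejected:

    package main

    import (
        "crypto/tls"
        "fmt"
        "log"
        "time"
    )

    func main() {
        // Address taken from the failed Post in the log entries above.
        conn, err := tls.Dial("tcp", "127.0.0.1:9743",
            &tls.Config{InsecureSkipVerify: true})
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()

        state := conn.ConnectionState()
        if len(state.PeerCertificates) == 0 {
            log.Fatal("server presented no certificate")
        }
        cert := state.PeerCertificates[0]

        now := time.Now().UTC()
        fmt.Printf("subject:   %s\n", cert.Subject)
        fmt.Printf("notBefore: %s\n", cert.NotBefore.Format(time.RFC3339))
        fmt.Printf("notAfter:  %s\n", cert.NotAfter.Format(time.RFC3339))
        if now.After(cert.NotAfter) {
            fmt.Printf("EXPIRED: current time %s is after %s\n",
                now.Format(time.RFC3339), cert.NotAfter.Format(time.RFC3339))
        }
    }

This is roughly the programmatic equivalent of openssl s_client -connect 127.0.0.1:9743 piped through openssl x509 -noout -dates; either way, the reported notAfter should match the 2025-08-24T17:21:41Z carried by every failed status patch in this log.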
kubenswrapper[4881]: I0121 10:57:53.587071 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:53 crc kubenswrapper[4881]: I0121 10:57:53.587095 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:53 crc kubenswrapper[4881]: I0121 10:57:53.587113 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:53Z","lastTransitionTime":"2026-01-21T10:57:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:53 crc kubenswrapper[4881]: I0121 10:57:53.602211 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3687b313-1df2-4274-80db-8c758b51bf2d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5b27143ea6007376f5989ccc3a11947ede80be1b5fac1b738ffbf27fa05c6d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hml99\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7172ca109a870f3c1c8d3af0117700b7b23e7a65c3e841f3a4ef3f445e85270d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\
\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hml99\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fb4fr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:53Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:53 crc kubenswrapper[4881]: I0121 10:57:53.691340 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:53 crc kubenswrapper[4881]: I0121 10:57:53.691446 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:53 crc kubenswrapper[4881]: I0121 10:57:53.691463 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:53 crc kubenswrapper[4881]: I0121 10:57:53.691586 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:53 crc kubenswrapper[4881]: I0121 10:57:53.691609 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:53Z","lastTransitionTime":"2026-01-21T10:57:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:53 crc kubenswrapper[4881]: I0121 10:57:53.794389 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:53 crc kubenswrapper[4881]: I0121 10:57:53.794423 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:53 crc kubenswrapper[4881]: I0121 10:57:53.794431 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:53 crc kubenswrapper[4881]: I0121 10:57:53.794444 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:53 crc kubenswrapper[4881]: I0121 10:57:53.794454 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:53Z","lastTransitionTime":"2026-01-21T10:57:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:57:53 crc kubenswrapper[4881]: I0121 10:57:53.897121 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:53 crc kubenswrapper[4881]: I0121 10:57:53.897168 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:53 crc kubenswrapper[4881]: I0121 10:57:53.897181 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:53 crc kubenswrapper[4881]: I0121 10:57:53.897201 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:53 crc kubenswrapper[4881]: I0121 10:57:53.897213 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:53Z","lastTransitionTime":"2026-01-21T10:57:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:54 crc kubenswrapper[4881]: I0121 10:57:54.002037 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:54 crc kubenswrapper[4881]: I0121 10:57:54.002102 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:54 crc kubenswrapper[4881]: I0121 10:57:54.002116 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:54 crc kubenswrapper[4881]: I0121 10:57:54.002139 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:54 crc kubenswrapper[4881]: I0121 10:57:54.002151 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:54Z","lastTransitionTime":"2026-01-21T10:57:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:54 crc kubenswrapper[4881]: I0121 10:57:54.105344 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:54 crc kubenswrapper[4881]: I0121 10:57:54.105396 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:54 crc kubenswrapper[4881]: I0121 10:57:54.105412 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:54 crc kubenswrapper[4881]: I0121 10:57:54.105431 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:54 crc kubenswrapper[4881]: I0121 10:57:54.105445 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:54Z","lastTransitionTime":"2026-01-21T10:57:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:57:54 crc kubenswrapper[4881]: I0121 10:57:54.179805 4881 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 21:56:41.084971917 +0000 UTC Jan 21 10:57:54 crc kubenswrapper[4881]: I0121 10:57:54.208552 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:54 crc kubenswrapper[4881]: I0121 10:57:54.208603 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:54 crc kubenswrapper[4881]: I0121 10:57:54.208614 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:54 crc kubenswrapper[4881]: I0121 10:57:54.208633 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:54 crc kubenswrapper[4881]: I0121 10:57:54.208645 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:54Z","lastTransitionTime":"2026-01-21T10:57:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:54 crc kubenswrapper[4881]: I0121 10:57:54.309814 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 10:57:54 crc kubenswrapper[4881]: I0121 10:57:54.309852 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 10:57:54 crc kubenswrapper[4881]: E0121 10:57:54.309987 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 10:57:54 crc kubenswrapper[4881]: E0121 10:57:54.310120 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 10:57:54 crc kubenswrapper[4881]: I0121 10:57:54.311738 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:54 crc kubenswrapper[4881]: I0121 10:57:54.311777 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:54 crc kubenswrapper[4881]: I0121 10:57:54.311806 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:54 crc kubenswrapper[4881]: I0121 10:57:54.311825 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:54 crc kubenswrapper[4881]: I0121 10:57:54.311841 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:54Z","lastTransitionTime":"2026-01-21T10:57:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:54 crc kubenswrapper[4881]: I0121 10:57:54.414883 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:54 crc kubenswrapper[4881]: I0121 10:57:54.414936 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:54 crc kubenswrapper[4881]: I0121 10:57:54.414949 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:54 crc kubenswrapper[4881]: I0121 10:57:54.414967 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:54 crc kubenswrapper[4881]: I0121 10:57:54.414979 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:54Z","lastTransitionTime":"2026-01-21T10:57:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:57:54 crc kubenswrapper[4881]: I0121 10:57:54.517734 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:54 crc kubenswrapper[4881]: I0121 10:57:54.517847 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:54 crc kubenswrapper[4881]: I0121 10:57:54.517861 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:54 crc kubenswrapper[4881]: I0121 10:57:54.517890 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:54 crc kubenswrapper[4881]: I0121 10:57:54.517905 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:54Z","lastTransitionTime":"2026-01-21T10:57:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:54 crc kubenswrapper[4881]: I0121 10:57:54.620695 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:54 crc kubenswrapper[4881]: I0121 10:57:54.620753 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:54 crc kubenswrapper[4881]: I0121 10:57:54.620773 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:54 crc kubenswrapper[4881]: I0121 10:57:54.620823 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:54 crc kubenswrapper[4881]: I0121 10:57:54.620841 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:54Z","lastTransitionTime":"2026-01-21T10:57:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:54 crc kubenswrapper[4881]: I0121 10:57:54.725081 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:54 crc kubenswrapper[4881]: I0121 10:57:54.725216 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:54 crc kubenswrapper[4881]: I0121 10:57:54.725231 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:54 crc kubenswrapper[4881]: I0121 10:57:54.725253 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:54 crc kubenswrapper[4881]: I0121 10:57:54.725268 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:54Z","lastTransitionTime":"2026-01-21T10:57:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:57:54 crc kubenswrapper[4881]: I0121 10:57:54.827938 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:54 crc kubenswrapper[4881]: I0121 10:57:54.828005 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:54 crc kubenswrapper[4881]: I0121 10:57:54.828024 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:54 crc kubenswrapper[4881]: I0121 10:57:54.828051 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:54 crc kubenswrapper[4881]: I0121 10:57:54.828071 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:54Z","lastTransitionTime":"2026-01-21T10:57:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:54 crc kubenswrapper[4881]: I0121 10:57:54.932055 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:54 crc kubenswrapper[4881]: I0121 10:57:54.932115 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:54 crc kubenswrapper[4881]: I0121 10:57:54.932128 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:54 crc kubenswrapper[4881]: I0121 10:57:54.932157 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:54 crc kubenswrapper[4881]: I0121 10:57:54.932175 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:54Z","lastTransitionTime":"2026-01-21T10:57:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:55 crc kubenswrapper[4881]: I0121 10:57:55.037759 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:55 crc kubenswrapper[4881]: I0121 10:57:55.037826 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:55 crc kubenswrapper[4881]: I0121 10:57:55.037840 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:55 crc kubenswrapper[4881]: I0121 10:57:55.037861 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:55 crc kubenswrapper[4881]: I0121 10:57:55.037872 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:55Z","lastTransitionTime":"2026-01-21T10:57:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:57:55 crc kubenswrapper[4881]: I0121 10:57:55.141050 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:55 crc kubenswrapper[4881]: I0121 10:57:55.141086 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:55 crc kubenswrapper[4881]: I0121 10:57:55.141096 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:55 crc kubenswrapper[4881]: I0121 10:57:55.141112 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:55 crc kubenswrapper[4881]: I0121 10:57:55.141121 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:55Z","lastTransitionTime":"2026-01-21T10:57:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:55 crc kubenswrapper[4881]: I0121 10:57:55.180178 4881 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 21:28:37.740777945 +0000 UTC Jan 21 10:57:55 crc kubenswrapper[4881]: I0121 10:57:55.245075 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:55 crc kubenswrapper[4881]: I0121 10:57:55.245112 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:55 crc kubenswrapper[4881]: I0121 10:57:55.245125 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:55 crc kubenswrapper[4881]: I0121 10:57:55.245147 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:55 crc kubenswrapper[4881]: I0121 10:57:55.245160 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:55Z","lastTransitionTime":"2026-01-21T10:57:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:57:55 crc kubenswrapper[4881]: I0121 10:57:55.287291 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:55 crc kubenswrapper[4881]: I0121 10:57:55.287374 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:55 crc kubenswrapper[4881]: I0121 10:57:55.287418 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:55 crc kubenswrapper[4881]: I0121 10:57:55.287460 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:55 crc kubenswrapper[4881]: I0121 10:57:55.287484 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:55Z","lastTransitionTime":"2026-01-21T10:57:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:55 crc kubenswrapper[4881]: E0121 10:57:55.304909 4881 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:57:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:57:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:57:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:57:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"26a8a75a-20da-43b0-891d-353287c7b817\\\",\\\"systemUUID\\\":\\\"5fb73d3d-5879-4958-af84-1cb776cbe5bd\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:55Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:55 crc kubenswrapper[4881]: I0121 10:57:55.310194 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dtv4t" Jan 21 10:57:55 crc kubenswrapper[4881]: I0121 10:57:55.310265 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 10:57:55 crc kubenswrapper[4881]: E0121 10:57:55.310424 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dtv4t" podUID="3552adbd-011f-4552-9e69-233b92c554c8" Jan 21 10:57:55 crc kubenswrapper[4881]: E0121 10:57:55.310692 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 10:57:55 crc kubenswrapper[4881]: I0121 10:57:55.315898 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:55 crc kubenswrapper[4881]: I0121 10:57:55.315963 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:55 crc kubenswrapper[4881]: I0121 10:57:55.315976 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:55 crc kubenswrapper[4881]: I0121 10:57:55.316002 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:55 crc kubenswrapper[4881]: I0121 10:57:55.316015 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:55Z","lastTransitionTime":"2026-01-21T10:57:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:57:55 crc kubenswrapper[4881]: E0121 10:57:55.333388 4881 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:57:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:57:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:57:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:57:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"26a8a75a-20da-43b0-891d-353287c7b817\\\",\\\"systemUUID\\\":\\\"5fb73d3d-5879-4958-af84-1cb776cbe5bd\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:55Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:55 crc kubenswrapper[4881]: I0121 10:57:55.338569 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:55 crc kubenswrapper[4881]: I0121 10:57:55.338637 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 10:57:55 crc kubenswrapper[4881]: I0121 10:57:55.338648 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:55 crc kubenswrapper[4881]: I0121 10:57:55.338667 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:55 crc kubenswrapper[4881]: I0121 10:57:55.338678 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:55Z","lastTransitionTime":"2026-01-21T10:57:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:55 crc kubenswrapper[4881]: E0121 10:57:55.352397 4881 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:57:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:57:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:57:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:57:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"26a8a75a-20da-43b0-891d-353287c7b817\\\",\\\"systemUUID\\\":\\\"5fb73d3d-5879-4958-af84-1cb776cbe5bd\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:55Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:55 crc kubenswrapper[4881]: I0121 10:57:55.356905 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:55 crc kubenswrapper[4881]: I0121 10:57:55.356955 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 10:57:55 crc kubenswrapper[4881]: I0121 10:57:55.356966 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:55 crc kubenswrapper[4881]: I0121 10:57:55.356991 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:55 crc kubenswrapper[4881]: I0121 10:57:55.357007 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:55Z","lastTransitionTime":"2026-01-21T10:57:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:55 crc kubenswrapper[4881]: E0121 10:57:55.371822 4881 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:57:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:57:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:57:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:57:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"26a8a75a-20da-43b0-891d-353287c7b817\\\",\\\"systemUUID\\\":\\\"5fb73d3d-5879-4958-af84-1cb776cbe5bd\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:55Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:55 crc kubenswrapper[4881]: I0121 10:57:55.376921 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:55 crc kubenswrapper[4881]: I0121 10:57:55.376974 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
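Every NotReady heartbeat above carries the same root message: the runtime finds no CNI configuration under /etc/kubernetes/cni/net.d/, so NetworkReady stays false and the node cannot go Ready. The following is a minimal standalone Go sketch of that check; the directory is the one named in the log, but the file-matching logic is illustrative, not CRI-O's actual lookup code.

// cnicheck.go: list CNI network configs the way a runtime would look for them.
// A sketch only; the extensions matched below are the conventional CNI ones,
// an assumption rather than a quote of the runtime's implementation.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	confDir := "/etc/kubernetes/cni/net.d" // directory cited in the NetworkPluginNotReady message
	matches, err := filepath.Glob(filepath.Join(confDir, "*"))
	if err != nil {
		fmt.Fprintln(os.Stderr, "glob:", err)
		os.Exit(1)
	}
	found := false
	for _, m := range matches {
		switch filepath.Ext(m) {
		case ".conf", ".conflist", ".json": // conventional CNI config extensions
			fmt.Println("CNI config:", m)
			found = true
		}
	}
	if !found {
		fmt.Println("no CNI configuration file in", confDir, "- node stays NotReady")
	}
}

An empty directory here is consistent with the network operator not having started, which is exactly what the recurring log message asks about.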
event="NodeHasNoDiskPressure" Jan 21 10:57:55 crc kubenswrapper[4881]: I0121 10:57:55.376987 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:55 crc kubenswrapper[4881]: I0121 10:57:55.377007 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:55 crc kubenswrapper[4881]: I0121 10:57:55.377022 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:55Z","lastTransitionTime":"2026-01-21T10:57:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:55 crc kubenswrapper[4881]: E0121 10:57:55.393172 4881 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:57:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:57:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:57:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:57:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"26a8a75a-20da-43b0-891d-353287c7b817\\\",\\\"systemUUID\\\":\\\"5fb73d3d-5879-4958-af84-1cb776cbe5bd\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:55Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:55 crc kubenswrapper[4881]: E0121 10:57:55.393301 4881 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 21 10:57:55 crc kubenswrapper[4881]: I0121 10:57:55.395327 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
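The status patch itself is well-formed; every retry above dies at the same TLS step, and the kubelet finally gives up with "update node status exceeds retry count". The node.network-node-identity.openshift.io webhook at https://127.0.0.1:9743 is presenting a serving certificate that expired 2025-08-24T17:21:41Z while the node clock reads 2026-01-21. A sketch that inspects the presented certificate directly, assuming the endpoint is reachable from the node; chain verification is deliberately skipped so the expired certificate can still be read:

// certcheck.go: dial the failing webhook endpoint from the log and print the
// serving certificate's validity window. InsecureSkipVerify is used precisely
// because normal verification fails on the expired certificate.
package main

import (
	"crypto/tls"
	"fmt"
	"os"
)

func main() {
	conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{InsecureSkipVerify: true})
	if err != nil {
		fmt.Fprintln(os.Stderr, "dial:", err)
		os.Exit(1)
	}
	defer conn.Close()
	cert := conn.ConnectionState().PeerCertificates[0]
	fmt.Println("subject:  ", cert.Subject)
	fmt.Println("notBefore:", cert.NotBefore)
	fmt.Println("notAfter: ", cert.NotAfter) // the log shows 2025-08-24T17:21:41Z, months behind the node clock
}

On a cluster like this one, a notAfter in the past confirms the webhook's cert was never rotated while the VM was suspended; nothing the kubelet retries can succeed until that certificate is reissued.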
event="NodeHasSufficientMemory" Jan 21 10:57:55 crc kubenswrapper[4881]: I0121 10:57:55.395377 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:55 crc kubenswrapper[4881]: I0121 10:57:55.395386 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:55 crc kubenswrapper[4881]: I0121 10:57:55.395407 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:55 crc kubenswrapper[4881]: I0121 10:57:55.395418 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:55Z","lastTransitionTime":"2026-01-21T10:57:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:55 crc kubenswrapper[4881]: I0121 10:57:55.499105 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:55 crc kubenswrapper[4881]: I0121 10:57:55.499174 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:55 crc kubenswrapper[4881]: I0121 10:57:55.499186 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:55 crc kubenswrapper[4881]: I0121 10:57:55.499209 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:55 crc kubenswrapper[4881]: I0121 10:57:55.499227 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:55Z","lastTransitionTime":"2026-01-21T10:57:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:55 crc kubenswrapper[4881]: I0121 10:57:55.602252 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:55 crc kubenswrapper[4881]: I0121 10:57:55.602306 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:55 crc kubenswrapper[4881]: I0121 10:57:55.602321 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:55 crc kubenswrapper[4881]: I0121 10:57:55.602343 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:55 crc kubenswrapper[4881]: I0121 10:57:55.602359 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:55Z","lastTransitionTime":"2026-01-21T10:57:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:57:55 crc kubenswrapper[4881]: I0121 10:57:55.705304 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:55 crc kubenswrapper[4881]: I0121 10:57:55.705353 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:55 crc kubenswrapper[4881]: I0121 10:57:55.705363 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:55 crc kubenswrapper[4881]: I0121 10:57:55.705381 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:55 crc kubenswrapper[4881]: I0121 10:57:55.705391 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:55Z","lastTransitionTime":"2026-01-21T10:57:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:55 crc kubenswrapper[4881]: I0121 10:57:55.809732 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:55 crc kubenswrapper[4881]: I0121 10:57:55.809815 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:55 crc kubenswrapper[4881]: I0121 10:57:55.809832 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:55 crc kubenswrapper[4881]: I0121 10:57:55.809854 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:55 crc kubenswrapper[4881]: I0121 10:57:55.809867 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:55Z","lastTransitionTime":"2026-01-21T10:57:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:55 crc kubenswrapper[4881]: I0121 10:57:55.913498 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:55 crc kubenswrapper[4881]: I0121 10:57:55.913554 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:55 crc kubenswrapper[4881]: I0121 10:57:55.913568 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:55 crc kubenswrapper[4881]: I0121 10:57:55.913620 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:55 crc kubenswrapper[4881]: I0121 10:57:55.913635 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:55Z","lastTransitionTime":"2026-01-21T10:57:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:57:56 crc kubenswrapper[4881]: I0121 10:57:56.016928 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:56 crc kubenswrapper[4881]: I0121 10:57:56.016993 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:56 crc kubenswrapper[4881]: I0121 10:57:56.017009 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:56 crc kubenswrapper[4881]: I0121 10:57:56.017035 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:56 crc kubenswrapper[4881]: I0121 10:57:56.017049 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:56Z","lastTransitionTime":"2026-01-21T10:57:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:56 crc kubenswrapper[4881]: I0121 10:57:56.120630 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:56 crc kubenswrapper[4881]: I0121 10:57:56.120715 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:56 crc kubenswrapper[4881]: I0121 10:57:56.120733 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:56 crc kubenswrapper[4881]: I0121 10:57:56.120754 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:56 crc kubenswrapper[4881]: I0121 10:57:56.120823 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:56Z","lastTransitionTime":"2026-01-21T10:57:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:57:56 crc kubenswrapper[4881]: I0121 10:57:56.181393 4881 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 18:21:43.306836786 +0000 UTC Jan 21 10:57:56 crc kubenswrapper[4881]: I0121 10:57:56.228314 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:56 crc kubenswrapper[4881]: I0121 10:57:56.228359 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:56 crc kubenswrapper[4881]: I0121 10:57:56.228369 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:56 crc kubenswrapper[4881]: I0121 10:57:56.228389 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:56 crc kubenswrapper[4881]: I0121 10:57:56.228401 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:56Z","lastTransitionTime":"2026-01-21T10:57:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:56 crc kubenswrapper[4881]: I0121 10:57:56.310585 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 10:57:56 crc kubenswrapper[4881]: I0121 10:57:56.310646 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 10:57:56 crc kubenswrapper[4881]: E0121 10:57:56.310731 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 10:57:56 crc kubenswrapper[4881]: E0121 10:57:56.310868 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 10:57:56 crc kubenswrapper[4881]: I0121 10:57:56.332932 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:56 crc kubenswrapper[4881]: I0121 10:57:56.332992 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:56 crc kubenswrapper[4881]: I0121 10:57:56.333006 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:56 crc kubenswrapper[4881]: I0121 10:57:56.333033 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:56 crc kubenswrapper[4881]: I0121 10:57:56.333047 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:56Z","lastTransitionTime":"2026-01-21T10:57:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:56 crc kubenswrapper[4881]: I0121 10:57:56.435718 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:56 crc kubenswrapper[4881]: I0121 10:57:56.435762 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:56 crc kubenswrapper[4881]: I0121 10:57:56.435774 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:56 crc kubenswrapper[4881]: I0121 10:57:56.435812 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:56 crc kubenswrapper[4881]: I0121 10:57:56.435825 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:56Z","lastTransitionTime":"2026-01-21T10:57:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:57:56 crc kubenswrapper[4881]: I0121 10:57:56.539137 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:56 crc kubenswrapper[4881]: I0121 10:57:56.539210 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:56 crc kubenswrapper[4881]: I0121 10:57:56.539226 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:56 crc kubenswrapper[4881]: I0121 10:57:56.539251 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:56 crc kubenswrapper[4881]: I0121 10:57:56.539265 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:56Z","lastTransitionTime":"2026-01-21T10:57:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:56 crc kubenswrapper[4881]: I0121 10:57:56.642933 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:56 crc kubenswrapper[4881]: I0121 10:57:56.642984 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:56 crc kubenswrapper[4881]: I0121 10:57:56.642995 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:56 crc kubenswrapper[4881]: I0121 10:57:56.643015 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:56 crc kubenswrapper[4881]: I0121 10:57:56.643030 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:56Z","lastTransitionTime":"2026-01-21T10:57:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:56 crc kubenswrapper[4881]: I0121 10:57:56.745690 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:56 crc kubenswrapper[4881]: I0121 10:57:56.745745 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:56 crc kubenswrapper[4881]: I0121 10:57:56.745758 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:56 crc kubenswrapper[4881]: I0121 10:57:56.745806 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:56 crc kubenswrapper[4881]: I0121 10:57:56.745819 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:56Z","lastTransitionTime":"2026-01-21T10:57:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 21 10:57:56 crc kubenswrapper[4881]: I0121 10:57:56.851235 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 10:57:56 crc kubenswrapper[4881]: I0121 10:57:56.851290 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 10:57:56 crc kubenswrapper[4881]: I0121 10:57:56.851300 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 10:57:56 crc kubenswrapper[4881]: I0121 10:57:56.851315 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 10:57:56 crc kubenswrapper[4881]: I0121 10:57:56.851325 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:56Z","lastTransitionTime":"2026-01-21T10:57:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
[the five-line node-status group above repeats verbatim at 10:57:56.954, 10:57:57.058, and 10:57:57.162; only the timestamps advance]
Jan 21 10:57:57 crc kubenswrapper[4881]: I0121 10:57:57.182206 4881 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 12:41:34.915775908 +0000 UTC
[node-status group repeats at 10:57:57.266]
Jan 21 10:57:57 crc kubenswrapper[4881]: I0121 10:57:57.312999 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 21 10:57:57 crc kubenswrapper[4881]: I0121 10:57:57.313210 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dtv4t"
Jan 21 10:57:57 crc kubenswrapper[4881]: E0121 10:57:57.313378 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 21 10:57:57 crc kubenswrapper[4881]: E0121 10:57:57.313685 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dtv4t" podUID="3552adbd-011f-4552-9e69-233b92c554c8"
[node-status group repeats at 10:57:57.369, .473, .577, .680, .784, .887, .990, and 10:57:58.094]
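
The block above shows the kubelet re-recording the same NotReady condition every ~100 ms because no CNI configuration file exists yet in /etc/kubernetes/cni/net.d/. Below is a minimal standalone sketch of the check that message implies, not the kubelet's actual implementation; the accepted extensions (.conf, .conflist, .json) follow libcni conventions and are an assumption here.

// cnicheck.go — a sketch (not kubelet code) of the readiness test implied
// by the repeating NetworkReady=false entries: the node stays NotReady
// until a CNI config file appears in the conf dir.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// confDir matches the path named in the log; the extension list is an
// assumption based on libcni conventions.
const confDir = "/etc/kubernetes/cni/net.d"

func networkReady(dir string) (bool, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return false, err
	}
	for _, e := range entries {
		if e.IsDir() {
			continue
		}
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			return true, nil
		}
	}
	return false, nil
}

func main() {
	ok, err := networkReady(confDir)
	if err != nil || !ok {
		// Mirrors the condition the kubelet keeps re-recording above.
		fmt.Printf("NetworkReady=false reason:NetworkPluginNotReady (err=%v)\n", err)
		return
	}
	fmt.Println("NetworkReady=true")
}

Once a network provider (here, ovnkube) writes its config into that directory, the condition flips and the heartbeat spam stops.
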
Jan 21 10:57:58 crc kubenswrapper[4881]: I0121 10:57:58.183160 4881 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 19:06:23.006003288 +0000 UTC
[node-status group repeats at 10:57:58.197 and 10:57:58.300]
Jan 21 10:57:58 crc kubenswrapper[4881]: I0121 10:57:58.310094 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 21 10:57:58 crc kubenswrapper[4881]: I0121 10:57:58.310235 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 21 10:57:58 crc kubenswrapper[4881]: E0121 10:57:58.310348 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 21 10:57:58 crc kubenswrapper[4881]: E0121 10:57:58.310923 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 21 10:57:58 crc kubenswrapper[4881]: I0121 10:57:58.311242 4881 scope.go:117] "RemoveContainer" containerID="5897125d6a1004cb4f0527359e8fc0328bff6bcc5ac563fdc3d85b094414c563"
[node-status group repeats at 10:57:58.403, .507, .610, .714, .817, .921, and 10:57:59.024]
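
The certificate_manager.go:356 entries in this window report the same expiration (2026-02-24 05:53:03 UTC) but a different rotation deadline on each pass: 2025-12-26, then 2025-12-09, then 2026-01-03. That pattern is consistent with a deadline re-drawn at random inside the certificate's validity window. The sketch below assumes client-go's 0.7–0.9 jitter fraction, which this log does not confirm, and an assumed issue time.

// rotation.go — a sketch of why the logged "rotation deadline" changes on
// every pass while the expiration stays fixed: the deadline is re-drawn
// at a random fraction of the validity window each time it is computed.
package main

import (
	"fmt"
	"math/rand"
	"time"
)

// nextRotationDeadline picks a deadline at 70–90% of the validity window;
// the fraction mirrors client-go's convention (an assumption here).
func nextRotationDeadline(notBefore, notAfter time.Time) time.Time {
	validity := notAfter.Sub(notBefore)
	jittered := time.Duration(float64(validity) * (0.7 + 0.3*rand.Float64()))
	return notBefore.Add(jittered)
}

func main() {
	// notAfter matches the logged expiration; notBefore is assumed.
	notAfter := time.Date(2026, 2, 24, 5, 53, 3, 0, time.UTC)
	notBefore := notAfter.Add(-90 * 24 * time.Hour)
	for i := 0; i < 3; i++ {
		fmt.Println("rotation deadline:", nextRotationDeadline(notBefore, notAfter))
	}
}

Three runs of the loop land on three different deadlines inside the window, just as three consecutive log entries do.
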
Jan 21 10:57:59 crc kubenswrapper[4881]: I0121 10:57:59.093208 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bx64f_e8bb6d97-b3b8-4e31-b704-8e565385ab26/ovnkube-controller/1.log"
Jan 21 10:57:59 crc kubenswrapper[4881]: I0121 10:57:59.096212 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" event={"ID":"e8bb6d97-b3b8-4e31-b704-8e565385ab26","Type":"ContainerStarted","Data":"ff735c08dae242cbd531e458695a99bcbe3a5e6c9753266141b14f67cb0799a2"}
Jan 21 10:57:59 crc kubenswrapper[4881]: I0121 10:57:59.096973 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-bx64f"
[in the status_manager.go:875 entries below, the multi-kilobyte escaped pod-status patch JSON is elided as {...}; each entry fails with the same expired-webhook-certificate error; the kube-apiserver-crc patch additionally embeds the earlier termination log of its check-endpoints container (exit code 255, TLS handshake timeout to localhost:6443, F0121 pods "kube-apiserver-crc" not found)]
Jan 21 10:57:59 crc kubenswrapper[4881]: I0121 10:57:59.115452 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{...}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:59Z is after 2025-08-24T17:21:41Z"
[node-status group repeats at 10:57:59.127]
Jan 21 10:57:59 crc kubenswrapper[4881]: I0121 10:57:59.132623 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{...}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:59Z is after 2025-08-24T17:21:41Z"
Jan 21 10:57:59 crc kubenswrapper[4881]: I0121 10:57:59.147645 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8sptw" err="failed to patch status \"{...}\" for pod \"openshift-dns\"/\"node-resolver-8sptw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:59Z is after 2025-08-24T17:21:41Z"
Jan 21 10:57:59 crc kubenswrapper[4881]: I0121 10:57:59.167759 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{...}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:59Z is after 2025-08-24T17:21:41Z"
Jan 21 10:57:59 crc kubenswrapper[4881]: I0121 10:57:59.183826 4881 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 13:08:41.238398932 +0000 UTC
Jan 21 10:57:59 crc kubenswrapper[4881]: I0121 10:57:59.186529 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{...}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:59Z is after 2025-08-24T17:21:41Z"
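
From 10:57:59.115452 onward, every pod-status patch fails the same way: the network-node-identity webhook's serving certificate expired on 2025-08-24T17:21:41Z, long before the node's clock time of 2026-01-21. Below is a minimal sketch of the validity test behind that x509 error, using Go's standard crypto/x509; the PEM path is a placeholder, not taken from this log.

// certcheck.go — a sketch of the NotBefore/NotAfter comparison that
// produces "x509: certificate has expired or is not yet valid".
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func checkPEM(path string, now time.Time) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return err
	}
	if now.Before(cert.NotBefore) || now.After(cert.NotAfter) {
		// Same shape as the expired-certificate error recurring above.
		return fmt.Errorf("certificate has expired or is not yet valid: current time %s is after %s",
			now.UTC().Format(time.RFC3339), cert.NotAfter.UTC().Format(time.RFC3339))
	}
	return nil
}

func main() {
	// "webhook-serving.pem" is a hypothetical file for illustration.
	if err := checkPEM("webhook-serving.pem", time.Now()); err != nil {
		fmt.Println("tls: failed to verify certificate: x509:", err)
	}
}

Because the check compares the certificate's window against the wall clock, every patch attempt fails identically until the webhook's serving certificate is reissued.
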
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:59Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:59 crc kubenswrapper[4881]: I0121 10:57:59.215114 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ddaa3aae50a05142125742b3cc13e65433fcadbef5b1e273232cffb660cc700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:59Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:59 crc kubenswrapper[4881]: I0121 10:57:59.229441 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tjwf8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf4f6fc0-ed4c-47b7-b2bc-8033980781a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9164acd9a91836c25b02986f204dd0302964f6bff6fe077971d37eebd4b560ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57d55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:22Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tjwf8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:59Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:59 crc kubenswrapper[4881]: I0121 10:57:59.231274 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:59 crc kubenswrapper[4881]: I0121 10:57:59.231341 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:59 crc kubenswrapper[4881]: I0121 10:57:59.231358 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:59 crc kubenswrapper[4881]: I0121 10:57:59.231382 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:59 crc kubenswrapper[4881]: I0121 10:57:59.231396 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:59Z","lastTransitionTime":"2026-01-21T10:57:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:59 crc kubenswrapper[4881]: I0121 10:57:59.241364 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"adf42b80-04ba-4164-b847-3b7f8b94816b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5f503293f457017fbc87cd1525eaaf06d41044a37f87127e1398bd50a228ea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d93b0b0bb02293efe7c01a9dbc64ff302a7b9fab07a2fe9bef5d0c2b5e8ac30e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05a4a29ae6a2b0d4acf843ae40de1e262c2149dac28e15697dbcd6d237235f29\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55
Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfff7b0c4eae65a4ed6e28b94feef5a5b0c4fb224f9cde2cfd9072727968e754\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:56:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:59Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:59 crc kubenswrapper[4881]: I0121 10:57:59.258967 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:59Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:59 crc kubenswrapper[4881]: I0121 10:57:59.276451 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fs42r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09da9e14-f6d5-4346-a4a0-c17711e3b603\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://821c7c539a796f03cdff04f5f89e6adab4d088c53f5c1ad85851862e5409e7eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountP
ath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kt6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fs42r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:59Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:59 crc kubenswrapper[4881]: I0121 10:57:59.289529 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qgrth" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d379505c-c658-4dd5-b841-40c8443012c6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51a2ec789636052b12e0fdb4e647d7e4f92d1e4b7436933f1529561ffc2021d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57krk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri
-o://d634cee9f543d3322f8cdc8bc62252096e789383c55d5d448cc53ab990ac9b52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57krk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qgrth\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:59Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:59 crc kubenswrapper[4881]: I0121 10:57:59.301323 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-dtv4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3552adbd-011f-4552-9e69-233b92c554c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqlps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqlps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-dtv4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:59Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:59 crc kubenswrapper[4881]: I0121 10:57:59.310809 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 10:57:59 crc kubenswrapper[4881]: E0121 10:57:59.310989 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 10:57:59 crc kubenswrapper[4881]: I0121 10:57:59.311305 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dtv4t" Jan 21 10:57:59 crc kubenswrapper[4881]: E0121 10:57:59.311452 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-dtv4t" podUID="3552adbd-011f-4552-9e69-233b92c554c8" Jan 21 10:57:59 crc kubenswrapper[4881]: I0121 10:57:59.320336 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14b423e6f144bf9e1e30c3f0a074ca9edd3c1e2cfa1674ea5b047a6a8fb01d92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c179ea565cc19d51f2becaf302623c94acbd70749ace24c754af51e3f0101033\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:59Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:59 crc kubenswrapper[4881]: I0121 10:57:59.334740 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 21 10:57:59 crc kubenswrapper[4881]: I0121 10:57:59.334823 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:59 crc kubenswrapper[4881]: I0121 10:57:59.334838 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:59 crc kubenswrapper[4881]: I0121 10:57:59.334857 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:59 crc kubenswrapper[4881]: I0121 10:57:59.334868 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:59Z","lastTransitionTime":"2026-01-21T10:57:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:57:59 crc kubenswrapper[4881]: I0121 10:57:59.335097 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:59Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:59 crc kubenswrapper[4881]: I0121 10:57:59.348300 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-v4wxp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c14980d7-1b3b-463b-8f57-f1e1afbd258c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd18bd57e9f0f878f56164dee92c18a4fff62c83f518a96d7db735dcd488e052\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7dffa6cfe62b953df5f9734726e4b93967d3fce7fa5743b1adef693e4f75e48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"s
tarted\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7dffa6cfe62b953df5f9734726e4b93967d3fce7fa5743b1adef693e4f75e48\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ceabfac4703edf7fe55ec5dd41e3fc1736424533c54ae12adcdb1f077e1f756\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ceabfac4703edf7fe55ec5dd41e3fc1736424533c54ae12adcdb1f077e1f756\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a2fff9245b8148b515dd4af52db2ec776cca710b28074e8955162220448d248\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a2fff9245b8148b515dd4af52db2ec776cca710b28074e8955162220448d248\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"
}]},{\\\"containerID\\\":\\\"cri-o://3f6276883835d2a1fbab11df2fd15684b8ee850b6f264f3f194f6de68cc27cf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f6276883835d2a1fbab11df2fd15684b8ee850b6f264f3f194f6de68cc27cf9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c51b231b861d87536aaa44cd0e1018fc15464809b5a3bf30c1d2a25dd9a12a8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c51b231b861d87536aaa44cd0e1018fc15464809b5a3bf30c1d2a25dd9a12a8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13503428fec90dad2c9ce1086fe7c71c7910f145a6e7a8d546791e11002f36f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13503428fec90dad2c9ce1086fe7c71c7910f145a6e7a8d546791e11002f36f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":
\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-v4wxp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:59Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:59 crc kubenswrapper[4881]: I0121 10:57:59.370413 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3687b313-1df2-4274-80db-8c758b51bf2d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5b27143ea6007376f5989ccc3a11947ede80be1b5fac1b738ffbf27fa05c6d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hml99\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7172ca109a870f3c1c8d3af0117700b7b23e7a65c3e841f3a4ef3f445e85270d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/r
ootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hml99\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fb4fr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:59Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:59 crc kubenswrapper[4881]: I0121 10:57:59.395109 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8bb6d97-b3b8-4e31-b704-8e565385ab26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e45d7cde8c26fc00bb1b5ad5cc88c41aff02acf0860856e86f3c3bfdc01e7045\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://599bfaf6bfc412f779f4732cf9f4273382c1f0af20576581264ea4fc63b2ec38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9454db8c9af3187ee06e017ac192668036b1f565305a37de61c3cffe4d94e7db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7fa3136ee876ebb14cd5164f9176c91c60cc04fd7882a3ad0957ce3a36184d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2c1d1496a7f7e5fad6a3d4ade4104e8e0c40fd0ca4722ca5e1ca1fbf0ebf3cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d29c170fdf57f1f4c678e2902443d9ef467ac3157370e997fd5596b4eecfb54e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff735c08dae242cbd531e458695a99bcbe3a5e6c
9753266141b14f67cb0799a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5897125d6a1004cb4f0527359e8fc0328bff6bcc5ac563fdc3d85b094414c563\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T10:57:36Z\\\",\\\"message\\\":\\\"-machine-config-operator/machine-config-operator]} name:Service_openshift-machine-config-operator/machine-config-operator_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.183:9001:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {5b85277d-d9b7-4a68-8e4e-2b80594d9347}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0121 10:57:35.987756 6363 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0121 10:57:35.987768 6363 handler.go:208] Removed *v1.Pod event handler 3\\\\nF0121 10:57:35.988839 6363 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post 
\\\\\\\"https://1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:35Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47c176d51b5558e85897ac8c72c02bb6f152a972d8e941aeb114333f777e22ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuse
s\\\":[{\\\"containerID\\\":\\\"cri-o://db879e2b1a43a0ae350c4777ee6355bf5b97b1d5977e909ee0968c77852519dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db879e2b1a43a0ae350c4777ee6355bf5b97b1d5977e909ee0968c77852519dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bx64f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:57:59Z is after 2025-08-24T17:21:41Z" Jan 21 10:57:59 crc kubenswrapper[4881]: I0121 10:57:59.458087 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:57:59 crc kubenswrapper[4881]: I0121 10:57:59.458138 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:57:59 crc kubenswrapper[4881]: I0121 10:57:59.458148 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:57:59 crc kubenswrapper[4881]: I0121 10:57:59.458168 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:57:59 crc kubenswrapper[4881]: I0121 10:57:59.458181 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:59Z","lastTransitionTime":"2026-01-21T10:57:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 21 10:57:59 crc kubenswrapper[4881]: I0121 10:57:59.563558 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 10:57:59 crc kubenswrapper[4881]: I0121 10:57:59.563621 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 10:57:59 crc kubenswrapper[4881]: I0121 10:57:59.563635 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 10:57:59 crc kubenswrapper[4881]: I0121 10:57:59.563660 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 10:57:59 crc kubenswrapper[4881]: I0121 10:57:59.563670 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:59Z","lastTransitionTime":"2026-01-21T10:57:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 10:57:59 crc kubenswrapper[4881]: I0121 10:57:59.666988 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 10:57:59 crc kubenswrapper[4881]: I0121 10:57:59.667043 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 10:57:59 crc kubenswrapper[4881]: I0121 10:57:59.667054 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 10:57:59 crc kubenswrapper[4881]: I0121 10:57:59.667070 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 10:57:59 crc kubenswrapper[4881]: I0121 10:57:59.667108 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:59Z","lastTransitionTime":"2026-01-21T10:57:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 10:57:59 crc kubenswrapper[4881]: I0121 10:57:59.769923 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 10:57:59 crc kubenswrapper[4881]: I0121 10:57:59.769978 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 10:57:59 crc kubenswrapper[4881]: I0121 10:57:59.769989 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 10:57:59 crc kubenswrapper[4881]: I0121 10:57:59.770010 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 10:57:59 crc kubenswrapper[4881]: I0121 10:57:59.770050 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:59Z","lastTransitionTime":"2026-01-21T10:57:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 10:57:59 crc kubenswrapper[4881]: I0121 10:57:59.873721 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 10:57:59 crc kubenswrapper[4881]: I0121 10:57:59.873774 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 10:57:59 crc kubenswrapper[4881]: I0121 10:57:59.873801 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 10:57:59 crc kubenswrapper[4881]: I0121 10:57:59.873819 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 10:57:59 crc kubenswrapper[4881]: I0121 10:57:59.873830 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:59Z","lastTransitionTime":"2026-01-21T10:57:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 10:57:59 crc kubenswrapper[4881]: I0121 10:57:59.980226 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 10:57:59 crc kubenswrapper[4881]: I0121 10:57:59.980272 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 10:57:59 crc kubenswrapper[4881]: I0121 10:57:59.980282 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 10:57:59 crc kubenswrapper[4881]: I0121 10:57:59.980298 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 10:57:59 crc kubenswrapper[4881]: I0121 10:57:59.980309 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:57:59Z","lastTransitionTime":"2026-01-21T10:57:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 10:58:00 crc kubenswrapper[4881]: I0121 10:58:00.084205 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 10:58:00 crc kubenswrapper[4881]: I0121 10:58:00.084234 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 10:58:00 crc kubenswrapper[4881]: I0121 10:58:00.084242 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 10:58:00 crc kubenswrapper[4881]: I0121 10:58:00.084256 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 10:58:00 crc kubenswrapper[4881]: I0121 10:58:00.084266 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:00Z","lastTransitionTime":"2026-01-21T10:58:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 10:58:00 crc kubenswrapper[4881]: I0121 10:58:00.184209 4881 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 21:54:49.103031506 +0000 UTC
Jan 21 10:58:00 crc kubenswrapper[4881]: I0121 10:58:00.188355 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 10:58:00 crc kubenswrapper[4881]: I0121 10:58:00.188392 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 10:58:00 crc kubenswrapper[4881]: I0121 10:58:00.188428 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 10:58:00 crc kubenswrapper[4881]: I0121 10:58:00.188446 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 10:58:00 crc kubenswrapper[4881]: I0121 10:58:00.188458 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:00Z","lastTransitionTime":"2026-01-21T10:58:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 10:58:00 crc kubenswrapper[4881]: I0121 10:58:00.291110 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 10:58:00 crc kubenswrapper[4881]: I0121 10:58:00.291226 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 10:58:00 crc kubenswrapper[4881]: I0121 10:58:00.291240 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 10:58:00 crc kubenswrapper[4881]: I0121 10:58:00.291265 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 10:58:00 crc kubenswrapper[4881]: I0121 10:58:00.291281 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:00Z","lastTransitionTime":"2026-01-21T10:58:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 10:58:00 crc kubenswrapper[4881]: I0121 10:58:00.310417 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 21 10:58:00 crc kubenswrapper[4881]: E0121 10:58:00.310542 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 21 10:58:00 crc kubenswrapper[4881]: I0121 10:58:00.310726 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 21 10:58:00 crc kubenswrapper[4881]: E0121 10:58:00.310772 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 21 10:58:00 crc kubenswrapper[4881]: I0121 10:58:00.394986 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 10:58:00 crc kubenswrapper[4881]: I0121 10:58:00.395059 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 10:58:00 crc kubenswrapper[4881]: I0121 10:58:00.395077 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 10:58:00 crc kubenswrapper[4881]: I0121 10:58:00.395100 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 10:58:00 crc kubenswrapper[4881]: I0121 10:58:00.395115 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:00Z","lastTransitionTime":"2026-01-21T10:58:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 10:58:00 crc kubenswrapper[4881]: I0121 10:58:00.498557 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 10:58:00 crc kubenswrapper[4881]: I0121 10:58:00.498612 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 10:58:00 crc kubenswrapper[4881]: I0121 10:58:00.498624 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 10:58:00 crc kubenswrapper[4881]: I0121 10:58:00.498644 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 10:58:00 crc kubenswrapper[4881]: I0121 10:58:00.498660 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:00Z","lastTransitionTime":"2026-01-21T10:58:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 10:58:00 crc kubenswrapper[4881]: I0121 10:58:00.602220 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 10:58:00 crc kubenswrapper[4881]: I0121 10:58:00.602263 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 10:58:00 crc kubenswrapper[4881]: I0121 10:58:00.602274 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 10:58:00 crc kubenswrapper[4881]: I0121 10:58:00.602291 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 10:58:00 crc kubenswrapper[4881]: I0121 10:58:00.602302 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:00Z","lastTransitionTime":"2026-01-21T10:58:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 10:58:00 crc kubenswrapper[4881]: I0121 10:58:00.704455 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 10:58:00 crc kubenswrapper[4881]: I0121 10:58:00.704482 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 10:58:00 crc kubenswrapper[4881]: I0121 10:58:00.704492 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 10:58:00 crc kubenswrapper[4881]: I0121 10:58:00.704506 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 10:58:00 crc kubenswrapper[4881]: I0121 10:58:00.704513 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:00Z","lastTransitionTime":"2026-01-21T10:58:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 10:58:00 crc kubenswrapper[4881]: I0121 10:58:00.808218 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 10:58:00 crc kubenswrapper[4881]: I0121 10:58:00.808266 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 10:58:00 crc kubenswrapper[4881]: I0121 10:58:00.808277 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 10:58:00 crc kubenswrapper[4881]: I0121 10:58:00.808296 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 10:58:00 crc kubenswrapper[4881]: I0121 10:58:00.808309 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:00Z","lastTransitionTime":"2026-01-21T10:58:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 10:58:00 crc kubenswrapper[4881]: I0121 10:58:00.911359 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 10:58:00 crc kubenswrapper[4881]: I0121 10:58:00.911430 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 10:58:00 crc kubenswrapper[4881]: I0121 10:58:00.911445 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 10:58:00 crc kubenswrapper[4881]: I0121 10:58:00.911466 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 10:58:00 crc kubenswrapper[4881]: I0121 10:58:00.911483 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:00Z","lastTransitionTime":"2026-01-21T10:58:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 10:58:01 crc kubenswrapper[4881]: I0121 10:58:01.014549 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 10:58:01 crc kubenswrapper[4881]: I0121 10:58:01.014594 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 10:58:01 crc kubenswrapper[4881]: I0121 10:58:01.014604 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 10:58:01 crc kubenswrapper[4881]: I0121 10:58:01.014622 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 10:58:01 crc kubenswrapper[4881]: I0121 10:58:01.014632 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:01Z","lastTransitionTime":"2026-01-21T10:58:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 10:58:01 crc kubenswrapper[4881]: I0121 10:58:01.117165 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 10:58:01 crc kubenswrapper[4881]: I0121 10:58:01.117209 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 10:58:01 crc kubenswrapper[4881]: I0121 10:58:01.117222 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 10:58:01 crc kubenswrapper[4881]: I0121 10:58:01.117239 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 10:58:01 crc kubenswrapper[4881]: I0121 10:58:01.117249 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:01Z","lastTransitionTime":"2026-01-21T10:58:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 10:58:01 crc kubenswrapper[4881]: I0121 10:58:01.122225 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bx64f_e8bb6d97-b3b8-4e31-b704-8e565385ab26/ovnkube-controller/2.log"
Jan 21 10:58:01 crc kubenswrapper[4881]: I0121 10:58:01.123007 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bx64f_e8bb6d97-b3b8-4e31-b704-8e565385ab26/ovnkube-controller/1.log"
Jan 21 10:58:01 crc kubenswrapper[4881]: I0121 10:58:01.126075 4881 generic.go:334] "Generic (PLEG): container finished" podID="e8bb6d97-b3b8-4e31-b704-8e565385ab26" containerID="ff735c08dae242cbd531e458695a99bcbe3a5e6c9753266141b14f67cb0799a2" exitCode=1
Jan 21 10:58:01 crc kubenswrapper[4881]: I0121 10:58:01.126127 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" event={"ID":"e8bb6d97-b3b8-4e31-b704-8e565385ab26","Type":"ContainerDied","Data":"ff735c08dae242cbd531e458695a99bcbe3a5e6c9753266141b14f67cb0799a2"}
Jan 21 10:58:01 crc kubenswrapper[4881]: I0121 10:58:01.126179 4881 scope.go:117] "RemoveContainer" containerID="5897125d6a1004cb4f0527359e8fc0328bff6bcc5ac563fdc3d85b094414c563"
Jan 21 10:58:01 crc kubenswrapper[4881]: I0121 10:58:01.127103 4881 scope.go:117] "RemoveContainer" containerID="ff735c08dae242cbd531e458695a99bcbe3a5e6c9753266141b14f67cb0799a2"
Jan 21 10:58:01 crc kubenswrapper[4881]: E0121 10:58:01.127297 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-bx64f_openshift-ovn-kubernetes(e8bb6d97-b3b8-4e31-b704-8e565385ab26)\"" pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" podUID="e8bb6d97-b3b8-4e31-b704-8e565385ab26"
Jan 21 10:58:01 crc kubenswrapper[4881]: I0121 10:58:01.145327 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13d0f0c4-fa31-44ba-bc94-c0a80fc1b2df\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17ef83fedf9cc77cf73fdd00486ec9b0b04712a60a5448402754a44ad46da439\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36430b9d5b01b4a6f3b9e7b58bfbec0c258f34847b321cb45bc3b23f84cf09fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3eba9cbb70fbd88687c81b18ad50f8386f836bf2fa2c8f9e1c503a20af985416\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b7d6b79713c6f4718939d3679f1ba6e237045d653762b6de122ebecdfabbe35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b7d6b79713c6f4718939d3679f1ba6e237045d653762b6de122ebecdfabbe35\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:56:54Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:01Z is after 2025-08-24T17:21:41Z"
Jan 21 10:58:01 crc kubenswrapper[4881]: I0121 10:58:01.161560 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f8092488edbd3b7f307819f2cad06e6d1a6721d8c2fd8fa05a8f7e652949ca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:01Z is after 2025-08-24T17:21:41Z"
Jan 21 10:58:01 crc kubenswrapper[4881]: I0121 10:58:01.174801 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8sptw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f19f480e-331f-42f5-a3b6-fd0c6847b157\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21b25675a85765cfd4abe361687eedfdf5dfffc1c9b83069f14986821acb12d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hjr7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8sptw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:01Z is after 2025-08-24T17:21:41Z"
Jan 21 10:58:01 crc kubenswrapper[4881]: I0121 10:58:01.184394 4881 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 22:51:05.655376468 +0000 UTC
Jan 21 10:58:01 crc kubenswrapper[4881]: I0121 10:58:01.190864 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5da31bf1-60a6-4d73-a425-97fe36cd40ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://945977e9e86c32b08a633de6b962033c71982d3a129ac3e5b2705e19b50d6534\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f32dd767af23c59f021d583a23309b2180db86ae71b4ca4a5f436ac1c77d6e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c47dfa90d18e3e30053b0250c7b986bc877ab6bc3a553060f65468416c8105d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e507b4c3c536bdc63360b1386748657584f739e09973ec33c998ac267ca2766\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://676e764186e37083591f2c779b05dcbf5bc065bde85efade1c25a27a9bf74570\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 10:57:08.940903 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 10:57:08.942374 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2558080694/tls.crt::/tmp/serving-cert-2558080694/tls.key\\\\\\\"\\\\nI0121 10:57:14.590178 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 10:57:14.594387 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 10:57:14.594447 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 10:57:14.594564 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 10:57:14.594575 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 10:57:14.615981 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 10:57:14.616035 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 10:57:14.616045 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 10:57:14.616060 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 10:57:14.616065 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 10:57:14.616071 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 10:57:14.616077 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 10:57:14.616503 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 10:57:14.623960 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T10:56:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b00c4c20b4212a307993f164d3f2d23b9b8bd823111bef7a385ae8811e147f3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://164282bb15c33a383e2fc11a6617ca4008eec1b59e1c5577f7b34b0fcbddc99f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://164282bb15c33a383e2fc11a6617ca4008eec1b59e1c5577f7b34b0fcbddc99f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:56:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:01Z is after 2025-08-24T17:21:41Z"
Jan 21 10:58:01 crc kubenswrapper[4881]: I0121 10:58:01.205565 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:01Z is after 2025-08-24T17:21:41Z"
Jan 21 10:58:01 crc kubenswrapper[4881]: I0121 10:58:01.221107 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 10:58:01 crc kubenswrapper[4881]: I0121 10:58:01.221161 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 10:58:01 crc kubenswrapper[4881]: I0121 10:58:01.221172 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 10:58:01 crc kubenswrapper[4881]: I0121 10:58:01.221193 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 10:58:01 crc kubenswrapper[4881]: I0121 10:58:01.221211 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:01Z","lastTransitionTime":"2026-01-21T10:58:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 10:58:01 crc kubenswrapper[4881]: I0121 10:58:01.222697 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ddaa3aae50a05142125742b3cc13e65433fcadbef5b1e273232cffb660cc700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:01Z is after 2025-08-24T17:21:41Z"
Jan 21 10:58:01 crc kubenswrapper[4881]: I0121 10:58:01.234857 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tjwf8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf4f6fc0-ed4c-47b7-b2bc-8033980781a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9164acd9a91836c25b02986f204dd0302964f6bff6fe077971d37eebd4b560ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57d55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:22Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tjwf8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:01Z is after 2025-08-24T17:21:41Z"
Jan 21 10:58:01 crc kubenswrapper[4881]: I0121 10:58:01.246074 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-dtv4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3552adbd-011f-4552-9e69-233b92c554c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqlps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqlps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-dtv4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:01Z is after 2025-08-24T17:21:41Z"
Jan 21 10:58:01 crc kubenswrapper[4881]: I0121 10:58:01.261036 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"adf42b80-04ba-4164-b847-3b7f8b94816b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5f503293f457017fbc87cd1525eaaf06d41044a37f87127e1398bd50a228ea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d93b0b0bb02293efe7c01a9dbc64ff302a7b9fab07a2fe9bef5d0c2b5e8ac30e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05a4a29ae6a2b0d4acf843ae40de1e262c2149dac28e15697dbcd6d237235f29\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfff7b0c4eae65a4ed6e28b94feef5a5b0c4fb224f9cde2cfd9072727968e754\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:56:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:01Z is after 2025-08-24T17:21:41Z"
Jan 21 10:58:01 crc kubenswrapper[4881]: I0121 10:58:01.275184 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:01Z is after 2025-08-24T17:21:41Z"
Jan 21 10:58:01 crc kubenswrapper[4881]: I0121 10:58:01.288552 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fs42r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09da9e14-f6d5-4346-a4a0-c17711e3b603\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://821c7c539a796f03cdff04f5f89e6adab4d088c53f5c1ad85851862e5409e7eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kt6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fs42r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:01Z is after 2025-08-24T17:21:41Z"
Jan 21 10:58:01 crc kubenswrapper[4881]: I0121 10:58:01.300158 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qgrth" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d379505c-c658-4dd5-b841-40c8443012c6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51a2ec789636052b12e0fdb4e647d7e4f92d1e4b7436933f1529561ffc2021d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57krk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d634cee9f543d3322f8cdc8bc62252096e789383c55d5d448cc53ab990ac9b52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57krk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qgrth\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:01Z is after 2025-08-24T17:21:41Z"
Jan 21 10:58:01 crc kubenswrapper[4881]: I0121 10:58:01.310119 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 21 10:58:01 crc kubenswrapper[4881]: I0121 10:58:01.310115 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dtv4t"
Jan 21 10:58:01 crc kubenswrapper[4881]: E0121 10:58:01.310301 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 21 10:58:01 crc kubenswrapper[4881]: E0121 10:58:01.310468 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dtv4t" podUID="3552adbd-011f-4552-9e69-233b92c554c8"
Jan 21 10:58:01 crc kubenswrapper[4881]: I0121 10:58:01.318062 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14b423e6f144bf9e1e30c3f0a074ca9edd3c1e2cfa1674ea5b047a6a8fb01d92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c179ea565cc19d51f2becaf302623c94acbd70749ace24c754af51e3f0101033\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:01Z is after 2025-08-24T17:21:41Z"
Jan 21 10:58:01 crc kubenswrapper[4881]: I0121 10:58:01.323774 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 21 10:58:01 crc kubenswrapper[4881]: I0121 10:58:01.323868 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:01 crc kubenswrapper[4881]: I0121 10:58:01.323880 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:01 crc kubenswrapper[4881]: I0121 10:58:01.323905 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:01 crc kubenswrapper[4881]: I0121 10:58:01.323923 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:01Z","lastTransitionTime":"2026-01-21T10:58:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:01 crc kubenswrapper[4881]: I0121 10:58:01.335455 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:01Z is after 2025-08-24T17:21:41Z" Jan 21 10:58:01 crc kubenswrapper[4881]: I0121 10:58:01.354208 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-v4wxp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c14980d7-1b3b-463b-8f57-f1e1afbd258c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd18bd57e9f0f878f56164dee92c18a4fff62c83f518a96d7db735dcd488e052\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7dffa6cfe62b953df5f9734726e4b93967d3fce7fa5743b1adef693e4f75e48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"s
tarted\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7dffa6cfe62b953df5f9734726e4b93967d3fce7fa5743b1adef693e4f75e48\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ceabfac4703edf7fe55ec5dd41e3fc1736424533c54ae12adcdb1f077e1f756\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ceabfac4703edf7fe55ec5dd41e3fc1736424533c54ae12adcdb1f077e1f756\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a2fff9245b8148b515dd4af52db2ec776cca710b28074e8955162220448d248\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a2fff9245b8148b515dd4af52db2ec776cca710b28074e8955162220448d248\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"
}]},{\\\"containerID\\\":\\\"cri-o://3f6276883835d2a1fbab11df2fd15684b8ee850b6f264f3f194f6de68cc27cf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f6276883835d2a1fbab11df2fd15684b8ee850b6f264f3f194f6de68cc27cf9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c51b231b861d87536aaa44cd0e1018fc15464809b5a3bf30c1d2a25dd9a12a8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c51b231b861d87536aaa44cd0e1018fc15464809b5a3bf30c1d2a25dd9a12a8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13503428fec90dad2c9ce1086fe7c71c7910f145a6e7a8d546791e11002f36f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13503428fec90dad2c9ce1086fe7c71c7910f145a6e7a8d546791e11002f36f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":
\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-v4wxp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:01Z is after 2025-08-24T17:21:41Z" Jan 21 10:58:01 crc kubenswrapper[4881]: I0121 10:58:01.369963 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3687b313-1df2-4274-80db-8c758b51bf2d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5b27143ea6007376f5989ccc3a11947ede80be1b5fac1b738ffbf27fa05c6d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hml99\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7172ca109a870f3c1c8d3af0117700b7b23e7a65c3e841f3a4ef3f445e85270d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/r
ootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hml99\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fb4fr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:01Z is after 2025-08-24T17:21:41Z" Jan 21 10:58:01 crc kubenswrapper[4881]: I0121 10:58:01.405149 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8bb6d97-b3b8-4e31-b704-8e565385ab26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e45d7cde8c26fc00bb1b5ad5cc88c41aff02acf0860856e86f3c3bfdc01e7045\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://599bfaf6bfc412f779f4732cf9f4273382c1f0af20576581264ea4fc63b2ec38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9454db8c9af3187ee06e017ac192668036b1f565305a37de61c3cffe4d94e7db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7fa3136ee876ebb14cd5164f9176c91c60cc04fd7882a3ad0957ce3a36184d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2c1d1496a7f7e5fad6a3d4ade4104e8e0c40fd0ca4722ca5e1ca1fbf0ebf3cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d29c170fdf57f1f4c678e2902443d9ef467ac3157370e997fd5596b4eecfb54e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff735c08dae242cbd531e458695a99bcbe3a5e6c
9753266141b14f67cb0799a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5897125d6a1004cb4f0527359e8fc0328bff6bcc5ac563fdc3d85b094414c563\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T10:57:36Z\\\",\\\"message\\\":\\\"-machine-config-operator/machine-config-operator]} name:Service_openshift-machine-config-operator/machine-config-operator_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.183:9001:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {5b85277d-d9b7-4a68-8e4e-2b80594d9347}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0121 10:57:35.987756 6363 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0121 10:57:35.987768 6363 handler.go:208] Removed *v1.Pod event handler 3\\\\nF0121 10:57:35.988839 6363 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:35Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff735c08dae242cbd531e458695a99bcbe3a5e6c9753266141b14f67cb0799a2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T10:58:00Z\\\",\\\"message\\\":\\\"work policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:00Z is after 2025-08-24T17:21:41Z]\\\\nI0121 10:58:00.782399 6726 services_controller.go:434] Service openshift-kube-controller-manager/kube-controller-manager retrieved from lister for network=default: \\\\u0026Service{ObjectMeta:{kube-controller-manager openshift-kube-controller-manager 90927ca1-43e2-420d-8485-a35952e82cd9 4812 0 2025-02-23 05:22:57 +0000 UTC \\\\u003cnil\\\\u003e \\\\u003cnil\\\\u003e map[prometheus:kube-controller-manager] map[operator.openshift.io/spec-hash:bb05a56151ce98d11c8554843985ba99e0498dcafd98129435c2d982c5ea4c11 
service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168 service.beta.openshift.io/serving-cert-secret-name:serving-cert service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [] [] []},Spec:ServiceSpec{Ports:[]ServicePort{Service\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47c176d51b5558e85897ac8c72c02bb6f152a972d8e941aeb114333f777e22ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\
\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db879e2b1a43a0ae350c4777ee6355bf5b97b1d5977e909ee0968c77852519dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db879e2b1a43a0ae350c4777ee6355bf5b97b1d5977e909ee0968c77852519dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bx64f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:01Z is after 2025-08-24T17:21:41Z" Jan 21 10:58:01 crc kubenswrapper[4881]: I0121 10:58:01.427352 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:01 crc kubenswrapper[4881]: I0121 10:58:01.427400 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:01 crc kubenswrapper[4881]: I0121 10:58:01.427420 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:01 crc kubenswrapper[4881]: I0121 10:58:01.427438 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:01 crc kubenswrapper[4881]: I0121 10:58:01.427449 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:01Z","lastTransitionTime":"2026-01-21T10:58:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:58:01 crc kubenswrapper[4881]: I0121 10:58:01.530228 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:01 crc kubenswrapper[4881]: I0121 10:58:01.530310 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:01 crc kubenswrapper[4881]: I0121 10:58:01.530323 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:01 crc kubenswrapper[4881]: I0121 10:58:01.530344 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:01 crc kubenswrapper[4881]: I0121 10:58:01.530357 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:01Z","lastTransitionTime":"2026-01-21T10:58:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:01 crc kubenswrapper[4881]: I0121 10:58:01.634750 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:01 crc kubenswrapper[4881]: I0121 10:58:01.634876 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:01 crc kubenswrapper[4881]: I0121 10:58:01.634888 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:01 crc kubenswrapper[4881]: I0121 10:58:01.634913 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:01 crc kubenswrapper[4881]: I0121 10:58:01.634927 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:01Z","lastTransitionTime":"2026-01-21T10:58:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:01 crc kubenswrapper[4881]: I0121 10:58:01.738022 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:01 crc kubenswrapper[4881]: I0121 10:58:01.738075 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:01 crc kubenswrapper[4881]: I0121 10:58:01.738088 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:01 crc kubenswrapper[4881]: I0121 10:58:01.738110 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:01 crc kubenswrapper[4881]: I0121 10:58:01.738122 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:01Z","lastTransitionTime":"2026-01-21T10:58:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:58:01 crc kubenswrapper[4881]: I0121 10:58:01.841118 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:01 crc kubenswrapper[4881]: I0121 10:58:01.841147 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:01 crc kubenswrapper[4881]: I0121 10:58:01.841156 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:01 crc kubenswrapper[4881]: I0121 10:58:01.841168 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:01 crc kubenswrapper[4881]: I0121 10:58:01.841178 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:01Z","lastTransitionTime":"2026-01-21T10:58:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:01 crc kubenswrapper[4881]: I0121 10:58:01.943524 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:01 crc kubenswrapper[4881]: I0121 10:58:01.943584 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:01 crc kubenswrapper[4881]: I0121 10:58:01.943597 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:01 crc kubenswrapper[4881]: I0121 10:58:01.943614 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:01 crc kubenswrapper[4881]: I0121 10:58:01.943626 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:01Z","lastTransitionTime":"2026-01-21T10:58:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:02 crc kubenswrapper[4881]: I0121 10:58:02.046488 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:02 crc kubenswrapper[4881]: I0121 10:58:02.046536 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:02 crc kubenswrapper[4881]: I0121 10:58:02.046546 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:02 crc kubenswrapper[4881]: I0121 10:58:02.046564 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:02 crc kubenswrapper[4881]: I0121 10:58:02.046574 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:02Z","lastTransitionTime":"2026-01-21T10:58:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:58:02 crc kubenswrapper[4881]: I0121 10:58:02.132210 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bx64f_e8bb6d97-b3b8-4e31-b704-8e565385ab26/ovnkube-controller/2.log" Jan 21 10:58:02 crc kubenswrapper[4881]: I0121 10:58:02.149375 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:02 crc kubenswrapper[4881]: I0121 10:58:02.149416 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:02 crc kubenswrapper[4881]: I0121 10:58:02.149425 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:02 crc kubenswrapper[4881]: I0121 10:58:02.149439 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:02 crc kubenswrapper[4881]: I0121 10:58:02.149450 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:02Z","lastTransitionTime":"2026-01-21T10:58:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:02 crc kubenswrapper[4881]: I0121 10:58:02.185245 4881 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 03:20:52.539561661 +0000 UTC Jan 21 10:58:02 crc kubenswrapper[4881]: I0121 10:58:02.253006 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:02 crc kubenswrapper[4881]: I0121 10:58:02.253082 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:02 crc kubenswrapper[4881]: I0121 10:58:02.253093 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:02 crc kubenswrapper[4881]: I0121 10:58:02.253115 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:02 crc kubenswrapper[4881]: I0121 10:58:02.253135 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:02Z","lastTransitionTime":"2026-01-21T10:58:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:02 crc kubenswrapper[4881]: I0121 10:58:02.309714 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 10:58:02 crc kubenswrapper[4881]: I0121 10:58:02.309777 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 10:58:02 crc kubenswrapper[4881]: E0121 10:58:02.309975 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 10:58:02 crc kubenswrapper[4881]: E0121 10:58:02.310180 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 10:58:02 crc kubenswrapper[4881]: I0121 10:58:02.356442 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:02 crc kubenswrapper[4881]: I0121 10:58:02.356500 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:02 crc kubenswrapper[4881]: I0121 10:58:02.356512 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:02 crc kubenswrapper[4881]: I0121 10:58:02.356534 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:02 crc kubenswrapper[4881]: I0121 10:58:02.356551 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:02Z","lastTransitionTime":"2026-01-21T10:58:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:02 crc kubenswrapper[4881]: I0121 10:58:02.459144 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:02 crc kubenswrapper[4881]: I0121 10:58:02.459180 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:02 crc kubenswrapper[4881]: I0121 10:58:02.459192 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:02 crc kubenswrapper[4881]: I0121 10:58:02.459206 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:02 crc kubenswrapper[4881]: I0121 10:58:02.459215 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:02Z","lastTransitionTime":"2026-01-21T10:58:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:58:02 crc kubenswrapper[4881]: I0121 10:58:02.562347 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:02 crc kubenswrapper[4881]: I0121 10:58:02.562403 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:02 crc kubenswrapper[4881]: I0121 10:58:02.562413 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:02 crc kubenswrapper[4881]: I0121 10:58:02.562433 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:02 crc kubenswrapper[4881]: I0121 10:58:02.562446 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:02Z","lastTransitionTime":"2026-01-21T10:58:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:02 crc kubenswrapper[4881]: I0121 10:58:02.665212 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:02 crc kubenswrapper[4881]: I0121 10:58:02.665255 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:02 crc kubenswrapper[4881]: I0121 10:58:02.665266 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:02 crc kubenswrapper[4881]: I0121 10:58:02.665286 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:02 crc kubenswrapper[4881]: I0121 10:58:02.665296 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:02Z","lastTransitionTime":"2026-01-21T10:58:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:02 crc kubenswrapper[4881]: I0121 10:58:02.768284 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:02 crc kubenswrapper[4881]: I0121 10:58:02.768322 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:02 crc kubenswrapper[4881]: I0121 10:58:02.768331 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:02 crc kubenswrapper[4881]: I0121 10:58:02.768347 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:02 crc kubenswrapper[4881]: I0121 10:58:02.768358 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:02Z","lastTransitionTime":"2026-01-21T10:58:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:58:02 crc kubenswrapper[4881]: I0121 10:58:02.871583 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:02 crc kubenswrapper[4881]: I0121 10:58:02.871615 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:02 crc kubenswrapper[4881]: I0121 10:58:02.871624 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:02 crc kubenswrapper[4881]: I0121 10:58:02.871640 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:02 crc kubenswrapper[4881]: I0121 10:58:02.871650 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:02Z","lastTransitionTime":"2026-01-21T10:58:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:02 crc kubenswrapper[4881]: I0121 10:58:02.974810 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:02 crc kubenswrapper[4881]: I0121 10:58:02.974851 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:02 crc kubenswrapper[4881]: I0121 10:58:02.974861 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:02 crc kubenswrapper[4881]: I0121 10:58:02.974878 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:02 crc kubenswrapper[4881]: I0121 10:58:02.974890 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:02Z","lastTransitionTime":"2026-01-21T10:58:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:03 crc kubenswrapper[4881]: I0121 10:58:03.078632 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:03 crc kubenswrapper[4881]: I0121 10:58:03.078677 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:03 crc kubenswrapper[4881]: I0121 10:58:03.078688 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:03 crc kubenswrapper[4881]: I0121 10:58:03.078705 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:03 crc kubenswrapper[4881]: I0121 10:58:03.078716 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:03Z","lastTransitionTime":"2026-01-21T10:58:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:58:03 crc kubenswrapper[4881]: I0121 10:58:03.182060 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:03 crc kubenswrapper[4881]: I0121 10:58:03.182112 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:03 crc kubenswrapper[4881]: I0121 10:58:03.182124 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:03 crc kubenswrapper[4881]: I0121 10:58:03.182145 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:03 crc kubenswrapper[4881]: I0121 10:58:03.182158 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:03Z","lastTransitionTime":"2026-01-21T10:58:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:03 crc kubenswrapper[4881]: I0121 10:58:03.186104 4881 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 03:41:18.601177065 +0000 UTC Jan 21 10:58:03 crc kubenswrapper[4881]: I0121 10:58:03.284976 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:03 crc kubenswrapper[4881]: I0121 10:58:03.285026 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:03 crc kubenswrapper[4881]: I0121 10:58:03.285038 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:03 crc kubenswrapper[4881]: I0121 10:58:03.285057 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:03 crc kubenswrapper[4881]: I0121 10:58:03.285071 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:03Z","lastTransitionTime":"2026-01-21T10:58:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:03 crc kubenswrapper[4881]: I0121 10:58:03.310919 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 10:58:03 crc kubenswrapper[4881]: E0121 10:58:03.311070 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 10:58:03 crc kubenswrapper[4881]: I0121 10:58:03.311568 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-dtv4t" Jan 21 10:58:03 crc kubenswrapper[4881]: E0121 10:58:03.311652 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dtv4t" podUID="3552adbd-011f-4552-9e69-233b92c554c8" Jan 21 10:58:03 crc kubenswrapper[4881]: I0121 10:58:03.327255 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:03Z is after 2025-08-24T17:21:41Z" Jan 21 10:58:03 crc kubenswrapper[4881]: I0121 10:58:03.343098 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fs42r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09da9e14-f6d5-4346-a4a0-c17711e3b603\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://821c7c539a796f03cdff04f5f89e6adab4d088c53f5c1ad85851862e5409e7eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountP
ath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kt6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fs42r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:03Z is after 2025-08-24T17:21:41Z" Jan 21 10:58:03 crc kubenswrapper[4881]: I0121 10:58:03.357256 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qgrth" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d379505c-c658-4dd5-b841-40c8443012c6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51a2ec789636052b12e0fdb4e647d7e4f92d1e4b7436933f1529561ffc2021d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57krk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri
-o://d634cee9f543d3322f8cdc8bc62252096e789383c55d5d448cc53ab990ac9b52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57krk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qgrth\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:03Z is after 2025-08-24T17:21:41Z" Jan 21 10:58:03 crc kubenswrapper[4881]: I0121 10:58:03.371618 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-dtv4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3552adbd-011f-4552-9e69-233b92c554c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqlps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqlps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-dtv4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:03Z is after 2025-08-24T17:21:41Z" Jan 21 10:58:03 crc kubenswrapper[4881]: I0121 10:58:03.386803 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"adf42b80-04ba-4164-b847-3b7f8b94816b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5f503293f457017fbc87cd1525eaaf06d41044a37f87127e1398bd50a228ea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d93b0b0bb02293efe7c01a9dbc64ff302a7b9fab07a2fe9bef5d0c2b5e8ac30e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05a4a29ae6a2b0d4acf843ae40de1e262c2149dac28e15697dbcd6d237235f29\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfff7b0c4eae65a4ed6e28b94feef5a5b0c4fb224f9cde2cfd9072727968e754\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:56:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:03Z is after 2025-08-24T17:21:41Z" Jan 21 10:58:03 crc kubenswrapper[4881]: I0121 10:58:03.388815 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:03 crc kubenswrapper[4881]: I0121 10:58:03.388866 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:03 crc kubenswrapper[4881]: I0121 10:58:03.388883 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:03 crc kubenswrapper[4881]: I0121 10:58:03.388969 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:03 crc kubenswrapper[4881]: I0121 10:58:03.388988 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:03Z","lastTransitionTime":"2026-01-21T10:58:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:58:03 crc kubenswrapper[4881]: I0121 10:58:03.404804 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-v4wxp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c14980d7-1b3b-463b-8f57-f1e1afbd258c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd18bd57e9f0f878f56164dee92c18a4fff62c83f518a96d7db735dcd488e052\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7dffa6cfe62b953df5f9734726e4b93967d3fce7fa5743b1adef693e4f75e48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7dffa6cfe62b953df5f9734726e4b93967d3fce7fa5743b1adef693e4f75e48\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ceabfac4703edf7fe55ec5dd41e3fc1736424533c54ae12adcdb1f077e1f756\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ceabfac4703edf7fe55ec5dd41e3fc1736424533c54ae12adcdb1f077e1f756\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a2fff9245b8148b515dd4af52db2ec776cca710b28074e8955162220448d248\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a2fff9245b8148b515dd4af52db2ec776cca710b28074e8955162220448d248\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f6276883835d2a1fbab11df2fd15684b8ee850b6f264f3f194f6de68cc27cf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f6276883835d2a1fbab11df2fd15684b8ee850b6f264f3f194f6de68cc27cf9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c51b231b861d87536aaa44cd0e1018fc15464809b5a3bf30c1d2a25dd9a12a8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c51b231b861d87536aaa44cd0e1018fc15464809b5a3bf30c1d2a25dd9a12a8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13503428fec90dad2c9ce1086fe7c71c7910f145a6e7a8d546791e11002f36f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13503428fec90dad2c9ce1086fe7c71c7910f145a6e7a8d546791e11002f36f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-v4wxp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:03Z is after 2025-08-24T17:21:41Z" Jan 21 10:58:03 crc kubenswrapper[4881]: I0121 10:58:03.423853 4881 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3687b313-1df2-4274-80db-8c758b51bf2d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5b27143ea6007376f5989ccc3a11947ede80be1b5fac1b738ffbf27fa05c6d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hml99\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7172ca109a870f3c1c8d3af0117700b7b23e7a65c3e841f3a4ef3f445e85270d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hml99\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fb4fr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:03Z is after 2025-08-24T17:21:41Z" Jan 21 
10:58:03 crc kubenswrapper[4881]: I0121 10:58:03.444901 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8bb6d97-b3b8-4e31-b704-8e565385ab26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e45d7cde8c26fc00bb1b5ad5cc88c41aff02acf0860856e86f3c3bfdc01e7045\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://599bfaf6bfc412f779f4732cf9f4273382c1f0af20576581264ea4fc63b2ec38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9454db8c9af3187ee06e017ac19266
8036b1f565305a37de61c3cffe4d94e7db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7fa3136ee876ebb14cd5164f9176c91c60cc04fd7882a3ad0957ce3a36184d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2c1d1496a7f7e5fad6a3d4ade4104e8e0c40fd0ca4722ca5e1ca1fbf0ebf3cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d29c170fdf57f1f4c678e2902443d9ef467ac3157370e997fd5596b4eecfb54e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff735c08dae242cbd531e458695a99bcbe3a5e6c9753266141b14f67cb0799a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5897125d6a1004cb4f0527359e8fc0328bff6bcc5ac563fdc3d85b094414c563\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T10:57:36Z\\\",\\\"message\\\":\\\"-machine-config-operator/machine-config-operator]} name:Service_openshift-machine-config-operator/machine-config-operator_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.183:9001:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {5b85277d-d9b7-4a68-8e4e-2b80594d9347}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0121 10:57:35.987756 6363 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0121 10:57:35.987768 6363 handler.go:208] Removed *v1.Pod event handler 3\\\\nF0121 10:57:35.988839 6363 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post 
\\\\\\\"https://1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:35Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff735c08dae242cbd531e458695a99bcbe3a5e6c9753266141b14f67cb0799a2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T10:58:00Z\\\",\\\"message\\\":\\\"work policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:00Z is after 2025-08-24T17:21:41Z]\\\\nI0121 10:58:00.782399 6726 services_controller.go:434] Service openshift-kube-controller-manager/kube-controller-manager retrieved from lister for network=default: \\\\u0026Service{ObjectMeta:{kube-controller-manager openshift-kube-controller-manager 90927ca1-43e2-420d-8485-a35952e82cd9 4812 0 2025-02-23 05:22:57 +0000 UTC \\\\u003cnil\\\\u003e \\\\u003cnil\\\\u003e map[prometheus:kube-controller-manager] map[operator.openshift.io/spec-hash:bb05a56151ce98d11c8554843985ba99e0498dcafd98129435c2d982c5ea4c11 service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168 service.beta.openshift.io/serving-cert-secret-name:serving-cert service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [] [] 
[]},Spec:ServiceSpec{Ports:[]ServicePort{Service\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47c176d51b5558e85897ac8c72c02bb6f152a972d8e941aeb114333f777e22ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db879e2b1a43a0ae350c4777ee6355bf5b97b1d5977e909ee0968c77852519dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-d
ev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db879e2b1a43a0ae350c4777ee6355bf5b97b1d5977e909ee0968c77852519dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bx64f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:03Z is after 2025-08-24T17:21:41Z" Jan 21 10:58:03 crc kubenswrapper[4881]: I0121 10:58:03.459287 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14b423e6f144bf9e1e30c3f0a074ca9edd3c1e2cfa1674ea5b047a6a8fb01d92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c179ea565cc19d51f2becaf302623c94acbd70749ace24c754af51e3f0101033\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b
154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:03Z is after 2025-08-24T17:21:41Z" Jan 21 10:58:03 crc kubenswrapper[4881]: I0121 10:58:03.475091 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:03Z is after 2025-08-24T17:21:41Z" Jan 21 10:58:03 crc kubenswrapper[4881]: I0121 10:58:03.486494 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8sptw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f19f480e-331f-42f5-a3b6-fd0c6847b157\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21b25675a85765cfd4abe361687eedfdf5dfffc1c9b83069f14986821acb12d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hjr7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8sptw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-21T10:58:03Z is after 2025-08-24T17:21:41Z" Jan 21 10:58:03 crc kubenswrapper[4881]: I0121 10:58:03.491615 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:03 crc kubenswrapper[4881]: I0121 10:58:03.491669 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:03 crc kubenswrapper[4881]: I0121 10:58:03.491679 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:03 crc kubenswrapper[4881]: I0121 10:58:03.491699 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:03 crc kubenswrapper[4881]: I0121 10:58:03.491711 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:03Z","lastTransitionTime":"2026-01-21T10:58:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:03 crc kubenswrapper[4881]: I0121 10:58:03.499621 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13d0f0c4-fa31-44ba-bc94-c0a80fc1b2df\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17ef83fedf9cc77cf73fdd00486ec9b0b04712a60a5448402754a44ad46da439\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36430b9d5b01b4a6f3b9e7b58bfbec0c258f34847b321cb45bc3b23f84cf09fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\"
:\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3eba9cbb70fbd88687c81b18ad50f8386f836bf2fa2c8f9e1c503a20af985416\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b7d6b79713c6f4718939d3679f1ba6e237045d653762b6de122ebecdfabbe35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b7d6b79713c6f4718939d3679f1ba6e237045d653762b6de122ebecdfabbe35\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:56:54Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:03Z is after 2025-08-24T17:21:41Z" Jan 21 10:58:03 crc kubenswrapper[4881]: I0121 10:58:03.513859 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f8092488edbd3b7f307819f2cad06e6d1a6721d8c2fd8fa05a8f7e652949ca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:03Z is after 2025-08-24T17:21:41Z" Jan 21 10:58:03 crc kubenswrapper[4881]: I0121 10:58:03.531230 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:03Z is after 2025-08-24T17:21:41Z" Jan 21 10:58:03 crc kubenswrapper[4881]: I0121 10:58:03.546405 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ddaa3aae50a05142125742b3cc13e65433fcadbef5b1e273232cffb660cc700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:03Z is after 2025-08-24T17:21:41Z" Jan 21 10:58:03 crc kubenswrapper[4881]: I0121 10:58:03.561030 4881 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-image-registry/node-ca-tjwf8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf4f6fc0-ed4c-47b7-b2bc-8033980781a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9164acd9a91836c25b02986f204dd0302964f6bff6fe077971d37eebd4b560ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57d55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:22Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tjwf8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:03Z is after 2025-08-24T17:21:41Z" Jan 21 10:58:03 crc kubenswrapper[4881]: I0121 10:58:03.577434 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5da31bf1-60a6-4d73-a425-97fe36cd40ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://945977e9e86c32b08a633de6b962033c71982d3a129ac3e5b2705e19b50d6534\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f32dd767af23c59f021d583a23309b2180db86ae71b4ca4a5f436ac1c77d6e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c47dfa90d18e3e30053b0250c7b986bc877ab6bc3a553060f65468416c8105d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e507b4c3c536bdc63360b1386748657584f739e09973ec33c998ac267ca2766\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://676e764186e37083591f2c779b05dcbf5bc065bde85efade1c25a27a9bf74570\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 10:57:08.940903 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 10:57:08.942374 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2558080694/tls.crt::/tmp/serving-cert-2558080694/tls.key\\\\\\\"\\\\nI0121 10:57:14.590178 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 10:57:14.594387 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 10:57:14.594447 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 10:57:14.594564 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 10:57:14.594575 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 10:57:14.615981 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 10:57:14.616035 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 10:57:14.616045 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 10:57:14.616060 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 10:57:14.616065 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 10:57:14.616071 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 10:57:14.616077 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 10:57:14.616503 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 10:57:14.623960 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T10:56:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b00c4c20b4212a307993f164d3f2d23b9b8bd823111bef7a385ae8811e147f3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://164282bb15c33a383e2fc11a6617ca4008eec1b59e1c5577f7b34b0fcbddc99f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://164282bb15c33a383e2fc11a6617ca4008eec1b59e1c5577f7b34b0fcbddc99f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:56:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:03Z is after 2025-08-24T17:21:41Z" Jan 21 10:58:03 crc kubenswrapper[4881]: I0121 10:58:03.595100 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:03 crc kubenswrapper[4881]: I0121 10:58:03.595153 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:03 crc kubenswrapper[4881]: I0121 10:58:03.595166 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:03 crc kubenswrapper[4881]: I0121 10:58:03.595190 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:03 crc kubenswrapper[4881]: I0121 10:58:03.595205 4881 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:03Z","lastTransitionTime":"2026-01-21T10:58:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:03 crc kubenswrapper[4881]: I0121 10:58:03.698391 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:03 crc kubenswrapper[4881]: I0121 10:58:03.698484 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:03 crc kubenswrapper[4881]: I0121 10:58:03.698508 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:03 crc kubenswrapper[4881]: I0121 10:58:03.698541 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:03 crc kubenswrapper[4881]: I0121 10:58:03.698567 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:03Z","lastTransitionTime":"2026-01-21T10:58:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:03 crc kubenswrapper[4881]: I0121 10:58:03.801267 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:03 crc kubenswrapper[4881]: I0121 10:58:03.801322 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:03 crc kubenswrapper[4881]: I0121 10:58:03.801331 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:03 crc kubenswrapper[4881]: I0121 10:58:03.801349 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:03 crc kubenswrapper[4881]: I0121 10:58:03.801364 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:03Z","lastTransitionTime":"2026-01-21T10:58:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:58:03 crc kubenswrapper[4881]: I0121 10:58:03.904641 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:03 crc kubenswrapper[4881]: I0121 10:58:03.904696 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:03 crc kubenswrapper[4881]: I0121 10:58:03.904706 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:03 crc kubenswrapper[4881]: I0121 10:58:03.904727 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:03 crc kubenswrapper[4881]: I0121 10:58:03.904740 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:03Z","lastTransitionTime":"2026-01-21T10:58:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:04 crc kubenswrapper[4881]: I0121 10:58:04.007599 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:04 crc kubenswrapper[4881]: I0121 10:58:04.007655 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:04 crc kubenswrapper[4881]: I0121 10:58:04.007666 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:04 crc kubenswrapper[4881]: I0121 10:58:04.007687 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:04 crc kubenswrapper[4881]: I0121 10:58:04.007702 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:04Z","lastTransitionTime":"2026-01-21T10:58:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:04 crc kubenswrapper[4881]: I0121 10:58:04.076838 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3552adbd-011f-4552-9e69-233b92c554c8-metrics-certs\") pod \"network-metrics-daemon-dtv4t\" (UID: \"3552adbd-011f-4552-9e69-233b92c554c8\") " pod="openshift-multus/network-metrics-daemon-dtv4t" Jan 21 10:58:04 crc kubenswrapper[4881]: E0121 10:58:04.077214 4881 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 10:58:04 crc kubenswrapper[4881]: E0121 10:58:04.077305 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3552adbd-011f-4552-9e69-233b92c554c8-metrics-certs podName:3552adbd-011f-4552-9e69-233b92c554c8 nodeName:}" failed. No retries permitted until 2026-01-21 10:58:36.0772816 +0000 UTC m=+103.337238069 (durationBeforeRetry 32s). 
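The "No retries permitted until ... (durationBeforeRetry 32s)" line above is the volume manager's exponential backoff for the failed metrics-certs mount. With a doubling schedule that starts at 500ms and caps at 2m2s (assumed defaults; only the 32s value is in the log), 32s corresponds to the seventh consecutive failure, as this sketch shows:

package main

import (
	"fmt"
	"time"
)

// Assumed backoff parameters; the log only confirms the 32s data point.
const (
	initialDelay = 500 * time.Millisecond
	maxDelay     = 2*time.Minute + 2*time.Second
)

// durationBeforeRetry doubles the delay for each consecutive failure,
// saturating at maxDelay.
func durationBeforeRetry(consecutiveFailures int) time.Duration {
	d := initialDelay
	for i := 1; i < consecutiveFailures; i++ {
		d *= 2
		if d >= maxDelay {
			return maxDelay
		}
	}
	return d
}

func main() {
	for n := 1; n <= 9; n++ {
		fmt.Printf("failure %d: retry in %v\n", n, durationBeforeRetry(n))
	}
	// failure 7: retry in 32s, matching the log entry above.
}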
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/3552adbd-011f-4552-9e69-233b92c554c8-metrics-certs") pod "network-metrics-daemon-dtv4t" (UID: "3552adbd-011f-4552-9e69-233b92c554c8") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 10:58:04 crc kubenswrapper[4881]: I0121 10:58:04.110307 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:04 crc kubenswrapper[4881]: I0121 10:58:04.110368 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:04 crc kubenswrapper[4881]: I0121 10:58:04.110382 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:04 crc kubenswrapper[4881]: I0121 10:58:04.110407 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:04 crc kubenswrapper[4881]: I0121 10:58:04.110422 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:04Z","lastTransitionTime":"2026-01-21T10:58:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:04 crc kubenswrapper[4881]: I0121 10:58:04.187155 4881 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 22:53:28.710416456 +0000 UTC Jan 21 10:58:04 crc kubenswrapper[4881]: I0121 10:58:04.214524 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:04 crc kubenswrapper[4881]: I0121 10:58:04.214626 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:04 crc kubenswrapper[4881]: I0121 10:58:04.214639 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:04 crc kubenswrapper[4881]: I0121 10:58:04.214660 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:04 crc kubenswrapper[4881]: I0121 10:58:04.214674 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:04Z","lastTransitionTime":"2026-01-21T10:58:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:04 crc kubenswrapper[4881]: I0121 10:58:04.310203 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 10:58:04 crc kubenswrapper[4881]: E0121 10:58:04.310425 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 10:58:04 crc kubenswrapper[4881]: I0121 10:58:04.310523 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 10:58:04 crc kubenswrapper[4881]: E0121 10:58:04.310676 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 10:58:04 crc kubenswrapper[4881]: I0121 10:58:04.318362 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:04 crc kubenswrapper[4881]: I0121 10:58:04.318402 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:04 crc kubenswrapper[4881]: I0121 10:58:04.318415 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:04 crc kubenswrapper[4881]: I0121 10:58:04.318436 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:04 crc kubenswrapper[4881]: I0121 10:58:04.318448 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:04Z","lastTransitionTime":"2026-01-21T10:58:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:04 crc kubenswrapper[4881]: I0121 10:58:04.421479 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:04 crc kubenswrapper[4881]: I0121 10:58:04.421583 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:04 crc kubenswrapper[4881]: I0121 10:58:04.421593 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:04 crc kubenswrapper[4881]: I0121 10:58:04.421608 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:04 crc kubenswrapper[4881]: I0121 10:58:04.421617 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:04Z","lastTransitionTime":"2026-01-21T10:58:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:58:04 crc kubenswrapper[4881]: I0121 10:58:04.524389 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:04 crc kubenswrapper[4881]: I0121 10:58:04.524444 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:04 crc kubenswrapper[4881]: I0121 10:58:04.524457 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:04 crc kubenswrapper[4881]: I0121 10:58:04.524475 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:04 crc kubenswrapper[4881]: I0121 10:58:04.524485 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:04Z","lastTransitionTime":"2026-01-21T10:58:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:04 crc kubenswrapper[4881]: I0121 10:58:04.627670 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:04 crc kubenswrapper[4881]: I0121 10:58:04.627726 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:04 crc kubenswrapper[4881]: I0121 10:58:04.627739 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:04 crc kubenswrapper[4881]: I0121 10:58:04.627762 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:04 crc kubenswrapper[4881]: I0121 10:58:04.627775 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:04Z","lastTransitionTime":"2026-01-21T10:58:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:04 crc kubenswrapper[4881]: I0121 10:58:04.730926 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:04 crc kubenswrapper[4881]: I0121 10:58:04.730964 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:04 crc kubenswrapper[4881]: I0121 10:58:04.730976 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:04 crc kubenswrapper[4881]: I0121 10:58:04.730992 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:04 crc kubenswrapper[4881]: I0121 10:58:04.731004 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:04Z","lastTransitionTime":"2026-01-21T10:58:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
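The NetworkReady=false reason repeated in every one of these heartbeats reduces to a filesystem fact: /etc/kubernetes/cni/net.d/ contains no network configuration yet, and it stays empty until ovnkube-node (blocked above by the expired webhook certificate) writes one. A minimal sketch of such a check; the accepted extensions are an assumption about typical CNI config loaders:

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// hasCNIConfig reports whether dir holds any CNI network config file.
// The extension list is assumed, not quoted from kubelet or CRI-O source.
func hasCNIConfig(dir string) (bool, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return false, err
	}
	for _, e := range entries {
		if e.IsDir() {
			continue
		}
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			return true, nil
		}
	}
	return false, nil
}

func main() {
	ok, err := hasCNIConfig("/etc/kubernetes/cni/net.d") // directory named in the log
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Println("CNI config present:", ok)
	// false here is exactly the state that keeps the Ready condition False
	// with reason KubeletNotReady in the entries above.
}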
Has your network provider started?"} Jan 21 10:58:04 crc kubenswrapper[4881]: I0121 10:58:04.834804 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:04 crc kubenswrapper[4881]: I0121 10:58:04.834860 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:04 crc kubenswrapper[4881]: I0121 10:58:04.834873 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:04 crc kubenswrapper[4881]: I0121 10:58:04.834893 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:04 crc kubenswrapper[4881]: I0121 10:58:04.834906 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:04Z","lastTransitionTime":"2026-01-21T10:58:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:04 crc kubenswrapper[4881]: I0121 10:58:04.937770 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:04 crc kubenswrapper[4881]: I0121 10:58:04.937839 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:04 crc kubenswrapper[4881]: I0121 10:58:04.937851 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:04 crc kubenswrapper[4881]: I0121 10:58:04.937871 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:04 crc kubenswrapper[4881]: I0121 10:58:04.937883 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:04Z","lastTransitionTime":"2026-01-21T10:58:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:05 crc kubenswrapper[4881]: I0121 10:58:05.040734 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:05 crc kubenswrapper[4881]: I0121 10:58:05.040778 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:05 crc kubenswrapper[4881]: I0121 10:58:05.040821 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:05 crc kubenswrapper[4881]: I0121 10:58:05.040837 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:05 crc kubenswrapper[4881]: I0121 10:58:05.040848 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:05Z","lastTransitionTime":"2026-01-21T10:58:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:58:05 crc kubenswrapper[4881]: I0121 10:58:05.143854 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:05 crc kubenswrapper[4881]: I0121 10:58:05.143909 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:05 crc kubenswrapper[4881]: I0121 10:58:05.143919 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:05 crc kubenswrapper[4881]: I0121 10:58:05.143943 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:05 crc kubenswrapper[4881]: I0121 10:58:05.143955 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:05Z","lastTransitionTime":"2026-01-21T10:58:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:05 crc kubenswrapper[4881]: I0121 10:58:05.187576 4881 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 21:54:54.862743303 +0000 UTC Jan 21 10:58:05 crc kubenswrapper[4881]: I0121 10:58:05.247005 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:05 crc kubenswrapper[4881]: I0121 10:58:05.247051 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:05 crc kubenswrapper[4881]: I0121 10:58:05.247061 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:05 crc kubenswrapper[4881]: I0121 10:58:05.247078 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:05 crc kubenswrapper[4881]: I0121 10:58:05.247087 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:05Z","lastTransitionTime":"2026-01-21T10:58:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:05 crc kubenswrapper[4881]: I0121 10:58:05.310266 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 10:58:05 crc kubenswrapper[4881]: I0121 10:58:05.310343 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dtv4t" Jan 21 10:58:05 crc kubenswrapper[4881]: E0121 10:58:05.310477 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 10:58:05 crc kubenswrapper[4881]: E0121 10:58:05.310612 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dtv4t" podUID="3552adbd-011f-4552-9e69-233b92c554c8" Jan 21 10:58:05 crc kubenswrapper[4881]: I0121 10:58:05.351380 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:05 crc kubenswrapper[4881]: I0121 10:58:05.351458 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:05 crc kubenswrapper[4881]: I0121 10:58:05.351476 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:05 crc kubenswrapper[4881]: I0121 10:58:05.351502 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:05 crc kubenswrapper[4881]: I0121 10:58:05.351518 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:05Z","lastTransitionTime":"2026-01-21T10:58:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:05 crc kubenswrapper[4881]: I0121 10:58:05.454292 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:05 crc kubenswrapper[4881]: I0121 10:58:05.454335 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:05 crc kubenswrapper[4881]: I0121 10:58:05.454347 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:05 crc kubenswrapper[4881]: I0121 10:58:05.454363 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:05 crc kubenswrapper[4881]: I0121 10:58:05.454375 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:05Z","lastTransitionTime":"2026-01-21T10:58:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:58:05 crc kubenswrapper[4881]: I0121 10:58:05.556842 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:05 crc kubenswrapper[4881]: I0121 10:58:05.556912 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:05 crc kubenswrapper[4881]: I0121 10:58:05.556933 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:05 crc kubenswrapper[4881]: I0121 10:58:05.556963 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:05 crc kubenswrapper[4881]: I0121 10:58:05.556980 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:05Z","lastTransitionTime":"2026-01-21T10:58:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:05 crc kubenswrapper[4881]: I0121 10:58:05.560881 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:05 crc kubenswrapper[4881]: I0121 10:58:05.560934 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:05 crc kubenswrapper[4881]: I0121 10:58:05.560943 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:05 crc kubenswrapper[4881]: I0121 10:58:05.560961 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:05 crc kubenswrapper[4881]: I0121 10:58:05.560979 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:05Z","lastTransitionTime":"2026-01-21T10:58:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:58:05 crc kubenswrapper[4881]: E0121 10:58:05.578277 4881 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:58:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:58:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:58:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:58:05Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:58:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:58:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:58:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:58:05Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"26a8a75a-20da-43b0-891d-353287c7b817\\\",\\\"systemUUID\\\":\\\"5fb73d3d-5879-4958-af84-1cb776cbe5bd\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:05Z is after 2025-08-24T17:21:41Z" Jan 21 10:58:05 crc kubenswrapper[4881]: I0121 10:58:05.582002 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:05 crc kubenswrapper[4881]: I0121 10:58:05.582358 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 10:58:05 crc kubenswrapper[4881]: I0121 10:58:05.582469 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:05 crc kubenswrapper[4881]: I0121 10:58:05.582584 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:05 crc kubenswrapper[4881]: I0121 10:58:05.582686 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:05Z","lastTransitionTime":"2026-01-21T10:58:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:05 crc kubenswrapper[4881]: E0121 10:58:05.596805 4881 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:58:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:58:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:58:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:58:05Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:58:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:58:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:58:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:58:05Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"26a8a75a-20da-43b0-891d-353287c7b817\\\",\\\"systemUUID\\\":\\\"5fb73d3d-5879-4958-af84-1cb776cbe5bd\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:05Z is after 2025-08-24T17:21:41Z" Jan 21 10:58:05 crc kubenswrapper[4881]: I0121 10:58:05.603129 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:05 crc kubenswrapper[4881]: I0121 10:58:05.603178 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 10:58:05 crc kubenswrapper[4881]: I0121 10:58:05.603194 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:05 crc kubenswrapper[4881]: I0121 10:58:05.603215 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:05 crc kubenswrapper[4881]: I0121 10:58:05.603235 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:05Z","lastTransitionTime":"2026-01-21T10:58:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:05 crc kubenswrapper[4881]: E0121 10:58:05.623232 4881 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:58:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:58:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:58:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:58:05Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:58:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:58:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:58:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:58:05Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"26a8a75a-20da-43b0-891d-353287c7b817\\\",\\\"systemUUID\\\":\\\"5fb73d3d-5879-4958-af84-1cb776cbe5bd\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:05Z is after 2025-08-24T17:21:41Z" Jan 21 10:58:05 crc kubenswrapper[4881]: I0121 10:58:05.628323 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:05 crc kubenswrapper[4881]: I0121 10:58:05.628392 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 10:58:05 crc kubenswrapper[4881]: I0121 10:58:05.628406 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:05 crc kubenswrapper[4881]: I0121 10:58:05.628425 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:05 crc kubenswrapper[4881]: I0121 10:58:05.628438 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:05Z","lastTransitionTime":"2026-01-21T10:58:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:05 crc kubenswrapper[4881]: E0121 10:58:05.646352 4881 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:58:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:58:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:58:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:58:05Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:58:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:58:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:58:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:58:05Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"26a8a75a-20da-43b0-891d-353287c7b817\\\",\\\"systemUUID\\\":\\\"5fb73d3d-5879-4958-af84-1cb776cbe5bd\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:05Z is after 2025-08-24T17:21:41Z" Jan 21 10:58:05 crc kubenswrapper[4881]: I0121 10:58:05.651111 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:05 crc kubenswrapper[4881]: I0121 10:58:05.651161 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 10:58:05 crc kubenswrapper[4881]: I0121 10:58:05.651172 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:05 crc kubenswrapper[4881]: I0121 10:58:05.651190 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:05 crc kubenswrapper[4881]: I0121 10:58:05.651201 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:05Z","lastTransitionTime":"2026-01-21T10:58:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:05 crc kubenswrapper[4881]: E0121 10:58:05.667381 4881 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:58:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:58:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:58:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:58:05Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:58:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:58:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:58:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:58:05Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"26a8a75a-20da-43b0-891d-353287c7b817\\\",\\\"systemUUID\\\":\\\"5fb73d3d-5879-4958-af84-1cb776cbe5bd\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:05Z is after 2025-08-24T17:21:41Z" Jan 21 10:58:05 crc kubenswrapper[4881]: E0121 10:58:05.667978 4881 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 21 10:58:05 crc kubenswrapper[4881]: I0121 10:58:05.670015 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 21 10:58:05 crc kubenswrapper[4881]: I0121 10:58:05.670041 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:05 crc kubenswrapper[4881]: I0121 10:58:05.670070 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:05 crc kubenswrapper[4881]: I0121 10:58:05.670086 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:05 crc kubenswrapper[4881]: I0121 10:58:05.670096 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:05Z","lastTransitionTime":"2026-01-21T10:58:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:05 crc kubenswrapper[4881]: I0121 10:58:05.774057 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:05 crc kubenswrapper[4881]: I0121 10:58:05.774104 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:05 crc kubenswrapper[4881]: I0121 10:58:05.774115 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:05 crc kubenswrapper[4881]: I0121 10:58:05.774130 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:05 crc kubenswrapper[4881]: I0121 10:58:05.774142 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:05Z","lastTransitionTime":"2026-01-21T10:58:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:05 crc kubenswrapper[4881]: I0121 10:58:05.876608 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:05 crc kubenswrapper[4881]: I0121 10:58:05.876667 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:05 crc kubenswrapper[4881]: I0121 10:58:05.876681 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:05 crc kubenswrapper[4881]: I0121 10:58:05.876703 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:05 crc kubenswrapper[4881]: I0121 10:58:05.876716 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:05Z","lastTransitionTime":"2026-01-21T10:58:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:58:05 crc kubenswrapper[4881]: I0121 10:58:05.980525 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:05 crc kubenswrapper[4881]: I0121 10:58:05.980592 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:05 crc kubenswrapper[4881]: I0121 10:58:05.980607 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:05 crc kubenswrapper[4881]: I0121 10:58:05.980629 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:05 crc kubenswrapper[4881]: I0121 10:58:05.980660 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:05Z","lastTransitionTime":"2026-01-21T10:58:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:06 crc kubenswrapper[4881]: I0121 10:58:06.084452 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:06 crc kubenswrapper[4881]: I0121 10:58:06.084504 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:06 crc kubenswrapper[4881]: I0121 10:58:06.084515 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:06 crc kubenswrapper[4881]: I0121 10:58:06.084537 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:06 crc kubenswrapper[4881]: I0121 10:58:06.084562 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:06Z","lastTransitionTime":"2026-01-21T10:58:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:06 crc kubenswrapper[4881]: I0121 10:58:06.187568 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:06 crc kubenswrapper[4881]: I0121 10:58:06.187625 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:06 crc kubenswrapper[4881]: I0121 10:58:06.187635 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:06 crc kubenswrapper[4881]: I0121 10:58:06.187659 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:06 crc kubenswrapper[4881]: I0121 10:58:06.187670 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:06Z","lastTransitionTime":"2026-01-21T10:58:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:58:06 crc kubenswrapper[4881]: I0121 10:58:06.187732 4881 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 18:11:22.371403159 +0000 UTC Jan 21 10:58:06 crc kubenswrapper[4881]: I0121 10:58:06.292203 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:06 crc kubenswrapper[4881]: I0121 10:58:06.292248 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:06 crc kubenswrapper[4881]: I0121 10:58:06.292263 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:06 crc kubenswrapper[4881]: I0121 10:58:06.292285 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:06 crc kubenswrapper[4881]: I0121 10:58:06.292305 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:06Z","lastTransitionTime":"2026-01-21T10:58:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:06 crc kubenswrapper[4881]: I0121 10:58:06.309877 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 10:58:06 crc kubenswrapper[4881]: E0121 10:58:06.310038 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 10:58:06 crc kubenswrapper[4881]: I0121 10:58:06.310320 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 10:58:06 crc kubenswrapper[4881]: E0121 10:58:06.310410 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 10:58:06 crc kubenswrapper[4881]: I0121 10:58:06.395114 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:06 crc kubenswrapper[4881]: I0121 10:58:06.395159 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:06 crc kubenswrapper[4881]: I0121 10:58:06.395220 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:06 crc kubenswrapper[4881]: I0121 10:58:06.395241 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:06 crc kubenswrapper[4881]: I0121 10:58:06.395253 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:06Z","lastTransitionTime":"2026-01-21T10:58:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:06 crc kubenswrapper[4881]: I0121 10:58:06.497894 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:06 crc kubenswrapper[4881]: I0121 10:58:06.497963 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:06 crc kubenswrapper[4881]: I0121 10:58:06.497978 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:06 crc kubenswrapper[4881]: I0121 10:58:06.497999 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:06 crc kubenswrapper[4881]: I0121 10:58:06.498013 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:06Z","lastTransitionTime":"2026-01-21T10:58:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:58:06 crc kubenswrapper[4881]: I0121 10:58:06.601684 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:06 crc kubenswrapper[4881]: I0121 10:58:06.601759 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:06 crc kubenswrapper[4881]: I0121 10:58:06.601772 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:06 crc kubenswrapper[4881]: I0121 10:58:06.601804 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:06 crc kubenswrapper[4881]: I0121 10:58:06.601817 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:06Z","lastTransitionTime":"2026-01-21T10:58:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:06 crc kubenswrapper[4881]: I0121 10:58:06.706208 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:06 crc kubenswrapper[4881]: I0121 10:58:06.706303 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:06 crc kubenswrapper[4881]: I0121 10:58:06.706324 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:06 crc kubenswrapper[4881]: I0121 10:58:06.706356 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:06 crc kubenswrapper[4881]: I0121 10:58:06.706377 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:06Z","lastTransitionTime":"2026-01-21T10:58:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:06 crc kubenswrapper[4881]: I0121 10:58:06.809140 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:06 crc kubenswrapper[4881]: I0121 10:58:06.809195 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:06 crc kubenswrapper[4881]: I0121 10:58:06.809211 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:06 crc kubenswrapper[4881]: I0121 10:58:06.809232 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:06 crc kubenswrapper[4881]: I0121 10:58:06.809244 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:06Z","lastTransitionTime":"2026-01-21T10:58:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:58:06 crc kubenswrapper[4881]: I0121 10:58:06.914886 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:06 crc kubenswrapper[4881]: I0121 10:58:06.914942 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:06 crc kubenswrapper[4881]: I0121 10:58:06.914952 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:06 crc kubenswrapper[4881]: I0121 10:58:06.914974 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:06 crc kubenswrapper[4881]: I0121 10:58:06.914989 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:06Z","lastTransitionTime":"2026-01-21T10:58:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:07 crc kubenswrapper[4881]: I0121 10:58:07.018122 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:07 crc kubenswrapper[4881]: I0121 10:58:07.018183 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:07 crc kubenswrapper[4881]: I0121 10:58:07.018261 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:07 crc kubenswrapper[4881]: I0121 10:58:07.018285 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:07 crc kubenswrapper[4881]: I0121 10:58:07.018300 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:07Z","lastTransitionTime":"2026-01-21T10:58:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:07 crc kubenswrapper[4881]: I0121 10:58:07.121426 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:07 crc kubenswrapper[4881]: I0121 10:58:07.121481 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:07 crc kubenswrapper[4881]: I0121 10:58:07.121496 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:07 crc kubenswrapper[4881]: I0121 10:58:07.121518 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:07 crc kubenswrapper[4881]: I0121 10:58:07.121533 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:07Z","lastTransitionTime":"2026-01-21T10:58:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:58:07 crc kubenswrapper[4881]: I0121 10:58:07.188417 4881 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 15:13:35.991515569 +0000 UTC Jan 21 10:58:07 crc kubenswrapper[4881]: I0121 10:58:07.224120 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:07 crc kubenswrapper[4881]: I0121 10:58:07.224180 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:07 crc kubenswrapper[4881]: I0121 10:58:07.224198 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:07 crc kubenswrapper[4881]: I0121 10:58:07.224226 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:07 crc kubenswrapper[4881]: I0121 10:58:07.224248 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:07Z","lastTransitionTime":"2026-01-21T10:58:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:07 crc kubenswrapper[4881]: I0121 10:58:07.309747 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dtv4t" Jan 21 10:58:07 crc kubenswrapper[4881]: I0121 10:58:07.309871 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 10:58:07 crc kubenswrapper[4881]: E0121 10:58:07.309996 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dtv4t" podUID="3552adbd-011f-4552-9e69-233b92c554c8" Jan 21 10:58:07 crc kubenswrapper[4881]: E0121 10:58:07.310107 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 10:58:07 crc kubenswrapper[4881]: I0121 10:58:07.327076 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:07 crc kubenswrapper[4881]: I0121 10:58:07.327113 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:07 crc kubenswrapper[4881]: I0121 10:58:07.327123 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:07 crc kubenswrapper[4881]: I0121 10:58:07.327136 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:07 crc kubenswrapper[4881]: I0121 10:58:07.327149 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:07Z","lastTransitionTime":"2026-01-21T10:58:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:07 crc kubenswrapper[4881]: I0121 10:58:07.430898 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:07 crc kubenswrapper[4881]: I0121 10:58:07.430960 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:07 crc kubenswrapper[4881]: I0121 10:58:07.430976 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:07 crc kubenswrapper[4881]: I0121 10:58:07.431005 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:07 crc kubenswrapper[4881]: I0121 10:58:07.431019 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:07Z","lastTransitionTime":"2026-01-21T10:58:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:58:07 crc kubenswrapper[4881]: I0121 10:58:07.534595 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:07 crc kubenswrapper[4881]: I0121 10:58:07.534648 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:07 crc kubenswrapper[4881]: I0121 10:58:07.534667 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:07 crc kubenswrapper[4881]: I0121 10:58:07.534687 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:07 crc kubenswrapper[4881]: I0121 10:58:07.534702 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:07Z","lastTransitionTime":"2026-01-21T10:58:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:07 crc kubenswrapper[4881]: I0121 10:58:07.638741 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:07 crc kubenswrapper[4881]: I0121 10:58:07.638807 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:07 crc kubenswrapper[4881]: I0121 10:58:07.638817 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:07 crc kubenswrapper[4881]: I0121 10:58:07.638836 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:07 crc kubenswrapper[4881]: I0121 10:58:07.638850 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:07Z","lastTransitionTime":"2026-01-21T10:58:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:07 crc kubenswrapper[4881]: I0121 10:58:07.743100 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:07 crc kubenswrapper[4881]: I0121 10:58:07.743149 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:07 crc kubenswrapper[4881]: I0121 10:58:07.743160 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:07 crc kubenswrapper[4881]: I0121 10:58:07.743180 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:07 crc kubenswrapper[4881]: I0121 10:58:07.743193 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:07Z","lastTransitionTime":"2026-01-21T10:58:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:58:07 crc kubenswrapper[4881]: I0121 10:58:07.845984 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:07 crc kubenswrapper[4881]: I0121 10:58:07.846056 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:07 crc kubenswrapper[4881]: I0121 10:58:07.846069 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:07 crc kubenswrapper[4881]: I0121 10:58:07.846094 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:07 crc kubenswrapper[4881]: I0121 10:58:07.846109 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:07Z","lastTransitionTime":"2026-01-21T10:58:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:07 crc kubenswrapper[4881]: I0121 10:58:07.949026 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:07 crc kubenswrapper[4881]: I0121 10:58:07.949072 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:07 crc kubenswrapper[4881]: I0121 10:58:07.949084 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:07 crc kubenswrapper[4881]: I0121 10:58:07.949104 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:07 crc kubenswrapper[4881]: I0121 10:58:07.949119 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:07Z","lastTransitionTime":"2026-01-21T10:58:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:08 crc kubenswrapper[4881]: I0121 10:58:08.052920 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:08 crc kubenswrapper[4881]: I0121 10:58:08.052988 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:08 crc kubenswrapper[4881]: I0121 10:58:08.053006 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:08 crc kubenswrapper[4881]: I0121 10:58:08.053046 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:08 crc kubenswrapper[4881]: I0121 10:58:08.053063 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:08Z","lastTransitionTime":"2026-01-21T10:58:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:58:08 crc kubenswrapper[4881]: I0121 10:58:08.156371 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:08 crc kubenswrapper[4881]: I0121 10:58:08.156417 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:08 crc kubenswrapper[4881]: I0121 10:58:08.156432 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:08 crc kubenswrapper[4881]: I0121 10:58:08.156462 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:08 crc kubenswrapper[4881]: I0121 10:58:08.156477 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:08Z","lastTransitionTime":"2026-01-21T10:58:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:08 crc kubenswrapper[4881]: I0121 10:58:08.188730 4881 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 21:55:24.625785758 +0000 UTC Jan 21 10:58:08 crc kubenswrapper[4881]: I0121 10:58:08.259712 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:08 crc kubenswrapper[4881]: I0121 10:58:08.259760 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:08 crc kubenswrapper[4881]: I0121 10:58:08.259774 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:08 crc kubenswrapper[4881]: I0121 10:58:08.259807 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:08 crc kubenswrapper[4881]: I0121 10:58:08.259823 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:08Z","lastTransitionTime":"2026-01-21T10:58:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:08 crc kubenswrapper[4881]: I0121 10:58:08.309658 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 10:58:08 crc kubenswrapper[4881]: E0121 10:58:08.309849 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 10:58:08 crc kubenswrapper[4881]: I0121 10:58:08.310377 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 10:58:08 crc kubenswrapper[4881]: E0121 10:58:08.310883 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 10:58:08 crc kubenswrapper[4881]: I0121 10:58:08.363236 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:08 crc kubenswrapper[4881]: I0121 10:58:08.363313 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:08 crc kubenswrapper[4881]: I0121 10:58:08.363332 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:08 crc kubenswrapper[4881]: I0121 10:58:08.363357 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:08 crc kubenswrapper[4881]: I0121 10:58:08.363373 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:08Z","lastTransitionTime":"2026-01-21T10:58:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:08 crc kubenswrapper[4881]: I0121 10:58:08.468254 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:08 crc kubenswrapper[4881]: I0121 10:58:08.468329 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:08 crc kubenswrapper[4881]: I0121 10:58:08.468354 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:08 crc kubenswrapper[4881]: I0121 10:58:08.468386 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:08 crc kubenswrapper[4881]: I0121 10:58:08.468435 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:08Z","lastTransitionTime":"2026-01-21T10:58:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:58:08 crc kubenswrapper[4881]: I0121 10:58:08.576278 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:08 crc kubenswrapper[4881]: I0121 10:58:08.576346 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:08 crc kubenswrapper[4881]: I0121 10:58:08.576365 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:08 crc kubenswrapper[4881]: I0121 10:58:08.576388 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:08 crc kubenswrapper[4881]: I0121 10:58:08.576404 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:08Z","lastTransitionTime":"2026-01-21T10:58:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:08 crc kubenswrapper[4881]: I0121 10:58:08.681677 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:08 crc kubenswrapper[4881]: I0121 10:58:08.681762 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:08 crc kubenswrapper[4881]: I0121 10:58:08.681772 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:08 crc kubenswrapper[4881]: I0121 10:58:08.681832 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:08 crc kubenswrapper[4881]: I0121 10:58:08.681845 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:08Z","lastTransitionTime":"2026-01-21T10:58:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:08 crc kubenswrapper[4881]: I0121 10:58:08.785254 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:08 crc kubenswrapper[4881]: I0121 10:58:08.785299 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:08 crc kubenswrapper[4881]: I0121 10:58:08.785314 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:08 crc kubenswrapper[4881]: I0121 10:58:08.785330 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:08 crc kubenswrapper[4881]: I0121 10:58:08.785344 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:08Z","lastTransitionTime":"2026-01-21T10:58:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:58:08 crc kubenswrapper[4881]: I0121 10:58:08.890124 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:08 crc kubenswrapper[4881]: I0121 10:58:08.890181 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:08 crc kubenswrapper[4881]: I0121 10:58:08.890199 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:08 crc kubenswrapper[4881]: I0121 10:58:08.890221 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:08 crc kubenswrapper[4881]: I0121 10:58:08.890235 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:08Z","lastTransitionTime":"2026-01-21T10:58:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:08 crc kubenswrapper[4881]: I0121 10:58:08.993533 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:08 crc kubenswrapper[4881]: I0121 10:58:08.993586 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:08 crc kubenswrapper[4881]: I0121 10:58:08.993600 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:08 crc kubenswrapper[4881]: I0121 10:58:08.993619 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:08 crc kubenswrapper[4881]: I0121 10:58:08.993631 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:08Z","lastTransitionTime":"2026-01-21T10:58:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:09 crc kubenswrapper[4881]: I0121 10:58:09.097442 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:09 crc kubenswrapper[4881]: I0121 10:58:09.097543 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:09 crc kubenswrapper[4881]: I0121 10:58:09.097563 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:09 crc kubenswrapper[4881]: I0121 10:58:09.097590 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:09 crc kubenswrapper[4881]: I0121 10:58:09.097609 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:09Z","lastTransitionTime":"2026-01-21T10:58:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:58:09 crc kubenswrapper[4881]: I0121 10:58:09.166830 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-fs42r_09da9e14-f6d5-4346-a4a0-c17711e3b603/kube-multus/0.log" Jan 21 10:58:09 crc kubenswrapper[4881]: I0121 10:58:09.166897 4881 generic.go:334] "Generic (PLEG): container finished" podID="09da9e14-f6d5-4346-a4a0-c17711e3b603" containerID="821c7c539a796f03cdff04f5f89e6adab4d088c53f5c1ad85851862e5409e7eb" exitCode=1 Jan 21 10:58:09 crc kubenswrapper[4881]: I0121 10:58:09.166974 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-fs42r" event={"ID":"09da9e14-f6d5-4346-a4a0-c17711e3b603","Type":"ContainerDied","Data":"821c7c539a796f03cdff04f5f89e6adab4d088c53f5c1ad85851862e5409e7eb"} Jan 21 10:58:09 crc kubenswrapper[4881]: I0121 10:58:09.167754 4881 scope.go:117] "RemoveContainer" containerID="821c7c539a796f03cdff04f5f89e6adab4d088c53f5c1ad85851862e5409e7eb" Jan 21 10:58:09 crc kubenswrapper[4881]: I0121 10:58:09.188586 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"adf42b80-04ba-4164-b847-3b7f8b94816b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5f503293f457017fbc87cd1525eaaf06d41044a37f87127e1398bd50a228ea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d93b0b0bb02293efe7c01a9dbc64ff302a7b9fab07a2fe9bef5d0c2b5e8ac30e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05a4a29ae6a2b0d4acf843ae40de1e262c2149dac28e15697dbcd6d237235f29\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfff7b0c4eae65a4ed6e28b94feef5a5b0c4fb224f9cde2cfd9072727968e754\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:56:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:09Z is after 2025-08-24T17:21:41Z" Jan 21 10:58:09 crc kubenswrapper[4881]: I0121 10:58:09.189577 4881 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 21:12:26.801613395 +0000 UTC Jan 21 10:58:09 crc kubenswrapper[4881]: I0121 10:58:09.200246 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:09 crc kubenswrapper[4881]: I0121 10:58:09.200298 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:09 crc kubenswrapper[4881]: I0121 10:58:09.200310 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:09 crc kubenswrapper[4881]: I0121 10:58:09.200334 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:09 crc kubenswrapper[4881]: I0121 10:58:09.200347 
4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:09Z","lastTransitionTime":"2026-01-21T10:58:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:09 crc kubenswrapper[4881]: I0121 10:58:09.210617 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:09Z is after 2025-08-24T17:21:41Z" Jan 21 10:58:09 crc kubenswrapper[4881]: I0121 10:58:09.230004 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fs42r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09da9e14-f6d5-4346-a4a0-c17711e3b603\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:58:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:58:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://821c7c539a796f03cdff04f5f89e6adab4d088c53f5c1ad85851862e5409e7eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://821c7c539a796f03cdff04f5f89e6adab4d088c53f5c1ad85851862e5409e7eb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T10:58:08Z\\\",\\\"message\\\":\\\"2026-01-21T10:57:23+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_0cb853ce-7a29-40b7-96bf-1304acd74419\\\\n2026-01-21T10:57:23+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_0cb853ce-7a29-40b7-96bf-1304acd74419 to /host/opt/cni/bin/\\\\n2026-01-21T10:57:23Z [verbose] multus-daemon started\\\\n2026-01-21T10:57:23Z [verbose] Readiness Indicator file check\\\\n2026-01-21T10:58:08Z [error] have you checked that your 
default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kt6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fs42r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:09Z is after 2025-08-24T17:21:41Z" Jan 21 10:58:09 crc kubenswrapper[4881]: I0121 10:58:09.247817 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qgrth" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d379505c-c658-4dd5-b841-40c8443012c6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51a2ec789636052b12e0fdb4e647d7e4f92d1e4b7436933f1529561ffc2021d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57krk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d634cee9f543d3322f8cdc8bc62252096e789383c55d5d448cc53ab990ac9b52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57krk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qgrth\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:09Z is after 2025-08-24T17:21:41Z" Jan 21 
10:58:09 crc kubenswrapper[4881]: I0121 10:58:09.260526 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-dtv4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3552adbd-011f-4552-9e69-233b92c554c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqlps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqlps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-dtv4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:09Z is after 2025-08-24T17:21:41Z" Jan 21 10:58:09 crc kubenswrapper[4881]: I0121 10:58:09.277006 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14b423e6f144bf9e1e30c3f0a074ca9edd3c1e2cfa1674ea5b047a6a8fb01d92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c179ea565cc19d51f2becaf302623c94acbd70749ace24c754af51e3f0101033\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:09Z is after 2025-08-24T17:21:41Z" Jan 21 10:58:09 crc kubenswrapper[4881]: I0121 10:58:09.293048 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:09Z is after 2025-08-24T17:21:41Z" Jan 21 10:58:09 crc kubenswrapper[4881]: I0121 10:58:09.303489 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:09 crc kubenswrapper[4881]: I0121 10:58:09.303551 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:09 crc kubenswrapper[4881]: I0121 10:58:09.303566 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:09 crc kubenswrapper[4881]: I0121 10:58:09.303584 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:09 crc kubenswrapper[4881]: I0121 10:58:09.303594 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:09Z","lastTransitionTime":"2026-01-21T10:58:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:58:09 crc kubenswrapper[4881]: I0121 10:58:09.307682 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-v4wxp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c14980d7-1b3b-463b-8f57-f1e1afbd258c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd18bd57e9f0f878f56164dee92c18a4fff62c83f518a96d7db735dcd488e052\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7dffa6cfe62b953df5f9734726e4b93967d3fce7fa5743b1adef693e4f75e48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7dffa6cfe62b953df5f9734726e4b93967d3fce7fa5743b1adef693e4f75e48\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ceabfac4703edf7fe55ec5dd41e3fc1736424533c54ae12adcdb1f077e1f756\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ceabfac4703edf7fe55ec5dd41e3fc1736424533c54ae12adcdb1f077e1f756\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a2fff9245b8148b515dd4af52db2ec776cca710b28074e8955162220448d248\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a2fff9245b8148b515dd4af52db2ec776cca710b28074e8955162220448d248\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f6276883835d2a1fbab11df2fd15684b8ee850b6f264f3f194f6de68cc27cf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f6276883835d2a1fbab11df2fd15684b8ee850b6f264f3f194f6de68cc27cf9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c51b231b861d87536aaa44cd0e1018fc15464809b5a3bf30c1d2a25dd9a12a8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c51b231b861d87536aaa44cd0e1018fc15464809b5a3bf30c1d2a25dd9a12a8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13503428fec90dad2c9ce1086fe7c71c7910f145a6e7a8d546791e11002f36f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13503428fec90dad2c9ce1086fe7c71c7910f145a6e7a8d546791e11002f36f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-v4wxp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:09Z is after 2025-08-24T17:21:41Z" Jan 21 10:58:09 crc kubenswrapper[4881]: I0121 10:58:09.309925 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-dtv4t" Jan 21 10:58:09 crc kubenswrapper[4881]: I0121 10:58:09.309922 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 10:58:09 crc kubenswrapper[4881]: E0121 10:58:09.310057 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dtv4t" podUID="3552adbd-011f-4552-9e69-233b92c554c8" Jan 21 10:58:09 crc kubenswrapper[4881]: E0121 10:58:09.310130 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 10:58:09 crc kubenswrapper[4881]: I0121 10:58:09.324896 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3687b313-1df2-4274-80db-8c758b51bf2d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5b27143ea6007376f5989ccc3a11947ede80be1b5fac1b738ffbf27fa05c6d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hml99\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7172ca109a870f3c1c8d3af0117700b7b23e7a65c3e841f3a4ef3f445e85270d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256
:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hml99\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fb4fr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:09Z is after 2025-08-24T17:21:41Z" Jan 21 10:58:09 crc kubenswrapper[4881]: I0121 10:58:09.389282 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8bb6d97-b3b8-4e31-b704-8e565385ab26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e45d7cde8c26fc00bb1b5ad5cc88c41aff02acf0860856e86f3c3bfdc01e7045\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://599bfaf6bfc412f779f4732cf9f4273382c1f0af20576581264ea4fc63b2ec38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9454db8c9af3187ee06e017ac192668036b1f565305a37de61c3cffe4d94e7db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7fa3136ee876ebb14cd5164f9176c91c60cc04fd7882a3ad0957ce3a36184d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2c1d1496a7f7e5fad6a3d4ade4104e8e0c40fd0ca4722ca5e1ca1fbf0ebf3cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d29c170fdf57f1f4c678e2902443d9ef467ac3157370e997fd5596b4eecfb54e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff735c08dae242cbd531e458695a99bcbe3a5e6c
9753266141b14f67cb0799a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5897125d6a1004cb4f0527359e8fc0328bff6bcc5ac563fdc3d85b094414c563\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T10:57:36Z\\\",\\\"message\\\":\\\"-machine-config-operator/machine-config-operator]} name:Service_openshift-machine-config-operator/machine-config-operator_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.183:9001:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {5b85277d-d9b7-4a68-8e4e-2b80594d9347}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0121 10:57:35.987756 6363 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0121 10:57:35.987768 6363 handler.go:208] Removed *v1.Pod event handler 3\\\\nF0121 10:57:35.988839 6363 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:35Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff735c08dae242cbd531e458695a99bcbe3a5e6c9753266141b14f67cb0799a2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T10:58:00Z\\\",\\\"message\\\":\\\"work policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:00Z is after 2025-08-24T17:21:41Z]\\\\nI0121 10:58:00.782399 6726 services_controller.go:434] Service openshift-kube-controller-manager/kube-controller-manager retrieved from lister for network=default: \\\\u0026Service{ObjectMeta:{kube-controller-manager openshift-kube-controller-manager 90927ca1-43e2-420d-8485-a35952e82cd9 4812 0 2025-02-23 05:22:57 +0000 UTC \\\\u003cnil\\\\u003e \\\\u003cnil\\\\u003e map[prometheus:kube-controller-manager] map[operator.openshift.io/spec-hash:bb05a56151ce98d11c8554843985ba99e0498dcafd98129435c2d982c5ea4c11 
service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168 service.beta.openshift.io/serving-cert-secret-name:serving-cert service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [] [] []},Spec:ServiceSpec{Ports:[]ServicePort{Service\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47c176d51b5558e85897ac8c72c02bb6f152a972d8e941aeb114333f777e22ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\
\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db879e2b1a43a0ae350c4777ee6355bf5b97b1d5977e909ee0968c77852519dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db879e2b1a43a0ae350c4777ee6355bf5b97b1d5977e909ee0968c77852519dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bx64f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:09Z is after 2025-08-24T17:21:41Z" Jan 21 10:58:09 crc kubenswrapper[4881]: I0121 10:58:09.405432 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:09 crc kubenswrapper[4881]: I0121 10:58:09.405469 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:09 crc kubenswrapper[4881]: I0121 10:58:09.405480 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:09 crc kubenswrapper[4881]: I0121 10:58:09.405493 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:09 crc kubenswrapper[4881]: I0121 10:58:09.405503 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:09Z","lastTransitionTime":"2026-01-21T10:58:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:58:09 crc kubenswrapper[4881]: I0121 10:58:09.406449 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13d0f0c4-fa31-44ba-bc94-c0a80fc1b2df\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17ef83fedf9cc77cf73fdd00486ec9b0b04712a60a5448402754a44ad46da439\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36430b9d5b01b4a6f3b9e7b58bfbec0c258f34847b321cb45bc3b23f84cf09fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3eba9cbb70fbd88687c81b18ad50f8386f836bf2fa2c8f9e1c503a20af985416\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"
cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b7d6b79713c6f4718939d3679f1ba6e237045d653762b6de122ebecdfabbe35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b7d6b79713c6f4718939d3679f1ba6e237045d653762b6de122ebecdfabbe35\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:56:54Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:09Z is after 2025-08-24T17:21:41Z" Jan 21 10:58:09 crc kubenswrapper[4881]: I0121 10:58:09.424514 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f8092488edbd3b7f307819f2cad06e6d1a6721d8c2fd8fa05a8f7e652949ca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:09Z is after 2025-08-24T17:21:41Z" Jan 21 10:58:09 crc kubenswrapper[4881]: I0121 10:58:09.436728 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8sptw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f19f480e-331f-42f5-a3b6-fd0c6847b157\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21b25675a85765cfd4abe361687eedfdf5dfffc1c9b83069f14986821acb12d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hjr7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8sptw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:09Z is after 2025-08-24T17:21:41Z" Jan 21 10:58:09 crc kubenswrapper[4881]: I0121 10:58:09.450454 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5da31bf1-60a6-4d73-a425-97fe36cd40ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://945977e9e86c32b08a633de6b962033c71982d3a129ac3e5b2705e19b50d6534\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f32dd767af23c59f021d583a23309b2180db86ae71b4ca4a5f436ac1c77d6e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c47dfa90d18e3e30053b0250c7b986bc877ab6bc3a553060f65468416c8105d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e507b4c3c536bdc63360b1386748657584f739e09973ec33c998ac267ca2766\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://676e764186e37083591f2c779b05dcbf5bc065bde85efade1c25a27a9bf74570\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 10:57:08.940903 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 10:57:08.942374 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2558080694/tls.crt::/tmp/serving-cert-2558080694/tls.key\\\\\\\"\\\\nI0121 10:57:14.590178 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 10:57:14.594387 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 10:57:14.594447 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 10:57:14.594564 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 10:57:14.594575 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 10:57:14.615981 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 10:57:14.616035 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 10:57:14.616045 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 10:57:14.616060 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 10:57:14.616065 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 10:57:14.616071 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 10:57:14.616077 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 10:57:14.616503 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 10:57:14.623960 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T10:56:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b00c4c20b4212a307993f164d3f2d23b9b8bd823111bef7a385ae8811e147f3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://164282bb15c33a383e2fc11a6617ca4008eec1b59e1c5577f7b34b0fcbddc99f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://164282bb15c33a383e2fc11a6617ca4008eec1b59e1c5577f7b34b0fcbddc99f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:56:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:09Z is after 2025-08-24T17:21:41Z" Jan 21 10:58:09 crc kubenswrapper[4881]: I0121 10:58:09.464431 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:09Z is after 2025-08-24T17:21:41Z" Jan 21 10:58:09 crc kubenswrapper[4881]: I0121 10:58:09.475663 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ddaa3aae50a05142125742b3cc13e65433fcadbef5b1e273232cffb660cc700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:09Z is after 2025-08-24T17:21:41Z" Jan 21 10:58:09 crc kubenswrapper[4881]: I0121 10:58:09.486721 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tjwf8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf4f6fc0-ed4c-47b7-b2bc-8033980781a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9164acd9a91836c25b02986f204dd0302964f6bff6fe077971d37eebd4b560ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57d55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:22Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tjwf8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:09Z is after 2025-08-24T17:21:41Z" Jan 21 10:58:09 crc kubenswrapper[4881]: I0121 10:58:09.507988 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:09 crc kubenswrapper[4881]: I0121 10:58:09.508022 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:09 crc kubenswrapper[4881]: I0121 10:58:09.508032 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:09 crc kubenswrapper[4881]: I0121 10:58:09.508048 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:09 crc kubenswrapper[4881]: I0121 10:58:09.508060 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:09Z","lastTransitionTime":"2026-01-21T10:58:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:09 crc kubenswrapper[4881]: I0121 10:58:09.611011 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:09 crc kubenswrapper[4881]: I0121 10:58:09.611079 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:09 crc kubenswrapper[4881]: I0121 10:58:09.611096 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:09 crc kubenswrapper[4881]: I0121 10:58:09.611117 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:09 crc kubenswrapper[4881]: I0121 10:58:09.611133 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:09Z","lastTransitionTime":"2026-01-21T10:58:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:09 crc kubenswrapper[4881]: I0121 10:58:09.716664 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:09 crc kubenswrapper[4881]: I0121 10:58:09.716711 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:09 crc kubenswrapper[4881]: I0121 10:58:09.716720 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:09 crc kubenswrapper[4881]: I0121 10:58:09.716733 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:09 crc kubenswrapper[4881]: I0121 10:58:09.716742 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:09Z","lastTransitionTime":"2026-01-21T10:58:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:58:09 crc kubenswrapper[4881]: I0121 10:58:09.818608 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:09 crc kubenswrapper[4881]: I0121 10:58:09.818673 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:09 crc kubenswrapper[4881]: I0121 10:58:09.818682 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:09 crc kubenswrapper[4881]: I0121 10:58:09.818695 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:09 crc kubenswrapper[4881]: I0121 10:58:09.818703 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:09Z","lastTransitionTime":"2026-01-21T10:58:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:09 crc kubenswrapper[4881]: I0121 10:58:09.922316 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:09 crc kubenswrapper[4881]: I0121 10:58:09.922362 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:09 crc kubenswrapper[4881]: I0121 10:58:09.922374 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:09 crc kubenswrapper[4881]: I0121 10:58:09.922392 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:09 crc kubenswrapper[4881]: I0121 10:58:09.922402 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:09Z","lastTransitionTime":"2026-01-21T10:58:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:10 crc kubenswrapper[4881]: I0121 10:58:10.025736 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:10 crc kubenswrapper[4881]: I0121 10:58:10.025817 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:10 crc kubenswrapper[4881]: I0121 10:58:10.025831 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:10 crc kubenswrapper[4881]: I0121 10:58:10.025848 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:10 crc kubenswrapper[4881]: I0121 10:58:10.025857 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:10Z","lastTransitionTime":"2026-01-21T10:58:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:58:10 crc kubenswrapper[4881]: I0121 10:58:10.129881 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:10 crc kubenswrapper[4881]: I0121 10:58:10.129940 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:10 crc kubenswrapper[4881]: I0121 10:58:10.129964 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:10 crc kubenswrapper[4881]: I0121 10:58:10.129995 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:10 crc kubenswrapper[4881]: I0121 10:58:10.130026 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:10Z","lastTransitionTime":"2026-01-21T10:58:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:10 crc kubenswrapper[4881]: I0121 10:58:10.175284 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-fs42r_09da9e14-f6d5-4346-a4a0-c17711e3b603/kube-multus/0.log" Jan 21 10:58:10 crc kubenswrapper[4881]: I0121 10:58:10.175345 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-fs42r" event={"ID":"09da9e14-f6d5-4346-a4a0-c17711e3b603","Type":"ContainerStarted","Data":"e44307f5cc08335dc686c05c12b4ac57aeb2211a1072fff108a06b37b2e1461b"} Jan 21 10:58:10 crc kubenswrapper[4881]: I0121 10:58:10.189945 4881 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 11:44:40.005019262 +0000 UTC Jan 21 10:58:10 crc kubenswrapper[4881]: I0121 10:58:10.201064 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5da31bf1-60a6-4d73-a425-97fe36cd40ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://945977e9e86c32b08a633de6b962033c71982d3a129ac3e5b2705e19b50d6534\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f32dd767af23c59f021d583a23309b2180db86ae71b4ca4a5f436ac1c77d6e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c47dfa90d18e3e30053b0250c7b986bc877ab6bc3a553060f65468416c8105d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e507b4c3c536bdc63360b1386748657584f739e09973ec33c998ac267ca2766\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://676e764186e37083591f2c779b05dcbf5bc065bde85efade1c25a27a9bf74570\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 10:57:08.940903 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 10:57:08.942374 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2558080694/tls.crt::/tmp/serving-cert-2558080694/tls.key\\\\\\\"\\\\nI0121 10:57:14.590178 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 10:57:14.594387 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 10:57:14.594447 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 10:57:14.594564 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 10:57:14.594575 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 10:57:14.615981 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 10:57:14.616035 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 10:57:14.616045 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 10:57:14.616060 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 10:57:14.616065 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 10:57:14.616071 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 10:57:14.616077 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 10:57:14.616503 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 10:57:14.623960 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T10:56:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b00c4c20b4212a307993f164d3f2d23b9b8bd823111bef7a385ae8811e147f3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://164282bb15c33a383e2fc11a6617ca4008eec1b59e1c5577f7b34b0fcbddc99f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://164282bb15c33a383e2fc11a6617ca4008eec1b59e1c5577f7b34b0fcbddc99f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:56:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:10Z is after 2025-08-24T17:21:41Z" Jan 21 10:58:10 crc kubenswrapper[4881]: I0121 10:58:10.219688 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:10Z is after 2025-08-24T17:21:41Z" Jan 21 10:58:10 crc kubenswrapper[4881]: I0121 10:58:10.232573 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:10 crc kubenswrapper[4881]: I0121 10:58:10.232619 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:10 crc kubenswrapper[4881]: I0121 10:58:10.232631 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:10 crc kubenswrapper[4881]: I0121 10:58:10.232647 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:10 crc kubenswrapper[4881]: I0121 10:58:10.232657 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:10Z","lastTransitionTime":"2026-01-21T10:58:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:58:10 crc kubenswrapper[4881]: I0121 10:58:10.237452 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ddaa3aae50a05142125742b3cc13e65433fcadbef5b1e273232cffb660cc700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:10Z is after 2025-08-24T17:21:41Z" Jan 21 10:58:10 crc kubenswrapper[4881]: I0121 10:58:10.252586 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tjwf8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf4f6fc0-ed4c-47b7-b2bc-8033980781a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9164acd9a91836c25b02986f204dd0302964f6bff6fe077971d37eebd4b560ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57d55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:22Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tjwf8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:10Z is after 2025-08-24T17:21:41Z" Jan 21 10:58:10 crc kubenswrapper[4881]: I0121 10:58:10.272963 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"adf42b80-04ba-4164-b847-3b7f8b94816b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5f503293f457017fbc87cd1525eaaf06d41044a37f87127e1398bd50a228ea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d93b0b0bb02293efe7c01a9dbc64ff302a7b9fab07a2fe9bef5d0c2b5e8ac30e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05a4a29ae6a2b0d4acf843ae40de1e262c2149dac28e15697dbcd6d237235f29\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfff7b0c4eae65a4ed6e28b94feef5a5b0c4fb224f9cde2cfd9072727968e754\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:56:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:10Z is after 2025-08-24T17:21:41Z" Jan 21 10:58:10 crc kubenswrapper[4881]: I0121 10:58:10.288960 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:10Z is after 2025-08-24T17:21:41Z" Jan 21 10:58:10 crc kubenswrapper[4881]: I0121 10:58:10.307626 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fs42r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09da9e14-f6d5-4346-a4a0-c17711e3b603\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:58:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:58:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e44307f5cc08335dc686c05c12b4ac57aeb2211a1072fff108a06b37b2e1461b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://821c7c539a796f03cdff04f5f89e6adab4d088c53f5c1ad85851862e5409e7eb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T10:58:08Z\\\",\\\"message\\\":\\\"2026-01-21T10:57:23+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_0cb853ce-7a29-40b7-96bf-1304acd74419\\\\n2026-01-21T10:57:23+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_0cb853ce-7a29-40b7-96bf-1304acd74419 to /host/opt/cni/bin/\\\\n2026-01-21T10:57:23Z [verbose] multus-daemon started\\\\n2026-01-21T10:57:23Z [verbose] Readiness Indicator file check\\\\n2026-01-21T10:58:08Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:58:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kt6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fs42r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:10Z is after 2025-08-24T17:21:41Z" Jan 21 10:58:10 crc kubenswrapper[4881]: I0121 10:58:10.309733 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 10:58:10 crc kubenswrapper[4881]: E0121 10:58:10.309955 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 10:58:10 crc kubenswrapper[4881]: I0121 10:58:10.310137 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 10:58:10 crc kubenswrapper[4881]: E0121 10:58:10.310238 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 10:58:10 crc kubenswrapper[4881]: I0121 10:58:10.324873 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qgrth" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d379505c-c658-4dd5-b841-40c8443012c6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51a2ec789636052b12e0fdb4e647d7e4f92d1e4b7436933f1529561ffc2021d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57krk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d634cee9f543d3322f8cdc8bc62252096e789383c55d5d448cc53ab990ac9b52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57krk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qgrth\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:10Z is after 2025-08-24T17:21:41Z" Jan 21 10:58:10 crc kubenswrapper[4881]: I0121 10:58:10.335280 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:10 crc kubenswrapper[4881]: I0121 10:58:10.335336 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:10 crc kubenswrapper[4881]: I0121 10:58:10.335348 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:10 crc kubenswrapper[4881]: I0121 10:58:10.335373 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:10 crc kubenswrapper[4881]: I0121 10:58:10.335386 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:10Z","lastTransitionTime":"2026-01-21T10:58:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:58:10 crc kubenswrapper[4881]: I0121 10:58:10.342537 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-dtv4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3552adbd-011f-4552-9e69-233b92c554c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqlps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqlps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-dtv4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:10Z is after 2025-08-24T17:21:41Z" Jan 21 10:58:10 crc kubenswrapper[4881]: I0121 10:58:10.357546 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14b423e6f144bf9e1e30c3f0a074ca9edd3c1e2cfa1674ea5b047a6a8fb01d92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c179ea565cc19d51f2becaf302623c94acbd70749ace24c754af51e3f0101033\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:10Z is after 2025-08-24T17:21:41Z" Jan 21 10:58:10 crc kubenswrapper[4881]: I0121 10:58:10.375615 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:10Z is after 2025-08-24T17:21:41Z" Jan 21 10:58:10 crc kubenswrapper[4881]: I0121 10:58:10.395069 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-v4wxp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c14980d7-1b3b-463b-8f57-f1e1afbd258c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd18bd57e9f0f878f56164dee92c18a4fff62c83f518a96d7db735dcd488e052\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7dffa6cfe62b953df5f9734726e4b93967d3fce7fa5743b1adef693e4f75e48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7dffa6cfe62b953df5f9734726e4b93967d3fce7fa5743b1adef693e4f75e48\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ceabfac4703edf7fe55ec5dd41e3fc1736424533c54ae12adcdb1f077e1f756\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ceabfac4703edf7fe55ec5dd41e3fc1736424533c54ae12adcdb1f077e1f756\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a2fff9245b8148b515dd4af52db2ec776cca710b28074e8955162220448d248\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a2fff9245b8148b515dd4af52db2ec776cca710b28074e8955162220448d248\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f6276883835d2a1fbab11df2fd15684b8ee850b6f264f3f194f6de68cc27cf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f6276883835d2a1fbab11df2fd15684b8ee850b6f264f3f194f6de68cc27cf9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c51b231b861d87536aaa44cd0e1018fc15464809b5a3bf30c1d2a25dd9a12a8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c51b231b861d87536aaa44cd0e1018fc15464809b5a3bf30c1d2a25dd9a12a8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13503428fec90dad2c9ce1086fe7c71c7910f145a6e7a8d546791e11002f36f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13503428fec90dad2c9ce1086fe7c71c7910f145a6e7a8d546791e11002f36f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-v4wxp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:10Z is after 2025-08-24T17:21:41Z" Jan 21 10:58:10 crc kubenswrapper[4881]: I0121 10:58:10.410549 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3687b313-1df2-4274-80db-8c758b51bf2d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5b27143ea6007376f5989ccc3a11947ede80be1b5fac1b738ffbf27fa05c6d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hml99\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7172ca109a870f3c1c8d3af0117700b7b23e7a65c3e841f3a4ef3f445e85270d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hml99\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fb4fr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:10Z is after 2025-08-24T17:21:41Z" Jan 21 10:58:10 crc kubenswrapper[4881]: I0121 10:58:10.437564 4881 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8bb6d97-b3b8-4e31-b704-8e565385ab26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e45d7cde8c26fc00bb1b5ad5cc88c41aff02acf0860856e86f3c3bfdc01e7045\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://599bfaf6bfc412f779f4732cf9f4273382c1f0af20576581264ea4fc63b2ec38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9454db8c9af3187ee06e017ac192668036b1f565305a37de61c3cffe4d94e7db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7fa3136ee876ebb14cd5164f9176c91c60cc04fd7882a3ad0957ce3a36184d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2c1d1496a7f7e5fad6a3d4ade4104e8e0c40fd0ca4722ca5e1ca1fbf0ebf3cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d29c170fdf57f1f4c678e2902443d9ef467ac3157370e997fd5596b4eecfb54e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff735c08dae242cbd531e458695a99bcbe3a5e6c9753266141b14f67cb0799a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5897125d6a1004cb4f0527359e8fc0328bff6bcc5ac563fdc3d85b094414c563\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T10:57:36Z\\\",\\\"message\\\":\\\"-machine-config-operator/machine-config-operator]} name:Service_openshift-machine-config-operator/machine-config-operator_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.183:9001:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {5b85277d-d9b7-4a68-8e4e-2b80594d9347}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0121 10:57:35.987756 6363 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0121 10:57:35.987768 6363 handler.go:208] Removed *v1.Pod event handler 3\\\\nF0121 10:57:35.988839 6363 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post 
\\\\\\\"https://1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:35Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff735c08dae242cbd531e458695a99bcbe3a5e6c9753266141b14f67cb0799a2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T10:58:00Z\\\",\\\"message\\\":\\\"work policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:00Z is after 2025-08-24T17:21:41Z]\\\\nI0121 10:58:00.782399 6726 services_controller.go:434] Service openshift-kube-controller-manager/kube-controller-manager retrieved from lister for network=default: \\\\u0026Service{ObjectMeta:{kube-controller-manager openshift-kube-controller-manager 90927ca1-43e2-420d-8485-a35952e82cd9 4812 0 2025-02-23 05:22:57 +0000 UTC \\\\u003cnil\\\\u003e \\\\u003cnil\\\\u003e map[prometheus:kube-controller-manager] map[operator.openshift.io/spec-hash:bb05a56151ce98d11c8554843985ba99e0498dcafd98129435c2d982c5ea4c11 service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168 service.beta.openshift.io/serving-cert-secret-name:serving-cert service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [] [] 
[]},Spec:ServiceSpec{Ports:[]ServicePort{Service\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47c176d51b5558e85897ac8c72c02bb6f152a972d8e941aeb114333f777e22ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db879e2b1a43a0ae350c4777ee6355bf5b97b1d5977e909ee0968c77852519dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-d
ev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db879e2b1a43a0ae350c4777ee6355bf5b97b1d5977e909ee0968c77852519dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bx64f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:10Z is after 2025-08-24T17:21:41Z" Jan 21 10:58:10 crc kubenswrapper[4881]: I0121 10:58:10.438525 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:10 crc kubenswrapper[4881]: I0121 10:58:10.438600 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:10 crc kubenswrapper[4881]: I0121 10:58:10.438621 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:10 crc kubenswrapper[4881]: I0121 10:58:10.438646 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:10 crc kubenswrapper[4881]: I0121 10:58:10.438663 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:10Z","lastTransitionTime":"2026-01-21T10:58:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:58:10 crc kubenswrapper[4881]: I0121 10:58:10.455939 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13d0f0c4-fa31-44ba-bc94-c0a80fc1b2df\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17ef83fedf9cc77cf73fdd00486ec9b0b04712a60a5448402754a44ad46da439\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36430b9d5b01b4a6f3b9e7b58bfbec0c258f34847b321cb45bc3b23f84cf09fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3eba9cbb70fbd88687c81b18ad50f8386f836bf2fa2c8f9e1c503a20af985416\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"
cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b7d6b79713c6f4718939d3679f1ba6e237045d653762b6de122ebecdfabbe35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b7d6b79713c6f4718939d3679f1ba6e237045d653762b6de122ebecdfabbe35\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:56:54Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:10Z is after 2025-08-24T17:21:41Z" Jan 21 10:58:10 crc kubenswrapper[4881]: I0121 10:58:10.477758 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f8092488edbd3b7f307819f2cad06e6d1a6721d8c2fd8fa05a8f7e652949ca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:10Z is after 2025-08-24T17:21:41Z" Jan 21 10:58:10 crc kubenswrapper[4881]: I0121 10:58:10.493625 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8sptw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f19f480e-331f-42f5-a3b6-fd0c6847b157\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21b25675a85765cfd4abe361687eedfdf5dfffc1c9b83069f14986821acb12d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hjr7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8sptw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:10Z is after 2025-08-24T17:21:41Z" Jan 21 10:58:10 crc kubenswrapper[4881]: I0121 10:58:10.542675 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:10 crc kubenswrapper[4881]: I0121 10:58:10.542752 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:10 crc kubenswrapper[4881]: I0121 10:58:10.542762 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:10 crc kubenswrapper[4881]: I0121 10:58:10.542803 4881 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:10 crc kubenswrapper[4881]: I0121 10:58:10.542817 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:10Z","lastTransitionTime":"2026-01-21T10:58:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:10 crc kubenswrapper[4881]: I0121 10:58:10.645935 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:10 crc kubenswrapper[4881]: I0121 10:58:10.646001 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:10 crc kubenswrapper[4881]: I0121 10:58:10.646019 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:10 crc kubenswrapper[4881]: I0121 10:58:10.646044 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:10 crc kubenswrapper[4881]: I0121 10:58:10.646062 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:10Z","lastTransitionTime":"2026-01-21T10:58:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:10 crc kubenswrapper[4881]: I0121 10:58:10.749502 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:10 crc kubenswrapper[4881]: I0121 10:58:10.749588 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:10 crc kubenswrapper[4881]: I0121 10:58:10.749607 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:10 crc kubenswrapper[4881]: I0121 10:58:10.749635 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:10 crc kubenswrapper[4881]: I0121 10:58:10.749654 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:10Z","lastTransitionTime":"2026-01-21T10:58:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:58:10 crc kubenswrapper[4881]: I0121 10:58:10.854259 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:10 crc kubenswrapper[4881]: I0121 10:58:10.854311 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:10 crc kubenswrapper[4881]: I0121 10:58:10.854328 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:10 crc kubenswrapper[4881]: I0121 10:58:10.854349 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:10 crc kubenswrapper[4881]: I0121 10:58:10.854366 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:10Z","lastTransitionTime":"2026-01-21T10:58:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:10 crc kubenswrapper[4881]: I0121 10:58:10.957381 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:10 crc kubenswrapper[4881]: I0121 10:58:10.957422 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:10 crc kubenswrapper[4881]: I0121 10:58:10.957437 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:10 crc kubenswrapper[4881]: I0121 10:58:10.957453 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:10 crc kubenswrapper[4881]: I0121 10:58:10.957464 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:10Z","lastTransitionTime":"2026-01-21T10:58:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:11 crc kubenswrapper[4881]: I0121 10:58:11.060613 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:11 crc kubenswrapper[4881]: I0121 10:58:11.060660 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:11 crc kubenswrapper[4881]: I0121 10:58:11.060672 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:11 crc kubenswrapper[4881]: I0121 10:58:11.060688 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:11 crc kubenswrapper[4881]: I0121 10:58:11.060700 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:11Z","lastTransitionTime":"2026-01-21T10:58:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:58:11 crc kubenswrapper[4881]: I0121 10:58:11.163054 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:11 crc kubenswrapper[4881]: I0121 10:58:11.163092 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:11 crc kubenswrapper[4881]: I0121 10:58:11.163100 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:11 crc kubenswrapper[4881]: I0121 10:58:11.163113 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:11 crc kubenswrapper[4881]: I0121 10:58:11.163123 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:11Z","lastTransitionTime":"2026-01-21T10:58:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:11 crc kubenswrapper[4881]: I0121 10:58:11.190570 4881 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 13:48:32.519156667 +0000 UTC Jan 21 10:58:11 crc kubenswrapper[4881]: I0121 10:58:11.266601 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:11 crc kubenswrapper[4881]: I0121 10:58:11.266648 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:11 crc kubenswrapper[4881]: I0121 10:58:11.266659 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:11 crc kubenswrapper[4881]: I0121 10:58:11.266705 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:11 crc kubenswrapper[4881]: I0121 10:58:11.266720 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:11Z","lastTransitionTime":"2026-01-21T10:58:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:11 crc kubenswrapper[4881]: I0121 10:58:11.310715 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 10:58:11 crc kubenswrapper[4881]: I0121 10:58:11.310871 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dtv4t" Jan 21 10:58:11 crc kubenswrapper[4881]: E0121 10:58:11.310913 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
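The certificate_manager.go:356 lines in this log print a different rotation deadline on every pass (2025-11-24 above, later 2025-12-06 and 2025-12-04) for the same certificate, and every one of them is already in the past relative to the node clock (2026-01-21), so rotation is due and keeps being re-evaluated. Below is a minimal, illustrative Go sketch of the jittered-deadline calculation as client-go's certificate manager is understood to perform it (a fresh random point in the 70-90% region of the certificate's lifetime per call); it is not kubelet source, and the NotBefore value is an assumption since only the expiration appears in the log.

// rotation_deadline.go -- illustrative sketch, not kubelet/client-go source.
package main

import (
	"fmt"
	"math/rand"
	"time"
)

// rotationDeadline picks a fresh random point between 70% and 90% of the
// way through the certificate's validity window on every call, which is
// why each log line above shows a different deadline for the same cert.
func rotationDeadline(notBefore, notAfter time.Time) time.Time {
	total := notAfter.Sub(notBefore)
	jittered := time.Duration(float64(total) * (0.7 + 0.2*rand.Float64()))
	return notBefore.Add(jittered)
}

func main() {
	// NotBefore is an assumed one-year-earlier issue time; the log only
	// shows the expiration (2026-02-24 05:53:03 +0000 UTC).
	notBefore := time.Date(2025, 2, 24, 5, 53, 3, 0, time.UTC)
	notAfter := time.Date(2026, 2, 24, 5, 53, 3, 0, time.UTC)
	for i := 0; i < 3; i++ {
		fmt.Println("rotation deadline is", rotationDeadline(notBefore, notAfter))
	}
}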
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 10:58:11 crc kubenswrapper[4881]: E0121 10:58:11.311080 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dtv4t" podUID="3552adbd-011f-4552-9e69-233b92c554c8" Jan 21 10:58:11 crc kubenswrapper[4881]: I0121 10:58:11.372098 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:11 crc kubenswrapper[4881]: I0121 10:58:11.372163 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:11 crc kubenswrapper[4881]: I0121 10:58:11.372174 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:11 crc kubenswrapper[4881]: I0121 10:58:11.372192 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:11 crc kubenswrapper[4881]: I0121 10:58:11.372204 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:11Z","lastTransitionTime":"2026-01-21T10:58:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:11 crc kubenswrapper[4881]: I0121 10:58:11.476259 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:11 crc kubenswrapper[4881]: I0121 10:58:11.476324 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:11 crc kubenswrapper[4881]: I0121 10:58:11.476337 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:11 crc kubenswrapper[4881]: I0121 10:58:11.476359 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:11 crc kubenswrapper[4881]: I0121 10:58:11.476371 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:11Z","lastTransitionTime":"2026-01-21T10:58:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:58:11 crc kubenswrapper[4881]: I0121 10:58:11.579340 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:11 crc kubenswrapper[4881]: I0121 10:58:11.579396 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:11 crc kubenswrapper[4881]: I0121 10:58:11.579408 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:11 crc kubenswrapper[4881]: I0121 10:58:11.579428 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:11 crc kubenswrapper[4881]: I0121 10:58:11.579441 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:11Z","lastTransitionTime":"2026-01-21T10:58:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:11 crc kubenswrapper[4881]: I0121 10:58:11.682619 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:11 crc kubenswrapper[4881]: I0121 10:58:11.682667 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:11 crc kubenswrapper[4881]: I0121 10:58:11.682692 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:11 crc kubenswrapper[4881]: I0121 10:58:11.682712 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:11 crc kubenswrapper[4881]: I0121 10:58:11.682724 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:11Z","lastTransitionTime":"2026-01-21T10:58:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:11 crc kubenswrapper[4881]: I0121 10:58:11.786886 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:11 crc kubenswrapper[4881]: I0121 10:58:11.787014 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:11 crc kubenswrapper[4881]: I0121 10:58:11.787085 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:11 crc kubenswrapper[4881]: I0121 10:58:11.787120 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:11 crc kubenswrapper[4881]: I0121 10:58:11.787143 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:11Z","lastTransitionTime":"2026-01-21T10:58:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:58:11 crc kubenswrapper[4881]: I0121 10:58:11.891456 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:11 crc kubenswrapper[4881]: I0121 10:58:11.891528 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:11 crc kubenswrapper[4881]: I0121 10:58:11.891546 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:11 crc kubenswrapper[4881]: I0121 10:58:11.891572 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:11 crc kubenswrapper[4881]: I0121 10:58:11.891594 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:11Z","lastTransitionTime":"2026-01-21T10:58:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:11 crc kubenswrapper[4881]: I0121 10:58:11.995485 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:11 crc kubenswrapper[4881]: I0121 10:58:11.995520 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:11 crc kubenswrapper[4881]: I0121 10:58:11.995529 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:11 crc kubenswrapper[4881]: I0121 10:58:11.995543 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:11 crc kubenswrapper[4881]: I0121 10:58:11.995553 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:11Z","lastTransitionTime":"2026-01-21T10:58:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:12 crc kubenswrapper[4881]: I0121 10:58:12.099067 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:12 crc kubenswrapper[4881]: I0121 10:58:12.099143 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:12 crc kubenswrapper[4881]: I0121 10:58:12.099164 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:12 crc kubenswrapper[4881]: I0121 10:58:12.099194 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:12 crc kubenswrapper[4881]: I0121 10:58:12.099212 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:12Z","lastTransitionTime":"2026-01-21T10:58:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
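Every "Node became not ready" condition above carries the same runtime message: the CNI config directory /etc/kubernetes/cni/net.d/ is empty, so the runtime reports NetworkReady=false and the node's Ready condition stays False until the network operator (OVN-Kubernetes here) writes its config file. The following is a rough Go sketch of that readiness test under the conventional assumption that any *.conf, *.conflist, or *.json file in the directory counts as a CNI configuration; it is not CRI-O or kubelet source.

// cni_ready.go -- rough sketch of the CNI-config readiness probe; assumed
// behavior, not CRI-O/kubelet source.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// networkReady reports whether confDir contains any CNI configuration file.
func networkReady(confDir string) bool {
	for _, pattern := range []string{"*.conf", "*.conflist", "*.json"} {
		if matches, _ := filepath.Glob(filepath.Join(confDir, pattern)); len(matches) > 0 {
			return true
		}
	}
	return false
}

func main() {
	confDir := "/etc/kubernetes/cni/net.d"
	if !networkReady(confDir) {
		// Mirrors the NetworkPluginNotReady message repeated in this log.
		fmt.Fprintf(os.Stderr, "NetworkReady=false: no CNI configuration file in %s/. Has your network provider started?\n", confDir)
		os.Exit(1)
	}
	fmt.Println("NetworkReady=true")
}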
Jan 21 10:58:12 crc kubenswrapper[4881]: I0121 10:58:12.099067 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 10:58:12 crc kubenswrapper[4881]: I0121 10:58:12.099143 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 10:58:12 crc kubenswrapper[4881]: I0121 10:58:12.099164 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 10:58:12 crc kubenswrapper[4881]: I0121 10:58:12.099194 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 10:58:12 crc kubenswrapper[4881]: I0121 10:58:12.099212 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:12Z","lastTransitionTime":"2026-01-21T10:58:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 10:58:12 crc kubenswrapper[4881]: I0121 10:58:12.191693 4881 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 13:36:44.713958976 +0000 UTC
Jan 21 10:58:12 crc kubenswrapper[4881]: I0121 10:58:12.202514 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 10:58:12 crc kubenswrapper[4881]: I0121 10:58:12.202570 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 10:58:12 crc kubenswrapper[4881]: I0121 10:58:12.202580 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 10:58:12 crc kubenswrapper[4881]: I0121 10:58:12.202599 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 10:58:12 crc kubenswrapper[4881]: I0121 10:58:12.202611 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:12Z","lastTransitionTime":"2026-01-21T10:58:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 10:58:12 crc kubenswrapper[4881]: I0121 10:58:12.307222 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 10:58:12 crc kubenswrapper[4881]: I0121 10:58:12.307291 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 10:58:12 crc kubenswrapper[4881]: I0121 10:58:12.307317 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 10:58:12 crc kubenswrapper[4881]: I0121 10:58:12.307342 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 10:58:12 crc kubenswrapper[4881]: I0121 10:58:12.307359 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:12Z","lastTransitionTime":"2026-01-21T10:58:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 10:58:12 crc kubenswrapper[4881]: I0121 10:58:12.310071 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 21 10:58:12 crc kubenswrapper[4881]: I0121 10:58:12.310145 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 21 10:58:12 crc kubenswrapper[4881]: E0121 10:58:12.310387 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 21 10:58:12 crc kubenswrapper[4881]: E0121 10:58:12.310547 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 21 10:58:12 crc kubenswrapper[4881]: I0121 10:58:12.329422 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"]
Jan 21 10:58:12 crc kubenswrapper[4881]: I0121 10:58:12.411646 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 10:58:12 crc kubenswrapper[4881]: I0121 10:58:12.411706 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 10:58:12 crc kubenswrapper[4881]: I0121 10:58:12.411725 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 10:58:12 crc kubenswrapper[4881]: I0121 10:58:12.411752 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 10:58:12 crc kubenswrapper[4881]: I0121 10:58:12.411770 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:12Z","lastTransitionTime":"2026-01-21T10:58:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 10:58:12 crc kubenswrapper[4881]: I0121 10:58:12.515218 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 10:58:12 crc kubenswrapper[4881]: I0121 10:58:12.515279 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 10:58:12 crc kubenswrapper[4881]: I0121 10:58:12.515296 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 10:58:12 crc kubenswrapper[4881]: I0121 10:58:12.515321 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 10:58:12 crc kubenswrapper[4881]: I0121 10:58:12.515338 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:12Z","lastTransitionTime":"2026-01-21T10:58:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 10:58:12 crc kubenswrapper[4881]: I0121 10:58:12.618480 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 10:58:12 crc kubenswrapper[4881]: I0121 10:58:12.618526 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 10:58:12 crc kubenswrapper[4881]: I0121 10:58:12.618543 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 10:58:12 crc kubenswrapper[4881]: I0121 10:58:12.618564 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 10:58:12 crc kubenswrapper[4881]: I0121 10:58:12.618581 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:12Z","lastTransitionTime":"2026-01-21T10:58:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 10:58:12 crc kubenswrapper[4881]: I0121 10:58:12.722236 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 10:58:12 crc kubenswrapper[4881]: I0121 10:58:12.722305 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 10:58:12 crc kubenswrapper[4881]: I0121 10:58:12.722324 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 10:58:12 crc kubenswrapper[4881]: I0121 10:58:12.722351 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 10:58:12 crc kubenswrapper[4881]: I0121 10:58:12.722369 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:12Z","lastTransitionTime":"2026-01-21T10:58:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 10:58:12 crc kubenswrapper[4881]: I0121 10:58:12.826283 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 10:58:12 crc kubenswrapper[4881]: I0121 10:58:12.826336 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 10:58:12 crc kubenswrapper[4881]: I0121 10:58:12.826347 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 10:58:12 crc kubenswrapper[4881]: I0121 10:58:12.826368 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 10:58:12 crc kubenswrapper[4881]: I0121 10:58:12.826382 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:12Z","lastTransitionTime":"2026-01-21T10:58:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 10:58:12 crc kubenswrapper[4881]: I0121 10:58:12.929856 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 10:58:12 crc kubenswrapper[4881]: I0121 10:58:12.929905 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 10:58:12 crc kubenswrapper[4881]: I0121 10:58:12.929917 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 10:58:12 crc kubenswrapper[4881]: I0121 10:58:12.929934 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 10:58:12 crc kubenswrapper[4881]: I0121 10:58:12.929943 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:12Z","lastTransitionTime":"2026-01-21T10:58:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 10:58:13 crc kubenswrapper[4881]: I0121 10:58:13.033105 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 10:58:13 crc kubenswrapper[4881]: I0121 10:58:13.033180 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 10:58:13 crc kubenswrapper[4881]: I0121 10:58:13.033189 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 10:58:13 crc kubenswrapper[4881]: I0121 10:58:13.033215 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 10:58:13 crc kubenswrapper[4881]: I0121 10:58:13.033239 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:13Z","lastTransitionTime":"2026-01-21T10:58:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 10:58:13 crc kubenswrapper[4881]: I0121 10:58:13.140534 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 10:58:13 crc kubenswrapper[4881]: I0121 10:58:13.140600 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 10:58:13 crc kubenswrapper[4881]: I0121 10:58:13.140612 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 10:58:13 crc kubenswrapper[4881]: I0121 10:58:13.140631 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 10:58:13 crc kubenswrapper[4881]: I0121 10:58:13.140645 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:13Z","lastTransitionTime":"2026-01-21T10:58:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 10:58:13 crc kubenswrapper[4881]: I0121 10:58:13.192828 4881 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 21:24:06.121710163 +0000 UTC
Jan 21 10:58:13 crc kubenswrapper[4881]: I0121 10:58:13.243557 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 10:58:13 crc kubenswrapper[4881]: I0121 10:58:13.243606 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 10:58:13 crc kubenswrapper[4881]: I0121 10:58:13.243618 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 10:58:13 crc kubenswrapper[4881]: I0121 10:58:13.243636 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 10:58:13 crc kubenswrapper[4881]: I0121 10:58:13.243649 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:13Z","lastTransitionTime":"2026-01-21T10:58:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 10:58:13 crc kubenswrapper[4881]: I0121 10:58:13.310525 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 21 10:58:13 crc kubenswrapper[4881]: I0121 10:58:13.310653 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dtv4t"
Jan 21 10:58:13 crc kubenswrapper[4881]: E0121 10:58:13.310852 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 21 10:58:13 crc kubenswrapper[4881]: E0121 10:58:13.311004 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-multus/network-metrics-daemon-dtv4t" podUID="3552adbd-011f-4552-9e69-233b92c554c8" Jan 21 10:58:13 crc kubenswrapper[4881]: I0121 10:58:13.311742 4881 scope.go:117] "RemoveContainer" containerID="ff735c08dae242cbd531e458695a99bcbe3a5e6c9753266141b14f67cb0799a2" Jan 21 10:58:13 crc kubenswrapper[4881]: E0121 10:58:13.311929 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-bx64f_openshift-ovn-kubernetes(e8bb6d97-b3b8-4e31-b704-8e565385ab26)\"" pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" podUID="e8bb6d97-b3b8-4e31-b704-8e565385ab26" Jan 21 10:58:13 crc kubenswrapper[4881]: I0121 10:58:13.327122 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qgrth" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d379505c-c658-4dd5-b841-40c8443012c6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51a2ec789636052b12e0fdb4e647d7e4f92d1e4b7436933f1529561ffc2021d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57krk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d634cee9f543d3322f8cdc8bc62252096e789383c55d5d448cc53ab990ac9b52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\
\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57krk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qgrth\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:13Z is after 2025-08-24T17:21:41Z" Jan 21 10:58:13 crc kubenswrapper[4881]: I0121 10:58:13.342583 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-dtv4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3552adbd-011f-4552-9e69-233b92c554c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqlps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqlps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-dtv4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:13Z is after 2025-08-24T17:21:41Z" Jan 21 10:58:13 crc kubenswrapper[4881]: I0121 10:58:13.347223 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:13 crc kubenswrapper[4881]: I0121 10:58:13.347462 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:13 crc kubenswrapper[4881]: I0121 10:58:13.348240 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:13 crc kubenswrapper[4881]: I0121 10:58:13.348332 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:13 crc kubenswrapper[4881]: I0121 10:58:13.348893 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:13Z","lastTransitionTime":"2026-01-21T10:58:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:58:13 crc kubenswrapper[4881]: I0121 10:58:13.356280 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f9987a1-d9f5-467c-82b2-533a714c4c62\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6cf7bf06a11465e04a80fe7ae667f9c15741137062514a621955622d2b339dce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://988de7ed33eebe3cf67b8c6362d70c761e509feb2c3b72e6f6a4ffb9cddbf421\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://988de7ed33eebe3cf67b8c6362d70c761e509feb2c3b72e6f6a4ffb9cddbf421\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:56:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:13Z is after 2025-08-24T17:21:41Z" Jan 21 10:58:13 crc kubenswrapper[4881]: I0121 10:58:13.374234 4881 
status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"adf42b80-04ba-4164-b847-3b7f8b94816b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5f503293f457017fbc87cd1525eaaf06d41044a37f87127e1398bd50a228ea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d93b0b0bb02293efe7c01a9dbc64ff302a7b9fab07a2fe9bef5d0c2b5e8ac30e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05a4a29ae6a2b0d4acf843ae40de1e262c2149dac28e15697dbcd6d237235f29\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://cfff7b0c4eae65a4ed6e28b94feef5a5b0c4fb224f9cde2cfd9072727968e754\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:56:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:13Z is after 2025-08-24T17:21:41Z" Jan 21 10:58:13 crc kubenswrapper[4881]: I0121 10:58:13.390539 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:13Z is after 2025-08-24T17:21:41Z" Jan 21 10:58:13 crc kubenswrapper[4881]: I0121 10:58:13.408117 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fs42r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09da9e14-f6d5-4346-a4a0-c17711e3b603\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:58:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:58:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e44307f5cc08335dc686c05c12b4ac57aeb2211a1072fff108a06b37b2e1461b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://821c7c539a796f03cdff04f5f89e6adab4d088c53f5c1ad85851862e5409e7eb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T10:58:08Z\\\",\\\"message\\\":\\\"2026-01-21T10:57:23+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_0cb853ce-7a29-40b7-96bf-1304acd74419\\\\n2026-01-21T10:57:23+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_0cb853ce-7a29-40b7-96bf-1304acd74419 to /host/opt/cni/bin/\\\\n2026-01-21T10:57:23Z [verbose] multus-daemon started\\\\n2026-01-21T10:57:23Z [verbose] Readiness Indicator file check\\\\n2026-01-21T10:58:08Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:58:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kt6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fs42r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:13Z is after 2025-08-24T17:21:41Z" Jan 21 10:58:13 crc kubenswrapper[4881]: I0121 10:58:13.433074 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8bb6d97-b3b8-4e31-b704-8e565385ab26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with unready 
status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e45d7cde8c26fc00bb1b5ad5cc88c41aff02acf0860856e86f3c3bfdc01e7045\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://599bfaf6bfc412f779f4732cf9f4273382c1f0af20576581264ea4fc63b2ec38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9454db8c9af3187ee06e017ac192668036b1f565305a37de61c3cffe4d94e7db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",
\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7fa3136ee876ebb14cd5164f9176c91c60cc04fd7882a3ad0957ce3a36184d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2c1d1496a7f7e5fad6a3d4ade4104e8e0c40fd0ca4722ca5e1ca1fbf0ebf3cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d29c170fdf57f1f4c678e2902443d9ef467ac3157370e997fd5596b4eecfb54e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-s
ocket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff735c08dae242cbd531e458695a99bcbe3a5e6c9753266141b14f67cb0799a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5897125d6a1004cb4f0527359e8fc0328bff6bcc5ac563fdc3d85b094414c563\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T10:57:36Z\\\",\\\"message\\\":\\\"-machine-config-operator/machine-config-operator]} name:Service_openshift-machine-config-operator/machine-config-operator_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.183:9001:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {5b85277d-d9b7-4a68-8e4e-2b80594d9347}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0121 10:57:35.987756 6363 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0121 10:57:35.987768 6363 handler.go:208] Removed *v1.Pod event handler 3\\\\nF0121 10:57:35.988839 6363 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:35Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff735c08dae242cbd531e458695a99bcbe3a5e6c9753266141b14f67cb0799a2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T10:58:00Z\\\",\\\"message\\\":\\\"work policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:00Z is after 2025-08-24T17:21:41Z]\\\\nI0121 10:58:00.782399 6726 services_controller.go:434] Service openshift-kube-controller-manager/kube-controller-manager retrieved from lister for network=default: \\\\u0026Service{ObjectMeta:{kube-controller-manager openshift-kube-controller-manager 
90927ca1-43e2-420d-8485-a35952e82cd9 4812 0 2025-02-23 05:22:57 +0000 UTC \\\\u003cnil\\\\u003e \\\\u003cnil\\\\u003e map[prometheus:kube-controller-manager] map[operator.openshift.io/spec-hash:bb05a56151ce98d11c8554843985ba99e0498dcafd98129435c2d982c5ea4c11 service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168 service.beta.openshift.io/serving-cert-secret-name:serving-cert service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [] [] []},Spec:ServiceSpec{Ports:[]ServicePort{Service\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47c176d51b5558e85897ac8c72c02bb6f152a972d8e941aeb114333f777e22ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"m
ountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db879e2b1a43a0ae350c4777ee6355bf5b97b1d5977e909ee0968c77852519dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db879e2b1a43a0ae350c4777ee6355bf5b97b1d5977e909ee0968c77852519dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bx64f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:13Z is after 2025-08-24T17:21:41Z" Jan 21 10:58:13 crc kubenswrapper[4881]: I0121 10:58:13.452005 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:13 crc kubenswrapper[4881]: I0121 10:58:13.452040 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:13 crc kubenswrapper[4881]: I0121 10:58:13.452048 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:13 crc kubenswrapper[4881]: I0121 10:58:13.452062 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:13 crc kubenswrapper[4881]: I0121 10:58:13.452073 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:13Z","lastTransitionTime":"2026-01-21T10:58:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:58:13 crc kubenswrapper[4881]: I0121 10:58:13.452951 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14b423e6f144bf9e1e30c3f0a074ca9edd3c1e2cfa1674ea5b047a6a8fb01d92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c179ea565cc19d51f2becaf302623c94acbd70749ace24c754af51e3f0101033\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:13Z is after 2025-08-24T17:21:41Z" Jan 21 10:58:13 crc kubenswrapper[4881]: I0121 10:58:13.467279 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:13Z is after 2025-08-24T17:21:41Z" Jan 21 10:58:13 crc kubenswrapper[4881]: I0121 10:58:13.485459 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-v4wxp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c14980d7-1b3b-463b-8f57-f1e1afbd258c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd18bd57e9f0f878f56164dee92c18a4fff62c83f518a96d7db735dcd488e052\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7dffa6cfe62b953df5f9734726e4b93967d3fce7fa5743b1adef693e4f75e48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7dffa6cfe62b953df5f9734726e4b93967d3fce7fa5743b1adef693e4f75e48\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ceabfac4703edf7fe55ec5dd41e3fc1736424533c54ae12adcdb1f077e1f756\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ceabfac4703edf7fe55ec5dd41e3fc1736424533c54ae12adcdb1f077e1f756\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a2fff9245b8148b515dd4af52db2ec776cca710b28074e8955162220448d248\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a2fff9245b8148b515dd4af52db2ec776cca710b28074e8955162220448d248\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f6276883835d2a1fbab11df2fd15684b8ee850b6f264f3f194f6de68cc27cf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f6276883835d2a1fbab11df2fd15684b8ee850b6f264f3f194f6de68cc27cf9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c51b231b861d87536aaa44cd0e1018fc15464809b5a3bf30c1d2a25dd9a12a8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c51b231b861d87536aaa44cd0e1018fc15464809b5a3bf30c1d2a25dd9a12a8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13503428fec90dad2c9ce1086fe7c71c7910f145a6e7a8d546791e11002f36f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13503428fec90dad2c9ce1086fe7c71c7910f145a6e7a8d546791e11002f36f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-v4wxp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:13Z is after 2025-08-24T17:21:41Z" Jan 21 10:58:13 crc kubenswrapper[4881]: I0121 10:58:13.504916 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3687b313-1df2-4274-80db-8c758b51bf2d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5b27143ea6007376f5989ccc3a11947ede80be1b5fac1b738ffbf27fa05c6d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hml99\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7172ca109a870f3c1c8d3af0117700b7b23e7a65c3e841f3a4ef3f445e85270d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hml99\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fb4fr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:13Z is after 2025-08-24T17:21:41Z" Jan 21 10:58:13 crc kubenswrapper[4881]: I0121 10:58:13.526482 4881 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13d0f0c4-fa31-44ba-bc94-c0a80fc1b2df\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17ef83fedf9cc77cf73fdd00486ec9b0b04712a60a5448402754a44ad46da439\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36430b9d5b01b4a6f3b9e7b58bfbec0c258f34847b321cb45bc3b23f84cf09fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3eba9cbb70fbd88687c81b18ad50f8386f836bf2fa2c8f9e1c503a20af985416\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":
[{\\\"containerID\\\":\\\"cri-o://1b7d6b79713c6f4718939d3679f1ba6e237045d653762b6de122ebecdfabbe35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b7d6b79713c6f4718939d3679f1ba6e237045d653762b6de122ebecdfabbe35\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:56:54Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:13Z is after 2025-08-24T17:21:41Z" Jan 21 10:58:13 crc kubenswrapper[4881]: I0121 10:58:13.541418 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f8092488edbd3b7f307819f2cad06e6d1a6721d8c2fd8fa05a8f7e652949ca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:13Z is after 2025-08-24T17:21:41Z" Jan 21 10:58:13 crc kubenswrapper[4881]: I0121 10:58:13.554362 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:13 crc kubenswrapper[4881]: I0121 10:58:13.554396 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:13 crc kubenswrapper[4881]: I0121 10:58:13.554407 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:13 crc kubenswrapper[4881]: I0121 10:58:13.554424 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:13 crc kubenswrapper[4881]: I0121 10:58:13.554437 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:13Z","lastTransitionTime":"2026-01-21T10:58:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:13 crc kubenswrapper[4881]: I0121 10:58:13.555233 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8sptw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f19f480e-331f-42f5-a3b6-fd0c6847b157\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21b25675a85765cfd4abe361687eedfdf5dfffc1c9b83069f14986821acb12d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hjr7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"
}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8sptw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:13Z is after 2025-08-24T17:21:41Z" Jan 21 10:58:13 crc kubenswrapper[4881]: I0121 10:58:13.566404 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tjwf8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf4f6fc0-ed4c-47b7-b2bc-8033980781a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9164acd9a91836c25b02986f204dd0302964f6bff6fe077971d37eebd4b560ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57d55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:22Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tjwf8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:13Z is after 2025-08-24T17:21:41Z" Jan 21 10:58:13 crc kubenswrapper[4881]: I0121 10:58:13.582924 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5da31bf1-60a6-4d73-a425-97fe36cd40ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://945977e9e86c32b08a633de6b962033c71982d3a129ac3e5b2705e19b50d6534\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f32dd767af23c59f021d583a23309b2180db86ae71b4ca4a5f436ac1c77d6e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c47dfa90d18e3e30053b0250c7b986bc877ab6bc3a553060f65468416c8105d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e507b4c3c536bdc63360b1386748657584f739e09973ec33c998ac267ca2766\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://676e764186e37083591f2c779b05dcbf5bc065bde85efade1c25a27a9bf74570\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 10:57:08.940903 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 10:57:08.942374 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2558080694/tls.crt::/tmp/serving-cert-2558080694/tls.key\\\\\\\"\\\\nI0121 10:57:14.590178 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 10:57:14.594387 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 10:57:14.594447 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 10:57:14.594564 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 10:57:14.594575 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 10:57:14.615981 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 10:57:14.616035 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 10:57:14.616045 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 10:57:14.616060 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 10:57:14.616065 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 10:57:14.616071 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 10:57:14.616077 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 10:57:14.616503 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 10:57:14.623960 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T10:56:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b00c4c20b4212a307993f164d3f2d23b9b8bd823111bef7a385ae8811e147f3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://164282bb15c33a383e2fc11a6617ca4008eec1b59e1c5577f7b34b0fcbddc99f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://164282bb15c33a383e2fc11a6617ca4008eec1b59e1c5577f7b34b0fcbddc99f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:56:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:13Z is after 2025-08-24T17:21:41Z" Jan 21 10:58:13 crc kubenswrapper[4881]: I0121 10:58:13.599599 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:13Z is after 2025-08-24T17:21:41Z" Jan 21 10:58:13 crc kubenswrapper[4881]: I0121 10:58:13.613568 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ddaa3aae50a05142125742b3cc13e65433fcadbef5b1e273232cffb660cc700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:13Z is after 2025-08-24T17:21:41Z" Jan 21 10:58:13 crc kubenswrapper[4881]: I0121 10:58:13.624348 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qgrth" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d379505c-c658-4dd5-b841-40c8443012c6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51a2ec789636052b12e0fdb4e647d7e4f92d1e4b7436933f1529561ffc2021d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57krk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d634cee9f543d3322f8cdc8bc62252096e789383c55d5d448cc53ab990ac9b52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57krk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qgrth\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:13Z is after 2025-08-24T17:21:41Z" Jan 21 
10:58:13 crc kubenswrapper[4881]: I0121 10:58:13.634022 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-dtv4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3552adbd-011f-4552-9e69-233b92c554c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqlps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqlps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-dtv4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:13Z is after 2025-08-24T17:21:41Z" Jan 21 10:58:13 crc kubenswrapper[4881]: I0121 10:58:13.643983 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f9987a1-d9f5-467c-82b2-533a714c4c62\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6cf7bf06a11465e04a80fe7ae667f9c15741137062514a621955622d2b339dce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://988de7ed33eebe3cf67b8c6362d70c761e509feb2c3b72e6f6a4ffb9cddbf421\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://988de7ed33eebe3cf67b8c6362d70c761e509feb2c3b72e6f6a4ffb9cddbf421\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:56:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:13Z is after 2025-08-24T17:21:41Z" Jan 21 10:58:13 crc kubenswrapper[4881]: I0121 10:58:13.656670 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"adf42b80-04ba-4164-b847-3b7f8b94816b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5f503293f457017fbc87cd1525eaaf06d41044a37f87127e1398bd50a228ea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d93b0b0bb02293efe7c01a9dbc64ff302a7b9fab07a2fe9bef5d0c2b5e8ac30e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05a4a29ae6a2b0d4acf843ae40de1e262c2149dac28e15697dbcd6d237235f29\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfff7b0c4eae65a4ed6e28b94feef5a5b0c4fb224f9cde2cfd9072727968e754\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:56:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:13Z is after 2025-08-24T17:21:41Z" Jan 21 10:58:13 crc kubenswrapper[4881]: I0121 10:58:13.657067 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:13 crc kubenswrapper[4881]: I0121 10:58:13.657136 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:13 crc kubenswrapper[4881]: I0121 10:58:13.657147 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:13 crc kubenswrapper[4881]: I0121 10:58:13.657162 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:13 crc kubenswrapper[4881]: I0121 10:58:13.657171 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:13Z","lastTransitionTime":"2026-01-21T10:58:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:58:13 crc kubenswrapper[4881]: I0121 10:58:13.674179 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:13Z is after 2025-08-24T17:21:41Z" Jan 21 10:58:13 crc kubenswrapper[4881]: I0121 10:58:13.690841 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fs42r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"09da9e14-f6d5-4346-a4a0-c17711e3b603\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:58:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:58:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e44307f5cc08335dc686c05c12b4ac57aeb2211a1072fff108a06b37b2e1461b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://821c7c539a796f03cdff04f5f89e6adab4d088c53f5c1ad85851862e5409e7eb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T10:58:08Z\\\",\\\"message\\\":\\\"2026-01-21T10:57:23+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_0cb853ce-7a29-40b7-96bf-1304acd74419\\\\n2026-01-21T10:57:23+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_0cb853ce-7a29-40b7-96bf-1304acd74419 to /host/opt/cni/bin/\\\\n2026-01-21T10:57:23Z [verbose] multus-daemon started\\\\n2026-01-21T10:57:23Z [verbose] Readiness Indicator file check\\\\n2026-01-21T10:58:08Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:58:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kt6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fs42r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:13Z is after 2025-08-24T17:21:41Z" Jan 21 10:58:13 crc kubenswrapper[4881]: I0121 10:58:13.717492 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8bb6d97-b3b8-4e31-b704-8e565385ab26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with unready 
status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e45d7cde8c26fc00bb1b5ad5cc88c41aff02acf0860856e86f3c3bfdc01e7045\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://599bfaf6bfc412f779f4732cf9f4273382c1f0af20576581264ea4fc63b2ec38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9454db8c9af3187ee06e017ac192668036b1f565305a37de61c3cffe4d94e7db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",
\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7fa3136ee876ebb14cd5164f9176c91c60cc04fd7882a3ad0957ce3a36184d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2c1d1496a7f7e5fad6a3d4ade4104e8e0c40fd0ca4722ca5e1ca1fbf0ebf3cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d29c170fdf57f1f4c678e2902443d9ef467ac3157370e997fd5596b4eecfb54e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-s
ocket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff735c08dae242cbd531e458695a99bcbe3a5e6c9753266141b14f67cb0799a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff735c08dae242cbd531e458695a99bcbe3a5e6c9753266141b14f67cb0799a2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T10:58:00Z\\\",\\\"message\\\":\\\"work policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:00Z is after 2025-08-24T17:21:41Z]\\\\nI0121 10:58:00.782399 6726 services_controller.go:434] Service openshift-kube-controller-manager/kube-controller-manager retrieved from lister for network=default: \\\\u0026Service{ObjectMeta:{kube-controller-manager openshift-kube-controller-manager 90927ca1-43e2-420d-8485-a35952e82cd9 4812 0 2025-02-23 05:22:57 +0000 UTC \\\\u003cnil\\\\u003e \\\\u003cnil\\\\u003e map[prometheus:kube-controller-manager] map[operator.openshift.io/spec-hash:bb05a56151ce98d11c8554843985ba99e0498dcafd98129435c2d982c5ea4c11 service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168 service.beta.openshift.io/serving-cert-secret-name:serving-cert service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [] [] []},Spec:ServiceSpec{Ports:[]ServicePort{Service\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:58Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-bx64f_openshift-ovn-kubernetes(e8bb6d97-b3b8-4e31-b704-8e565385ab26)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47c176d51b5558e85897ac8c72c02bb6f152a972d8e941aeb114333f777e22ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db879e2b1a43a0ae350c4777ee6355bf5b97b1d5977e909ee0968c77852519dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db879e2b1a43a0ae350c4777ee6355bf5b97b1d5977e909ee0968c77852519dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bx64f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:13Z is after 2025-08-24T17:21:41Z" Jan 21 10:58:13 crc kubenswrapper[4881]: I0121 10:58:13.734036 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14b423e6f144bf9e1e30c3f0a074ca9edd3c1e2cfa1674ea5b047a6a8fb01d92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c179ea565cc19d51f2becaf302623c94acbd70749ace24c754af51e3f0101033\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17
b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:13Z is after 2025-08-24T17:21:41Z" Jan 21 10:58:13 crc kubenswrapper[4881]: I0121 10:58:13.749608 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:13Z is after 2025-08-24T17:21:41Z" Jan 21 10:58:13 crc kubenswrapper[4881]: I0121 10:58:13.759949 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:13 crc kubenswrapper[4881]: I0121 10:58:13.759990 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:13 crc kubenswrapper[4881]: I0121 10:58:13.760000 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:13 crc kubenswrapper[4881]: I0121 10:58:13.760013 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:13 crc kubenswrapper[4881]: I0121 10:58:13.760021 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:13Z","lastTransitionTime":"2026-01-21T10:58:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:58:13 crc kubenswrapper[4881]: I0121 10:58:13.766847 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-v4wxp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c14980d7-1b3b-463b-8f57-f1e1afbd258c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd18bd57e9f0f878f56164dee92c18a4fff62c83f518a96d7db735dcd488e052\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7dffa6cfe62b953df5f9734726e4b93967d3fce7fa5743b1adef693e4f75e48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7dffa6cfe62b953df5f9734726e4b93967d3fce7fa5743b1adef693e4f75e48\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ceabfac4703edf7fe55ec5dd41e3fc1736424533c54ae12adcdb1f077e1f756\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ceabfac4703edf7fe55ec5dd41e3fc1736424533c54ae12adcdb1f077e1f756\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a2fff9245b8148b515dd4af52db2ec776cca710b28074e8955162220448d248\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a2fff9245b8148b515dd4af52db2ec776cca710b28074e8955162220448d248\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f6276883835d2a1fbab11df2fd15684b8ee850b6f264f3f194f6de68cc27cf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f6276883835d2a1fbab11df2fd15684b8ee850b6f264f3f194f6de68cc27cf9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c51b231b861d87536aaa44cd0e1018fc15464809b5a3bf30c1d2a25dd9a12a8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c51b231b861d87536aaa44cd0e1018fc15464809b5a3bf30c1d2a25dd9a12a8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13503428fec90dad2c9ce1086fe7c71c7910f145a6e7a8d546791e11002f36f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13503428fec90dad2c9ce1086fe7c71c7910f145a6e7a8d546791e11002f36f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-v4wxp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:13Z is after 2025-08-24T17:21:41Z" Jan 21 10:58:13 crc kubenswrapper[4881]: I0121 10:58:13.779948 4881 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3687b313-1df2-4274-80db-8c758b51bf2d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5b27143ea6007376f5989ccc3a11947ede80be1b5fac1b738ffbf27fa05c6d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hml99\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7172ca109a870f3c1c8d3af0117700b7b23e7a65c3e841f3a4ef3f445e85270d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hml99\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fb4fr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:13Z is after 2025-08-24T17:21:41Z" Jan 21 
10:58:13 crc kubenswrapper[4881]: I0121 10:58:13.795359 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13d0f0c4-fa31-44ba-bc94-c0a80fc1b2df\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17ef83fedf9cc77cf73fdd00486ec9b0b04712a60a5448402754a44ad46da439\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36430b9d5b01b4a6f3b9e7b58bfbec0c258f34847b321cb45bc3b23f84cf09fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3eba9cbb70fbd88687c81b18ad50f8386f836bf2fa2c8f9e1c503a20af985416\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.
126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b7d6b79713c6f4718939d3679f1ba6e237045d653762b6de122ebecdfabbe35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b7d6b79713c6f4718939d3679f1ba6e237045d653762b6de122ebecdfabbe35\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:56:54Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:13Z is after 2025-08-24T17:21:41Z" Jan 21 10:58:13 crc kubenswrapper[4881]: I0121 10:58:13.812053 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f8092488edbd3b7f307819f2cad06e6d1a6721d8c2fd8fa05a8f7e652949ca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:13Z is after 2025-08-24T17:21:41Z" Jan 21 10:58:13 crc kubenswrapper[4881]: I0121 10:58:13.824531 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8sptw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f19f480e-331f-42f5-a3b6-fd0c6847b157\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21b25675a85765cfd4abe361687eedfdf5dfffc1c9b83069f14986821acb12d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hjr7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8sptw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:13Z is after 2025-08-24T17:21:41Z" Jan 21 10:58:13 crc kubenswrapper[4881]: I0121 10:58:13.835054 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tjwf8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf4f6fc0-ed4c-47b7-b2bc-8033980781a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9164acd9a91836c25b02986f204dd0302964f6bff6fe077971d37eebd4b560ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57d55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:22Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tjwf8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:13Z is after 2025-08-24T17:21:41Z" Jan 21 10:58:13 crc kubenswrapper[4881]: I0121 10:58:13.850825 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5da31bf1-60a6-4d73-a425-97fe36cd40ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://945977e9e86c32b08a633de6b962033c71982d3a129ac3e5b2705e19b50d6534\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f32dd767af23c59f021d583a23309b2180db86ae71b4ca4a5f436ac1c77d6e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c47dfa90d18e3e30053b0250c7b986bc877ab6bc3a553060f65468416c8105d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e507b4c3c536bdc63360b1386748657584f739e09973ec33c998ac267ca2766\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://676e764186e37083591f2c779b05dcbf5bc065bde85efade1c25a27a9bf74570\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 10:57:08.940903 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 10:57:08.942374 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2558080694/tls.crt::/tmp/serving-cert-2558080694/tls.key\\\\\\\"\\\\nI0121 10:57:14.590178 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 10:57:14.594387 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 10:57:14.594447 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 10:57:14.594564 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 10:57:14.594575 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 10:57:14.615981 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 10:57:14.616035 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 10:57:14.616045 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 10:57:14.616060 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 10:57:14.616065 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 10:57:14.616071 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 10:57:14.616077 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 10:57:14.616503 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 10:57:14.623960 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T10:56:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b00c4c20b4212a307993f164d3f2d23b9b8bd823111bef7a385ae8811e147f3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://164282bb15c33a383e2fc11a6617ca4008eec1b59e1c5577f7b34b0fcbddc99f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://164282bb15c33a383e2fc11a6617ca4008eec1b59e1c5577f7b34b0fcbddc99f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:56:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:13Z is after 2025-08-24T17:21:41Z" Jan 21 10:58:13 crc kubenswrapper[4881]: I0121 10:58:13.862547 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:13 crc kubenswrapper[4881]: I0121 10:58:13.862599 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:13 crc kubenswrapper[4881]: I0121 10:58:13.862613 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:13 crc kubenswrapper[4881]: I0121 10:58:13.862635 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:13 crc kubenswrapper[4881]: I0121 10:58:13.862650 4881 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:13Z","lastTransitionTime":"2026-01-21T10:58:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:13 crc kubenswrapper[4881]: I0121 10:58:13.864017 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:13Z is after 2025-08-24T17:21:41Z" Jan 21 10:58:13 crc kubenswrapper[4881]: I0121 10:58:13.878297 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ddaa3aae50a05142125742b3cc13e65433fcadbef5b1e273232cffb660cc700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:13Z is after 2025-08-24T17:21:41Z" Jan 21 10:58:13 crc kubenswrapper[4881]: I0121 10:58:13.965547 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:13 
crc kubenswrapper[4881]: I0121 10:58:13.965621 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 10:58:13 crc kubenswrapper[4881]: I0121 10:58:13.965639 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 10:58:13 crc kubenswrapper[4881]: I0121 10:58:13.965665 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 10:58:13 crc kubenswrapper[4881]: I0121 10:58:13.965682 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:13Z","lastTransitionTime":"2026-01-21T10:58:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 10:58:14 crc kubenswrapper[4881]: I0121 10:58:14.068843 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 10:58:14 crc kubenswrapper[4881]: I0121 10:58:14.068912 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 10:58:14 crc kubenswrapper[4881]: I0121 10:58:14.068932 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 10:58:14 crc kubenswrapper[4881]: I0121 10:58:14.068958 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 10:58:14 crc kubenswrapper[4881]: I0121 10:58:14.068975 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:14Z","lastTransitionTime":"2026-01-21T10:58:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 10:58:14 crc kubenswrapper[4881]: I0121 10:58:14.171524 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 10:58:14 crc kubenswrapper[4881]: I0121 10:58:14.171575 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 10:58:14 crc kubenswrapper[4881]: I0121 10:58:14.171592 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 10:58:14 crc kubenswrapper[4881]: I0121 10:58:14.171616 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 10:58:14 crc kubenswrapper[4881]: I0121 10:58:14.171636 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:14Z","lastTransitionTime":"2026-01-21T10:58:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 10:58:14 crc kubenswrapper[4881]: I0121 10:58:14.193839 4881 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 16:55:32.885581457 +0000 UTC
Jan 21 10:58:14 crc kubenswrapper[4881]: I0121 10:58:14.274146 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 10:58:14 crc kubenswrapper[4881]: I0121 10:58:14.274180 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 10:58:14 crc kubenswrapper[4881]: I0121 10:58:14.274190 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 10:58:14 crc kubenswrapper[4881]: I0121 10:58:14.274207 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 10:58:14 crc kubenswrapper[4881]: I0121 10:58:14.274218 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:14Z","lastTransitionTime":"2026-01-21T10:58:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 10:58:14 crc kubenswrapper[4881]: I0121 10:58:14.310071 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 21 10:58:14 crc kubenswrapper[4881]: E0121 10:58:14.310257 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 21 10:58:14 crc kubenswrapper[4881]: I0121 10:58:14.310699 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 21 10:58:14 crc kubenswrapper[4881]: E0121 10:58:14.310774 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 21 10:58:14 crc kubenswrapper[4881]: I0121 10:58:14.376501 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 10:58:14 crc kubenswrapper[4881]: I0121 10:58:14.376542 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 10:58:14 crc kubenswrapper[4881]: I0121 10:58:14.376551 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 10:58:14 crc kubenswrapper[4881]: I0121 10:58:14.376565 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 10:58:14 crc kubenswrapper[4881]: I0121 10:58:14.376574 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:14Z","lastTransitionTime":"2026-01-21T10:58:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 10:58:14 crc kubenswrapper[4881]: I0121 10:58:14.480202 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 10:58:14 crc kubenswrapper[4881]: I0121 10:58:14.480260 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 10:58:14 crc kubenswrapper[4881]: I0121 10:58:14.480271 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 10:58:14 crc kubenswrapper[4881]: I0121 10:58:14.480290 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 10:58:14 crc kubenswrapper[4881]: I0121 10:58:14.480303 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:14Z","lastTransitionTime":"2026-01-21T10:58:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 10:58:14 crc kubenswrapper[4881]: I0121 10:58:14.583684 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 10:58:14 crc kubenswrapper[4881]: I0121 10:58:14.583728 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 10:58:14 crc kubenswrapper[4881]: I0121 10:58:14.583741 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 10:58:14 crc kubenswrapper[4881]: I0121 10:58:14.583829 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 10:58:14 crc kubenswrapper[4881]: I0121 10:58:14.583853 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:14Z","lastTransitionTime":"2026-01-21T10:58:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 10:58:14 crc kubenswrapper[4881]: I0121 10:58:14.688097 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 10:58:14 crc kubenswrapper[4881]: I0121 10:58:14.688178 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 10:58:14 crc kubenswrapper[4881]: I0121 10:58:14.688227 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 10:58:14 crc kubenswrapper[4881]: I0121 10:58:14.688274 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 10:58:14 crc kubenswrapper[4881]: I0121 10:58:14.688285 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:14Z","lastTransitionTime":"2026-01-21T10:58:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 10:58:14 crc kubenswrapper[4881]: I0121 10:58:14.793346 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 10:58:14 crc kubenswrapper[4881]: I0121 10:58:14.793517 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 10:58:14 crc kubenswrapper[4881]: I0121 10:58:14.793596 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 10:58:14 crc kubenswrapper[4881]: I0121 10:58:14.793653 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 10:58:14 crc kubenswrapper[4881]: I0121 10:58:14.793683 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:14Z","lastTransitionTime":"2026-01-21T10:58:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 10:58:14 crc kubenswrapper[4881]: I0121 10:58:14.899374 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 10:58:14 crc kubenswrapper[4881]: I0121 10:58:14.899428 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 10:58:14 crc kubenswrapper[4881]: I0121 10:58:14.899441 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 10:58:14 crc kubenswrapper[4881]: I0121 10:58:14.899458 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 10:58:14 crc kubenswrapper[4881]: I0121 10:58:14.899471 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:14Z","lastTransitionTime":"2026-01-21T10:58:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 10:58:15 crc kubenswrapper[4881]: I0121 10:58:15.002912 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 10:58:15 crc kubenswrapper[4881]: I0121 10:58:15.002959 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 10:58:15 crc kubenswrapper[4881]: I0121 10:58:15.002971 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 10:58:15 crc kubenswrapper[4881]: I0121 10:58:15.002998 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 10:58:15 crc kubenswrapper[4881]: I0121 10:58:15.003013 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:15Z","lastTransitionTime":"2026-01-21T10:58:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 10:58:15 crc kubenswrapper[4881]: I0121 10:58:15.106325 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 10:58:15 crc kubenswrapper[4881]: I0121 10:58:15.106400 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 10:58:15 crc kubenswrapper[4881]: I0121 10:58:15.106420 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 10:58:15 crc kubenswrapper[4881]: I0121 10:58:15.106494 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 10:58:15 crc kubenswrapper[4881]: I0121 10:58:15.106513 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:15Z","lastTransitionTime":"2026-01-21T10:58:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 10:58:15 crc kubenswrapper[4881]: I0121 10:58:15.194515 4881 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 16:20:43.357645818 +0000 UTC
Jan 21 10:58:15 crc kubenswrapper[4881]: I0121 10:58:15.209398 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 10:58:15 crc kubenswrapper[4881]: I0121 10:58:15.209447 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 10:58:15 crc kubenswrapper[4881]: I0121 10:58:15.209460 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 10:58:15 crc kubenswrapper[4881]: I0121 10:58:15.209477 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 10:58:15 crc kubenswrapper[4881]: I0121 10:58:15.209489 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:15Z","lastTransitionTime":"2026-01-21T10:58:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 10:58:15 crc kubenswrapper[4881]: I0121 10:58:15.310391 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dtv4t"
Jan 21 10:58:15 crc kubenswrapper[4881]: E0121 10:58:15.310678 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dtv4t" podUID="3552adbd-011f-4552-9e69-233b92c554c8"
Jan 21 10:58:15 crc kubenswrapper[4881]: I0121 10:58:15.310834 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 21 10:58:15 crc kubenswrapper[4881]: E0121 10:58:15.311130 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 10:58:15 crc kubenswrapper[4881]: I0121 10:58:15.312154 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:15 crc kubenswrapper[4881]: I0121 10:58:15.312192 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:15 crc kubenswrapper[4881]: I0121 10:58:15.312202 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:15 crc kubenswrapper[4881]: I0121 10:58:15.312215 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:15 crc kubenswrapper[4881]: I0121 10:58:15.312226 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:15Z","lastTransitionTime":"2026-01-21T10:58:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:15 crc kubenswrapper[4881]: I0121 10:58:15.415236 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:15 crc kubenswrapper[4881]: I0121 10:58:15.415319 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:15 crc kubenswrapper[4881]: I0121 10:58:15.415344 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:15 crc kubenswrapper[4881]: I0121 10:58:15.415374 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:15 crc kubenswrapper[4881]: I0121 10:58:15.415398 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:15Z","lastTransitionTime":"2026-01-21T10:58:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:58:15 crc kubenswrapper[4881]: I0121 10:58:15.518079 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:15 crc kubenswrapper[4881]: I0121 10:58:15.518127 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:15 crc kubenswrapper[4881]: I0121 10:58:15.518138 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:15 crc kubenswrapper[4881]: I0121 10:58:15.518155 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:15 crc kubenswrapper[4881]: I0121 10:58:15.518167 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:15Z","lastTransitionTime":"2026-01-21T10:58:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:15 crc kubenswrapper[4881]: I0121 10:58:15.622593 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:15 crc kubenswrapper[4881]: I0121 10:58:15.622634 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:15 crc kubenswrapper[4881]: I0121 10:58:15.622645 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:15 crc kubenswrapper[4881]: I0121 10:58:15.622666 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:15 crc kubenswrapper[4881]: I0121 10:58:15.622677 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:15Z","lastTransitionTime":"2026-01-21T10:58:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:15 crc kubenswrapper[4881]: I0121 10:58:15.726246 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:15 crc kubenswrapper[4881]: I0121 10:58:15.726325 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:15 crc kubenswrapper[4881]: I0121 10:58:15.726342 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:15 crc kubenswrapper[4881]: I0121 10:58:15.726367 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:15 crc kubenswrapper[4881]: I0121 10:58:15.726384 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:15Z","lastTransitionTime":"2026-01-21T10:58:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:58:15 crc kubenswrapper[4881]: I0121 10:58:15.829702 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:15 crc kubenswrapper[4881]: I0121 10:58:15.829773 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:15 crc kubenswrapper[4881]: I0121 10:58:15.829831 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:15 crc kubenswrapper[4881]: I0121 10:58:15.829855 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:15 crc kubenswrapper[4881]: I0121 10:58:15.829873 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:15Z","lastTransitionTime":"2026-01-21T10:58:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:15 crc kubenswrapper[4881]: I0121 10:58:15.934608 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:15 crc kubenswrapper[4881]: I0121 10:58:15.934650 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:15 crc kubenswrapper[4881]: I0121 10:58:15.934662 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:15 crc kubenswrapper[4881]: I0121 10:58:15.934681 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:15 crc kubenswrapper[4881]: I0121 10:58:15.934692 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:15Z","lastTransitionTime":"2026-01-21T10:58:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:15 crc kubenswrapper[4881]: I0121 10:58:15.977149 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:15 crc kubenswrapper[4881]: I0121 10:58:15.977209 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:15 crc kubenswrapper[4881]: I0121 10:58:15.977220 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:15 crc kubenswrapper[4881]: I0121 10:58:15.977237 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:15 crc kubenswrapper[4881]: I0121 10:58:15.977252 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:15Z","lastTransitionTime":"2026-01-21T10:58:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:58:15 crc kubenswrapper[4881]: E0121 10:58:15.993595 4881 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:58:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:58:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:58:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:58:15Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:58:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:58:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:58:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:58:15Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"26a8a75a-20da-43b0-891d-353287c7b817\\\",\\\"systemUUID\\\":\\\"5fb73d3d-5879-4958-af84-1cb776cbe5bd\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:15Z is after 2025-08-24T17:21:41Z" Jan 21 10:58:15 crc kubenswrapper[4881]: I0121 10:58:15.998638 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:15 crc kubenswrapper[4881]: I0121 10:58:15.998677 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 10:58:15 crc kubenswrapper[4881]: I0121 10:58:15.998689 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:15 crc kubenswrapper[4881]: I0121 10:58:15.998706 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:15 crc kubenswrapper[4881]: I0121 10:58:15.998717 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:15Z","lastTransitionTime":"2026-01-21T10:58:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:16 crc kubenswrapper[4881]: E0121 10:58:16.016409 4881 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:58:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:58:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:58:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:58:15Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:58:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:58:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:58:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:58:15Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"26a8a75a-20da-43b0-891d-353287c7b817\\\",\\\"systemUUID\\\":\\\"5fb73d3d-5879-4958-af84-1cb776cbe5bd\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:16Z is after 2025-08-24T17:21:41Z" Jan 21 10:58:16 crc kubenswrapper[4881]: I0121 10:58:16.021098 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:16 crc kubenswrapper[4881]: I0121 10:58:16.021143 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 10:58:16 crc kubenswrapper[4881]: I0121 10:58:16.021154 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:16 crc kubenswrapper[4881]: I0121 10:58:16.021171 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:16 crc kubenswrapper[4881]: I0121 10:58:16.021182 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:16Z","lastTransitionTime":"2026-01-21T10:58:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:16 crc kubenswrapper[4881]: E0121 10:58:16.037209 4881 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:58:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:58:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:58:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:58:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:58:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:58:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:58:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:58:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"26a8a75a-20da-43b0-891d-353287c7b817\\\",\\\"systemUUID\\\":\\\"5fb73d3d-5879-4958-af84-1cb776cbe5bd\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:16Z is after 2025-08-24T17:21:41Z" Jan 21 10:58:16 crc kubenswrapper[4881]: I0121 10:58:16.041716 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:16 crc kubenswrapper[4881]: I0121 10:58:16.041757 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 10:58:16 crc kubenswrapper[4881]: I0121 10:58:16.041767 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:16 crc kubenswrapper[4881]: I0121 10:58:16.041808 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:16 crc kubenswrapper[4881]: I0121 10:58:16.041818 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:16Z","lastTransitionTime":"2026-01-21T10:58:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:16 crc kubenswrapper[4881]: E0121 10:58:16.057506 4881 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:58:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:58:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:58:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:58:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:58:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:58:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:58:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:58:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"26a8a75a-20da-43b0-891d-353287c7b817\\\",\\\"systemUUID\\\":\\\"5fb73d3d-5879-4958-af84-1cb776cbe5bd\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:16Z is after 2025-08-24T17:21:41Z" Jan 21 10:58:16 crc kubenswrapper[4881]: I0121 10:58:16.061876 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:16 crc kubenswrapper[4881]: I0121 10:58:16.061952 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 10:58:16 crc kubenswrapper[4881]: I0121 10:58:16.061969 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:16 crc kubenswrapper[4881]: I0121 10:58:16.061991 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:16 crc kubenswrapper[4881]: I0121 10:58:16.062006 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:16Z","lastTransitionTime":"2026-01-21T10:58:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:16 crc kubenswrapper[4881]: E0121 10:58:16.077843 4881 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:58:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:58:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:58:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:58:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:58:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:58:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:58:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:58:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"26a8a75a-20da-43b0-891d-353287c7b817\\\",\\\"systemUUID\\\":\\\"5fb73d3d-5879-4958-af84-1cb776cbe5bd\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:16Z is after 2025-08-24T17:21:41Z" Jan 21 10:58:16 crc kubenswrapper[4881]: E0121 10:58:16.078053 4881 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 21 10:58:16 crc kubenswrapper[4881]: I0121 10:58:16.079998 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 21 10:58:16 crc kubenswrapper[4881]: I0121 10:58:16.080040 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:16 crc kubenswrapper[4881]: I0121 10:58:16.080058 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:16 crc kubenswrapper[4881]: I0121 10:58:16.080078 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:16 crc kubenswrapper[4881]: I0121 10:58:16.080103 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:16Z","lastTransitionTime":"2026-01-21T10:58:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:16 crc kubenswrapper[4881]: I0121 10:58:16.182967 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:16 crc kubenswrapper[4881]: I0121 10:58:16.183070 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:16 crc kubenswrapper[4881]: I0121 10:58:16.183092 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:16 crc kubenswrapper[4881]: I0121 10:58:16.183118 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:16 crc kubenswrapper[4881]: I0121 10:58:16.183137 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:16Z","lastTransitionTime":"2026-01-21T10:58:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:58:16 crc kubenswrapper[4881]: I0121 10:58:16.195648 4881 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 01:05:18.113764085 +0000 UTC Jan 21 10:58:16 crc kubenswrapper[4881]: I0121 10:58:16.286325 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:16 crc kubenswrapper[4881]: I0121 10:58:16.286403 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:16 crc kubenswrapper[4881]: I0121 10:58:16.286492 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:16 crc kubenswrapper[4881]: I0121 10:58:16.286559 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:16 crc kubenswrapper[4881]: I0121 10:58:16.286578 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:16Z","lastTransitionTime":"2026-01-21T10:58:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:16 crc kubenswrapper[4881]: I0121 10:58:16.310144 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 10:58:16 crc kubenswrapper[4881]: I0121 10:58:16.310215 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 10:58:16 crc kubenswrapper[4881]: E0121 10:58:16.310396 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 10:58:16 crc kubenswrapper[4881]: E0121 10:58:16.310578 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 10:58:16 crc kubenswrapper[4881]: I0121 10:58:16.389557 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:16 crc kubenswrapper[4881]: I0121 10:58:16.389650 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:16 crc kubenswrapper[4881]: I0121 10:58:16.389671 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:16 crc kubenswrapper[4881]: I0121 10:58:16.389698 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:16 crc kubenswrapper[4881]: I0121 10:58:16.389716 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:16Z","lastTransitionTime":"2026-01-21T10:58:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:16 crc kubenswrapper[4881]: I0121 10:58:16.492899 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:16 crc kubenswrapper[4881]: I0121 10:58:16.492947 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:16 crc kubenswrapper[4881]: I0121 10:58:16.493077 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:16 crc kubenswrapper[4881]: I0121 10:58:16.493098 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:16 crc kubenswrapper[4881]: I0121 10:58:16.493109 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:16Z","lastTransitionTime":"2026-01-21T10:58:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:58:16 crc kubenswrapper[4881]: I0121 10:58:16.596163 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:16 crc kubenswrapper[4881]: I0121 10:58:16.596240 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:16 crc kubenswrapper[4881]: I0121 10:58:16.596265 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:16 crc kubenswrapper[4881]: I0121 10:58:16.596297 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:16 crc kubenswrapper[4881]: I0121 10:58:16.596320 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:16Z","lastTransitionTime":"2026-01-21T10:58:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:16 crc kubenswrapper[4881]: I0121 10:58:16.699511 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:16 crc kubenswrapper[4881]: I0121 10:58:16.699574 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:16 crc kubenswrapper[4881]: I0121 10:58:16.699597 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:16 crc kubenswrapper[4881]: I0121 10:58:16.699625 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:16 crc kubenswrapper[4881]: I0121 10:58:16.699644 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:16Z","lastTransitionTime":"2026-01-21T10:58:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:16 crc kubenswrapper[4881]: I0121 10:58:16.802704 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:16 crc kubenswrapper[4881]: I0121 10:58:16.803049 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:16 crc kubenswrapper[4881]: I0121 10:58:16.803061 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:16 crc kubenswrapper[4881]: I0121 10:58:16.803077 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:16 crc kubenswrapper[4881]: I0121 10:58:16.803089 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:16Z","lastTransitionTime":"2026-01-21T10:58:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:58:16 crc kubenswrapper[4881]: I0121 10:58:16.906915 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:16 crc kubenswrapper[4881]: I0121 10:58:16.906993 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:16 crc kubenswrapper[4881]: I0121 10:58:16.907016 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:16 crc kubenswrapper[4881]: I0121 10:58:16.907048 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:16 crc kubenswrapper[4881]: I0121 10:58:16.907071 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:16Z","lastTransitionTime":"2026-01-21T10:58:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:17 crc kubenswrapper[4881]: I0121 10:58:17.010212 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:17 crc kubenswrapper[4881]: I0121 10:58:17.010291 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:17 crc kubenswrapper[4881]: I0121 10:58:17.010315 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:17 crc kubenswrapper[4881]: I0121 10:58:17.010348 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:17 crc kubenswrapper[4881]: I0121 10:58:17.010373 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:17Z","lastTransitionTime":"2026-01-21T10:58:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:17 crc kubenswrapper[4881]: I0121 10:58:17.114184 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:17 crc kubenswrapper[4881]: I0121 10:58:17.114242 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:17 crc kubenswrapper[4881]: I0121 10:58:17.114264 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:17 crc kubenswrapper[4881]: I0121 10:58:17.114293 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:17 crc kubenswrapper[4881]: I0121 10:58:17.114314 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:17Z","lastTransitionTime":"2026-01-21T10:58:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:58:17 crc kubenswrapper[4881]: I0121 10:58:17.196386 4881 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 19:40:40.432416603 +0000 UTC Jan 21 10:58:17 crc kubenswrapper[4881]: I0121 10:58:17.217267 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:17 crc kubenswrapper[4881]: I0121 10:58:17.217344 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:17 crc kubenswrapper[4881]: I0121 10:58:17.217368 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:17 crc kubenswrapper[4881]: I0121 10:58:17.217397 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:17 crc kubenswrapper[4881]: I0121 10:58:17.217422 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:17Z","lastTransitionTime":"2026-01-21T10:58:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:17 crc kubenswrapper[4881]: I0121 10:58:17.310455 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dtv4t" Jan 21 10:58:17 crc kubenswrapper[4881]: I0121 10:58:17.310455 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 10:58:17 crc kubenswrapper[4881]: E0121 10:58:17.310712 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dtv4t" podUID="3552adbd-011f-4552-9e69-233b92c554c8" Jan 21 10:58:17 crc kubenswrapper[4881]: E0121 10:58:17.310919 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 10:58:17 crc kubenswrapper[4881]: I0121 10:58:17.320146 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:17 crc kubenswrapper[4881]: I0121 10:58:17.320248 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:17 crc kubenswrapper[4881]: I0121 10:58:17.320278 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:17 crc kubenswrapper[4881]: I0121 10:58:17.320348 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:17 crc kubenswrapper[4881]: I0121 10:58:17.320378 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:17Z","lastTransitionTime":"2026-01-21T10:58:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:17 crc kubenswrapper[4881]: I0121 10:58:17.425521 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:17 crc kubenswrapper[4881]: I0121 10:58:17.425604 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:17 crc kubenswrapper[4881]: I0121 10:58:17.425636 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:17 crc kubenswrapper[4881]: I0121 10:58:17.425669 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:17 crc kubenswrapper[4881]: I0121 10:58:17.425695 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:17Z","lastTransitionTime":"2026-01-21T10:58:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:58:17 crc kubenswrapper[4881]: I0121 10:58:17.528611 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:17 crc kubenswrapper[4881]: I0121 10:58:17.528667 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:17 crc kubenswrapper[4881]: I0121 10:58:17.528681 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:17 crc kubenswrapper[4881]: I0121 10:58:17.528698 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:17 crc kubenswrapper[4881]: I0121 10:58:17.528710 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:17Z","lastTransitionTime":"2026-01-21T10:58:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:17 crc kubenswrapper[4881]: I0121 10:58:17.632893 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:17 crc kubenswrapper[4881]: I0121 10:58:17.632963 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:17 crc kubenswrapper[4881]: I0121 10:58:17.632980 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:17 crc kubenswrapper[4881]: I0121 10:58:17.633005 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:17 crc kubenswrapper[4881]: I0121 10:58:17.633021 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:17Z","lastTransitionTime":"2026-01-21T10:58:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:17 crc kubenswrapper[4881]: I0121 10:58:17.736554 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:17 crc kubenswrapper[4881]: I0121 10:58:17.736623 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:17 crc kubenswrapper[4881]: I0121 10:58:17.736641 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:17 crc kubenswrapper[4881]: I0121 10:58:17.736666 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:17 crc kubenswrapper[4881]: I0121 10:58:17.736684 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:17Z","lastTransitionTime":"2026-01-21T10:58:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:58:17 crc kubenswrapper[4881]: I0121 10:58:17.840822 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:17 crc kubenswrapper[4881]: I0121 10:58:17.840910 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:17 crc kubenswrapper[4881]: I0121 10:58:17.840934 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:17 crc kubenswrapper[4881]: I0121 10:58:17.840967 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:17 crc kubenswrapper[4881]: I0121 10:58:17.840989 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:17Z","lastTransitionTime":"2026-01-21T10:58:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:17 crc kubenswrapper[4881]: I0121 10:58:17.943592 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:17 crc kubenswrapper[4881]: I0121 10:58:17.943859 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:17 crc kubenswrapper[4881]: I0121 10:58:17.943873 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:17 crc kubenswrapper[4881]: I0121 10:58:17.943891 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:17 crc kubenswrapper[4881]: I0121 10:58:17.943903 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:17Z","lastTransitionTime":"2026-01-21T10:58:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:18 crc kubenswrapper[4881]: I0121 10:58:18.046361 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:18 crc kubenswrapper[4881]: I0121 10:58:18.046432 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:18 crc kubenswrapper[4881]: I0121 10:58:18.046451 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:18 crc kubenswrapper[4881]: I0121 10:58:18.046477 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:18 crc kubenswrapper[4881]: I0121 10:58:18.046497 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:18Z","lastTransitionTime":"2026-01-21T10:58:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:58:18 crc kubenswrapper[4881]: I0121 10:58:18.149872 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:18 crc kubenswrapper[4881]: I0121 10:58:18.149941 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:18 crc kubenswrapper[4881]: I0121 10:58:18.149962 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:18 crc kubenswrapper[4881]: I0121 10:58:18.149991 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:18 crc kubenswrapper[4881]: I0121 10:58:18.150014 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:18Z","lastTransitionTime":"2026-01-21T10:58:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:18 crc kubenswrapper[4881]: I0121 10:58:18.197371 4881 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 12:07:56.493725846 +0000 UTC Jan 21 10:58:18 crc kubenswrapper[4881]: I0121 10:58:18.253270 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:18 crc kubenswrapper[4881]: I0121 10:58:18.253318 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:18 crc kubenswrapper[4881]: I0121 10:58:18.253334 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:18 crc kubenswrapper[4881]: I0121 10:58:18.253359 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:18 crc kubenswrapper[4881]: I0121 10:58:18.253373 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:18Z","lastTransitionTime":"2026-01-21T10:58:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:18 crc kubenswrapper[4881]: I0121 10:58:18.310324 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 10:58:18 crc kubenswrapper[4881]: I0121 10:58:18.310421 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 10:58:18 crc kubenswrapper[4881]: E0121 10:58:18.310493 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 10:58:18 crc kubenswrapper[4881]: E0121 10:58:18.310612 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 10:58:18 crc kubenswrapper[4881]: I0121 10:58:18.356257 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:18 crc kubenswrapper[4881]: I0121 10:58:18.356310 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:18 crc kubenswrapper[4881]: I0121 10:58:18.356327 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:18 crc kubenswrapper[4881]: I0121 10:58:18.356352 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:18 crc kubenswrapper[4881]: I0121 10:58:18.356370 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:18Z","lastTransitionTime":"2026-01-21T10:58:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:18 crc kubenswrapper[4881]: I0121 10:58:18.459261 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:18 crc kubenswrapper[4881]: I0121 10:58:18.459357 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:18 crc kubenswrapper[4881]: I0121 10:58:18.459379 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:18 crc kubenswrapper[4881]: I0121 10:58:18.459406 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:18 crc kubenswrapper[4881]: I0121 10:58:18.459424 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:18Z","lastTransitionTime":"2026-01-21T10:58:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:58:18 crc kubenswrapper[4881]: I0121 10:58:18.562164 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:18 crc kubenswrapper[4881]: I0121 10:58:18.562208 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:18 crc kubenswrapper[4881]: I0121 10:58:18.562219 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:18 crc kubenswrapper[4881]: I0121 10:58:18.562236 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:18 crc kubenswrapper[4881]: I0121 10:58:18.562284 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:18Z","lastTransitionTime":"2026-01-21T10:58:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:18 crc kubenswrapper[4881]: I0121 10:58:18.665110 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:18 crc kubenswrapper[4881]: I0121 10:58:18.665196 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:18 crc kubenswrapper[4881]: I0121 10:58:18.665219 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:18 crc kubenswrapper[4881]: I0121 10:58:18.665250 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:18 crc kubenswrapper[4881]: I0121 10:58:18.665272 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:18Z","lastTransitionTime":"2026-01-21T10:58:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:18 crc kubenswrapper[4881]: I0121 10:58:18.768355 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:18 crc kubenswrapper[4881]: I0121 10:58:18.768401 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:18 crc kubenswrapper[4881]: I0121 10:58:18.768415 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:18 crc kubenswrapper[4881]: I0121 10:58:18.768435 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:18 crc kubenswrapper[4881]: I0121 10:58:18.768448 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:18Z","lastTransitionTime":"2026-01-21T10:58:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:58:18 crc kubenswrapper[4881]: I0121 10:58:18.872454 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:18 crc kubenswrapper[4881]: I0121 10:58:18.872525 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:18 crc kubenswrapper[4881]: I0121 10:58:18.872542 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:18 crc kubenswrapper[4881]: I0121 10:58:18.872572 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:18 crc kubenswrapper[4881]: I0121 10:58:18.872591 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:18Z","lastTransitionTime":"2026-01-21T10:58:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:18 crc kubenswrapper[4881]: I0121 10:58:18.975606 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:18 crc kubenswrapper[4881]: I0121 10:58:18.975664 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:18 crc kubenswrapper[4881]: I0121 10:58:18.975677 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:18 crc kubenswrapper[4881]: I0121 10:58:18.975698 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:18 crc kubenswrapper[4881]: I0121 10:58:18.975710 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:18Z","lastTransitionTime":"2026-01-21T10:58:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:19 crc kubenswrapper[4881]: I0121 10:58:19.066615 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 10:58:19 crc kubenswrapper[4881]: I0121 10:58:19.066743 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 10:58:19 crc kubenswrapper[4881]: E0121 10:58:19.066821 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-21 10:59:23.066741869 +0000 UTC m=+150.326698368 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:58:19 crc kubenswrapper[4881]: I0121 10:58:19.066967 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 10:58:19 crc kubenswrapper[4881]: E0121 10:58:19.067020 4881 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 10:58:19 crc kubenswrapper[4881]: E0121 10:58:19.067074 4881 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 10:58:19 crc kubenswrapper[4881]: E0121 10:58:19.067140 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 10:59:23.067114108 +0000 UTC m=+150.327070607 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 10:58:19 crc kubenswrapper[4881]: E0121 10:58:19.067169 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 10:59:23.067156879 +0000 UTC m=+150.327113378 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 10:58:19 crc kubenswrapper[4881]: I0121 10:58:19.078103 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:19 crc kubenswrapper[4881]: I0121 10:58:19.078177 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:19 crc kubenswrapper[4881]: I0121 10:58:19.078212 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:19 crc kubenswrapper[4881]: I0121 10:58:19.078243 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:19 crc kubenswrapper[4881]: I0121 10:58:19.078269 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:19Z","lastTransitionTime":"2026-01-21T10:58:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:19 crc kubenswrapper[4881]: I0121 10:58:19.168191 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 10:58:19 crc kubenswrapper[4881]: I0121 10:58:19.168376 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 10:58:19 crc kubenswrapper[4881]: E0121 10:58:19.168460 4881 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 10:58:19 crc kubenswrapper[4881]: E0121 10:58:19.168513 4881 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 10:58:19 crc kubenswrapper[4881]: E0121 10:58:19.168534 4881 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 10:58:19 crc kubenswrapper[4881]: E0121 10:58:19.168575 4881 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 10:58:19 crc kubenswrapper[4881]: E0121 10:58:19.168605 4881 projected.go:288] Couldn't get configMap 
openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 10:58:19 crc kubenswrapper[4881]: E0121 10:58:19.168624 4881 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 10:58:19 crc kubenswrapper[4881]: E0121 10:58:19.168706 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-21 10:59:23.168682963 +0000 UTC m=+150.428639472 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 10:58:19 crc kubenswrapper[4881]: E0121 10:58:19.168824 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-21 10:59:23.168760585 +0000 UTC m=+150.428717084 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 10:58:19 crc kubenswrapper[4881]: I0121 10:58:19.181182 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:19 crc kubenswrapper[4881]: I0121 10:58:19.181240 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:19 crc kubenswrapper[4881]: I0121 10:58:19.181259 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:19 crc kubenswrapper[4881]: I0121 10:58:19.181282 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:19 crc kubenswrapper[4881]: I0121 10:58:19.181302 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:19Z","lastTransitionTime":"2026-01-21T10:58:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:58:19 crc kubenswrapper[4881]: I0121 10:58:19.197875 4881 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 19:36:19.839832633 +0000 UTC Jan 21 10:58:19 crc kubenswrapper[4881]: I0121 10:58:19.284585 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:19 crc kubenswrapper[4881]: I0121 10:58:19.284649 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:19 crc kubenswrapper[4881]: I0121 10:58:19.284666 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:19 crc kubenswrapper[4881]: I0121 10:58:19.284689 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:19 crc kubenswrapper[4881]: I0121 10:58:19.285039 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:19Z","lastTransitionTime":"2026-01-21T10:58:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:19 crc kubenswrapper[4881]: I0121 10:58:19.309856 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 10:58:19 crc kubenswrapper[4881]: I0121 10:58:19.309930 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dtv4t" Jan 21 10:58:19 crc kubenswrapper[4881]: E0121 10:58:19.310032 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 10:58:19 crc kubenswrapper[4881]: E0121 10:58:19.310302 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-dtv4t" podUID="3552adbd-011f-4552-9e69-233b92c554c8" Jan 21 10:58:19 crc kubenswrapper[4881]: I0121 10:58:19.389076 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:19 crc kubenswrapper[4881]: I0121 10:58:19.389144 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:19 crc kubenswrapper[4881]: I0121 10:58:19.389161 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:19 crc kubenswrapper[4881]: I0121 10:58:19.389182 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:19 crc kubenswrapper[4881]: I0121 10:58:19.389197 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:19Z","lastTransitionTime":"2026-01-21T10:58:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:19 crc kubenswrapper[4881]: I0121 10:58:19.492273 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:19 crc kubenswrapper[4881]: I0121 10:58:19.492342 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:19 crc kubenswrapper[4881]: I0121 10:58:19.492366 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:19 crc kubenswrapper[4881]: I0121 10:58:19.492396 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:19 crc kubenswrapper[4881]: I0121 10:58:19.492419 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:19Z","lastTransitionTime":"2026-01-21T10:58:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:58:19 crc kubenswrapper[4881]: I0121 10:58:19.595568 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:19 crc kubenswrapper[4881]: I0121 10:58:19.595657 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:19 crc kubenswrapper[4881]: I0121 10:58:19.595724 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:19 crc kubenswrapper[4881]: I0121 10:58:19.595821 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:19 crc kubenswrapper[4881]: I0121 10:58:19.595849 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:19Z","lastTransitionTime":"2026-01-21T10:58:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:19 crc kubenswrapper[4881]: I0121 10:58:19.699266 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:19 crc kubenswrapper[4881]: I0121 10:58:19.699323 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:19 crc kubenswrapper[4881]: I0121 10:58:19.699339 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:19 crc kubenswrapper[4881]: I0121 10:58:19.699361 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:19 crc kubenswrapper[4881]: I0121 10:58:19.699380 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:19Z","lastTransitionTime":"2026-01-21T10:58:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:19 crc kubenswrapper[4881]: I0121 10:58:19.802590 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:19 crc kubenswrapper[4881]: I0121 10:58:19.802659 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:19 crc kubenswrapper[4881]: I0121 10:58:19.802676 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:19 crc kubenswrapper[4881]: I0121 10:58:19.802706 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:19 crc kubenswrapper[4881]: I0121 10:58:19.802723 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:19Z","lastTransitionTime":"2026-01-21T10:58:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:58:19 crc kubenswrapper[4881]: I0121 10:58:19.906098 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:19 crc kubenswrapper[4881]: I0121 10:58:19.906171 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:19 crc kubenswrapper[4881]: I0121 10:58:19.906195 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:19 crc kubenswrapper[4881]: I0121 10:58:19.906224 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:19 crc kubenswrapper[4881]: I0121 10:58:19.906245 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:19Z","lastTransitionTime":"2026-01-21T10:58:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:20 crc kubenswrapper[4881]: I0121 10:58:20.009300 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:20 crc kubenswrapper[4881]: I0121 10:58:20.009329 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:20 crc kubenswrapper[4881]: I0121 10:58:20.009337 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:20 crc kubenswrapper[4881]: I0121 10:58:20.009355 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:20 crc kubenswrapper[4881]: I0121 10:58:20.009374 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:20Z","lastTransitionTime":"2026-01-21T10:58:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:20 crc kubenswrapper[4881]: I0121 10:58:20.113111 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:20 crc kubenswrapper[4881]: I0121 10:58:20.113283 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:20 crc kubenswrapper[4881]: I0121 10:58:20.113294 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:20 crc kubenswrapper[4881]: I0121 10:58:20.113319 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:20 crc kubenswrapper[4881]: I0121 10:58:20.113333 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:20Z","lastTransitionTime":"2026-01-21T10:58:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:58:20 crc kubenswrapper[4881]: I0121 10:58:20.198554 4881 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 13:50:55.116003222 +0000 UTC Jan 21 10:58:20 crc kubenswrapper[4881]: I0121 10:58:20.215269 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:20 crc kubenswrapper[4881]: I0121 10:58:20.215319 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:20 crc kubenswrapper[4881]: I0121 10:58:20.215328 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:20 crc kubenswrapper[4881]: I0121 10:58:20.215346 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:20 crc kubenswrapper[4881]: I0121 10:58:20.215359 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:20Z","lastTransitionTime":"2026-01-21T10:58:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:20 crc kubenswrapper[4881]: I0121 10:58:20.309924 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 10:58:20 crc kubenswrapper[4881]: I0121 10:58:20.310019 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 10:58:20 crc kubenswrapper[4881]: E0121 10:58:20.310079 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 10:58:20 crc kubenswrapper[4881]: E0121 10:58:20.310244 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 10:58:20 crc kubenswrapper[4881]: I0121 10:58:20.317877 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:20 crc kubenswrapper[4881]: I0121 10:58:20.317938 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:20 crc kubenswrapper[4881]: I0121 10:58:20.317957 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:20 crc kubenswrapper[4881]: I0121 10:58:20.317980 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:20 crc kubenswrapper[4881]: I0121 10:58:20.318000 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:20Z","lastTransitionTime":"2026-01-21T10:58:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:20 crc kubenswrapper[4881]: I0121 10:58:20.421574 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:20 crc kubenswrapper[4881]: I0121 10:58:20.421648 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:20 crc kubenswrapper[4881]: I0121 10:58:20.421668 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:20 crc kubenswrapper[4881]: I0121 10:58:20.421696 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:20 crc kubenswrapper[4881]: I0121 10:58:20.421753 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:20Z","lastTransitionTime":"2026-01-21T10:58:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:58:20 crc kubenswrapper[4881]: I0121 10:58:20.525426 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:20 crc kubenswrapper[4881]: I0121 10:58:20.525513 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:20 crc kubenswrapper[4881]: I0121 10:58:20.525544 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:20 crc kubenswrapper[4881]: I0121 10:58:20.525576 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:20 crc kubenswrapper[4881]: I0121 10:58:20.525599 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:20Z","lastTransitionTime":"2026-01-21T10:58:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:20 crc kubenswrapper[4881]: I0121 10:58:20.628200 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:20 crc kubenswrapper[4881]: I0121 10:58:20.628277 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:20 crc kubenswrapper[4881]: I0121 10:58:20.628295 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:20 crc kubenswrapper[4881]: I0121 10:58:20.628322 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:20 crc kubenswrapper[4881]: I0121 10:58:20.628340 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:20Z","lastTransitionTime":"2026-01-21T10:58:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:20 crc kubenswrapper[4881]: I0121 10:58:20.731220 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:20 crc kubenswrapper[4881]: I0121 10:58:20.731293 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:20 crc kubenswrapper[4881]: I0121 10:58:20.731313 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:20 crc kubenswrapper[4881]: I0121 10:58:20.731338 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:20 crc kubenswrapper[4881]: I0121 10:58:20.731356 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:20Z","lastTransitionTime":"2026-01-21T10:58:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:58:20 crc kubenswrapper[4881]: I0121 10:58:20.835147 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:20 crc kubenswrapper[4881]: I0121 10:58:20.835227 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:20 crc kubenswrapper[4881]: I0121 10:58:20.835253 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:20 crc kubenswrapper[4881]: I0121 10:58:20.835288 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:20 crc kubenswrapper[4881]: I0121 10:58:20.835315 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:20Z","lastTransitionTime":"2026-01-21T10:58:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:20 crc kubenswrapper[4881]: I0121 10:58:20.938957 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:20 crc kubenswrapper[4881]: I0121 10:58:20.939002 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:20 crc kubenswrapper[4881]: I0121 10:58:20.939015 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:20 crc kubenswrapper[4881]: I0121 10:58:20.939034 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:20 crc kubenswrapper[4881]: I0121 10:58:20.939046 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:20Z","lastTransitionTime":"2026-01-21T10:58:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:21 crc kubenswrapper[4881]: I0121 10:58:21.042285 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:21 crc kubenswrapper[4881]: I0121 10:58:21.042447 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:21 crc kubenswrapper[4881]: I0121 10:58:21.042472 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:21 crc kubenswrapper[4881]: I0121 10:58:21.042502 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:21 crc kubenswrapper[4881]: I0121 10:58:21.042522 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:21Z","lastTransitionTime":"2026-01-21T10:58:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:58:21 crc kubenswrapper[4881]: I0121 10:58:21.146460 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:21 crc kubenswrapper[4881]: I0121 10:58:21.146516 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:21 crc kubenswrapper[4881]: I0121 10:58:21.146534 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:21 crc kubenswrapper[4881]: I0121 10:58:21.146556 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:21 crc kubenswrapper[4881]: I0121 10:58:21.146574 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:21Z","lastTransitionTime":"2026-01-21T10:58:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:21 crc kubenswrapper[4881]: I0121 10:58:21.199019 4881 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 07:52:56.714836852 +0000 UTC Jan 21 10:58:21 crc kubenswrapper[4881]: I0121 10:58:21.249767 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:21 crc kubenswrapper[4881]: I0121 10:58:21.249905 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:21 crc kubenswrapper[4881]: I0121 10:58:21.249925 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:21 crc kubenswrapper[4881]: I0121 10:58:21.249952 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:21 crc kubenswrapper[4881]: I0121 10:58:21.249971 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:21Z","lastTransitionTime":"2026-01-21T10:58:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:21 crc kubenswrapper[4881]: I0121 10:58:21.311121 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 10:58:21 crc kubenswrapper[4881]: E0121 10:58:21.311334 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 10:58:21 crc kubenswrapper[4881]: I0121 10:58:21.311605 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-dtv4t" Jan 21 10:58:21 crc kubenswrapper[4881]: E0121 10:58:21.312285 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dtv4t" podUID="3552adbd-011f-4552-9e69-233b92c554c8" Jan 21 10:58:21 crc kubenswrapper[4881]: I0121 10:58:21.335181 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Jan 21 10:58:21 crc kubenswrapper[4881]: I0121 10:58:21.352861 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:21 crc kubenswrapper[4881]: I0121 10:58:21.352924 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:21 crc kubenswrapper[4881]: I0121 10:58:21.352942 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:21 crc kubenswrapper[4881]: I0121 10:58:21.352972 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:21 crc kubenswrapper[4881]: I0121 10:58:21.352991 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:21Z","lastTransitionTime":"2026-01-21T10:58:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:21 crc kubenswrapper[4881]: I0121 10:58:21.455930 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:21 crc kubenswrapper[4881]: I0121 10:58:21.455991 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:21 crc kubenswrapper[4881]: I0121 10:58:21.456003 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:21 crc kubenswrapper[4881]: I0121 10:58:21.456023 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:21 crc kubenswrapper[4881]: I0121 10:58:21.456036 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:21Z","lastTransitionTime":"2026-01-21T10:58:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:58:21 crc kubenswrapper[4881]: I0121 10:58:21.559918 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:21 crc kubenswrapper[4881]: I0121 10:58:21.559967 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:21 crc kubenswrapper[4881]: I0121 10:58:21.559976 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:21 crc kubenswrapper[4881]: I0121 10:58:21.559991 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:21 crc kubenswrapper[4881]: I0121 10:58:21.560001 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:21Z","lastTransitionTime":"2026-01-21T10:58:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:21 crc kubenswrapper[4881]: I0121 10:58:21.662655 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:21 crc kubenswrapper[4881]: I0121 10:58:21.662720 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:21 crc kubenswrapper[4881]: I0121 10:58:21.662735 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:21 crc kubenswrapper[4881]: I0121 10:58:21.662757 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:21 crc kubenswrapper[4881]: I0121 10:58:21.662771 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:21Z","lastTransitionTime":"2026-01-21T10:58:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:21 crc kubenswrapper[4881]: I0121 10:58:21.766611 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:21 crc kubenswrapper[4881]: I0121 10:58:21.766688 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:21 crc kubenswrapper[4881]: I0121 10:58:21.766707 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:21 crc kubenswrapper[4881]: I0121 10:58:21.766732 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:21 crc kubenswrapper[4881]: I0121 10:58:21.766750 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:21Z","lastTransitionTime":"2026-01-21T10:58:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:58:21 crc kubenswrapper[4881]: I0121 10:58:21.870558 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:21 crc kubenswrapper[4881]: I0121 10:58:21.870631 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:21 crc kubenswrapper[4881]: I0121 10:58:21.870643 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:21 crc kubenswrapper[4881]: I0121 10:58:21.870661 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:21 crc kubenswrapper[4881]: I0121 10:58:21.870675 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:21Z","lastTransitionTime":"2026-01-21T10:58:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:21 crc kubenswrapper[4881]: I0121 10:58:21.973447 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:21 crc kubenswrapper[4881]: I0121 10:58:21.973526 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:21 crc kubenswrapper[4881]: I0121 10:58:21.973536 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:21 crc kubenswrapper[4881]: I0121 10:58:21.973555 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:21 crc kubenswrapper[4881]: I0121 10:58:21.973570 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:21Z","lastTransitionTime":"2026-01-21T10:58:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:22 crc kubenswrapper[4881]: I0121 10:58:22.076462 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:22 crc kubenswrapper[4881]: I0121 10:58:22.076516 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:22 crc kubenswrapper[4881]: I0121 10:58:22.076528 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:22 crc kubenswrapper[4881]: I0121 10:58:22.076546 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:22 crc kubenswrapper[4881]: I0121 10:58:22.076559 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:22Z","lastTransitionTime":"2026-01-21T10:58:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:58:22 crc kubenswrapper[4881]: I0121 10:58:22.180311 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:22 crc kubenswrapper[4881]: I0121 10:58:22.180367 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:22 crc kubenswrapper[4881]: I0121 10:58:22.180377 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:22 crc kubenswrapper[4881]: I0121 10:58:22.180397 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:22 crc kubenswrapper[4881]: I0121 10:58:22.180409 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:22Z","lastTransitionTime":"2026-01-21T10:58:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:22 crc kubenswrapper[4881]: I0121 10:58:22.199797 4881 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 00:43:30.833902988 +0000 UTC Jan 21 10:58:22 crc kubenswrapper[4881]: I0121 10:58:22.284159 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:22 crc kubenswrapper[4881]: I0121 10:58:22.284231 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:22 crc kubenswrapper[4881]: I0121 10:58:22.284250 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:22 crc kubenswrapper[4881]: I0121 10:58:22.284276 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:22 crc kubenswrapper[4881]: I0121 10:58:22.284295 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:22Z","lastTransitionTime":"2026-01-21T10:58:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:22 crc kubenswrapper[4881]: I0121 10:58:22.310210 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 10:58:22 crc kubenswrapper[4881]: I0121 10:58:22.310258 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 10:58:22 crc kubenswrapper[4881]: E0121 10:58:22.310361 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 10:58:22 crc kubenswrapper[4881]: E0121 10:58:22.310522 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 10:58:22 crc kubenswrapper[4881]: I0121 10:58:22.388323 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:22 crc kubenswrapper[4881]: I0121 10:58:22.388374 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:22 crc kubenswrapper[4881]: I0121 10:58:22.388385 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:22 crc kubenswrapper[4881]: I0121 10:58:22.388412 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:22 crc kubenswrapper[4881]: I0121 10:58:22.388425 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:22Z","lastTransitionTime":"2026-01-21T10:58:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:22 crc kubenswrapper[4881]: I0121 10:58:22.491258 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:22 crc kubenswrapper[4881]: I0121 10:58:22.491328 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:22 crc kubenswrapper[4881]: I0121 10:58:22.491345 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:22 crc kubenswrapper[4881]: I0121 10:58:22.491374 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:22 crc kubenswrapper[4881]: I0121 10:58:22.491394 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:22Z","lastTransitionTime":"2026-01-21T10:58:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:58:22 crc kubenswrapper[4881]: I0121 10:58:22.595204 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:22 crc kubenswrapper[4881]: I0121 10:58:22.595280 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:22 crc kubenswrapper[4881]: I0121 10:58:22.595297 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:22 crc kubenswrapper[4881]: I0121 10:58:22.595323 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:22 crc kubenswrapper[4881]: I0121 10:58:22.595341 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:22Z","lastTransitionTime":"2026-01-21T10:58:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:22 crc kubenswrapper[4881]: I0121 10:58:22.699263 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:22 crc kubenswrapper[4881]: I0121 10:58:22.699321 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:22 crc kubenswrapper[4881]: I0121 10:58:22.699338 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:22 crc kubenswrapper[4881]: I0121 10:58:22.699364 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:22 crc kubenswrapper[4881]: I0121 10:58:22.699385 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:22Z","lastTransitionTime":"2026-01-21T10:58:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:22 crc kubenswrapper[4881]: I0121 10:58:22.802126 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:22 crc kubenswrapper[4881]: I0121 10:58:22.802193 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:22 crc kubenswrapper[4881]: I0121 10:58:22.802209 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:22 crc kubenswrapper[4881]: I0121 10:58:22.802225 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:22 crc kubenswrapper[4881]: I0121 10:58:22.802237 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:22Z","lastTransitionTime":"2026-01-21T10:58:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:58:22 crc kubenswrapper[4881]: I0121 10:58:22.906797 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:22 crc kubenswrapper[4881]: I0121 10:58:22.906880 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:22 crc kubenswrapper[4881]: I0121 10:58:22.906903 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:22 crc kubenswrapper[4881]: I0121 10:58:22.906948 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:22 crc kubenswrapper[4881]: I0121 10:58:22.906964 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:22Z","lastTransitionTime":"2026-01-21T10:58:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:23 crc kubenswrapper[4881]: I0121 10:58:23.010356 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:23 crc kubenswrapper[4881]: I0121 10:58:23.010404 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:23 crc kubenswrapper[4881]: I0121 10:58:23.010419 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:23 crc kubenswrapper[4881]: I0121 10:58:23.010441 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:23 crc kubenswrapper[4881]: I0121 10:58:23.010457 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:23Z","lastTransitionTime":"2026-01-21T10:58:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:23 crc kubenswrapper[4881]: I0121 10:58:23.114086 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:23 crc kubenswrapper[4881]: I0121 10:58:23.114137 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:23 crc kubenswrapper[4881]: I0121 10:58:23.114151 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:23 crc kubenswrapper[4881]: I0121 10:58:23.114171 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:23 crc kubenswrapper[4881]: I0121 10:58:23.114184 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:23Z","lastTransitionTime":"2026-01-21T10:58:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:58:23 crc kubenswrapper[4881]: I0121 10:58:23.200092 4881 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 02:27:49.131611695 +0000 UTC Jan 21 10:58:23 crc kubenswrapper[4881]: I0121 10:58:23.217056 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:23 crc kubenswrapper[4881]: I0121 10:58:23.217094 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:23 crc kubenswrapper[4881]: I0121 10:58:23.217104 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:23 crc kubenswrapper[4881]: I0121 10:58:23.217116 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:23 crc kubenswrapper[4881]: I0121 10:58:23.217125 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:23Z","lastTransitionTime":"2026-01-21T10:58:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:23 crc kubenswrapper[4881]: I0121 10:58:23.310252 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 10:58:23 crc kubenswrapper[4881]: E0121 10:58:23.310350 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 10:58:23 crc kubenswrapper[4881]: I0121 10:58:23.310425 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dtv4t" Jan 21 10:58:23 crc kubenswrapper[4881]: E0121 10:58:23.310768 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-dtv4t" podUID="3552adbd-011f-4552-9e69-233b92c554c8" Jan 21 10:58:23 crc kubenswrapper[4881]: I0121 10:58:23.319929 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:23 crc kubenswrapper[4881]: I0121 10:58:23.319985 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:23 crc kubenswrapper[4881]: I0121 10:58:23.320002 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:23 crc kubenswrapper[4881]: I0121 10:58:23.320026 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:23 crc kubenswrapper[4881]: I0121 10:58:23.320047 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:23Z","lastTransitionTime":"2026-01-21T10:58:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:23 crc kubenswrapper[4881]: I0121 10:58:23.331332 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13d0f0c4-fa31-44ba-bc94-c0a80fc1b2df\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17ef83fedf9cc77cf73fdd00486ec9b0b04712a60a5448402754a44ad46da439\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36430b9d5b01b4a6f3b9e7b58bfbec0c258f34847b321cb45bc3b23f84cf09fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c
5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3eba9cbb70fbd88687c81b18ad50f8386f836bf2fa2c8f9e1c503a20af985416\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b7d6b79713c6f4718939d3679f1ba6e237045d653762b6de122ebecdfabbe35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b7d6b79713c6f4718939d3679f1ba6e237045d653762b6de122ebecdfabbe35\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:56:54Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:23Z is after 2025-08-24T17:21:41Z" Jan 21 10:58:23 crc kubenswrapper[4881]: I0121 10:58:23.353335 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f8092488edbd3b7f307819f2cad06e6d1a6721d8c2fd8fa05a8f7e652949ca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:23Z is after 2025-08-24T17:21:41Z" Jan 21 10:58:23 crc kubenswrapper[4881]: I0121 10:58:23.368845 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8sptw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f19f480e-331f-42f5-a3b6-fd0c6847b157\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21b25675a85765cfd4abe361687eedfdf5dfffc1c9b83069f14986821acb12d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hjr7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8sptw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:23Z is after 2025-08-24T17:21:41Z" Jan 21 10:58:23 crc kubenswrapper[4881]: I0121 10:58:23.396364 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"18076c9a-f18b-4640-a048-68b6dbbfa85e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dfeb13ada78bc1504e657a94ab793ae27d4dbd9f333df47b951323f4e642e869\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c05c062aefb9117f9f961f35221b8fa36b3374a184edcedea404d33539be0b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06c96476b642e401c90a3f6810ea1624e2914188ba139b9303b963f1d5bc1f30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cc29934ce0927ee4fdd2c97ca3bbbcaaf62870
60d05447572edeefa8a66af25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c781ff2e87fbae055bac0e3f8f77e2eeee8aa4e38c83ff4b49645798949c550c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10a0569ab7ed4586aadd7deab6398db98bfc9a6afd3903d5466c05021a41632a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10a0569ab7ed4586aadd7deab6398db98bfc9a6afd3903d5466c05021a41632a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dae46ac7909a717555defd27b6fa785f9c7f927fd7806c7941529c2e64ee3700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dae46ac7909a717555defd27b6fa785f9c7f927fd7806c7941529c2e64ee3700\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6b3e4e88955652dacaa965ab4ff099595a6bb920836bfd4ad703984e00029b98\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b3e4e88955652dacaa965ab4ff099595a6bb920836bfd4ad703984e00029b98\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:56:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:56:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:56:54Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:23Z is after 2025-08-24T17:21:41Z" Jan 21 10:58:23 crc kubenswrapper[4881]: I0121 10:58:23.424168 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:23 crc kubenswrapper[4881]: I0121 10:58:23.424237 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:23 crc kubenswrapper[4881]: I0121 10:58:23.424259 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:23 crc kubenswrapper[4881]: I0121 10:58:23.424289 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:23 crc kubenswrapper[4881]: I0121 10:58:23.424311 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:23Z","lastTransitionTime":"2026-01-21T10:58:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:58:23 crc kubenswrapper[4881]: I0121 10:58:23.425886 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5da31bf1-60a6-4d73-a425-97fe36cd40ee\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://945977e9e86c32b08a633de6b962033c71982d3a129ac3e5b2705e19b50d6534\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f32dd767af23c59f021d583a23309b2180db86ae71b4ca4a5f436ac1c77d6e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c47dfa90d18e3e30053b0250c7b986bc877ab6bc3a553060f65468416c8105d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0e507b4c3c536bdc63360b1386748657584f739e09973ec33c998ac267ca2766\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://676e764186e37083591f2c779b05dcbf5bc065bde85efade1c25a27a9bf74570\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0121 10:57:08.940903 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0121 10:57:08.942374 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2558080694/tls.crt::/tmp/serving-cert-2558080694/tls.key\\\\\\\"\\\\nI0121 10:57:14.590178 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0121 10:57:14.594387 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0121 10:57:14.594447 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0121 10:57:14.594564 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0121 10:57:14.594575 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0121 10:57:14.615981 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0121 10:57:14.616035 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 10:57:14.616045 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0121 10:57:14.616060 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0121 10:57:14.616065 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0121 10:57:14.616071 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0121 10:57:14.616077 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0121 10:57:14.616503 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0121 10:57:14.623960 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T10:56:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b00c4c20b4212a307993f164d3f2d23b9b8bd823111bef7a385ae8811e147f3f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://164282bb15c33a383e2fc11a6617ca4008eec1b59e1c5577f7b34b0fcbddc99f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://164282bb15c33a383e2fc11a6617ca4008eec1b59e1c5577f7b34b0fcbddc99f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:56:54Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:23Z is after 2025-08-24T17:21:41Z" Jan 21 10:58:23 crc kubenswrapper[4881]: I0121 10:58:23.475309 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:23Z is after 2025-08-24T17:21:41Z" Jan 21 10:58:23 crc kubenswrapper[4881]: I0121 10:58:23.492237 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ddaa3aae50a05142125742b3cc13e65433fcadbef5b1e273232cffb660cc700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:23Z is after 2025-08-24T17:21:41Z" Jan 21 10:58:23 crc kubenswrapper[4881]: I0121 10:58:23.506921 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tjwf8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf4f6fc0-ed4c-47b7-b2bc-8033980781a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9164acd9a91836c25b02986f204dd0302964f6bff6fe077971d37eebd4b560ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57d55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:22Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tjwf8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:23Z is after 2025-08-24T17:21:41Z" Jan 21 10:58:23 crc kubenswrapper[4881]: I0121 10:58:23.522618 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f9987a1-d9f5-467c-82b2-533a714c4c62\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6cf7bf06a11465e04a80fe7ae667f9c15741137062514a621955622d2b339dce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://988de7ed33eebe3cf67b8c6362d70c761e509feb2c3b72e6f6a4ffb9cddbf421\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://988de7ed33eebe3cf67b8c6362d70c761e509feb2c3b72e6f6a4ffb9cddbf421\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:56:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:23Z is after 2025-08-24T17:21:41Z" Jan 21 10:58:23 crc kubenswrapper[4881]: I0121 10:58:23.526841 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:23 crc kubenswrapper[4881]: I0121 10:58:23.526887 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 10:58:23 crc kubenswrapper[4881]: I0121 10:58:23.526903 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:23 crc kubenswrapper[4881]: I0121 10:58:23.526924 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:23 crc kubenswrapper[4881]: I0121 10:58:23.526938 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:23Z","lastTransitionTime":"2026-01-21T10:58:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:23 crc kubenswrapper[4881]: I0121 10:58:23.540143 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"adf42b80-04ba-4164-b847-3b7f8b94816b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5f503293f457017fbc87cd1525eaaf06d41044a37f87127e1398bd50a228ea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d93b0b0bb02293efe7c01a9dbc64ff302a7b9fab07a2fe9bef5d0c2b5e8ac30e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-c
erts\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05a4a29ae6a2b0d4acf843ae40de1e262c2149dac28e15697dbcd6d237235f29\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfff7b0c4eae65a4ed6e28b94feef5a5b0c4fb224f9cde2cfd9072727968e754\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:56:54Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:23Z is after 2025-08-24T17:21:41Z" Jan 21 10:58:23 crc kubenswrapper[4881]: I0121 10:58:23.557335 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:23Z is after 2025-08-24T17:21:41Z" Jan 21 10:58:23 crc kubenswrapper[4881]: I0121 10:58:23.573207 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-fs42r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09da9e14-f6d5-4346-a4a0-c17711e3b603\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:58:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:58:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e44307f5cc08335dc686c05c12b4ac57aeb2211a1072fff108a06b37b2e1461b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://821c7c539a796f03cdff04f5f89e6adab4d088c53f5c1ad85851862e5409e7eb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T10:58:08Z\\\",\\\"message\\\":\\\"2026-01-21T10:57:23+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_0cb853ce-7a29-40b7-96bf-1304acd74419\\\\n2026-01-21T10:57:23+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_0cb853ce-7a29-40b7-96bf-1304acd74419 to /host/opt/cni/bin/\\\\n2026-01-21T10:57:23Z [verbose] 
multus-daemon started\\\\n2026-01-21T10:57:23Z [verbose] Readiness Indicator file check\\\\n2026-01-21T10:58:08Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:58:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7kt6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-fs42r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:23Z is after 2025-08-24T17:21:41Z" Jan 21 10:58:23 crc kubenswrapper[4881]: I0121 10:58:23.586028 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qgrth" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d379505c-c658-4dd5-b841-40c8443012c6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51a2ec789636052b12e0fdb4e647d7e4f92d1e4b7436933f1529561ffc2021d9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57krk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d634cee9f543d3322f8cdc8bc62252096e789383c55d5d448cc53ab990ac9b52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57krk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-qgrth\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:23Z is after 2025-08-24T17:21:41Z" Jan 21 
10:58:23 crc kubenswrapper[4881]: I0121 10:58:23.597013 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-dtv4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3552adbd-011f-4552-9e69-233b92c554c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqlps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqlps\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-dtv4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:23Z is after 2025-08-24T17:21:41Z" Jan 21 10:58:23 crc kubenswrapper[4881]: I0121 10:58:23.616182 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14b423e6f144bf9e1e30c3f0a074ca9edd3c1e2cfa1674ea5b047a6a8fb01d92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c179ea565cc19d51f2becaf302623c94acbd70749ace24c754af51e3f0101033\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:23Z is after 2025-08-24T17:21:41Z" Jan 21 10:58:23 crc kubenswrapper[4881]: I0121 10:58:23.630252 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:23 crc kubenswrapper[4881]: I0121 10:58:23.630290 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:23 crc kubenswrapper[4881]: I0121 10:58:23.630303 4881 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 21 10:58:23 crc kubenswrapper[4881]: I0121 10:58:23.630330 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:23 crc kubenswrapper[4881]: I0121 10:58:23.630343 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:23Z","lastTransitionTime":"2026-01-21T10:58:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:23 crc kubenswrapper[4881]: I0121 10:58:23.637159 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:23Z is after 2025-08-24T17:21:41Z" Jan 21 10:58:23 crc kubenswrapper[4881]: I0121 10:58:23.663495 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-v4wxp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c14980d7-1b3b-463b-8f57-f1e1afbd258c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd18bd57e9f0f878f56164dee92c18a4fff62c83f518a96d7db735dcd488e052\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7dffa6cfe62b953df5f9734726e4b93967d3fce7fa5743b1adef693e4f75e48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"s
tarted\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7dffa6cfe62b953df5f9734726e4b93967d3fce7fa5743b1adef693e4f75e48\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ceabfac4703edf7fe55ec5dd41e3fc1736424533c54ae12adcdb1f077e1f756\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ceabfac4703edf7fe55ec5dd41e3fc1736424533c54ae12adcdb1f077e1f756\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a2fff9245b8148b515dd4af52db2ec776cca710b28074e8955162220448d248\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a2fff9245b8148b515dd4af52db2ec776cca710b28074e8955162220448d248\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"
}]},{\\\"containerID\\\":\\\"cri-o://3f6276883835d2a1fbab11df2fd15684b8ee850b6f264f3f194f6de68cc27cf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f6276883835d2a1fbab11df2fd15684b8ee850b6f264f3f194f6de68cc27cf9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c51b231b861d87536aaa44cd0e1018fc15464809b5a3bf30c1d2a25dd9a12a8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c51b231b861d87536aaa44cd0e1018fc15464809b5a3bf30c1d2a25dd9a12a8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13503428fec90dad2c9ce1086fe7c71c7910f145a6e7a8d546791e11002f36f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13503428fec90dad2c9ce1086fe7c71c7910f145a6e7a8d546791e11002f36f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":
\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-v4wxp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:23Z is after 2025-08-24T17:21:41Z" Jan 21 10:58:23 crc kubenswrapper[4881]: I0121 10:58:23.682263 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3687b313-1df2-4274-80db-8c758b51bf2d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5b27143ea6007376f5989ccc3a11947ede80be1b5fac1b738ffbf27fa05c6d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hml99\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7172ca109a870f3c1c8d3af0117700b7b23e7a65c3e841f3a4ef3f445e85270d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/r
ootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hml99\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fb4fr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:23Z is after 2025-08-24T17:21:41Z" Jan 21 10:58:23 crc kubenswrapper[4881]: I0121 10:58:23.712212 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8bb6d97-b3b8-4e31-b704-8e565385ab26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e45d7cde8c26fc00bb1b5ad5cc88c41aff02acf0860856e86f3c3bfdc01e7045\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://599bfaf6bfc412f779f4732cf9f4273382c1f0af20576581264ea4fc63b2ec38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9454db8c9af3187ee06e017ac192668036b1f565305a37de61c3cffe4d94e7db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7fa3136ee876ebb14cd5164f9176c91c60cc04fd7882a3ad0957ce3a36184d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2c1d1496a7f7e5fad6a3d4ade4104e8e0c40fd0ca4722ca5e1ca1fbf0ebf3cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d29c170fdf57f1f4c678e2902443d9ef467ac3157370e997fd5596b4eecfb54e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff735c08dae242cbd531e458695a99bcbe3a5e6c
9753266141b14f67cb0799a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff735c08dae242cbd531e458695a99bcbe3a5e6c9753266141b14f67cb0799a2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T10:58:00Z\\\",\\\"message\\\":\\\"work policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:00Z is after 2025-08-24T17:21:41Z]\\\\nI0121 10:58:00.782399 6726 services_controller.go:434] Service openshift-kube-controller-manager/kube-controller-manager retrieved from lister for network=default: \\\\u0026Service{ObjectMeta:{kube-controller-manager openshift-kube-controller-manager 90927ca1-43e2-420d-8485-a35952e82cd9 4812 0 2025-02-23 05:22:57 +0000 UTC \\\\u003cnil\\\\u003e \\\\u003cnil\\\\u003e map[prometheus:kube-controller-manager] map[operator.openshift.io/spec-hash:bb05a56151ce98d11c8554843985ba99e0498dcafd98129435c2d982c5ea4c11 service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168 service.beta.openshift.io/serving-cert-secret-name:serving-cert service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [] [] []},Spec:ServiceSpec{Ports:[]ServicePort{Service\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:58Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-bx64f_openshift-ovn-kubernetes(e8bb6d97-b3b8-4e31-b704-8e565385ab26)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47c176d51b5558e85897ac8c72c02bb6f152a972d8e941aeb114333f777e22ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db879e2b1a43a0ae350c4777ee6355bf5b97b1d5977e909ee0968c77852519dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db879e2b1a43a0ae350c4777ee6355bf5b97b1d5977e909ee0968c77852519dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bx64f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:23Z is after 2025-08-24T17:21:41Z" Jan 21 10:58:23 crc kubenswrapper[4881]: I0121 10:58:23.732744 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:23 crc kubenswrapper[4881]: I0121 10:58:23.732806 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:23 crc kubenswrapper[4881]: I0121 10:58:23.732820 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:23 crc kubenswrapper[4881]: I0121 10:58:23.732838 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:23 crc kubenswrapper[4881]: I0121 10:58:23.732850 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:23Z","lastTransitionTime":"2026-01-21T10:58:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:58:23 crc kubenswrapper[4881]: I0121 10:58:23.835566 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:23 crc kubenswrapper[4881]: I0121 10:58:23.835699 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:23 crc kubenswrapper[4881]: I0121 10:58:23.835723 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:23 crc kubenswrapper[4881]: I0121 10:58:23.835745 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:23 crc kubenswrapper[4881]: I0121 10:58:23.835762 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:23Z","lastTransitionTime":"2026-01-21T10:58:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:23 crc kubenswrapper[4881]: I0121 10:58:23.938298 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:23 crc kubenswrapper[4881]: I0121 10:58:23.938343 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:23 crc kubenswrapper[4881]: I0121 10:58:23.938357 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:23 crc kubenswrapper[4881]: I0121 10:58:23.938373 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:23 crc kubenswrapper[4881]: I0121 10:58:23.938385 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:23Z","lastTransitionTime":"2026-01-21T10:58:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:24 crc kubenswrapper[4881]: I0121 10:58:24.040602 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:24 crc kubenswrapper[4881]: I0121 10:58:24.040650 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:24 crc kubenswrapper[4881]: I0121 10:58:24.040661 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:24 crc kubenswrapper[4881]: I0121 10:58:24.040677 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:24 crc kubenswrapper[4881]: I0121 10:58:24.040690 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:24Z","lastTransitionTime":"2026-01-21T10:58:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:58:24 crc kubenswrapper[4881]: I0121 10:58:24.143554 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:24 crc kubenswrapper[4881]: I0121 10:58:24.143617 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:24 crc kubenswrapper[4881]: I0121 10:58:24.143634 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:24 crc kubenswrapper[4881]: I0121 10:58:24.143657 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:24 crc kubenswrapper[4881]: I0121 10:58:24.143675 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:24Z","lastTransitionTime":"2026-01-21T10:58:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:24 crc kubenswrapper[4881]: I0121 10:58:24.201113 4881 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 21:14:58.781548819 +0000 UTC Jan 21 10:58:24 crc kubenswrapper[4881]: I0121 10:58:24.246809 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:24 crc kubenswrapper[4881]: I0121 10:58:24.246841 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:24 crc kubenswrapper[4881]: I0121 10:58:24.246851 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:24 crc kubenswrapper[4881]: I0121 10:58:24.246867 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:24 crc kubenswrapper[4881]: I0121 10:58:24.246879 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:24Z","lastTransitionTime":"2026-01-21T10:58:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:24 crc kubenswrapper[4881]: I0121 10:58:24.310640 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 10:58:24 crc kubenswrapper[4881]: I0121 10:58:24.310685 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 10:58:24 crc kubenswrapper[4881]: E0121 10:58:24.310900 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 10:58:24 crc kubenswrapper[4881]: E0121 10:58:24.311076 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 10:58:24 crc kubenswrapper[4881]: I0121 10:58:24.312556 4881 scope.go:117] "RemoveContainer" containerID="ff735c08dae242cbd531e458695a99bcbe3a5e6c9753266141b14f67cb0799a2" Jan 21 10:58:24 crc kubenswrapper[4881]: I0121 10:58:24.352176 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:24 crc kubenswrapper[4881]: I0121 10:58:24.352219 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:24 crc kubenswrapper[4881]: I0121 10:58:24.352236 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:24 crc kubenswrapper[4881]: I0121 10:58:24.352260 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:24 crc kubenswrapper[4881]: I0121 10:58:24.352277 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:24Z","lastTransitionTime":"2026-01-21T10:58:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:24 crc kubenswrapper[4881]: I0121 10:58:24.455690 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:24 crc kubenswrapper[4881]: I0121 10:58:24.455741 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:24 crc kubenswrapper[4881]: I0121 10:58:24.455755 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:24 crc kubenswrapper[4881]: I0121 10:58:24.455773 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:24 crc kubenswrapper[4881]: I0121 10:58:24.455806 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:24Z","lastTransitionTime":"2026-01-21T10:58:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:58:24 crc kubenswrapper[4881]: I0121 10:58:24.558571 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:24 crc kubenswrapper[4881]: I0121 10:58:24.558643 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:24 crc kubenswrapper[4881]: I0121 10:58:24.558673 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:24 crc kubenswrapper[4881]: I0121 10:58:24.558707 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:24 crc kubenswrapper[4881]: I0121 10:58:24.558728 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:24Z","lastTransitionTime":"2026-01-21T10:58:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:24 crc kubenswrapper[4881]: I0121 10:58:24.661438 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:24 crc kubenswrapper[4881]: I0121 10:58:24.661504 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:24 crc kubenswrapper[4881]: I0121 10:58:24.661520 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:24 crc kubenswrapper[4881]: I0121 10:58:24.661547 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:24 crc kubenswrapper[4881]: I0121 10:58:24.661568 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:24Z","lastTransitionTime":"2026-01-21T10:58:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:24 crc kubenswrapper[4881]: I0121 10:58:24.764655 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:24 crc kubenswrapper[4881]: I0121 10:58:24.764716 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:24 crc kubenswrapper[4881]: I0121 10:58:24.764735 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:24 crc kubenswrapper[4881]: I0121 10:58:24.764759 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:24 crc kubenswrapper[4881]: I0121 10:58:24.764777 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:24Z","lastTransitionTime":"2026-01-21T10:58:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:58:24 crc kubenswrapper[4881]: I0121 10:58:24.868921 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:24 crc kubenswrapper[4881]: I0121 10:58:24.868987 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:24 crc kubenswrapper[4881]: I0121 10:58:24.869005 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:24 crc kubenswrapper[4881]: I0121 10:58:24.869030 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:24 crc kubenswrapper[4881]: I0121 10:58:24.869052 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:24Z","lastTransitionTime":"2026-01-21T10:58:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:24 crc kubenswrapper[4881]: I0121 10:58:24.972263 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:24 crc kubenswrapper[4881]: I0121 10:58:24.972334 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:24 crc kubenswrapper[4881]: I0121 10:58:24.972355 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:24 crc kubenswrapper[4881]: I0121 10:58:24.972378 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:24 crc kubenswrapper[4881]: I0121 10:58:24.972398 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:24Z","lastTransitionTime":"2026-01-21T10:58:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:25 crc kubenswrapper[4881]: I0121 10:58:25.075425 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:25 crc kubenswrapper[4881]: I0121 10:58:25.075490 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:25 crc kubenswrapper[4881]: I0121 10:58:25.075507 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:25 crc kubenswrapper[4881]: I0121 10:58:25.075533 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:25 crc kubenswrapper[4881]: I0121 10:58:25.075550 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:25Z","lastTransitionTime":"2026-01-21T10:58:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:58:25 crc kubenswrapper[4881]: I0121 10:58:25.181939 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:25 crc kubenswrapper[4881]: I0121 10:58:25.182272 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:25 crc kubenswrapper[4881]: I0121 10:58:25.182281 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:25 crc kubenswrapper[4881]: I0121 10:58:25.182298 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:25 crc kubenswrapper[4881]: I0121 10:58:25.182308 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:25Z","lastTransitionTime":"2026-01-21T10:58:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:25 crc kubenswrapper[4881]: I0121 10:58:25.202308 4881 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 15:21:01.798624178 +0000 UTC Jan 21 10:58:25 crc kubenswrapper[4881]: I0121 10:58:25.235746 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bx64f_e8bb6d97-b3b8-4e31-b704-8e565385ab26/ovnkube-controller/2.log" Jan 21 10:58:25 crc kubenswrapper[4881]: I0121 10:58:25.239852 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" event={"ID":"e8bb6d97-b3b8-4e31-b704-8e565385ab26","Type":"ContainerStarted","Data":"d5e11e8e5cd4b0f5d5b59050f20100006189356085839bd098e65e66ddf3accb"} Jan 21 10:58:25 crc kubenswrapper[4881]: I0121 10:58:25.241779 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" Jan 21 10:58:25 crc kubenswrapper[4881]: I0121 10:58:25.259342 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3687b313-1df2-4274-80db-8c758b51bf2d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d5b27143ea6007376f5989ccc3a11947ede80be1b5fac1b738ffbf27fa05c6d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hml99\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7172ca109a870f3c1c8d3af0117700b7b23e7a65c3e841f3a4ef3f445e85270d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hml99\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-fb4fr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:25Z is after 2025-08-24T17:21:41Z" Jan 21 10:58:25 crc kubenswrapper[4881]: I0121 10:58:25.283097 4881 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8bb6d97-b3b8-4e31-b704-8e565385ab26\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e45d7cde8c26fc00bb1b5ad5cc88c41aff02acf0860856e86f3c3bfdc01e7045\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://599bfaf6bfc412f779f4732cf9f4273382c1f0af20576581264ea4fc63b2ec38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9454db8c9af3187ee06e017ac192668036b1f565305a37de61c3cffe4d94e7db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7fa3136ee876ebb14cd5164f9176c91c60cc04fd7882a3ad0957ce3a36184d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2c1d1496a7f7e5fad6a3d4ade4104e8e0c40fd0ca4722ca5e1ca1fbf0ebf3cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d29c170fdf57f1f4c678e2902443d9ef467ac3157370e997fd5596b4eecfb54e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5e11e8e5cd4b0f5d5b59050f20100006189356085839bd098e65e66ddf3accb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff735c08dae242cbd531e458695a99bcbe3a5e6c9753266141b14f67cb0799a2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T10:58:00Z\\\",\\\"message\\\":\\\"work policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:00Z is after 2025-08-24T17:21:41Z]\\\\nI0121 10:58:00.782399 6726 services_controller.go:434] Service openshift-kube-controller-manager/kube-controller-manager retrieved from lister for network=default: \\\\u0026Service{ObjectMeta:{kube-controller-manager openshift-kube-controller-manager 90927ca1-43e2-420d-8485-a35952e82cd9 4812 0 2025-02-23 05:22:57 +0000 UTC \\\\u003cnil\\\\u003e \\\\u003cnil\\\\u003e map[prometheus:kube-controller-manager] map[operator.openshift.io/spec-hash:bb05a56151ce98d11c8554843985ba99e0498dcafd98129435c2d982c5ea4c11 service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168 service.beta.openshift.io/serving-cert-secret-name:serving-cert service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [] [] 
[]},Spec:ServiceSpec{Ports:[]ServicePort{Service\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:58Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:58:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47c176d51b5558e85897ac8c72c02bb6f152a972d8e941aeb114333f777e22ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11
\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db879e2b1a43a0ae350c4777ee6355bf5b97b1d5977e909ee0968c77852519dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db879e2b1a43a0ae350c4777ee6355bf5b97b1d5977e909ee0968c77852519dd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kz6fb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bx64f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:25Z is after 2025-08-24T17:21:41Z" Jan 21 10:58:25 crc kubenswrapper[4881]: I0121 10:58:25.286473 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:25 crc kubenswrapper[4881]: I0121 10:58:25.286514 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:25 crc kubenswrapper[4881]: I0121 10:58:25.286526 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:25 crc kubenswrapper[4881]: I0121 10:58:25.286540 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:25 crc kubenswrapper[4881]: I0121 10:58:25.286553 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:25Z","lastTransitionTime":"2026-01-21T10:58:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:58:25 crc kubenswrapper[4881]: I0121 10:58:25.299209 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14b423e6f144bf9e1e30c3f0a074ca9edd3c1e2cfa1674ea5b047a6a8fb01d92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c179ea565cc19d51f2becaf302623c94acbd70749ace24c754af51e3f0101033\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:25Z is after 2025-08-24T17:21:41Z" Jan 21 10:58:25 crc kubenswrapper[4881]: I0121 10:58:25.309985 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 10:58:25 crc kubenswrapper[4881]: E0121 10:58:25.310181 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 10:58:25 crc kubenswrapper[4881]: I0121 10:58:25.310487 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dtv4t" Jan 21 10:58:25 crc kubenswrapper[4881]: E0121 10:58:25.310600 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dtv4t" podUID="3552adbd-011f-4552-9e69-233b92c554c8" Jan 21 10:58:25 crc kubenswrapper[4881]: I0121 10:58:25.315975 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:25Z is after 2025-08-24T17:21:41Z" Jan 21 10:58:25 crc kubenswrapper[4881]: I0121 10:58:25.334798 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-v4wxp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c14980d7-1b3b-463b-8f57-f1e1afbd258c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd18bd57e9f0f878f56164dee92c18a4fff62c83f518a96d7db735dcd488e052\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7dffa6cfe62b953df5f9734726e4b93967d3fce7fa5743b1adef693e4f75e48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"s
tarted\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7dffa6cfe62b953df5f9734726e4b93967d3fce7fa5743b1adef693e4f75e48\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0ceabfac4703edf7fe55ec5dd41e3fc1736424533c54ae12adcdb1f077e1f756\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0ceabfac4703edf7fe55ec5dd41e3fc1736424533c54ae12adcdb1f077e1f756\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a2fff9245b8148b515dd4af52db2ec776cca710b28074e8955162220448d248\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a2fff9245b8148b515dd4af52db2ec776cca710b28074e8955162220448d248\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"
}]},{\\\"containerID\\\":\\\"cri-o://3f6276883835d2a1fbab11df2fd15684b8ee850b6f264f3f194f6de68cc27cf9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f6276883835d2a1fbab11df2fd15684b8ee850b6f264f3f194f6de68cc27cf9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c51b231b861d87536aaa44cd0e1018fc15464809b5a3bf30c1d2a25dd9a12a8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c51b231b861d87536aaa44cd0e1018fc15464809b5a3bf30c1d2a25dd9a12a8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13503428fec90dad2c9ce1086fe7c71c7910f145a6e7a8d546791e11002f36f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13503428fec90dad2c9ce1086fe7c71c7910f145a6e7a8d546791e11002f36f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:57:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:57:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":
\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t6nwq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-v4wxp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:25Z is after 2025-08-24T17:21:41Z" Jan 21 10:58:25 crc kubenswrapper[4881]: I0121 10:58:25.362726 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13d0f0c4-fa31-44ba-bc94-c0a80fc1b2df\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17ef83fedf9cc77cf73fdd00486ec9b0b04712a60a5448402754a44ad46da439\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36430b9d5b01b4a6f3b9e7b58bfbec0c258f34847b321cb45bc3b23f84cf09fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3eba9cbb70fbd88687c81b18ad50f8386f836bf2fa2
c8f9e1c503a20af985416\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1b7d6b79713c6f4718939d3679f1ba6e237045d653762b6de122ebecdfabbe35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1b7d6b79713c6f4718939d3679f1ba6e237045d653762b6de122ebecdfabbe35\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:56:54Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:25Z is after 2025-08-24T17:21:41Z" Jan 21 10:58:25 crc kubenswrapper[4881]: I0121 10:58:25.376959 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f8092488edbd3b7f307819f2cad06e6d1a6721d8c2fd8fa05a8f7e652949ca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:25Z is after 2025-08-24T17:21:41Z" Jan 21 10:58:25 crc kubenswrapper[4881]: I0121 10:58:25.388935 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:25 crc kubenswrapper[4881]: I0121 10:58:25.388984 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:25 crc kubenswrapper[4881]: I0121 10:58:25.388996 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:25 crc kubenswrapper[4881]: I0121 10:58:25.389012 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:25 crc kubenswrapper[4881]: I0121 10:58:25.389023 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:25Z","lastTransitionTime":"2026-01-21T10:58:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:58:25 crc kubenswrapper[4881]: I0121 10:58:25.392190 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-8sptw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f19f480e-331f-42f5-a3b6-fd0c6847b157\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21b25675a85765cfd4abe361687eedfdf5dfffc1c9b83069f14986821acb12d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hjr7g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:16Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-8sptw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:25Z is after 2025-08-24T17:21:41Z" Jan 21 10:58:25 crc kubenswrapper[4881]: I0121 10:58:25.407996 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ddaa3aae50a05142125742b3cc13e65433fcadbef5b1e273232cffb660cc700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:25Z is after 2025-08-24T17:21:41Z" Jan 21 10:58:25 crc kubenswrapper[4881]: I0121 10:58:25.421367 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-tjwf8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cf4f6fc0-ed4c-47b7-b2bc-8033980781a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9164acd9a91836c25b02986f204dd0302964f6bff6fe077971d37eebd4b560ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:57:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57d55\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:57:22Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-tjwf8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:25Z is after 2025-08-24T17:21:41Z" Jan 21 10:58:25 crc kubenswrapper[4881]: I0121 10:58:25.492809 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:25 crc kubenswrapper[4881]: I0121 10:58:25.492860 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:25 crc kubenswrapper[4881]: I0121 10:58:25.492873 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:25 crc kubenswrapper[4881]: I0121 10:58:25.492895 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:25 crc kubenswrapper[4881]: I0121 10:58:25.492912 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:25Z","lastTransitionTime":"2026-01-21T10:58:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:58:25 crc kubenswrapper[4881]: I0121 10:58:25.493699 4881 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"18076c9a-f18b-4640-a048-68b6dbbfa85e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:57:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:56:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dfeb13ada78bc1504e657a94ab793ae27d4dbd9f333df47b951323f4e642e869\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c05c062aefb9117f9f961f35221b8fa36b3374a184edcedea404d33539be0b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://06c96476b642e401c90a3f6810ea1624e2914188ba139b9303b963f1d5bc1f30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":
0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cc29934ce0927ee4fdd2c97ca3bbbcaaf6287060d05447572edeefa8a66af25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c781ff2e87fbae055bac0e3f8f77e2eeee8aa4e38c83ff4b49645798949c550c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:56:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://10a0569ab7ed4586aadd7deab6398db98bfc9a6afd3903d5466c05021a41632a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://10a0569ab7ed4586aadd7deab6398db98bfc9a6afd3903d5466c05021a41632a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dae46ac7909a717555defd27b6fa785f9c7f927fd7806c7941529c2e64ee3700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"termi
nated\\\":{\\\"containerID\\\":\\\"cri-o://dae46ac7909a717555defd27b6fa785f9c7f927fd7806c7941529c2e64ee3700\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:56:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:56:55Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://6b3e4e88955652dacaa965ab4ff099595a6bb920836bfd4ad703984e00029b98\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b3e4e88955652dacaa965ab4ff099595a6bb920836bfd4ad703984e00029b98\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:56:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:56:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:56:54Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:58:25Z is after 2025-08-24T17:21:41Z" Jan 21 10:58:25 crc kubenswrapper[4881]: I0121 10:58:25.528007 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=71.527940963 podStartE2EDuration="1m11.527940963s" podCreationTimestamp="2026-01-21 10:57:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:58:25.525703518 +0000 UTC m=+92.785660007" watchObservedRunningTime="2026-01-21 10:58:25.527940963 +0000 UTC m=+92.787897432" Jan 21 10:58:25 crc kubenswrapper[4881]: I0121 10:58:25.595708 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:58:25 crc kubenswrapper[4881]: I0121 10:58:25.595826 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:58:25 crc kubenswrapper[4881]: I0121 10:58:25.595842 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:58:25 crc kubenswrapper[4881]: I0121 10:58:25.595863 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:58:25 crc kubenswrapper[4881]: I0121 10:58:25.595877 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:25Z","lastTransitionTime":"2026-01-21T10:58:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI 
configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 10:58:25 crc kubenswrapper[4881]: I0121 10:58:25.596628 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-fs42r" podStartSLOduration=71.59661583 podStartE2EDuration="1m11.59661583s" podCreationTimestamp="2026-01-21 10:57:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:58:25.595201545 +0000 UTC m=+92.855158014" watchObservedRunningTime="2026-01-21 10:58:25.59661583 +0000 UTC m=+92.856572299"
Jan 21 10:58:25 crc kubenswrapper[4881]: I0121 10:58:25.623852 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-qgrth" podStartSLOduration=69.623821518 podStartE2EDuration="1m9.623821518s" podCreationTimestamp="2026-01-21 10:57:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:58:25.611112275 +0000 UTC m=+92.871068744" watchObservedRunningTime="2026-01-21 10:58:25.623821518 +0000 UTC m=+92.883777987"
Jan 21 10:58:25 crc kubenswrapper[4881]: I0121 10:58:25.655448 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=13.655425294 podStartE2EDuration="13.655425294s" podCreationTimestamp="2026-01-21 10:58:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:58:25.637537305 +0000 UTC m=+92.897493784" watchObservedRunningTime="2026-01-21 10:58:25.655425294 +0000 UTC m=+92.915381763"
Jan 21 10:58:25 crc kubenswrapper[4881]: I0121 10:58:25.655554 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=71.655550597 podStartE2EDuration="1m11.655550597s" podCreationTimestamp="2026-01-21 10:57:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:58:25.654289096 +0000 UTC m=+92.914245565" watchObservedRunningTime="2026-01-21 10:58:25.655550597 +0000 UTC m=+92.915507066"
Jan 21 10:58:25 crc kubenswrapper[4881]: I0121 10:58:25.699075 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 10:58:25 crc kubenswrapper[4881]: I0121 10:58:25.699129 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 10:58:25 crc kubenswrapper[4881]: I0121 10:58:25.699140 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 10:58:25 crc kubenswrapper[4881]: I0121 10:58:25.699156 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 10:58:25 crc kubenswrapper[4881]: I0121 10:58:25.699167 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:25Z","lastTransitionTime":"2026-01-21T10:58:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 10:58:25 crc kubenswrapper[4881]: I0121 10:58:25.802207 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 10:58:25 crc kubenswrapper[4881]: I0121 10:58:25.802288 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 10:58:25 crc kubenswrapper[4881]: I0121 10:58:25.802302 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 10:58:25 crc kubenswrapper[4881]: I0121 10:58:25.802326 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 10:58:25 crc kubenswrapper[4881]: I0121 10:58:25.802342 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:25Z","lastTransitionTime":"2026-01-21T10:58:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 10:58:25 crc kubenswrapper[4881]: I0121 10:58:25.905115 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 10:58:25 crc kubenswrapper[4881]: I0121 10:58:25.905184 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 10:58:25 crc kubenswrapper[4881]: I0121 10:58:25.905197 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 10:58:25 crc kubenswrapper[4881]: I0121 10:58:25.905220 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 10:58:25 crc kubenswrapper[4881]: I0121 10:58:25.905233 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:25Z","lastTransitionTime":"2026-01-21T10:58:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
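
The "failed calling webhook" errors earlier in this window all report the same root cause: the serving certificate behind https://127.0.0.1:9743 expired on 2025-08-24T17:21:41Z, long before the node clock's 2026-01-21 reading, so every status patch is rejected during the TLS handshake before it reaches the API server. The following Go sketch reproduces that client-side validity check with crypto/x509; it is illustrative only (webhook-cert.pem is a hypothetical file name, not something taken from this log):

package main

import (
    "crypto/x509"
    "encoding/pem"
    "fmt"
    "os"
    "time"
)

func main() {
    // Load a PEM-encoded certificate. The path is a hypothetical example.
    data, err := os.ReadFile("webhook-cert.pem")
    if err != nil {
        panic(err)
    }
    block, _ := pem.Decode(data)
    if block == nil {
        panic("no PEM block found")
    }
    cert, err := x509.ParseCertificate(block.Bytes)
    if err != nil {
        panic(err)
    }
    // The same window check behind the kubelet error
    // "x509: certificate has expired or is not yet valid".
    now := time.Now()
    switch {
    case now.Before(cert.NotBefore):
        fmt.Printf("certificate not yet valid: current time %s is before %s\n",
            now.Format(time.RFC3339), cert.NotBefore.Format(time.RFC3339))
    case now.After(cert.NotAfter):
        fmt.Printf("certificate has expired: current time %s is after %s\n",
            now.Format(time.RFC3339), cert.NotAfter.Format(time.RFC3339))
    default:
        fmt.Println("certificate is within its validity window")
    }
}
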
Jan 21 10:58:26 crc kubenswrapper[4881]: I0121 10:58:26.008725 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 10:58:26 crc kubenswrapper[4881]: I0121 10:58:26.008773 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 10:58:26 crc kubenswrapper[4881]: I0121 10:58:26.008804 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 10:58:26 crc kubenswrapper[4881]: I0121 10:58:26.008828 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 10:58:26 crc kubenswrapper[4881]: I0121 10:58:26.008841 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:26Z","lastTransitionTime":"2026-01-21T10:58:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 10:58:26 crc kubenswrapper[4881]: I0121 10:58:26.112180 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 10:58:26 crc kubenswrapper[4881]: I0121 10:58:26.112227 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 10:58:26 crc kubenswrapper[4881]: I0121 10:58:26.112238 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 10:58:26 crc kubenswrapper[4881]: I0121 10:58:26.112256 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 10:58:26 crc kubenswrapper[4881]: I0121 10:58:26.112266 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:26Z","lastTransitionTime":"2026-01-21T10:58:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 10:58:26 crc kubenswrapper[4881]: I0121 10:58:26.202469 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 10:58:26 crc kubenswrapper[4881]: I0121 10:58:26.202545 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 10:58:26 crc kubenswrapper[4881]: I0121 10:58:26.202557 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 10:58:26 crc kubenswrapper[4881]: I0121 10:58:26.202582 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 10:58:26 crc kubenswrapper[4881]: I0121 10:58:26.202594 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:26Z","lastTransitionTime":"2026-01-21T10:58:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 10:58:26 crc kubenswrapper[4881]: I0121 10:58:26.202760 4881 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 06:04:47.605299495 +0000 UTC
Jan 21 10:58:26 crc kubenswrapper[4881]: I0121 10:58:26.230461 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 10:58:26 crc kubenswrapper[4881]: I0121 10:58:26.230507 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 10:58:26 crc kubenswrapper[4881]: I0121 10:58:26.230520 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 10:58:26 crc kubenswrapper[4881]: I0121 10:58:26.230538 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 10:58:26 crc kubenswrapper[4881]: I0121 10:58:26.230547 4881 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:26Z","lastTransitionTime":"2026-01-21T10:58:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 21 10:58:26 crc kubenswrapper[4881]: I0121 10:58:26.287617 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-9d9kz"]
Jan 21 10:58:26 crc kubenswrapper[4881]: I0121 10:58:26.288258 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-9d9kz"
Jan 21 10:58:26 crc kubenswrapper[4881]: I0121 10:58:26.298196 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt"
Jan 21 10:58:26 crc kubenswrapper[4881]: I0121 10:58:26.298199 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt"
Jan 21 10:58:26 crc kubenswrapper[4881]: I0121 10:58:26.298210 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4"
Jan 21 10:58:26 crc kubenswrapper[4881]: I0121 10:58:26.299679 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert"
Jan 21 10:58:26 crc kubenswrapper[4881]: I0121 10:58:26.310374 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 21 10:58:26 crc kubenswrapper[4881]: I0121 10:58:26.310431 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 21 10:58:26 crc kubenswrapper[4881]: E0121 10:58:26.310639 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
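
The certificate_manager.go line above pairs an expiration of 2026-02-24 05:53:03 UTC with a rotation deadline already in the past (2025-11-10), which is why the manager triggers rotation as soon as it runs. client-go's certificate manager picks that deadline at a jittered point of the validity window, roughly 70-90% of the NotBefore-NotAfter span; the sketch below assumes those constants and a hypothetical one-year NotBefore, so treat it as an approximation rather than kubelet source:

package main

import (
    "fmt"
    "math/rand"
    "time"
)

// rotationDeadline picks a uniformly random point in [70%, 90%] of the
// certificate's validity window, mirroring (as an assumption) the jitter
// used by client-go's certificate manager.
func rotationDeadline(notBefore, notAfter time.Time) time.Time {
    total := notAfter.Sub(notBefore)
    jittered := time.Duration(float64(total) * (0.7 + 0.2*rand.Float64()))
    return notBefore.Add(jittered)
}

func main() {
    notBefore := time.Date(2025, 2, 24, 5, 53, 3, 0, time.UTC) // assumed issue time
    notAfter := time.Date(2026, 2, 24, 5, 53, 3, 0, time.UTC)  // expiry from the log
    fmt.Println("rotation deadline:", rotationDeadline(notBefore, notAfter))
}

With a one-year window this lands between early November 2025 and mid-January 2026, consistent with the deadlines logged here (2025-11-10 and, after re-jittering, 2025-11-11).
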
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 10:58:26 crc kubenswrapper[4881]: E0121 10:58:26.310928 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 10:58:26 crc kubenswrapper[4881]: I0121 10:58:26.311876 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=37.311849297 podStartE2EDuration="37.311849297s" podCreationTimestamp="2026-01-21 10:57:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:58:26.310244457 +0000 UTC m=+93.570200946" watchObservedRunningTime="2026-01-21 10:58:26.311849297 +0000 UTC m=+93.571805766" Jan 21 10:58:26 crc kubenswrapper[4881]: I0121 10:58:26.342941 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-8sptw" podStartSLOduration=72.34292151 podStartE2EDuration="1m12.34292151s" podCreationTimestamp="2026-01-21 10:57:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:58:26.342242953 +0000 UTC m=+93.602199422" watchObservedRunningTime="2026-01-21 10:58:26.34292151 +0000 UTC m=+93.602877969" Jan 21 10:58:26 crc kubenswrapper[4881]: I0121 10:58:26.375846 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/4642cf40-137f-4659-9190-d17f93aac69f-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-9d9kz\" (UID: \"4642cf40-137f-4659-9190-d17f93aac69f\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-9d9kz" Jan 21 10:58:26 crc kubenswrapper[4881]: I0121 10:58:26.376010 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4642cf40-137f-4659-9190-d17f93aac69f-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-9d9kz\" (UID: \"4642cf40-137f-4659-9190-d17f93aac69f\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-9d9kz" Jan 21 10:58:26 crc kubenswrapper[4881]: I0121 10:58:26.376085 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4642cf40-137f-4659-9190-d17f93aac69f-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-9d9kz\" (UID: \"4642cf40-137f-4659-9190-d17f93aac69f\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-9d9kz" Jan 21 10:58:26 crc kubenswrapper[4881]: I0121 10:58:26.376300 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/4642cf40-137f-4659-9190-d17f93aac69f-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-9d9kz\" (UID: \"4642cf40-137f-4659-9190-d17f93aac69f\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-9d9kz" Jan 21 
10:58:26 crc kubenswrapper[4881]: I0121 10:58:26.376381 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/4642cf40-137f-4659-9190-d17f93aac69f-service-ca\") pod \"cluster-version-operator-5c965bbfc6-9d9kz\" (UID: \"4642cf40-137f-4659-9190-d17f93aac69f\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-9d9kz" Jan 21 10:58:26 crc kubenswrapper[4881]: I0121 10:58:26.381343 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=5.381331443 podStartE2EDuration="5.381331443s" podCreationTimestamp="2026-01-21 10:58:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:58:26.381243081 +0000 UTC m=+93.641199560" watchObservedRunningTime="2026-01-21 10:58:26.381331443 +0000 UTC m=+93.641287912" Jan 21 10:58:26 crc kubenswrapper[4881]: I0121 10:58:26.417753 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-tjwf8" podStartSLOduration=71.417729437 podStartE2EDuration="1m11.417729437s" podCreationTimestamp="2026-01-21 10:57:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:58:26.416767824 +0000 UTC m=+93.676724303" watchObservedRunningTime="2026-01-21 10:58:26.417729437 +0000 UTC m=+93.677685906" Jan 21 10:58:26 crc kubenswrapper[4881]: I0121 10:58:26.472576 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-v4wxp" podStartSLOduration=72.472558763 podStartE2EDuration="1m12.472558763s" podCreationTimestamp="2026-01-21 10:57:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:58:26.471772154 +0000 UTC m=+93.731728643" watchObservedRunningTime="2026-01-21 10:58:26.472558763 +0000 UTC m=+93.732515222" Jan 21 10:58:26 crc kubenswrapper[4881]: I0121 10:58:26.477034 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4642cf40-137f-4659-9190-d17f93aac69f-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-9d9kz\" (UID: \"4642cf40-137f-4659-9190-d17f93aac69f\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-9d9kz" Jan 21 10:58:26 crc kubenswrapper[4881]: I0121 10:58:26.477073 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4642cf40-137f-4659-9190-d17f93aac69f-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-9d9kz\" (UID: \"4642cf40-137f-4659-9190-d17f93aac69f\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-9d9kz" Jan 21 10:58:26 crc kubenswrapper[4881]: I0121 10:58:26.477105 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/4642cf40-137f-4659-9190-d17f93aac69f-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-9d9kz\" (UID: \"4642cf40-137f-4659-9190-d17f93aac69f\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-9d9kz" Jan 21 10:58:26 crc kubenswrapper[4881]: I0121 10:58:26.477131 4881 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/4642cf40-137f-4659-9190-d17f93aac69f-service-ca\") pod \"cluster-version-operator-5c965bbfc6-9d9kz\" (UID: \"4642cf40-137f-4659-9190-d17f93aac69f\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-9d9kz" Jan 21 10:58:26 crc kubenswrapper[4881]: I0121 10:58:26.477163 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/4642cf40-137f-4659-9190-d17f93aac69f-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-9d9kz\" (UID: \"4642cf40-137f-4659-9190-d17f93aac69f\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-9d9kz" Jan 21 10:58:26 crc kubenswrapper[4881]: I0121 10:58:26.477224 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/4642cf40-137f-4659-9190-d17f93aac69f-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-9d9kz\" (UID: \"4642cf40-137f-4659-9190-d17f93aac69f\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-9d9kz" Jan 21 10:58:26 crc kubenswrapper[4881]: I0121 10:58:26.478099 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/4642cf40-137f-4659-9190-d17f93aac69f-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-9d9kz\" (UID: \"4642cf40-137f-4659-9190-d17f93aac69f\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-9d9kz" Jan 21 10:58:26 crc kubenswrapper[4881]: I0121 10:58:26.479079 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/4642cf40-137f-4659-9190-d17f93aac69f-service-ca\") pod \"cluster-version-operator-5c965bbfc6-9d9kz\" (UID: \"4642cf40-137f-4659-9190-d17f93aac69f\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-9d9kz" Jan 21 10:58:26 crc kubenswrapper[4881]: I0121 10:58:26.488966 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4642cf40-137f-4659-9190-d17f93aac69f-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-9d9kz\" (UID: \"4642cf40-137f-4659-9190-d17f93aac69f\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-9d9kz" Jan 21 10:58:26 crc kubenswrapper[4881]: I0121 10:58:26.489824 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podStartSLOduration=72.489777347 podStartE2EDuration="1m12.489777347s" podCreationTimestamp="2026-01-21 10:57:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:58:26.489401467 +0000 UTC m=+93.749357956" watchObservedRunningTime="2026-01-21 10:58:26.489777347 +0000 UTC m=+93.749733816" Jan 21 10:58:26 crc kubenswrapper[4881]: I0121 10:58:26.508467 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4642cf40-137f-4659-9190-d17f93aac69f-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-9d9kz\" (UID: \"4642cf40-137f-4659-9190-d17f93aac69f\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-9d9kz" Jan 21 10:58:26 crc kubenswrapper[4881]: I0121 10:58:26.525652 4881 
Jan 21 10:58:26 crc kubenswrapper[4881]: I0121 10:58:26.605721 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-9d9kz"
Jan 21 10:58:26 crc kubenswrapper[4881]: W0121 10:58:26.623327 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4642cf40_137f_4659_9190_d17f93aac69f.slice/crio-3422ee5a6ecb64985051553fca84ce5f8a4ff36db844b9adb5c24988571cc841 WatchSource:0}: Error finding container 3422ee5a6ecb64985051553fca84ce5f8a4ff36db844b9adb5c24988571cc841: Status 404 returned error can't find the container with id 3422ee5a6ecb64985051553fca84ce5f8a4ff36db844b9adb5c24988571cc841
Jan 21 10:58:26 crc kubenswrapper[4881]: I0121 10:58:26.859197 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-dtv4t"]
Jan 21 10:58:26 crc kubenswrapper[4881]: I0121 10:58:26.859297 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dtv4t"
Jan 21 10:58:26 crc kubenswrapper[4881]: E0121 10:58:26.859412 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dtv4t" podUID="3552adbd-011f-4552-9e69-233b92c554c8"
Jan 21 10:58:27 crc kubenswrapper[4881]: I0121 10:58:27.203563 4881 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 18:07:49.843039945 +0000 UTC
Jan 21 10:58:27 crc kubenswrapper[4881]: I0121 10:58:27.204162 4881 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates
Jan 21 10:58:27 crc kubenswrapper[4881]: I0121 10:58:27.212096 4881 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146
Jan 21 10:58:27 crc kubenswrapper[4881]: I0121 10:58:27.253682 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-9d9kz" event={"ID":"4642cf40-137f-4659-9190-d17f93aac69f","Type":"ContainerStarted","Data":"8b1a0621c0a5179658baaa5fc83f26a2cce4e83d35c2291f306deffc9f29be15"}
Jan 21 10:58:27 crc kubenswrapper[4881]: I0121 10:58:27.253736 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-9d9kz" event={"ID":"4642cf40-137f-4659-9190-d17f93aac69f","Type":"ContainerStarted","Data":"3422ee5a6ecb64985051553fca84ce5f8a4ff36db844b9adb5c24988571cc841"}
Jan 21 10:58:27 crc kubenswrapper[4881]: I0121 10:58:27.270765 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-9d9kz" podStartSLOduration=73.270742847 podStartE2EDuration="1m13.270742847s" podCreationTimestamp="2026-01-21 10:57:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:58:27.270100562 +0000 UTC m=+94.530057051" watchObservedRunningTime="2026-01-21 10:58:27.270742847 +0000 UTC m=+94.530699336"
Jan 21 10:58:27 crc kubenswrapper[4881]: I0121 10:58:27.310251 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 21 10:58:27 crc kubenswrapper[4881]: E0121 10:58:27.310464 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 21 10:58:28 crc kubenswrapper[4881]: I0121 10:58:28.310921 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 21 10:58:28 crc kubenswrapper[4881]: I0121 10:58:28.310937 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 21 10:58:28 crc kubenswrapper[4881]: I0121 10:58:28.310938 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dtv4t"
Jan 21 10:58:28 crc kubenswrapper[4881]: E0121 10:58:28.311260 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 21 10:58:28 crc kubenswrapper[4881]: E0121 10:58:28.311319 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dtv4t" podUID="3552adbd-011f-4552-9e69-233b92c554c8"
Jan 21 10:58:28 crc kubenswrapper[4881]: E0121 10:58:28.311073 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.311905 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 21 10:58:29 crc kubenswrapper[4881]: E0121 10:58:29.312019 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.537152 4881 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady"
Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.537435 4881 kubelet_node_status.go:538] "Fast updating node status as it just became ready"
Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.584590 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-5xwk8"]
Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.585663 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5xwk8"
Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.585804 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-rslv2"]
Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.586889 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-rslv2"
Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.587136 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-cclnc"]
Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.587983 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-svmbc"]
Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.588266 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-cclnc"
Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.588843 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.589075 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-svmbc"
Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.589895 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-769kz"]
Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.590904 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-769kz"
Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.592170 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-ntqvz"]
Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.592588 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-wjlxh"]
Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.592619 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ntqvz"
Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.593207 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-wjlxh"
Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.593697 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lm4k2"]
Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.594070 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lm4k2"
Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.596561 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-zjqz6"]
Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.597006 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-phm68"]
Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.597456 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-phm68"
Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.597875 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-zjqz6"
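
The transition from the NodeNotReady flood to the NodeReady event above (followed immediately by the scheduler placing a backlog of pods, hence the run of "SyncLoop ADD" entries) is visible in the condition payloads that setters.go logs. Those payloads are ordinary node Ready conditions; a small sketch decoding one of the NotReady samples from this log, with the struct trimmed to the fields actually present in the payload:

package main

import (
    "encoding/json"
    "fmt"
    "time"
)

// nodeCondition mirrors the condition objects logged by setters.go;
// only the fields present in the log payload are declared.
type nodeCondition struct {
    Type               string    `json:"type"`
    Status             string    `json:"status"`
    LastHeartbeatTime  time.Time `json:"lastHeartbeatTime"`
    LastTransitionTime time.Time `json:"lastTransitionTime"`
    Reason             string    `json:"reason"`
    Message            string    `json:"message"`
}

func main() {
    // Payload copied from one of the "Node became not ready" entries above.
    raw := `{"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:58:26Z","lastTransitionTime":"2026-01-21T10:58:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}`
    var c nodeCondition
    if err := json.Unmarshal([]byte(raw), &c); err != nil {
        panic(err)
    }
    fmt.Printf("%s=%s since %s: %s\n", c.Type, c.Status,
        c.LastTransitionTime.Format(time.RFC3339), c.Reason)
}
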
Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-zjqz6" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.597946 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-qxzd9"] Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.598303 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-qxzd9" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.601924 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vfcd9"] Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.602745 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8kvzw"] Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.603067 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vfcd9" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.603353 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8kvzw" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.605719 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.609265 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-n2h44"] Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.610416 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-w5l6w"] Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.610860 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-n2h44" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.610980 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-w5l6w" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.612617 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-jvxv4"] Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.613195 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-pjbh7"] Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.613692 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-pjbh7" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.614095 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-jvxv4" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.614175 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-7cs59"] Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.614763 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-7cs59" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.619466 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.621618 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-wrqpb"] Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.622182 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-wrqpb" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.624993 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.625271 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.625626 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.625684 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.626049 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.626224 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.626504 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.626819 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.626973 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.627176 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.626893 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.627508 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.627706 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.628018 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.628023 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.628320 4881 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.628820 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.628979 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.629174 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.629221 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.629260 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.629287 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.637347 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.639403 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-h97cd"] Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.640437 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.640739 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.641118 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.642187 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.642498 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-h97cd" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.642602 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.642219 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.643383 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.644078 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.655904 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.663619 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.663989 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.664117 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.664269 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.664356 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.666109 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.666313 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.666530 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.666607 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.666956 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.667041 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.667128 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.667326 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.667445 4881 reflector.go:368] 
Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.667460 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.668756 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-v7wnh"] Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.669451 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-v7wnh" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.677527 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.677825 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.677843 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.677994 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.678082 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.678288 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.678376 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.678451 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.678584 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.678749 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.679011 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.679098 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.679223 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.679309 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.679391 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 21 
10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.679529 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.679661 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.679743 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.680903 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.681087 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.681164 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.681319 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.681394 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.681435 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.681526 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.681575 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.681616 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.681691 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.681727 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.681737 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.681847 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.681987 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.682000 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 21 10:58:29 crc kubenswrapper[4881]: 
I0121 10:58:29.682079 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.682104 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.682156 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.682242 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.679661 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.682342 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-n98tz"] Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.683146 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-whh46"] Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.683573 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-j4s5w"] Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.684062 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-vwqwb"] Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.684630 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-vwqwb" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.684708 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-whh46" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.684946 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-j4s5w" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.685133 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-n98tz" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.694031 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.694297 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.694859 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.696055 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.696180 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.696312 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.696533 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.697038 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.697866 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.698828 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-vp6qk"] Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.699382 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-hqjnl"] Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.699961 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-hqjnl" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.699974 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-vp6qk" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.711432 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2n2nt\" (UniqueName: \"kubernetes.io/projected/52d94566-7844-4414-bf48-9122c2207dd6-kube-api-access-2n2nt\") pod \"router-default-5444994796-v7wnh\" (UID: \"52d94566-7844-4414-bf48-9122c2207dd6\") " pod="openshift-ingress/router-default-5444994796-v7wnh" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.711472 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/bb8fc8b3-9818-40e2-afb2-860e2d1efae1-oauth-serving-cert\") pod \"console-f9d7485db-qxzd9\" (UID: \"bb8fc8b3-9818-40e2-afb2-860e2d1efae1\") " pod="openshift-console/console-f9d7485db-qxzd9" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.711490 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f1f74368-89f6-44fb-aaa2-9159a217b4d7-config\") pod \"console-operator-58897d9998-zjqz6\" (UID: \"f1f74368-89f6-44fb-aaa2-9159a217b4d7\") " pod="openshift-console-operator/console-operator-58897d9998-zjqz6" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.711510 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/706c6a3b-823b-4ea3-b7a8-e20d571d3ace-client-ca\") pod \"route-controller-manager-6576b87f9c-5xwk8\" (UID: \"706c6a3b-823b-4ea3-b7a8-e20d571d3ace\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5xwk8" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.711531 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/3d8b2de9-20f2-4d9b-ba38-d2e6649b1a57-image-import-ca\") pod \"apiserver-76f77b778f-svmbc\" (UID: \"3d8b2de9-20f2-4d9b-ba38-d2e6649b1a57\") " pod="openshift-apiserver/apiserver-76f77b778f-svmbc" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.711566 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/bb8fc8b3-9818-40e2-afb2-860e2d1efae1-service-ca\") pod \"console-f9d7485db-qxzd9\" (UID: \"bb8fc8b3-9818-40e2-afb2-860e2d1efae1\") " pod="openshift-console/console-f9d7485db-qxzd9" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.711602 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/1d6b8080-9c3f-4f6e-bcb4-3d1d0edaaa7c-machine-approver-tls\") pod \"machine-approver-56656f9798-ntqvz\" (UID: \"1d6b8080-9c3f-4f6e-bcb4-3d1d0edaaa7c\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ntqvz" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.711618 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/3201b51c-af63-40e7-8037-9e581791d495-etcd-client\") pod \"etcd-operator-b45778765-h97cd\" (UID: \"3201b51c-af63-40e7-8037-9e581791d495\") " pod="openshift-etcd-operator/etcd-operator-b45778765-h97cd" Jan 21 10:58:29 crc 
kubenswrapper[4881]: I0121 10:58:29.711636 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/706c6a3b-823b-4ea3-b7a8-e20d571d3ace-serving-cert\") pod \"route-controller-manager-6576b87f9c-5xwk8\" (UID: \"706c6a3b-823b-4ea3-b7a8-e20d571d3ace\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5xwk8" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.711656 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8465162e-dd9f-45b4-83a6-94666ac2b87b-config\") pod \"machine-api-operator-5694c8668f-cclnc\" (UID: \"8465162e-dd9f-45b4-83a6-94666ac2b87b\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-cclnc" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.711677 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/96e1443d-dd18-4343-b200-756f9511c163-service-ca-bundle\") pod \"authentication-operator-69f744f599-jvxv4\" (UID: \"96e1443d-dd18-4343-b200-756f9511c163\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-jvxv4" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.711705 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3d8b2de9-20f2-4d9b-ba38-d2e6649b1a57-trusted-ca-bundle\") pod \"apiserver-76f77b778f-svmbc\" (UID: \"3d8b2de9-20f2-4d9b-ba38-d2e6649b1a57\") " pod="openshift-apiserver/apiserver-76f77b778f-svmbc" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.711722 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f1f74368-89f6-44fb-aaa2-9159a217b4d7-serving-cert\") pod \"console-operator-58897d9998-zjqz6\" (UID: \"f1f74368-89f6-44fb-aaa2-9159a217b4d7\") " pod="openshift-console-operator/console-operator-58897d9998-zjqz6" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.711741 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f1f74368-89f6-44fb-aaa2-9159a217b4d7-trusted-ca\") pod \"console-operator-58897d9998-zjqz6\" (UID: \"f1f74368-89f6-44fb-aaa2-9159a217b4d7\") " pod="openshift-console-operator/console-operator-58897d9998-zjqz6" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.711770 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mhrlb\" (UniqueName: \"kubernetes.io/projected/1d6b8080-9c3f-4f6e-bcb4-3d1d0edaaa7c-kube-api-access-mhrlb\") pod \"machine-approver-56656f9798-ntqvz\" (UID: \"1d6b8080-9c3f-4f6e-bcb4-3d1d0edaaa7c\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ntqvz" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.711804 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cldhz\" (UniqueName: \"kubernetes.io/projected/628cb8f4-a587-498f-9398-403e0af5eec4-kube-api-access-cldhz\") pod \"downloads-7954f5f757-wrqpb\" (UID: \"628cb8f4-a587-498f-9398-403e0af5eec4\") " pod="openshift-console/downloads-7954f5f757-wrqpb" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.711823 4881 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/863eda44-9a47-42de-b2de-49234ac647f0-metrics-tls\") pod \"dns-operator-744455d44c-n2h44\" (UID: \"863eda44-9a47-42de-b2de-49234ac647f0\") " pod="openshift-dns-operator/dns-operator-744455d44c-n2h44" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.711844 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4z962\" (UniqueName: \"kubernetes.io/projected/537a87a4-8f58-441f-9199-62c5849c693c-kube-api-access-4z962\") pod \"openshift-config-operator-7777fb866f-rslv2\" (UID: \"537a87a4-8f58-441f-9199-62c5849c693c\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-rslv2" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.711865 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/27c4b3cb-57d3-4282-93fe-16cfab039277-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-lm4k2\" (UID: \"27c4b3cb-57d3-4282-93fe-16cfab039277\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lm4k2" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.711888 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-72phf\" (UniqueName: \"kubernetes.io/projected/29dca8bf-7bce-455b-812f-fca8861518ca-kube-api-access-72phf\") pod \"openshift-apiserver-operator-796bbdcf4f-vfcd9\" (UID: \"29dca8bf-7bce-455b-812f-fca8861518ca\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vfcd9" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.711911 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r6mtd\" (UniqueName: \"kubernetes.io/projected/5d68a50c-6a38-4aba-bb02-9a25712d2212-kube-api-access-r6mtd\") pod \"cluster-image-registry-operator-dc59b4c8b-8kvzw\" (UID: \"5d68a50c-6a38-4aba-bb02-9a25712d2212\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8kvzw" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.711932 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/0ceebcd8-2c53-4e4d-97bb-5d81008a6442-metrics-tls\") pod \"ingress-operator-5b745b69d9-w5l6w\" (UID: \"0ceebcd8-2c53-4e4d-97bb-5d81008a6442\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-w5l6w" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.711948 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e6d131df-3eb3-4bb1-a45a-ff6ae44b5ecb-config\") pod \"kube-controller-manager-operator-78b949d7b-pjbh7\" (UID: \"e6d131df-3eb3-4bb1-a45a-ff6ae44b5ecb\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-pjbh7" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.711968 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/bb8fc8b3-9818-40e2-afb2-860e2d1efae1-console-serving-cert\") pod \"console-f9d7485db-qxzd9\" (UID: \"bb8fc8b3-9818-40e2-afb2-860e2d1efae1\") " pod="openshift-console/console-f9d7485db-qxzd9" Jan 
21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.711989 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/b745a377-4575-45fb-a206-ea4754ecff76-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-phm68\" (UID: \"b745a377-4575-45fb-a206-ea4754ecff76\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-phm68" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.712011 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d4f8p\" (UniqueName: \"kubernetes.io/projected/8465162e-dd9f-45b4-83a6-94666ac2b87b-kube-api-access-d4f8p\") pod \"machine-api-operator-5694c8668f-cclnc\" (UID: \"8465162e-dd9f-45b4-83a6-94666ac2b87b\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-cclnc" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.712030 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bb8fc8b3-9818-40e2-afb2-860e2d1efae1-trusted-ca-bundle\") pod \"console-f9d7485db-qxzd9\" (UID: \"bb8fc8b3-9818-40e2-afb2-860e2d1efae1\") " pod="openshift-console/console-f9d7485db-qxzd9" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.712048 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8hxmk\" (UniqueName: \"kubernetes.io/projected/863eda44-9a47-42de-b2de-49234ac647f0-kube-api-access-8hxmk\") pod \"dns-operator-744455d44c-n2h44\" (UID: \"863eda44-9a47-42de-b2de-49234ac647f0\") " pod="openshift-dns-operator/dns-operator-744455d44c-n2h44" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.712067 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9kgjc\" (UniqueName: \"kubernetes.io/projected/706c6a3b-823b-4ea3-b7a8-e20d571d3ace-kube-api-access-9kgjc\") pod \"route-controller-manager-6576b87f9c-5xwk8\" (UID: \"706c6a3b-823b-4ea3-b7a8-e20d571d3ace\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5xwk8" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.712090 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0ceebcd8-2c53-4e4d-97bb-5d81008a6442-bound-sa-token\") pod \"ingress-operator-5b745b69d9-w5l6w\" (UID: \"0ceebcd8-2c53-4e4d-97bb-5d81008a6442\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-w5l6w" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.712109 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/3201b51c-af63-40e7-8037-9e581791d495-etcd-service-ca\") pod \"etcd-operator-b45778765-h97cd\" (UID: \"3201b51c-af63-40e7-8037-9e581791d495\") " pod="openshift-etcd-operator/etcd-operator-b45778765-h97cd" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.712124 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3d8b2de9-20f2-4d9b-ba38-d2e6649b1a57-serving-cert\") pod \"apiserver-76f77b778f-svmbc\" (UID: \"3d8b2de9-20f2-4d9b-ba38-d2e6649b1a57\") " pod="openshift-apiserver/apiserver-76f77b778f-svmbc" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 
10:58:29.712146 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gck6q\" (UniqueName: \"kubernetes.io/projected/3d8b2de9-20f2-4d9b-ba38-d2e6649b1a57-kube-api-access-gck6q\") pod \"apiserver-76f77b778f-svmbc\" (UID: \"3d8b2de9-20f2-4d9b-ba38-d2e6649b1a57\") " pod="openshift-apiserver/apiserver-76f77b778f-svmbc" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.712166 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/96e1443d-dd18-4343-b200-756f9511c163-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-jvxv4\" (UID: \"96e1443d-dd18-4343-b200-756f9511c163\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-jvxv4" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.712186 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/706c6a3b-823b-4ea3-b7a8-e20d571d3ace-config\") pod \"route-controller-manager-6576b87f9c-5xwk8\" (UID: \"706c6a3b-823b-4ea3-b7a8-e20d571d3ace\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5xwk8" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.712207 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/3d8b2de9-20f2-4d9b-ba38-d2e6649b1a57-node-pullsecrets\") pod \"apiserver-76f77b778f-svmbc\" (UID: \"3d8b2de9-20f2-4d9b-ba38-d2e6649b1a57\") " pod="openshift-apiserver/apiserver-76f77b778f-svmbc" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.712223 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3d8b2de9-20f2-4d9b-ba38-d2e6649b1a57-audit-dir\") pod \"apiserver-76f77b778f-svmbc\" (UID: \"3d8b2de9-20f2-4d9b-ba38-d2e6649b1a57\") " pod="openshift-apiserver/apiserver-76f77b778f-svmbc" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.712243 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/bb8fc8b3-9818-40e2-afb2-860e2d1efae1-console-oauth-config\") pod \"console-f9d7485db-qxzd9\" (UID: \"bb8fc8b3-9818-40e2-afb2-860e2d1efae1\") " pod="openshift-console/console-f9d7485db-qxzd9" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.712262 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-blg69\" (UniqueName: \"kubernetes.io/projected/bb8fc8b3-9818-40e2-afb2-860e2d1efae1-kube-api-access-blg69\") pod \"console-f9d7485db-qxzd9\" (UID: \"bb8fc8b3-9818-40e2-afb2-860e2d1efae1\") " pod="openshift-console/console-f9d7485db-qxzd9" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.712280 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/96e1443d-dd18-4343-b200-756f9511c163-serving-cert\") pod \"authentication-operator-69f744f599-jvxv4\" (UID: \"96e1443d-dd18-4343-b200-756f9511c163\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-jvxv4" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.712298 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0ceebcd8-2c53-4e4d-97bb-5d81008a6442-trusted-ca\") pod \"ingress-operator-5b745b69d9-w5l6w\" (UID: \"0ceebcd8-2c53-4e4d-97bb-5d81008a6442\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-w5l6w" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.712319 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1d6b8080-9c3f-4f6e-bcb4-3d1d0edaaa7c-auth-proxy-config\") pod \"machine-approver-56656f9798-ntqvz\" (UID: \"1d6b8080-9c3f-4f6e-bcb4-3d1d0edaaa7c\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ntqvz" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.712345 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/146cbde4-d891-47d8-a09f-d4f4b50bfe6d-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-769kz\" (UID: \"146cbde4-d891-47d8-a09f-d4f4b50bfe6d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-769kz" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.712364 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/52d94566-7844-4414-bf48-9122c2207dd6-stats-auth\") pod \"router-default-5444994796-v7wnh\" (UID: \"52d94566-7844-4414-bf48-9122c2207dd6\") " pod="openshift-ingress/router-default-5444994796-v7wnh" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.712380 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/52d94566-7844-4414-bf48-9122c2207dd6-metrics-certs\") pod \"router-default-5444994796-v7wnh\" (UID: \"52d94566-7844-4414-bf48-9122c2207dd6\") " pod="openshift-ingress/router-default-5444994796-v7wnh" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.712424 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/5d68a50c-6a38-4aba-bb02-9a25712d2212-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-8kvzw\" (UID: \"5d68a50c-6a38-4aba-bb02-9a25712d2212\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8kvzw" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.712446 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/8465162e-dd9f-45b4-83a6-94666ac2b87b-images\") pod \"machine-api-operator-5694c8668f-cclnc\" (UID: \"8465162e-dd9f-45b4-83a6-94666ac2b87b\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-cclnc" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.712470 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vn8zf\" (UniqueName: \"kubernetes.io/projected/002a39eb-e2e0-4d3e-8f61-89a539a653a9-kube-api-access-vn8zf\") pod \"controller-manager-879f6c89f-wjlxh\" (UID: \"002a39eb-e2e0-4d3e-8f61-89a539a653a9\") " pod="openshift-controller-manager/controller-manager-879f6c89f-wjlxh" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.712513 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/537a87a4-8f58-441f-9199-62c5849c693c-serving-cert\") pod \"openshift-config-operator-7777fb866f-rslv2\" (UID: \"537a87a4-8f58-441f-9199-62c5849c693c\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-rslv2" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.712546 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/3201b51c-af63-40e7-8037-9e581791d495-etcd-ca\") pod \"etcd-operator-b45778765-h97cd\" (UID: \"3201b51c-af63-40e7-8037-9e581791d495\") " pod="openshift-etcd-operator/etcd-operator-b45778765-h97cd" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.712580 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/002a39eb-e2e0-4d3e-8f61-89a539a653a9-config\") pod \"controller-manager-879f6c89f-wjlxh\" (UID: \"002a39eb-e2e0-4d3e-8f61-89a539a653a9\") " pod="openshift-controller-manager/controller-manager-879f6c89f-wjlxh" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.712600 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1e960def-7bc7-4041-94dc-8ccea63f8bb8-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-7cs59\" (UID: \"1e960def-7bc7-4041-94dc-8ccea63f8bb8\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-7cs59" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.712643 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qshkt\" (UniqueName: \"kubernetes.io/projected/f1f74368-89f6-44fb-aaa2-9159a217b4d7-kube-api-access-qshkt\") pod \"console-operator-58897d9998-zjqz6\" (UID: \"f1f74368-89f6-44fb-aaa2-9159a217b4d7\") " pod="openshift-console-operator/console-operator-58897d9998-zjqz6" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.712673 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ghfkh\" (UniqueName: \"kubernetes.io/projected/3201b51c-af63-40e7-8037-9e581791d495-kube-api-access-ghfkh\") pod \"etcd-operator-b45778765-h97cd\" (UID: \"3201b51c-af63-40e7-8037-9e581791d495\") " pod="openshift-etcd-operator/etcd-operator-b45778765-h97cd" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.712697 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/146cbde4-d891-47d8-a09f-d4f4b50bfe6d-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-769kz\" (UID: \"146cbde4-d891-47d8-a09f-d4f4b50bfe6d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-769kz" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.712729 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3d8b2de9-20f2-4d9b-ba38-d2e6649b1a57-config\") pod \"apiserver-76f77b778f-svmbc\" (UID: \"3d8b2de9-20f2-4d9b-ba38-d2e6649b1a57\") " pod="openshift-apiserver/apiserver-76f77b778f-svmbc" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.712750 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/3d8b2de9-20f2-4d9b-ba38-d2e6649b1a57-etcd-serving-ca\") pod 
\"apiserver-76f77b778f-svmbc\" (UID: \"3d8b2de9-20f2-4d9b-ba38-d2e6649b1a57\") " pod="openshift-apiserver/apiserver-76f77b778f-svmbc" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.716664 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3201b51c-af63-40e7-8037-9e581791d495-serving-cert\") pod \"etcd-operator-b45778765-h97cd\" (UID: \"3201b51c-af63-40e7-8037-9e581791d495\") " pod="openshift-etcd-operator/etcd-operator-b45778765-h97cd" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.716730 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/002a39eb-e2e0-4d3e-8f61-89a539a653a9-client-ca\") pod \"controller-manager-879f6c89f-wjlxh\" (UID: \"002a39eb-e2e0-4d3e-8f61-89a539a653a9\") " pod="openshift-controller-manager/controller-manager-879f6c89f-wjlxh" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.716775 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/29dca8bf-7bce-455b-812f-fca8861518ca-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-vfcd9\" (UID: \"29dca8bf-7bce-455b-812f-fca8861518ca\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vfcd9" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.716900 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29dca8bf-7bce-455b-812f-fca8861518ca-config\") pod \"openshift-apiserver-operator-796bbdcf4f-vfcd9\" (UID: \"29dca8bf-7bce-455b-812f-fca8861518ca\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vfcd9" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.716924 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-czg99\" (UniqueName: \"kubernetes.io/projected/0ceebcd8-2c53-4e4d-97bb-5d81008a6442-kube-api-access-czg99\") pod \"ingress-operator-5b745b69d9-w5l6w\" (UID: \"0ceebcd8-2c53-4e4d-97bb-5d81008a6442\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-w5l6w" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.716952 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/3d8b2de9-20f2-4d9b-ba38-d2e6649b1a57-etcd-client\") pod \"apiserver-76f77b778f-svmbc\" (UID: \"3d8b2de9-20f2-4d9b-ba38-d2e6649b1a57\") " pod="openshift-apiserver/apiserver-76f77b778f-svmbc" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.716977 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/52d94566-7844-4414-bf48-9122c2207dd6-default-certificate\") pod \"router-default-5444994796-v7wnh\" (UID: \"52d94566-7844-4414-bf48-9122c2207dd6\") " pod="openshift-ingress/router-default-5444994796-v7wnh" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.716998 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5d68a50c-6a38-4aba-bb02-9a25712d2212-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-8kvzw\" (UID: \"5d68a50c-6a38-4aba-bb02-9a25712d2212\") " 
pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8kvzw" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.717083 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e6d131df-3eb3-4bb1-a45a-ff6ae44b5ecb-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-pjbh7\" (UID: \"e6d131df-3eb3-4bb1-a45a-ff6ae44b5ecb\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-pjbh7" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.717103 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/146cbde4-d891-47d8-a09f-d4f4b50bfe6d-audit-dir\") pod \"apiserver-7bbb656c7d-769kz\" (UID: \"146cbde4-d891-47d8-a09f-d4f4b50bfe6d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-769kz" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.717125 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/52d94566-7844-4414-bf48-9122c2207dd6-service-ca-bundle\") pod \"router-default-5444994796-v7wnh\" (UID: \"52d94566-7844-4414-bf48-9122c2207dd6\") " pod="openshift-ingress/router-default-5444994796-v7wnh" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.717149 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/537a87a4-8f58-441f-9199-62c5849c693c-available-featuregates\") pod \"openshift-config-operator-7777fb866f-rslv2\" (UID: \"537a87a4-8f58-441f-9199-62c5849c693c\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-rslv2" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.717278 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d6b8080-9c3f-4f6e-bcb4-3d1d0edaaa7c-config\") pod \"machine-approver-56656f9798-ntqvz\" (UID: \"1d6b8080-9c3f-4f6e-bcb4-3d1d0edaaa7c\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ntqvz" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.717302 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/002a39eb-e2e0-4d3e-8f61-89a539a653a9-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-wjlxh\" (UID: \"002a39eb-e2e0-4d3e-8f61-89a539a653a9\") " pod="openshift-controller-manager/controller-manager-879f6c89f-wjlxh" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.717325 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1e960def-7bc7-4041-94dc-8ccea63f8bb8-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-7cs59\" (UID: \"1e960def-7bc7-4041-94dc-8ccea63f8bb8\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-7cs59" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.717345 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7ppts\" (UniqueName: \"kubernetes.io/projected/96e1443d-dd18-4343-b200-756f9511c163-kube-api-access-7ppts\") pod \"authentication-operator-69f744f599-jvxv4\" (UID: 
\"96e1443d-dd18-4343-b200-756f9511c163\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-jvxv4" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.717365 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3201b51c-af63-40e7-8037-9e581791d495-config\") pod \"etcd-operator-b45778765-h97cd\" (UID: \"3201b51c-af63-40e7-8037-9e581791d495\") " pod="openshift-etcd-operator/etcd-operator-b45778765-h97cd" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.717399 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/146cbde4-d891-47d8-a09f-d4f4b50bfe6d-encryption-config\") pod \"apiserver-7bbb656c7d-769kz\" (UID: \"146cbde4-d891-47d8-a09f-d4f4b50bfe6d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-769kz" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.717430 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p7rfj\" (UniqueName: \"kubernetes.io/projected/b745a377-4575-45fb-a206-ea4754ecff76-kube-api-access-p7rfj\") pod \"cluster-samples-operator-665b6dd947-phm68\" (UID: \"b745a377-4575-45fb-a206-ea4754ecff76\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-phm68" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.717448 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/3d8b2de9-20f2-4d9b-ba38-d2e6649b1a57-encryption-config\") pod \"apiserver-76f77b778f-svmbc\" (UID: \"3d8b2de9-20f2-4d9b-ba38-d2e6649b1a57\") " pod="openshift-apiserver/apiserver-76f77b778f-svmbc" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.717470 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1e960def-7bc7-4041-94dc-8ccea63f8bb8-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-7cs59\" (UID: \"1e960def-7bc7-4041-94dc-8ccea63f8bb8\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-7cs59" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.717490 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/002a39eb-e2e0-4d3e-8f61-89a539a653a9-serving-cert\") pod \"controller-manager-879f6c89f-wjlxh\" (UID: \"002a39eb-e2e0-4d3e-8f61-89a539a653a9\") " pod="openshift-controller-manager/controller-manager-879f6c89f-wjlxh" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.717515 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/bb8fc8b3-9818-40e2-afb2-860e2d1efae1-console-config\") pod \"console-f9d7485db-qxzd9\" (UID: \"bb8fc8b3-9818-40e2-afb2-860e2d1efae1\") " pod="openshift-console/console-f9d7485db-qxzd9" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.717542 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/146cbde4-d891-47d8-a09f-d4f4b50bfe6d-serving-cert\") pod \"apiserver-7bbb656c7d-769kz\" (UID: \"146cbde4-d891-47d8-a09f-d4f4b50bfe6d\") " 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-769kz" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.717581 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/146cbde4-d891-47d8-a09f-d4f4b50bfe6d-audit-policies\") pod \"apiserver-7bbb656c7d-769kz\" (UID: \"146cbde4-d891-47d8-a09f-d4f4b50bfe6d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-769kz" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.717599 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/3d8b2de9-20f2-4d9b-ba38-d2e6649b1a57-audit\") pod \"apiserver-76f77b778f-svmbc\" (UID: \"3d8b2de9-20f2-4d9b-ba38-d2e6649b1a57\") " pod="openshift-apiserver/apiserver-76f77b778f-svmbc" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.717620 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5d68a50c-6a38-4aba-bb02-9a25712d2212-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-8kvzw\" (UID: \"5d68a50c-6a38-4aba-bb02-9a25712d2212\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8kvzw" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.717674 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e6d131df-3eb3-4bb1-a45a-ff6ae44b5ecb-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-pjbh7\" (UID: \"e6d131df-3eb3-4bb1-a45a-ff6ae44b5ecb\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-pjbh7" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.717690 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/8465162e-dd9f-45b4-83a6-94666ac2b87b-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-cclnc\" (UID: \"8465162e-dd9f-45b4-83a6-94666ac2b87b\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-cclnc" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.717721 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/146cbde4-d891-47d8-a09f-d4f4b50bfe6d-etcd-client\") pod \"apiserver-7bbb656c7d-769kz\" (UID: \"146cbde4-d891-47d8-a09f-d4f4b50bfe6d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-769kz" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.717737 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5477x\" (UniqueName: \"kubernetes.io/projected/146cbde4-d891-47d8-a09f-d4f4b50bfe6d-kube-api-access-5477x\") pod \"apiserver-7bbb656c7d-769kz\" (UID: \"146cbde4-d891-47d8-a09f-d4f4b50bfe6d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-769kz" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.717768 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/27c4b3cb-57d3-4282-93fe-16cfab039277-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-lm4k2\" (UID: \"27c4b3cb-57d3-4282-93fe-16cfab039277\") " 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lm4k2" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.717820 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z4vrg\" (UniqueName: \"kubernetes.io/projected/27c4b3cb-57d3-4282-93fe-16cfab039277-kube-api-access-z4vrg\") pod \"openshift-controller-manager-operator-756b6f6bc6-lm4k2\" (UID: \"27c4b3cb-57d3-4282-93fe-16cfab039277\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lm4k2" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.717843 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/96e1443d-dd18-4343-b200-756f9511c163-config\") pod \"authentication-operator-69f744f599-jvxv4\" (UID: \"96e1443d-dd18-4343-b200-756f9511c163\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-jvxv4" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.719964 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-cfw2n"] Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.721222 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rdgn6"] Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.721997 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rdgn6" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.727025 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.728282 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-72bt6"] Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.733407 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.740032 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-72bt6" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.740121 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-cfw2n" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.763394 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-xmq82"] Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.764415 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-xmq82" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.764414 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zkkpc"] Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.765018 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zkkpc" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.765382 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-llgd7"] Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.767895 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.768853 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-llgd7" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.769239 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7gdkq"] Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.769384 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.769451 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.769616 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.769744 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.769883 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7gdkq" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.770111 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.769390 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.771157 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-qpdx4"] Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.772568 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-qpdx4" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.773260 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.773935 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.772770 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-f877x"] Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.775723 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-f877x" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.781659 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-5xwk8"] Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.781710 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483205-527gk"] Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.782427 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483205-527gk" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.782710 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-hfc8p"] Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.783318 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-hfc8p" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.785577 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-qxzd9"] Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.786581 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lm4k2"] Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.787665 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.789370 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-wjlxh"] Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.791055 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-kl9j4"] Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.791845 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-kl9j4" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.792649 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-468h5"] Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.793516 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-468h5" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.800891 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-rslv2"] Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.800969 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-cclnc"] Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.800986 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8kvzw"] Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.800998 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-n2h44"] Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.801008 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-h97cd"] Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.801017 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vfcd9"] Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.804650 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-72bt6"] Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.805894 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-7cs59"] Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.807886 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-svmbc"] Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.812872 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.813461 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-phm68"] Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.816127 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-pjbh7"] Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.817397 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zkkpc"] Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.818765 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-hqjnl"] Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.820211 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-n98tz"] Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.820286 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1d6b8080-9c3f-4f6e-bcb4-3d1d0edaaa7c-auth-proxy-config\") pod \"machine-approver-56656f9798-ntqvz\" (UID: \"1d6b8080-9c3f-4f6e-bcb4-3d1d0edaaa7c\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ntqvz" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.820323 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/146cbde4-d891-47d8-a09f-d4f4b50bfe6d-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-769kz\" (UID: \"146cbde4-d891-47d8-a09f-d4f4b50bfe6d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-769kz" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.820349 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/5d68a50c-6a38-4aba-bb02-9a25712d2212-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-8kvzw\" (UID: \"5d68a50c-6a38-4aba-bb02-9a25712d2212\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8kvzw" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.820391 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/8465162e-dd9f-45b4-83a6-94666ac2b87b-images\") pod \"machine-api-operator-5694c8668f-cclnc\" (UID: \"8465162e-dd9f-45b4-83a6-94666ac2b87b\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-cclnc" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.820422 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vn8zf\" (UniqueName: \"kubernetes.io/projected/002a39eb-e2e0-4d3e-8f61-89a539a653a9-kube-api-access-vn8zf\") pod \"controller-manager-879f6c89f-wjlxh\" (UID: \"002a39eb-e2e0-4d3e-8f61-89a539a653a9\") " pod="openshift-controller-manager/controller-manager-879f6c89f-wjlxh" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.820455 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9pkjt\" (UniqueName: \"kubernetes.io/projected/2957ef21-9f30-4c81-8c6a-4a7f9d7315db-kube-api-access-9pkjt\") pod \"package-server-manager-789f6589d5-72bt6\" (UID: \"2957ef21-9f30-4c81-8c6a-4a7f9d7315db\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-72bt6" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.820478 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/537a87a4-8f58-441f-9199-62c5849c693c-serving-cert\") pod \"openshift-config-operator-7777fb866f-rslv2\" (UID: \"537a87a4-8f58-441f-9199-62c5849c693c\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-rslv2" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.820495 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/002a39eb-e2e0-4d3e-8f61-89a539a653a9-config\") pod \"controller-manager-879f6c89f-wjlxh\" (UID: \"002a39eb-e2e0-4d3e-8f61-89a539a653a9\") " pod="openshift-controller-manager/controller-manager-879f6c89f-wjlxh" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.820512 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1e960def-7bc7-4041-94dc-8ccea63f8bb8-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-7cs59\" (UID: \"1e960def-7bc7-4041-94dc-8ccea63f8bb8\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-7cs59" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.820528 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qshkt\" (UniqueName: 
\"kubernetes.io/projected/f1f74368-89f6-44fb-aaa2-9159a217b4d7-kube-api-access-qshkt\") pod \"console-operator-58897d9998-zjqz6\" (UID: \"f1f74368-89f6-44fb-aaa2-9159a217b4d7\") " pod="openshift-console-operator/console-operator-58897d9998-zjqz6" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.820548 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ghfkh\" (UniqueName: \"kubernetes.io/projected/3201b51c-af63-40e7-8037-9e581791d495-kube-api-access-ghfkh\") pod \"etcd-operator-b45778765-h97cd\" (UID: \"3201b51c-af63-40e7-8037-9e581791d495\") " pod="openshift-etcd-operator/etcd-operator-b45778765-h97cd" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.820565 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/146cbde4-d891-47d8-a09f-d4f4b50bfe6d-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-769kz\" (UID: \"146cbde4-d891-47d8-a09f-d4f4b50bfe6d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-769kz" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.820584 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/3d8b2de9-20f2-4d9b-ba38-d2e6649b1a57-etcd-serving-ca\") pod \"apiserver-76f77b778f-svmbc\" (UID: \"3d8b2de9-20f2-4d9b-ba38-d2e6649b1a57\") " pod="openshift-apiserver/apiserver-76f77b778f-svmbc" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.820600 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3201b51c-af63-40e7-8037-9e581791d495-serving-cert\") pod \"etcd-operator-b45778765-h97cd\" (UID: \"3201b51c-af63-40e7-8037-9e581791d495\") " pod="openshift-etcd-operator/etcd-operator-b45778765-h97cd" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.820620 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-czg99\" (UniqueName: \"kubernetes.io/projected/0ceebcd8-2c53-4e4d-97bb-5d81008a6442-kube-api-access-czg99\") pod \"ingress-operator-5b745b69d9-w5l6w\" (UID: \"0ceebcd8-2c53-4e4d-97bb-5d81008a6442\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-w5l6w" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.820639 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/29dca8bf-7bce-455b-812f-fca8861518ca-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-vfcd9\" (UID: \"29dca8bf-7bce-455b-812f-fca8861518ca\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vfcd9" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.820658 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lbqhc\" (UniqueName: \"kubernetes.io/projected/2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad-kube-api-access-lbqhc\") pod \"oauth-openshift-558db77b4-whh46\" (UID: \"2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad\") " pod="openshift-authentication/oauth-openshift-558db77b4-whh46" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.820675 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/3d8b2de9-20f2-4d9b-ba38-d2e6649b1a57-etcd-client\") pod \"apiserver-76f77b778f-svmbc\" (UID: \"3d8b2de9-20f2-4d9b-ba38-d2e6649b1a57\") " 
pod="openshift-apiserver/apiserver-76f77b778f-svmbc" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.820691 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5d68a50c-6a38-4aba-bb02-9a25712d2212-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-8kvzw\" (UID: \"5d68a50c-6a38-4aba-bb02-9a25712d2212\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8kvzw" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.820707 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/52d94566-7844-4414-bf48-9122c2207dd6-service-ca-bundle\") pod \"router-default-5444994796-v7wnh\" (UID: \"52d94566-7844-4414-bf48-9122c2207dd6\") " pod="openshift-ingress/router-default-5444994796-v7wnh" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.820721 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1e960def-7bc7-4041-94dc-8ccea63f8bb8-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-7cs59\" (UID: \"1e960def-7bc7-4041-94dc-8ccea63f8bb8\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-7cs59" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.820736 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7ppts\" (UniqueName: \"kubernetes.io/projected/96e1443d-dd18-4343-b200-756f9511c163-kube-api-access-7ppts\") pod \"authentication-operator-69f744f599-jvxv4\" (UID: \"96e1443d-dd18-4343-b200-756f9511c163\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-jvxv4" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.820750 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/3d8b2de9-20f2-4d9b-ba38-d2e6649b1a57-encryption-config\") pod \"apiserver-76f77b778f-svmbc\" (UID: \"3d8b2de9-20f2-4d9b-ba38-d2e6649b1a57\") " pod="openshift-apiserver/apiserver-76f77b778f-svmbc" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.820767 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-whh46\" (UID: \"2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad\") " pod="openshift-authentication/oauth-openshift-558db77b4-whh46" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.820803 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l8jwm\" (UniqueName: \"kubernetes.io/projected/c56c4a24-e6c6-4aa0-8a62-94d3179dfe54-kube-api-access-l8jwm\") pod \"catalog-operator-68c6474976-7gdkq\" (UID: \"c56c4a24-e6c6-4aa0-8a62-94d3179dfe54\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7gdkq" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.820821 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f997bb38-4f6e-495f-acb8-e8e0d1730947-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-vp6qk\" (UID: \"f997bb38-4f6e-495f-acb8-e8e0d1730947\") " 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-vp6qk" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.820837 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/e94f1e92-21b2-44c9-b499-b879850c288d-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-xmq82\" (UID: \"e94f1e92-21b2-44c9-b499-b879850c288d\") " pod="openshift-marketplace/marketplace-operator-79b997595-xmq82" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.820852 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/002a39eb-e2e0-4d3e-8f61-89a539a653a9-serving-cert\") pod \"controller-manager-879f6c89f-wjlxh\" (UID: \"002a39eb-e2e0-4d3e-8f61-89a539a653a9\") " pod="openshift-controller-manager/controller-manager-879f6c89f-wjlxh" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.820883 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/146cbde4-d891-47d8-a09f-d4f4b50bfe6d-serving-cert\") pod \"apiserver-7bbb656c7d-769kz\" (UID: \"146cbde4-d891-47d8-a09f-d4f4b50bfe6d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-769kz" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.820905 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-whh46\" (UID: \"2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad\") " pod="openshift-authentication/oauth-openshift-558db77b4-whh46" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.820932 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/3d8b2de9-20f2-4d9b-ba38-d2e6649b1a57-audit\") pod \"apiserver-76f77b778f-svmbc\" (UID: \"3d8b2de9-20f2-4d9b-ba38-d2e6649b1a57\") " pod="openshift-apiserver/apiserver-76f77b778f-svmbc" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.820949 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/c56c4a24-e6c6-4aa0-8a62-94d3179dfe54-profile-collector-cert\") pod \"catalog-operator-68c6474976-7gdkq\" (UID: \"c56c4a24-e6c6-4aa0-8a62-94d3179dfe54\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7gdkq" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.820966 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5d68a50c-6a38-4aba-bb02-9a25712d2212-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-8kvzw\" (UID: \"5d68a50c-6a38-4aba-bb02-9a25712d2212\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8kvzw" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.820985 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/7470431a-2a31-41ae-b021-510ae5e3c505-proxy-tls\") pod \"machine-config-controller-84d6567774-vwqwb\" (UID: \"7470431a-2a31-41ae-b021-510ae5e3c505\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-vwqwb" 
Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.821003 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-995tp\" (UniqueName: \"kubernetes.io/projected/e94f1e92-21b2-44c9-b499-b879850c288d-kube-api-access-995tp\") pod \"marketplace-operator-79b997595-xmq82\" (UID: \"e94f1e92-21b2-44c9-b499-b879850c288d\") " pod="openshift-marketplace/marketplace-operator-79b997595-xmq82"
Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.821025 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/146cbde4-d891-47d8-a09f-d4f4b50bfe6d-etcd-client\") pod \"apiserver-7bbb656c7d-769kz\" (UID: \"146cbde4-d891-47d8-a09f-d4f4b50bfe6d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-769kz"
Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.821042 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/96e1443d-dd18-4343-b200-756f9511c163-config\") pod \"authentication-operator-69f744f599-jvxv4\" (UID: \"96e1443d-dd18-4343-b200-756f9511c163\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-jvxv4"
Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.821056 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f1f74368-89f6-44fb-aaa2-9159a217b4d7-config\") pod \"console-operator-58897d9998-zjqz6\" (UID: \"f1f74368-89f6-44fb-aaa2-9159a217b4d7\") " pod="openshift-console-operator/console-operator-58897d9998-zjqz6"
Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.821072 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/706c6a3b-823b-4ea3-b7a8-e20d571d3ace-client-ca\") pod \"route-controller-manager-6576b87f9c-5xwk8\" (UID: \"706c6a3b-823b-4ea3-b7a8-e20d571d3ace\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5xwk8"
Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.821089 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wgqgf\" (UniqueName: \"kubernetes.io/projected/86ac2c23-01e6-4a22-a79d-77a3269fb5a0-kube-api-access-wgqgf\") pod \"migrator-59844c95c7-qpdx4\" (UID: \"86ac2c23-01e6-4a22-a79d-77a3269fb5a0\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-qpdx4"
Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.821117 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/c56c4a24-e6c6-4aa0-8a62-94d3179dfe54-srv-cert\") pod \"catalog-operator-68c6474976-7gdkq\" (UID: \"c56c4a24-e6c6-4aa0-8a62-94d3179dfe54\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7gdkq"
Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.821146 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-whh46\" (UID: \"2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad\") " pod="openshift-authentication/oauth-openshift-558db77b4-whh46"
Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.821163 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n5qlp\" (UniqueName: \"kubernetes.io/projected/f997bb38-4f6e-495f-acb8-e8e0d1730947-kube-api-access-n5qlp\") pod \"kube-storage-version-migrator-operator-b67b599dd-vp6qk\" (UID: \"f997bb38-4f6e-495f-acb8-e8e0d1730947\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-vp6qk"
Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.821179 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/706c6a3b-823b-4ea3-b7a8-e20d571d3ace-serving-cert\") pod \"route-controller-manager-6576b87f9c-5xwk8\" (UID: \"706c6a3b-823b-4ea3-b7a8-e20d571d3ace\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5xwk8"
Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.821195 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/1d6b8080-9c3f-4f6e-bcb4-3d1d0edaaa7c-machine-approver-tls\") pod \"machine-approver-56656f9798-ntqvz\" (UID: \"1d6b8080-9c3f-4f6e-bcb4-3d1d0edaaa7c\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ntqvz"
Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.821213 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/3201b51c-af63-40e7-8037-9e581791d495-etcd-client\") pod \"etcd-operator-b45778765-h97cd\" (UID: \"3201b51c-af63-40e7-8037-9e581791d495\") " pod="openshift-etcd-operator/etcd-operator-b45778765-h97cd"
Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.821228 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8465162e-dd9f-45b4-83a6-94666ac2b87b-config\") pod \"machine-api-operator-5694c8668f-cclnc\" (UID: \"8465162e-dd9f-45b4-83a6-94666ac2b87b\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-cclnc"
Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.821243 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/96e1443d-dd18-4343-b200-756f9511c163-service-ca-bundle\") pod \"authentication-operator-69f744f599-jvxv4\" (UID: \"96e1443d-dd18-4343-b200-756f9511c163\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-jvxv4"
Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.821260 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad-audit-policies\") pod \"oauth-openshift-558db77b4-whh46\" (UID: \"2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad\") " pod="openshift-authentication/oauth-openshift-558db77b4-whh46"
Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.821280 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-whh46\" (UID: \"2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad\") " pod="openshift-authentication/oauth-openshift-558db77b4-whh46"
Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.821297 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-whh46\" (UID: \"2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad\") " pod="openshift-authentication/oauth-openshift-558db77b4-whh46"
Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.821316 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f1f74368-89f6-44fb-aaa2-9159a217b4d7-trusted-ca\") pod \"console-operator-58897d9998-zjqz6\" (UID: \"f1f74368-89f6-44fb-aaa2-9159a217b4d7\") " pod="openshift-console-operator/console-operator-58897d9998-zjqz6"
Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.821334 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mhrlb\" (UniqueName: \"kubernetes.io/projected/1d6b8080-9c3f-4f6e-bcb4-3d1d0edaaa7c-kube-api-access-mhrlb\") pod \"machine-approver-56656f9798-ntqvz\" (UID: \"1d6b8080-9c3f-4f6e-bcb4-3d1d0edaaa7c\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ntqvz"
Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.821365 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cldhz\" (UniqueName: \"kubernetes.io/projected/628cb8f4-a587-498f-9398-403e0af5eec4-kube-api-access-cldhz\") pod \"downloads-7954f5f757-wrqpb\" (UID: \"628cb8f4-a587-498f-9398-403e0af5eec4\") " pod="openshift-console/downloads-7954f5f757-wrqpb"
Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.821393 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-72phf\" (UniqueName: \"kubernetes.io/projected/29dca8bf-7bce-455b-812f-fca8861518ca-kube-api-access-72phf\") pod \"openshift-apiserver-operator-796bbdcf4f-vfcd9\" (UID: \"29dca8bf-7bce-455b-812f-fca8861518ca\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vfcd9"
Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.821425 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/bb8fc8b3-9818-40e2-afb2-860e2d1efae1-console-serving-cert\") pod \"console-f9d7485db-qxzd9\" (UID: \"bb8fc8b3-9818-40e2-afb2-860e2d1efae1\") " pod="openshift-console/console-f9d7485db-qxzd9"
Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.821448 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c510b795-d750-4f94-bc9a-88ba625bd556-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-cfw2n\" (UID: \"c510b795-d750-4f94-bc9a-88ba625bd556\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-cfw2n"
Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.821466 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/b745a377-4575-45fb-a206-ea4754ecff76-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-phm68\" (UID: \"b745a377-4575-45fb-a206-ea4754ecff76\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-phm68"
Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.821483 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bb8fc8b3-9818-40e2-afb2-860e2d1efae1-trusted-ca-bundle\") pod \"console-f9d7485db-qxzd9\" (UID: \"bb8fc8b3-9818-40e2-afb2-860e2d1efae1\") " pod="openshift-console/console-f9d7485db-qxzd9"
Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.821500 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8hxmk\" (UniqueName: \"kubernetes.io/projected/863eda44-9a47-42de-b2de-49234ac647f0-kube-api-access-8hxmk\") pod \"dns-operator-744455d44c-n2h44\" (UID: \"863eda44-9a47-42de-b2de-49234ac647f0\") " pod="openshift-dns-operator/dns-operator-744455d44c-n2h44"
Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.821515 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9kgjc\" (UniqueName: \"kubernetes.io/projected/706c6a3b-823b-4ea3-b7a8-e20d571d3ace-kube-api-access-9kgjc\") pod \"route-controller-manager-6576b87f9c-5xwk8\" (UID: \"706c6a3b-823b-4ea3-b7a8-e20d571d3ace\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5xwk8"
Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.821532 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gck6q\" (UniqueName: \"kubernetes.io/projected/3d8b2de9-20f2-4d9b-ba38-d2e6649b1a57-kube-api-access-gck6q\") pod \"apiserver-76f77b778f-svmbc\" (UID: \"3d8b2de9-20f2-4d9b-ba38-d2e6649b1a57\") " pod="openshift-apiserver/apiserver-76f77b778f-svmbc"
Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.821547 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/706c6a3b-823b-4ea3-b7a8-e20d571d3ace-config\") pod \"route-controller-manager-6576b87f9c-5xwk8\" (UID: \"706c6a3b-823b-4ea3-b7a8-e20d571d3ace\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5xwk8"
Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.821562 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gzkzm\" (UniqueName: \"kubernetes.io/projected/0007a585-5b17-44bd-89b8-2d1d233a03d4-kube-api-access-gzkzm\") pod \"olm-operator-6b444d44fb-zkkpc\" (UID: \"0007a585-5b17-44bd-89b8-2d1d233a03d4\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zkkpc"
Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.821578 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/3d8b2de9-20f2-4d9b-ba38-d2e6649b1a57-node-pullsecrets\") pod \"apiserver-76f77b778f-svmbc\" (UID: \"3d8b2de9-20f2-4d9b-ba38-d2e6649b1a57\") " pod="openshift-apiserver/apiserver-76f77b778f-svmbc"
Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.821596 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/bb8fc8b3-9818-40e2-afb2-860e2d1efae1-console-oauth-config\") pod \"console-f9d7485db-qxzd9\" (UID: \"bb8fc8b3-9818-40e2-afb2-860e2d1efae1\") " pod="openshift-console/console-f9d7485db-qxzd9"
Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.821611 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/96e1443d-dd18-4343-b200-756f9511c163-serving-cert\") pod \"authentication-operator-69f744f599-jvxv4\" (UID: \"96e1443d-dd18-4343-b200-756f9511c163\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-jvxv4"
Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.821628 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-whh46\" (UID: \"2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad\") " pod="openshift-authentication/oauth-openshift-558db77b4-whh46"
Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.821738 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/5c8e7010-8b57-47ed-9270-417650a2a7c5-proxy-tls\") pod \"machine-config-operator-74547568cd-hqjnl\" (UID: \"5c8e7010-8b57-47ed-9270-417650a2a7c5\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-hqjnl"
Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.821826 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1d6b8080-9c3f-4f6e-bcb4-3d1d0edaaa7c-auth-proxy-config\") pod \"machine-approver-56656f9798-ntqvz\" (UID: \"1d6b8080-9c3f-4f6e-bcb4-3d1d0edaaa7c\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ntqvz"
Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.821962 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c510b795-d750-4f94-bc9a-88ba625bd556-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-cfw2n\" (UID: \"c510b795-d750-4f94-bc9a-88ba625bd556\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-cfw2n"
Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.821995 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/52d94566-7844-4414-bf48-9122c2207dd6-stats-auth\") pod \"router-default-5444994796-v7wnh\" (UID: \"52d94566-7844-4414-bf48-9122c2207dd6\") " pod="openshift-ingress/router-default-5444994796-v7wnh"
Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.822644 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/52d94566-7844-4414-bf48-9122c2207dd6-metrics-certs\") pod \"router-default-5444994796-v7wnh\" (UID: \"52d94566-7844-4414-bf48-9122c2207dd6\") " pod="openshift-ingress/router-default-5444994796-v7wnh"
Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.823076 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5c8e7010-8b57-47ed-9270-417650a2a7c5-auth-proxy-config\") pod \"machine-config-operator-74547568cd-hqjnl\" (UID: \"5c8e7010-8b57-47ed-9270-417650a2a7c5\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-hqjnl"
Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.823095 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-whh46\" (UID: \"2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad\") " pod="openshift-authentication/oauth-openshift-558db77b4-whh46"
Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.823124 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/3201b51c-af63-40e7-8037-9e581791d495-etcd-ca\") pod \"etcd-operator-b45778765-h97cd\" (UID: \"3201b51c-af63-40e7-8037-9e581791d495\") " pod="openshift-etcd-operator/etcd-operator-b45778765-h97cd"
Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.823139 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3d8b2de9-20f2-4d9b-ba38-d2e6649b1a57-config\") pod \"apiserver-76f77b778f-svmbc\" (UID: \"3d8b2de9-20f2-4d9b-ba38-d2e6649b1a57\") " pod="openshift-apiserver/apiserver-76f77b778f-svmbc"
Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.823420 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/002a39eb-e2e0-4d3e-8f61-89a539a653a9-client-ca\") pod \"controller-manager-879f6c89f-wjlxh\" (UID: \"002a39eb-e2e0-4d3e-8f61-89a539a653a9\") " pod="openshift-controller-manager/controller-manager-879f6c89f-wjlxh"
Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.823457 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/2957ef21-9f30-4c81-8c6a-4a7f9d7315db-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-72bt6\" (UID: \"2957ef21-9f30-4c81-8c6a-4a7f9d7315db\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-72bt6"
Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.823735 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f997bb38-4f6e-495f-acb8-e8e0d1730947-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-vp6qk\" (UID: \"f997bb38-4f6e-495f-acb8-e8e0d1730947\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-vp6qk"
Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.823959 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/0007a585-5b17-44bd-89b8-2d1d233a03d4-profile-collector-cert\") pod \"olm-operator-6b444d44fb-zkkpc\" (UID: \"0007a585-5b17-44bd-89b8-2d1d233a03d4\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zkkpc"
Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.824318 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1e960def-7bc7-4041-94dc-8ccea63f8bb8-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-7cs59\" (UID: \"1e960def-7bc7-4041-94dc-8ccea63f8bb8\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-7cs59"
Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.824322 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/8465162e-dd9f-45b4-83a6-94666ac2b87b-images\") pod \"machine-api-operator-5694c8668f-cclnc\" (UID: \"8465162e-dd9f-45b4-83a6-94666ac2b87b\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-cclnc"
Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.825255 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/3d8b2de9-20f2-4d9b-ba38-d2e6649b1a57-audit\") pod \"apiserver-76f77b778f-svmbc\" (UID: \"3d8b2de9-20f2-4d9b-ba38-d2e6649b1a57\") " pod="openshift-apiserver/apiserver-76f77b778f-svmbc"
Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.825354 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29dca8bf-7bce-455b-812f-fca8861518ca-config\") pod \"openshift-apiserver-operator-796bbdcf4f-vfcd9\" (UID: \"29dca8bf-7bce-455b-812f-fca8861518ca\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vfcd9"
Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.825974 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/002a39eb-e2e0-4d3e-8f61-89a539a653a9-config\") pod \"controller-manager-879f6c89f-wjlxh\" (UID: \"002a39eb-e2e0-4d3e-8f61-89a539a653a9\") " pod="openshift-controller-manager/controller-manager-879f6c89f-wjlxh"
Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.827627 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/29dca8bf-7bce-455b-812f-fca8861518ca-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-vfcd9\" (UID: \"29dca8bf-7bce-455b-812f-fca8861518ca\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vfcd9"
Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.846019 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/706c6a3b-823b-4ea3-b7a8-e20d571d3ace-serving-cert\") pod \"route-controller-manager-6576b87f9c-5xwk8\" (UID: \"706c6a3b-823b-4ea3-b7a8-e20d571d3ace\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5xwk8"
Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.846203 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/5d68a50c-6a38-4aba-bb02-9a25712d2212-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-8kvzw\" (UID: \"5d68a50c-6a38-4aba-bb02-9a25712d2212\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8kvzw"
Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.846860 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert"
Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.847952 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/7470431a-2a31-41ae-b021-510ae5e3c505-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-vwqwb\" (UID: \"7470431a-2a31-41ae-b021-510ae5e3c505\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-vwqwb"
Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.848007 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-whh46\" (UID: \"2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad\") " pod="openshift-authentication/oauth-openshift-558db77b4-whh46"
Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.848046 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/5f2944a8-8d91-4461-aa64-8908ca17f59e-signing-cabundle\") pod \"service-ca-9c57cc56f-llgd7\" (UID: \"5f2944a8-8d91-4461-aa64-8908ca17f59e\") " pod="openshift-service-ca/service-ca-9c57cc56f-llgd7"
Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.848083 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/52d94566-7844-4414-bf48-9122c2207dd6-default-certificate\") pod \"router-default-5444994796-v7wnh\" (UID: \"52d94566-7844-4414-bf48-9122c2207dd6\") " pod="openshift-ingress/router-default-5444994796-v7wnh"
Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.848115 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e6d131df-3eb3-4bb1-a45a-ff6ae44b5ecb-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-pjbh7\" (UID: \"e6d131df-3eb3-4bb1-a45a-ff6ae44b5ecb\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-pjbh7"
Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.848171 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/146cbde4-d891-47d8-a09f-d4f4b50bfe6d-audit-dir\") pod \"apiserver-7bbb656c7d-769kz\" (UID: \"146cbde4-d891-47d8-a09f-d4f4b50bfe6d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-769kz"
Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.848201 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/537a87a4-8f58-441f-9199-62c5849c693c-available-featuregates\") pod \"openshift-config-operator-7777fb866f-rslv2\" (UID: \"537a87a4-8f58-441f-9199-62c5849c693c\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-rslv2"
Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.848225 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d6b8080-9c3f-4f6e-bcb4-3d1d0edaaa7c-config\") pod \"machine-approver-56656f9798-ntqvz\" (UID: \"1d6b8080-9c3f-4f6e-bcb4-3d1d0edaaa7c\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ntqvz"
Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.848248 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/002a39eb-e2e0-4d3e-8f61-89a539a653a9-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-wjlxh\" (UID: \"002a39eb-e2e0-4d3e-8f61-89a539a653a9\") " pod="openshift-controller-manager/controller-manager-879f6c89f-wjlxh"
Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.848276 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c510b795-d750-4f94-bc9a-88ba625bd556-config\") pod \"kube-apiserver-operator-766d6c64bb-cfw2n\" (UID: \"c510b795-d750-4f94-bc9a-88ba625bd556\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-cfw2n"
Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.848303 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3201b51c-af63-40e7-8037-9e581791d495-config\") pod \"etcd-operator-b45778765-h97cd\" (UID: \"3201b51c-af63-40e7-8037-9e581791d495\") " pod="openshift-etcd-operator/etcd-operator-b45778765-h97cd"
Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.848326 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/146cbde4-d891-47d8-a09f-d4f4b50bfe6d-encryption-config\") pod \"apiserver-7bbb656c7d-769kz\" (UID: \"146cbde4-d891-47d8-a09f-d4f4b50bfe6d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-769kz"
Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.848414 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p7rfj\" (UniqueName: \"kubernetes.io/projected/b745a377-4575-45fb-a206-ea4754ecff76-kube-api-access-p7rfj\") pod \"cluster-samples-operator-665b6dd947-phm68\" (UID: \"b745a377-4575-45fb-a206-ea4754ecff76\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-phm68"
Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.848441 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1e960def-7bc7-4041-94dc-8ccea63f8bb8-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-7cs59\" (UID: \"1e960def-7bc7-4041-94dc-8ccea63f8bb8\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-7cs59"
Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.848467 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c9cg8\" (UniqueName: \"kubernetes.io/projected/6742e18f-a187-4a77-a734-bdec89bd89e0-kube-api-access-c9cg8\") pod \"multus-admission-controller-857f4d67dd-j4s5w\" (UID: \"6742e18f-a187-4a77-a734-bdec89bd89e0\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-j4s5w"
Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.848494 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/bb8fc8b3-9818-40e2-afb2-860e2d1efae1-console-config\") pod \"console-f9d7485db-qxzd9\" (UID: \"bb8fc8b3-9818-40e2-afb2-860e2d1efae1\") " pod="openshift-console/console-f9d7485db-qxzd9"
Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.848523 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/146cbde4-d891-47d8-a09f-d4f4b50bfe6d-audit-policies\") pod \"apiserver-7bbb656c7d-769kz\" (UID: \"146cbde4-d891-47d8-a09f-d4f4b50bfe6d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-769kz"
Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.848594 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dr9fr\" (UniqueName: \"kubernetes.io/projected/5f2944a8-8d91-4461-aa64-8908ca17f59e-kube-api-access-dr9fr\") pod \"service-ca-9c57cc56f-llgd7\" (UID: \"5f2944a8-8d91-4461-aa64-8908ca17f59e\") " pod="openshift-service-ca/service-ca-9c57cc56f-llgd7"
Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.848621 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e6d131df-3eb3-4bb1-a45a-ff6ae44b5ecb-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-pjbh7\" (UID: \"e6d131df-3eb3-4bb1-a45a-ff6ae44b5ecb\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-pjbh7"
Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.848647 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/8465162e-dd9f-45b4-83a6-94666ac2b87b-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-cclnc\" (UID: \"8465162e-dd9f-45b4-83a6-94666ac2b87b\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-cclnc"
Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.848670 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z4vrg\" (UniqueName: \"kubernetes.io/projected/27c4b3cb-57d3-4282-93fe-16cfab039277-kube-api-access-z4vrg\") pod \"openshift-controller-manager-operator-756b6f6bc6-lm4k2\" (UID: \"27c4b3cb-57d3-4282-93fe-16cfab039277\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lm4k2"
Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.848696 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e94f1e92-21b2-44c9-b499-b879850c288d-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-xmq82\" (UID: \"e94f1e92-21b2-44c9-b499-b879850c288d\") " pod="openshift-marketplace/marketplace-operator-79b997595-xmq82"
Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.848798 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5477x\" (UniqueName: \"kubernetes.io/projected/146cbde4-d891-47d8-a09f-d4f4b50bfe6d-kube-api-access-5477x\") pod \"apiserver-7bbb656c7d-769kz\" (UID: \"146cbde4-d891-47d8-a09f-d4f4b50bfe6d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-769kz"
Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.849537 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/706c6a3b-823b-4ea3-b7a8-e20d571d3ace-config\") pod \"route-controller-manager-6576b87f9c-5xwk8\" (UID: \"706c6a3b-823b-4ea3-b7a8-e20d571d3ace\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5xwk8"
Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.821330 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-zjqz6"]
Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.850330 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-jvxv4"]
Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.852428 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/002a39eb-e2e0-4d3e-8f61-89a539a653a9-serving-cert\") pod \"controller-manager-879f6c89f-wjlxh\" (UID: \"002a39eb-e2e0-4d3e-8f61-89a539a653a9\") " pod="openshift-controller-manager/controller-manager-879f6c89f-wjlxh"
Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.852497 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/27c4b3cb-57d3-4282-93fe-16cfab039277-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-lm4k2\" (UID: \"27c4b3cb-57d3-4282-93fe-16cfab039277\") "
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lm4k2" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.852582 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2n2nt\" (UniqueName: \"kubernetes.io/projected/52d94566-7844-4414-bf48-9122c2207dd6-kube-api-access-2n2nt\") pod \"router-default-5444994796-v7wnh\" (UID: \"52d94566-7844-4414-bf48-9122c2207dd6\") " pod="openshift-ingress/router-default-5444994796-v7wnh" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.852609 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/bb8fc8b3-9818-40e2-afb2-860e2d1efae1-oauth-serving-cert\") pod \"console-f9d7485db-qxzd9\" (UID: \"bb8fc8b3-9818-40e2-afb2-860e2d1efae1\") " pod="openshift-console/console-f9d7485db-qxzd9" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.852635 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-whh46\" (UID: \"2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad\") " pod="openshift-authentication/oauth-openshift-558db77b4-whh46" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.852670 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/3d8b2de9-20f2-4d9b-ba38-d2e6649b1a57-image-import-ca\") pod \"apiserver-76f77b778f-svmbc\" (UID: \"3d8b2de9-20f2-4d9b-ba38-d2e6649b1a57\") " pod="openshift-apiserver/apiserver-76f77b778f-svmbc" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.853201 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/bb8fc8b3-9818-40e2-afb2-860e2d1efae1-service-ca\") pod \"console-f9d7485db-qxzd9\" (UID: \"bb8fc8b3-9818-40e2-afb2-860e2d1efae1\") " pod="openshift-console/console-f9d7485db-qxzd9" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.853235 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-whh46\" (UID: \"2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad\") " pod="openshift-authentication/oauth-openshift-558db77b4-whh46" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.853265 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hn8zr\" (UniqueName: \"kubernetes.io/projected/7470431a-2a31-41ae-b021-510ae5e3c505-kube-api-access-hn8zr\") pod \"machine-config-controller-84d6567774-vwqwb\" (UID: \"7470431a-2a31-41ae-b021-510ae5e3c505\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-vwqwb" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.853288 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3d8b2de9-20f2-4d9b-ba38-d2e6649b1a57-trusted-ca-bundle\") pod \"apiserver-76f77b778f-svmbc\" (UID: \"3d8b2de9-20f2-4d9b-ba38-d2e6649b1a57\") " pod="openshift-apiserver/apiserver-76f77b778f-svmbc" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.853313 4881 
Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.854076 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3d8b2de9-20f2-4d9b-ba38-d2e6649b1a57-config\") pod \"apiserver-76f77b778f-svmbc\" (UID: \"3d8b2de9-20f2-4d9b-ba38-d2e6649b1a57\") " pod="openshift-apiserver/apiserver-76f77b778f-svmbc"
Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.854108 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29dca8bf-7bce-455b-812f-fca8861518ca-config\") pod \"openshift-apiserver-operator-796bbdcf4f-vfcd9\" (UID: \"29dca8bf-7bce-455b-812f-fca8861518ca\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vfcd9"
Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.854603 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/5c8e7010-8b57-47ed-9270-417650a2a7c5-images\") pod \"machine-config-operator-74547568cd-hqjnl\" (UID: \"5c8e7010-8b57-47ed-9270-417650a2a7c5\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-hqjnl"
Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.854663 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/863eda44-9a47-42de-b2de-49234ac647f0-metrics-tls\") pod \"dns-operator-744455d44c-n2h44\" (UID: \"863eda44-9a47-42de-b2de-49234ac647f0\") " pod="openshift-dns-operator/dns-operator-744455d44c-n2h44"
Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.854694 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4z962\" (UniqueName: \"kubernetes.io/projected/537a87a4-8f58-441f-9199-62c5849c693c-kube-api-access-4z962\") pod \"openshift-config-operator-7777fb866f-rslv2\" (UID: \"537a87a4-8f58-441f-9199-62c5849c693c\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-rslv2"
Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.854722 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/27c4b3cb-57d3-4282-93fe-16cfab039277-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-lm4k2\" (UID: \"27c4b3cb-57d3-4282-93fe-16cfab039277\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lm4k2"
Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.854749 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v5h6z\" (UniqueName: \"kubernetes.io/projected/5c8e7010-8b57-47ed-9270-417650a2a7c5-kube-api-access-v5h6z\") pod \"machine-config-operator-74547568cd-hqjnl\" (UID: \"5c8e7010-8b57-47ed-9270-417650a2a7c5\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-hqjnl"
Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.854795 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r6mtd\" (UniqueName: \"kubernetes.io/projected/5d68a50c-6a38-4aba-bb02-9a25712d2212-kube-api-access-r6mtd\") pod \"cluster-image-registry-operator-dc59b4c8b-8kvzw\" (UID: \"5d68a50c-6a38-4aba-bb02-9a25712d2212\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8kvzw"
Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.854824 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/0ceebcd8-2c53-4e4d-97bb-5d81008a6442-metrics-tls\") pod \"ingress-operator-5b745b69d9-w5l6w\" (UID: \"0ceebcd8-2c53-4e4d-97bb-5d81008a6442\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-w5l6w"
Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.854854 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e6d131df-3eb3-4bb1-a45a-ff6ae44b5ecb-config\") pod \"kube-controller-manager-operator-78b949d7b-pjbh7\" (UID: \"e6d131df-3eb3-4bb1-a45a-ff6ae44b5ecb\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-pjbh7"
Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.854880 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d4f8p\" (UniqueName: \"kubernetes.io/projected/8465162e-dd9f-45b4-83a6-94666ac2b87b-kube-api-access-d4f8p\") pod \"machine-api-operator-5694c8668f-cclnc\" (UID: \"8465162e-dd9f-45b4-83a6-94666ac2b87b\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-cclnc"
Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.854907 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad-audit-dir\") pod \"oauth-openshift-558db77b4-whh46\" (UID: \"2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad\") " pod="openshift-authentication/oauth-openshift-558db77b4-whh46"
Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.854934 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-whh46\" (UID: \"2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad\") " pod="openshift-authentication/oauth-openshift-558db77b4-whh46"
Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.854961 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/6742e18f-a187-4a77-a734-bdec89bd89e0-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-j4s5w\" (UID: \"6742e18f-a187-4a77-a734-bdec89bd89e0\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-j4s5w"
Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.855334 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0ceebcd8-2c53-4e4d-97bb-5d81008a6442-bound-sa-token\") pod \"ingress-operator-5b745b69d9-w5l6w\" (UID: \"0ceebcd8-2c53-4e4d-97bb-5d81008a6442\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-w5l6w"
Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.855365 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/3201b51c-af63-40e7-8037-9e581791d495-etcd-service-ca\") pod \"etcd-operator-b45778765-h97cd\" (UID: \"3201b51c-af63-40e7-8037-9e581791d495\") " pod="openshift-etcd-operator/etcd-operator-b45778765-h97cd"
Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.855392 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3d8b2de9-20f2-4d9b-ba38-d2e6649b1a57-serving-cert\") pod \"apiserver-76f77b778f-svmbc\" (UID: \"3d8b2de9-20f2-4d9b-ba38-d2e6649b1a57\") " pod="openshift-apiserver/apiserver-76f77b778f-svmbc"
Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.855418 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/96e1443d-dd18-4343-b200-756f9511c163-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-jvxv4\" (UID: \"96e1443d-dd18-4343-b200-756f9511c163\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-jvxv4"
Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.855444 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/0007a585-5b17-44bd-89b8-2d1d233a03d4-srv-cert\") pod \"olm-operator-6b444d44fb-zkkpc\" (UID: \"0007a585-5b17-44bd-89b8-2d1d233a03d4\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zkkpc"
Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.855475 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3d8b2de9-20f2-4d9b-ba38-d2e6649b1a57-audit-dir\") pod \"apiserver-76f77b778f-svmbc\" (UID: \"3d8b2de9-20f2-4d9b-ba38-d2e6649b1a57\") " pod="openshift-apiserver/apiserver-76f77b778f-svmbc"
Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.855497 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-blg69\" (UniqueName: \"kubernetes.io/projected/bb8fc8b3-9818-40e2-afb2-860e2d1efae1-kube-api-access-blg69\") pod \"console-f9d7485db-qxzd9\" (UID: \"bb8fc8b3-9818-40e2-afb2-860e2d1efae1\") " pod="openshift-console/console-f9d7485db-qxzd9"
Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.855528 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/002a39eb-e2e0-4d3e-8f61-89a539a653a9-client-ca\") pod \"controller-manager-879f6c89f-wjlxh\" (UID: \"002a39eb-e2e0-4d3e-8f61-89a539a653a9\") " pod="openshift-controller-manager/controller-manager-879f6c89f-wjlxh"
Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.855525 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0ceebcd8-2c53-4e4d-97bb-5d81008a6442-trusted-ca\") pod \"ingress-operator-5b745b69d9-w5l6w\" (UID: \"0ceebcd8-2c53-4e4d-97bb-5d81008a6442\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-w5l6w"
Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.855688 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/5f2944a8-8d91-4461-aa64-8908ca17f59e-signing-key\") pod \"service-ca-9c57cc56f-llgd7\" (UID: \"5f2944a8-8d91-4461-aa64-8908ca17f59e\") " pod="openshift-service-ca/service-ca-9c57cc56f-llgd7"
Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.858356 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/146cbde4-d891-47d8-a09f-d4f4b50bfe6d-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-769kz\" (UID: \"146cbde4-d891-47d8-a09f-d4f4b50bfe6d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-769kz"
Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.858472 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/bb8fc8b3-9818-40e2-afb2-860e2d1efae1-console-serving-cert\") pod \"console-f9d7485db-qxzd9\" (UID: \"bb8fc8b3-9818-40e2-afb2-860e2d1efae1\") " pod="openshift-console/console-f9d7485db-qxzd9"
Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.859129 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0ceebcd8-2c53-4e4d-97bb-5d81008a6442-trusted-ca\") pod \"ingress-operator-5b745b69d9-w5l6w\" (UID: \"0ceebcd8-2c53-4e4d-97bb-5d81008a6442\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-w5l6w"
Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.859719 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/537a87a4-8f58-441f-9199-62c5849c693c-available-featuregates\") pod \"openshift-config-operator-7777fb866f-rslv2\" (UID: \"537a87a4-8f58-441f-9199-62c5849c693c\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-rslv2"
Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.859897 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/27c4b3cb-57d3-4282-93fe-16cfab039277-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-lm4k2\" (UID: \"27c4b3cb-57d3-4282-93fe-16cfab039277\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lm4k2"
Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.861226 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f1f74368-89f6-44fb-aaa2-9159a217b4d7-config\") pod \"console-operator-58897d9998-zjqz6\" (UID: \"f1f74368-89f6-44fb-aaa2-9159a217b4d7\") " pod="openshift-console-operator/console-operator-58897d9998-zjqz6"
Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.861430 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bb8fc8b3-9818-40e2-afb2-860e2d1efae1-trusted-ca-bundle\") pod \"console-f9d7485db-qxzd9\" (UID: \"bb8fc8b3-9818-40e2-afb2-860e2d1efae1\") " pod="openshift-console/console-f9d7485db-qxzd9"
Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.861562 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/bb8fc8b3-9818-40e2-afb2-860e2d1efae1-oauth-serving-cert\") pod \"console-f9d7485db-qxzd9\" (UID: \"bb8fc8b3-9818-40e2-afb2-860e2d1efae1\") " pod="openshift-console/console-f9d7485db-qxzd9"
Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.862363 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/146cbde4-d891-47d8-a09f-d4f4b50bfe6d-etcd-client\") pod \"apiserver-7bbb656c7d-769kz\" (UID: \"146cbde4-d891-47d8-a09f-d4f4b50bfe6d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-769kz"
Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.862708 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/146cbde4-d891-47d8-a09f-d4f4b50bfe6d-serving-cert\") pod \"apiserver-7bbb656c7d-769kz\" (UID: \"146cbde4-d891-47d8-a09f-d4f4b50bfe6d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-769kz"
Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.863080 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/146cbde4-d891-47d8-a09f-d4f4b50bfe6d-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-769kz\" (UID: \"146cbde4-d891-47d8-a09f-d4f4b50bfe6d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-769kz"
Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.863959 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/b745a377-4575-45fb-a206-ea4754ecff76-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-phm68\" (UID: \"b745a377-4575-45fb-a206-ea4754ecff76\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-phm68"
Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.864747 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f1f74368-89f6-44fb-aaa2-9159a217b4d7-trusted-ca\") pod \"console-operator-58897d9998-zjqz6\" (UID: \"f1f74368-89f6-44fb-aaa2-9159a217b4d7\") " pod="openshift-console-operator/console-operator-58897d9998-zjqz6"
Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.865088 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/3d8b2de9-20f2-4d9b-ba38-d2e6649b1a57-image-import-ca\") pod \"apiserver-76f77b778f-svmbc\" (UID: \"3d8b2de9-20f2-4d9b-ba38-d2e6649b1a57\") " pod="openshift-apiserver/apiserver-76f77b778f-svmbc"
Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.865285 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/706c6a3b-823b-4ea3-b7a8-e20d571d3ace-client-ca\") pod \"route-controller-manager-6576b87f9c-5xwk8\" (UID: \"706c6a3b-823b-4ea3-b7a8-e20d571d3ace\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5xwk8"
Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.865925 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/96e1443d-dd18-4343-b200-756f9511c163-config\") pod \"authentication-operator-69f744f599-jvxv4\" (UID: \"96e1443d-dd18-4343-b200-756f9511c163\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-jvxv4"
Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.866011 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/3201b51c-af63-40e7-8037-9e581791d495-etcd-service-ca\") pod \"etcd-operator-b45778765-h97cd\" (UID: \"3201b51c-af63-40e7-8037-9e581791d495\") " pod="openshift-etcd-operator/etcd-operator-b45778765-h97cd"
Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.867469 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-769kz"]
Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.867799 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d6b8080-9c3f-4f6e-bcb4-3d1d0edaaa7c-config\") pod \"machine-approver-56656f9798-ntqvz\" (UID: \"1d6b8080-9c3f-4f6e-bcb4-3d1d0edaaa7c\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ntqvz"
Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.867846 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e6d131df-3eb3-4bb1-a45a-ff6ae44b5ecb-config\") pod \"kube-controller-manager-operator-78b949d7b-pjbh7\" (UID: \"e6d131df-3eb3-4bb1-a45a-ff6ae44b5ecb\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-pjbh7"
Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.869204 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/bb8fc8b3-9818-40e2-afb2-860e2d1efae1-service-ca\") pod \"console-f9d7485db-qxzd9\" (UID: \"bb8fc8b3-9818-40e2-afb2-860e2d1efae1\") " pod="openshift-console/console-f9d7485db-qxzd9"
Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.869845 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/3d8b2de9-20f2-4d9b-ba38-d2e6649b1a57-etcd-serving-ca\") pod \"apiserver-76f77b778f-svmbc\" (UID: \"3d8b2de9-20f2-4d9b-ba38-d2e6649b1a57\") " pod="openshift-apiserver/apiserver-76f77b778f-svmbc"
Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.870341 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/8465162e-dd9f-45b4-83a6-94666ac2b87b-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-cclnc\" (UID: \"8465162e-dd9f-45b4-83a6-94666ac2b87b\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-cclnc"
Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.870694 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/bb8fc8b3-9818-40e2-afb2-860e2d1efae1-console-config\") pod \"console-f9d7485db-qxzd9\" (UID: \"bb8fc8b3-9818-40e2-afb2-860e2d1efae1\") " pod="openshift-console/console-f9d7485db-qxzd9"
Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.872185 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3d8b2de9-20f2-4d9b-ba38-d2e6649b1a57-serving-cert\") pod \"apiserver-76f77b778f-svmbc\" (UID: \"3d8b2de9-20f2-4d9b-ba38-d2e6649b1a57\") " pod="openshift-apiserver/apiserver-76f77b778f-svmbc"
Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.872291 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/3d8b2de9-20f2-4d9b-ba38-d2e6649b1a57-encryption-config\") pod \"apiserver-76f77b778f-svmbc\" (UID: \"3d8b2de9-20f2-4d9b-ba38-d2e6649b1a57\") " pod="openshift-apiserver/apiserver-76f77b778f-svmbc"
Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.873037 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/96e1443d-dd18-4343-b200-756f9511c163-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-jvxv4\" (UID: \"96e1443d-dd18-4343-b200-756f9511c163\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-jvxv4"
Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.873116 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3201b51c-af63-40e7-8037-9e581791d495-config\") pod \"etcd-operator-b45778765-h97cd\" (UID: \"3201b51c-af63-40e7-8037-9e581791d495\") " pod="openshift-etcd-operator/etcd-operator-b45778765-h97cd"
"MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3201b51c-af63-40e7-8037-9e581791d495-config\") pod \"etcd-operator-b45778765-h97cd\" (UID: \"3201b51c-af63-40e7-8037-9e581791d495\") " pod="openshift-etcd-operator/etcd-operator-b45778765-h97cd" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.873139 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3d8b2de9-20f2-4d9b-ba38-d2e6649b1a57-audit-dir\") pod \"apiserver-76f77b778f-svmbc\" (UID: \"3d8b2de9-20f2-4d9b-ba38-d2e6649b1a57\") " pod="openshift-apiserver/apiserver-76f77b778f-svmbc" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.873529 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.873540 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/146cbde4-d891-47d8-a09f-d4f4b50bfe6d-audit-dir\") pod \"apiserver-7bbb656c7d-769kz\" (UID: \"146cbde4-d891-47d8-a09f-d4f4b50bfe6d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-769kz" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.873850 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e6d131df-3eb3-4bb1-a45a-ff6ae44b5ecb-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-pjbh7\" (UID: \"e6d131df-3eb3-4bb1-a45a-ff6ae44b5ecb\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-pjbh7" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.875171 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/3d8b2de9-20f2-4d9b-ba38-d2e6649b1a57-node-pullsecrets\") pod \"apiserver-76f77b778f-svmbc\" (UID: \"3d8b2de9-20f2-4d9b-ba38-d2e6649b1a57\") " pod="openshift-apiserver/apiserver-76f77b778f-svmbc" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.875355 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/002a39eb-e2e0-4d3e-8f61-89a539a653a9-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-wjlxh\" (UID: \"002a39eb-e2e0-4d3e-8f61-89a539a653a9\") " pod="openshift-controller-manager/controller-manager-879f6c89f-wjlxh" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.875705 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/96e1443d-dd18-4343-b200-756f9511c163-service-ca-bundle\") pod \"authentication-operator-69f744f599-jvxv4\" (UID: \"96e1443d-dd18-4343-b200-756f9511c163\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-jvxv4" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.875730 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-vp6qk"] Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.875974 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8465162e-dd9f-45b4-83a6-94666ac2b87b-config\") pod \"machine-api-operator-5694c8668f-cclnc\" (UID: \"8465162e-dd9f-45b4-83a6-94666ac2b87b\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-cclnc" Jan 21 
10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.877084 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5d68a50c-6a38-4aba-bb02-9a25712d2212-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-8kvzw\" (UID: \"5d68a50c-6a38-4aba-bb02-9a25712d2212\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8kvzw" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.877741 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3d8b2de9-20f2-4d9b-ba38-d2e6649b1a57-trusted-ca-bundle\") pod \"apiserver-76f77b778f-svmbc\" (UID: \"3d8b2de9-20f2-4d9b-ba38-d2e6649b1a57\") " pod="openshift-apiserver/apiserver-76f77b778f-svmbc" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.878893 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/146cbde4-d891-47d8-a09f-d4f4b50bfe6d-audit-policies\") pod \"apiserver-7bbb656c7d-769kz\" (UID: \"146cbde4-d891-47d8-a09f-d4f4b50bfe6d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-769kz" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.878986 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-j4s5w"] Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.880339 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-llgd7"] Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.880452 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/bb8fc8b3-9818-40e2-afb2-860e2d1efae1-console-oauth-config\") pod \"console-f9d7485db-qxzd9\" (UID: \"bb8fc8b3-9818-40e2-afb2-860e2d1efae1\") " pod="openshift-console/console-f9d7485db-qxzd9" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.880606 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/3d8b2de9-20f2-4d9b-ba38-d2e6649b1a57-etcd-client\") pod \"apiserver-76f77b778f-svmbc\" (UID: \"3d8b2de9-20f2-4d9b-ba38-d2e6649b1a57\") " pod="openshift-apiserver/apiserver-76f77b778f-svmbc" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.880859 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/27c4b3cb-57d3-4282-93fe-16cfab039277-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-lm4k2\" (UID: \"27c4b3cb-57d3-4282-93fe-16cfab039277\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lm4k2" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.881827 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1e960def-7bc7-4041-94dc-8ccea63f8bb8-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-7cs59\" (UID: \"1e960def-7bc7-4041-94dc-8ccea63f8bb8\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-7cs59" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.881849 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f1f74368-89f6-44fb-aaa2-9159a217b4d7-serving-cert\") pod \"console-operator-58897d9998-zjqz6\" (UID: 
\"f1f74368-89f6-44fb-aaa2-9159a217b4d7\") " pod="openshift-console-operator/console-operator-58897d9998-zjqz6" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.882536 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rdgn6"] Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.884922 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/863eda44-9a47-42de-b2de-49234ac647f0-metrics-tls\") pod \"dns-operator-744455d44c-n2h44\" (UID: \"863eda44-9a47-42de-b2de-49234ac647f0\") " pod="openshift-dns-operator/dns-operator-744455d44c-n2h44" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.885212 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7gdkq"] Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.887333 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/146cbde4-d891-47d8-a09f-d4f4b50bfe6d-encryption-config\") pod \"apiserver-7bbb656c7d-769kz\" (UID: \"146cbde4-d891-47d8-a09f-d4f4b50bfe6d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-769kz" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.887566 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/0ceebcd8-2c53-4e4d-97bb-5d81008a6442-metrics-tls\") pod \"ingress-operator-5b745b69d9-w5l6w\" (UID: \"0ceebcd8-2c53-4e4d-97bb-5d81008a6442\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-w5l6w" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.888190 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/537a87a4-8f58-441f-9199-62c5849c693c-serving-cert\") pod \"openshift-config-operator-7777fb866f-rslv2\" (UID: \"537a87a4-8f58-441f-9199-62c5849c693c\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-rslv2" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.888255 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-cfw2n"] Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.888556 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.888580 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3201b51c-af63-40e7-8037-9e581791d495-serving-cert\") pod \"etcd-operator-b45778765-h97cd\" (UID: \"3201b51c-af63-40e7-8037-9e581791d495\") " pod="openshift-etcd-operator/etcd-operator-b45778765-h97cd" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.889318 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/96e1443d-dd18-4343-b200-756f9511c163-serving-cert\") pod \"authentication-operator-69f744f599-jvxv4\" (UID: \"96e1443d-dd18-4343-b200-756f9511c163\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-jvxv4" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.890307 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/1d6b8080-9c3f-4f6e-bcb4-3d1d0edaaa7c-machine-approver-tls\") pod 
\"machine-approver-56656f9798-ntqvz\" (UID: \"1d6b8080-9c3f-4f6e-bcb4-3d1d0edaaa7c\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ntqvz" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.890474 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-whh46"] Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.891570 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-kl9j4"] Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.892576 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-w5l6w"] Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.893576 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-xmq82"] Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.894390 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/3201b51c-af63-40e7-8037-9e581791d495-etcd-ca\") pod \"etcd-operator-b45778765-h97cd\" (UID: \"3201b51c-af63-40e7-8037-9e581791d495\") " pod="openshift-etcd-operator/etcd-operator-b45778765-h97cd" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.894602 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-42f9f"] Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.896131 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-znm6j"] Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.896250 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-42f9f" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.896777 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-vwqwb"] Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.897028 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-znm6j" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.897760 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-qpdx4"] Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.898638 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-wrqpb"] Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.899609 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-f877x"] Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.900596 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-znm6j"] Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.906239 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483205-527gk"] Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.907669 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-hfc8p"] Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.908306 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.908848 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-42f9f"] Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.927868 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/3201b51c-af63-40e7-8037-9e581791d495-etcd-client\") pod \"etcd-operator-b45778765-h97cd\" (UID: \"3201b51c-af63-40e7-8037-9e581791d495\") " pod="openshift-etcd-operator/etcd-operator-b45778765-h97cd" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.928514 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.948089 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.957553 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lbqhc\" (UniqueName: \"kubernetes.io/projected/2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad-kube-api-access-lbqhc\") pod \"oauth-openshift-558db77b4-whh46\" (UID: \"2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad\") " pod="openshift-authentication/oauth-openshift-558db77b4-whh46" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.957768 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-whh46\" (UID: \"2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad\") " pod="openshift-authentication/oauth-openshift-558db77b4-whh46" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.958173 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l8jwm\" (UniqueName: \"kubernetes.io/projected/c56c4a24-e6c6-4aa0-8a62-94d3179dfe54-kube-api-access-l8jwm\") pod \"catalog-operator-68c6474976-7gdkq\" (UID: \"c56c4a24-e6c6-4aa0-8a62-94d3179dfe54\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7gdkq" 
Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.958369 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f997bb38-4f6e-495f-acb8-e8e0d1730947-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-vp6qk\" (UID: \"f997bb38-4f6e-495f-acb8-e8e0d1730947\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-vp6qk"
Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.958570 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/e94f1e92-21b2-44c9-b499-b879850c288d-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-xmq82\" (UID: \"e94f1e92-21b2-44c9-b499-b879850c288d\") " pod="openshift-marketplace/marketplace-operator-79b997595-xmq82"
Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.958714 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-whh46\" (UID: \"2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad\") " pod="openshift-authentication/oauth-openshift-558db77b4-whh46"
Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.958925 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/c56c4a24-e6c6-4aa0-8a62-94d3179dfe54-profile-collector-cert\") pod \"catalog-operator-68c6474976-7gdkq\" (UID: \"c56c4a24-e6c6-4aa0-8a62-94d3179dfe54\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7gdkq"
Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.959106 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/7470431a-2a31-41ae-b021-510ae5e3c505-proxy-tls\") pod \"machine-config-controller-84d6567774-vwqwb\" (UID: \"7470431a-2a31-41ae-b021-510ae5e3c505\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-vwqwb"
Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.959321 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-995tp\" (UniqueName: \"kubernetes.io/projected/e94f1e92-21b2-44c9-b499-b879850c288d-kube-api-access-995tp\") pod \"marketplace-operator-79b997595-xmq82\" (UID: \"e94f1e92-21b2-44c9-b499-b879850c288d\") " pod="openshift-marketplace/marketplace-operator-79b997595-xmq82"
Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.959552 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wgqgf\" (UniqueName: \"kubernetes.io/projected/86ac2c23-01e6-4a22-a79d-77a3269fb5a0-kube-api-access-wgqgf\") pod \"migrator-59844c95c7-qpdx4\" (UID: \"86ac2c23-01e6-4a22-a79d-77a3269fb5a0\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-qpdx4"
Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.959694 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/c56c4a24-e6c6-4aa0-8a62-94d3179dfe54-srv-cert\") pod \"catalog-operator-68c6474976-7gdkq\" (UID: \"c56c4a24-e6c6-4aa0-8a62-94d3179dfe54\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7gdkq"
Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.959873 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-whh46\" (UID: \"2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad\") " pod="openshift-authentication/oauth-openshift-558db77b4-whh46"
Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.959971 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/52d94566-7844-4414-bf48-9122c2207dd6-stats-auth\") pod \"router-default-5444994796-v7wnh\" (UID: \"52d94566-7844-4414-bf48-9122c2207dd6\") " pod="openshift-ingress/router-default-5444994796-v7wnh"
Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.959995 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n5qlp\" (UniqueName: \"kubernetes.io/projected/f997bb38-4f6e-495f-acb8-e8e0d1730947-kube-api-access-n5qlp\") pod \"kube-storage-version-migrator-operator-b67b599dd-vp6qk\" (UID: \"f997bb38-4f6e-495f-acb8-e8e0d1730947\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-vp6qk"
Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.960200 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad-audit-policies\") pod \"oauth-openshift-558db77b4-whh46\" (UID: \"2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad\") " pod="openshift-authentication/oauth-openshift-558db77b4-whh46"
Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.960314 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-whh46\" (UID: \"2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad\") " pod="openshift-authentication/oauth-openshift-558db77b4-whh46"
Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.960418 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-whh46\" (UID: \"2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad\") " pod="openshift-authentication/oauth-openshift-558db77b4-whh46"
Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.960555 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c510b795-d750-4f94-bc9a-88ba625bd556-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-cfw2n\" (UID: \"c510b795-d750-4f94-bc9a-88ba625bd556\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-cfw2n"
Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.960672 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gzkzm\" (UniqueName: \"kubernetes.io/projected/0007a585-5b17-44bd-89b8-2d1d233a03d4-kube-api-access-gzkzm\") pod \"olm-operator-6b444d44fb-zkkpc\" (UID: \"0007a585-5b17-44bd-89b8-2d1d233a03d4\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zkkpc"
Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.960856 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-whh46\" (UID: \"2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad\") " pod="openshift-authentication/oauth-openshift-558db77b4-whh46"
Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.961012 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/5c8e7010-8b57-47ed-9270-417650a2a7c5-proxy-tls\") pod \"machine-config-operator-74547568cd-hqjnl\" (UID: \"5c8e7010-8b57-47ed-9270-417650a2a7c5\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-hqjnl"
Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.961110 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c510b795-d750-4f94-bc9a-88ba625bd556-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-cfw2n\" (UID: \"c510b795-d750-4f94-bc9a-88ba625bd556\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-cfw2n"
Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.961211 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5c8e7010-8b57-47ed-9270-417650a2a7c5-auth-proxy-config\") pod \"machine-config-operator-74547568cd-hqjnl\" (UID: \"5c8e7010-8b57-47ed-9270-417650a2a7c5\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-hqjnl"
Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.961308 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-whh46\" (UID: \"2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad\") " pod="openshift-authentication/oauth-openshift-558db77b4-whh46"
Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.961432 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/2957ef21-9f30-4c81-8c6a-4a7f9d7315db-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-72bt6\" (UID: \"2957ef21-9f30-4c81-8c6a-4a7f9d7315db\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-72bt6"
Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.961544 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f997bb38-4f6e-495f-acb8-e8e0d1730947-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-vp6qk\" (UID: \"f997bb38-4f6e-495f-acb8-e8e0d1730947\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-vp6qk"
Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.961640 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/0007a585-5b17-44bd-89b8-2d1d233a03d4-profile-collector-cert\") pod \"olm-operator-6b444d44fb-zkkpc\" (UID: \"0007a585-5b17-44bd-89b8-2d1d233a03d4\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zkkpc"
Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.961750 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/7470431a-2a31-41ae-b021-510ae5e3c505-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-vwqwb\" (UID: \"7470431a-2a31-41ae-b021-510ae5e3c505\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-vwqwb"
Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.961939 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-whh46\" (UID: \"2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad\") " pod="openshift-authentication/oauth-openshift-558db77b4-whh46"
Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.962066 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/5f2944a8-8d91-4461-aa64-8908ca17f59e-signing-cabundle\") pod \"service-ca-9c57cc56f-llgd7\" (UID: \"5f2944a8-8d91-4461-aa64-8908ca17f59e\") " pod="openshift-service-ca/service-ca-9c57cc56f-llgd7"
Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.962199 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c510b795-d750-4f94-bc9a-88ba625bd556-config\") pod \"kube-apiserver-operator-766d6c64bb-cfw2n\" (UID: \"c510b795-d750-4f94-bc9a-88ba625bd556\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-cfw2n"
Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.962367 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c9cg8\" (UniqueName: \"kubernetes.io/projected/6742e18f-a187-4a77-a734-bdec89bd89e0-kube-api-access-c9cg8\") pod \"multus-admission-controller-857f4d67dd-j4s5w\" (UID: \"6742e18f-a187-4a77-a734-bdec89bd89e0\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-j4s5w"
Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.962481 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dr9fr\" (UniqueName: \"kubernetes.io/projected/5f2944a8-8d91-4461-aa64-8908ca17f59e-kube-api-access-dr9fr\") pod \"service-ca-9c57cc56f-llgd7\" (UID: \"5f2944a8-8d91-4461-aa64-8908ca17f59e\") " pod="openshift-service-ca/service-ca-9c57cc56f-llgd7"
Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.962600 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e94f1e92-21b2-44c9-b499-b879850c288d-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-xmq82\" (UID: \"e94f1e92-21b2-44c9-b499-b879850c288d\") " pod="openshift-marketplace/marketplace-operator-79b997595-xmq82"
Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.962729 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/7470431a-2a31-41ae-b021-510ae5e3c505-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-vwqwb\" (UID: \"7470431a-2a31-41ae-b021-510ae5e3c505\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-vwqwb"
Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.962082 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5c8e7010-8b57-47ed-9270-417650a2a7c5-auth-proxy-config\") pod \"machine-config-operator-74547568cd-hqjnl\" (UID: \"5c8e7010-8b57-47ed-9270-417650a2a7c5\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-hqjnl"
Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.962927 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-whh46\" (UID: \"2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad\") " pod="openshift-authentication/oauth-openshift-558db77b4-whh46"
Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.963169 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-whh46\" (UID: \"2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad\") " pod="openshift-authentication/oauth-openshift-558db77b4-whh46"
Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.963295 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hn8zr\" (UniqueName: \"kubernetes.io/projected/7470431a-2a31-41ae-b021-510ae5e3c505-kube-api-access-hn8zr\") pod \"machine-config-controller-84d6567774-vwqwb\" (UID: \"7470431a-2a31-41ae-b021-510ae5e3c505\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-vwqwb"
Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.963532 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/5c8e7010-8b57-47ed-9270-417650a2a7c5-images\") pod \"machine-config-operator-74547568cd-hqjnl\" (UID: \"5c8e7010-8b57-47ed-9270-417650a2a7c5\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-hqjnl"
Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.963668 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v5h6z\" (UniqueName: \"kubernetes.io/projected/5c8e7010-8b57-47ed-9270-417650a2a7c5-kube-api-access-v5h6z\") pod \"machine-config-operator-74547568cd-hqjnl\" (UID: \"5c8e7010-8b57-47ed-9270-417650a2a7c5\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-hqjnl"
Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.963944 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad-audit-dir\") pod \"oauth-openshift-558db77b4-whh46\" (UID: \"2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad\") " pod="openshift-authentication/oauth-openshift-558db77b4-whh46"
Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.964130 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-whh46\" (UID: \"2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad\") " pod="openshift-authentication/oauth-openshift-558db77b4-whh46"
Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.964324 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName:
\"kubernetes.io/secret/6742e18f-a187-4a77-a734-bdec89bd89e0-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-j4s5w\" (UID: \"6742e18f-a187-4a77-a734-bdec89bd89e0\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-j4s5w" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.964467 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/0007a585-5b17-44bd-89b8-2d1d233a03d4-srv-cert\") pod \"olm-operator-6b444d44fb-zkkpc\" (UID: \"0007a585-5b17-44bd-89b8-2d1d233a03d4\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zkkpc" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.964670 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/5f2944a8-8d91-4461-aa64-8908ca17f59e-signing-key\") pod \"service-ca-9c57cc56f-llgd7\" (UID: \"5f2944a8-8d91-4461-aa64-8908ca17f59e\") " pod="openshift-service-ca/service-ca-9c57cc56f-llgd7" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.964945 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9pkjt\" (UniqueName: \"kubernetes.io/projected/2957ef21-9f30-4c81-8c6a-4a7f9d7315db-kube-api-access-9pkjt\") pod \"package-server-manager-789f6589d5-72bt6\" (UID: \"2957ef21-9f30-4c81-8c6a-4a7f9d7315db\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-72bt6" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.964093 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad-audit-dir\") pod \"oauth-openshift-558db77b4-whh46\" (UID: \"2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad\") " pod="openshift-authentication/oauth-openshift-558db77b4-whh46" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.968602 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.988693 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 21 10:58:29 crc kubenswrapper[4881]: I0121 10:58:29.999523 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/52d94566-7844-4414-bf48-9122c2207dd6-default-certificate\") pod \"router-default-5444994796-v7wnh\" (UID: \"52d94566-7844-4414-bf48-9122c2207dd6\") " pod="openshift-ingress/router-default-5444994796-v7wnh" Jan 21 10:58:30 crc kubenswrapper[4881]: I0121 10:58:30.007591 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 21 10:58:30 crc kubenswrapper[4881]: I0121 10:58:30.014120 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/52d94566-7844-4414-bf48-9122c2207dd6-service-ca-bundle\") pod \"router-default-5444994796-v7wnh\" (UID: \"52d94566-7844-4414-bf48-9122c2207dd6\") " pod="openshift-ingress/router-default-5444994796-v7wnh" Jan 21 10:58:30 crc kubenswrapper[4881]: I0121 10:58:30.027545 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 21 10:58:30 crc kubenswrapper[4881]: I0121 10:58:30.038708 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/52d94566-7844-4414-bf48-9122c2207dd6-metrics-certs\") pod \"router-default-5444994796-v7wnh\" (UID: \"52d94566-7844-4414-bf48-9122c2207dd6\") " pod="openshift-ingress/router-default-5444994796-v7wnh" Jan 21 10:58:30 crc kubenswrapper[4881]: I0121 10:58:30.047722 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 21 10:58:30 crc kubenswrapper[4881]: I0121 10:58:30.068689 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 21 10:58:30 crc kubenswrapper[4881]: I0121 10:58:30.072366 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/7470431a-2a31-41ae-b021-510ae5e3c505-proxy-tls\") pod \"machine-config-controller-84d6567774-vwqwb\" (UID: \"7470431a-2a31-41ae-b021-510ae5e3c505\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-vwqwb" Jan 21 10:58:30 crc kubenswrapper[4881]: I0121 10:58:30.087731 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 21 10:58:30 crc kubenswrapper[4881]: I0121 10:58:30.108169 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 21 10:58:30 crc kubenswrapper[4881]: I0121 10:58:30.117673 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/6742e18f-a187-4a77-a734-bdec89bd89e0-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-j4s5w\" (UID: \"6742e18f-a187-4a77-a734-bdec89bd89e0\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-j4s5w" Jan 21 10:58:30 crc kubenswrapper[4881]: I0121 10:58:30.128460 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 21 10:58:30 crc kubenswrapper[4881]: I0121 10:58:30.147494 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 21 10:58:30 crc kubenswrapper[4881]: I0121 10:58:30.168059 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 21 10:58:30 crc kubenswrapper[4881]: I0121 10:58:30.174590 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-whh46\" (UID: \"2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad\") " pod="openshift-authentication/oauth-openshift-558db77b4-whh46" Jan 21 10:58:30 crc kubenswrapper[4881]: I0121 10:58:30.188181 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 21 10:58:30 crc kubenswrapper[4881]: I0121 10:58:30.193638 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad-audit-policies\") pod \"oauth-openshift-558db77b4-whh46\" (UID: \"2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad\") " pod="openshift-authentication/oauth-openshift-558db77b4-whh46" Jan 21 10:58:30 crc kubenswrapper[4881]: I0121 10:58:30.208468 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 
21 10:58:30 crc kubenswrapper[4881]: I0121 10:58:30.221364 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-whh46\" (UID: \"2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad\") " pod="openshift-authentication/oauth-openshift-558db77b4-whh46" Jan 21 10:58:30 crc kubenswrapper[4881]: I0121 10:58:30.227908 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 21 10:58:30 crc kubenswrapper[4881]: I0121 10:58:30.249327 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 21 10:58:30 crc kubenswrapper[4881]: I0121 10:58:30.270655 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 21 10:58:30 crc kubenswrapper[4881]: I0121 10:58:30.281287 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-whh46\" (UID: \"2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad\") " pod="openshift-authentication/oauth-openshift-558db77b4-whh46" Jan 21 10:58:30 crc kubenswrapper[4881]: I0121 10:58:30.288378 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 21 10:58:30 crc kubenswrapper[4881]: I0121 10:58:30.292426 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-whh46\" (UID: \"2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad\") " pod="openshift-authentication/oauth-openshift-558db77b4-whh46" Jan 21 10:58:30 crc kubenswrapper[4881]: I0121 10:58:30.309662 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 10:58:30 crc kubenswrapper[4881]: I0121 10:58:30.309739 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 10:58:30 crc kubenswrapper[4881]: I0121 10:58:30.309681 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-dtv4t" Jan 21 10:58:30 crc kubenswrapper[4881]: I0121 10:58:30.328341 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 21 10:58:30 crc kubenswrapper[4881]: I0121 10:58:30.337355 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 21 10:58:30 crc kubenswrapper[4881]: I0121 10:58:30.337570 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-whh46\" (UID: \"2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad\") " pod="openshift-authentication/oauth-openshift-558db77b4-whh46" Jan 21 10:58:30 crc kubenswrapper[4881]: I0121 10:58:30.348444 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 21 10:58:30 crc kubenswrapper[4881]: I0121 10:58:30.356303 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-whh46\" (UID: \"2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad\") " pod="openshift-authentication/oauth-openshift-558db77b4-whh46" Jan 21 10:58:30 crc kubenswrapper[4881]: I0121 10:58:30.360133 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-whh46\" (UID: \"2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad\") " pod="openshift-authentication/oauth-openshift-558db77b4-whh46" Jan 21 10:58:30 crc kubenswrapper[4881]: I0121 10:58:30.374309 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 21 10:58:30 crc kubenswrapper[4881]: I0121 10:58:30.383005 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-whh46\" (UID: \"2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad\") " pod="openshift-authentication/oauth-openshift-558db77b4-whh46" Jan 21 10:58:30 crc kubenswrapper[4881]: I0121 10:58:30.389511 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 21 10:58:30 crc kubenswrapper[4881]: I0121 10:58:30.408909 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 21 10:58:30 crc kubenswrapper[4881]: I0121 10:58:30.414632 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-whh46\" (UID: \"2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad\") " pod="openshift-authentication/oauth-openshift-558db77b4-whh46" Jan 21 10:58:30 crc kubenswrapper[4881]: I0121 10:58:30.435586 4881 reflector.go:368] Caches populated for *v1.ConfigMap 
from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 21 10:58:30 crc kubenswrapper[4881]: I0121 10:58:30.443046 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-whh46\" (UID: \"2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad\") " pod="openshift-authentication/oauth-openshift-558db77b4-whh46" Jan 21 10:58:30 crc kubenswrapper[4881]: I0121 10:58:30.448164 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 21 10:58:30 crc kubenswrapper[4881]: I0121 10:58:30.469094 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 21 10:58:30 crc kubenswrapper[4881]: I0121 10:58:30.474270 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-whh46\" (UID: \"2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad\") " pod="openshift-authentication/oauth-openshift-558db77b4-whh46" Jan 21 10:58:30 crc kubenswrapper[4881]: I0121 10:58:30.488512 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 21 10:58:30 crc kubenswrapper[4881]: I0121 10:58:30.508212 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 21 10:58:30 crc kubenswrapper[4881]: I0121 10:58:30.557294 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/5c8e7010-8b57-47ed-9270-417650a2a7c5-images\") pod \"machine-config-operator-74547568cd-hqjnl\" (UID: \"5c8e7010-8b57-47ed-9270-417650a2a7c5\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-hqjnl" Jan 21 10:58:30 crc kubenswrapper[4881]: I0121 10:58:30.558655 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 21 10:58:30 crc kubenswrapper[4881]: I0121 10:58:30.558900 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 21 10:58:30 crc kubenswrapper[4881]: I0121 10:58:30.565323 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f997bb38-4f6e-495f-acb8-e8e0d1730947-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-vp6qk\" (UID: \"f997bb38-4f6e-495f-acb8-e8e0d1730947\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-vp6qk" Jan 21 10:58:30 crc kubenswrapper[4881]: I0121 10:58:30.569026 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 21 10:58:30 crc kubenswrapper[4881]: I0121 10:58:30.588031 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 21 10:58:30 crc kubenswrapper[4881]: I0121 10:58:30.589690 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"config\" (UniqueName: \"kubernetes.io/configmap/f997bb38-4f6e-495f-acb8-e8e0d1730947-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-vp6qk\" (UID: \"f997bb38-4f6e-495f-acb8-e8e0d1730947\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-vp6qk" Jan 21 10:58:30 crc kubenswrapper[4881]: I0121 10:58:30.607591 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 21 10:58:30 crc kubenswrapper[4881]: I0121 10:58:30.628802 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 21 10:58:30 crc kubenswrapper[4881]: I0121 10:58:30.637338 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/5c8e7010-8b57-47ed-9270-417650a2a7c5-proxy-tls\") pod \"machine-config-operator-74547568cd-hqjnl\" (UID: \"5c8e7010-8b57-47ed-9270-417650a2a7c5\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-hqjnl" Jan 21 10:58:30 crc kubenswrapper[4881]: I0121 10:58:30.649018 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 21 10:58:30 crc kubenswrapper[4881]: I0121 10:58:30.667443 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 21 10:58:30 crc kubenswrapper[4881]: I0121 10:58:30.689247 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 21 10:58:30 crc kubenswrapper[4881]: I0121 10:58:30.708444 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 21 10:58:30 crc kubenswrapper[4881]: I0121 10:58:30.728155 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 21 10:58:30 crc kubenswrapper[4881]: I0121 10:58:30.748851 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 21 10:58:30 crc kubenswrapper[4881]: I0121 10:58:30.756416 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/2957ef21-9f30-4c81-8c6a-4a7f9d7315db-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-72bt6\" (UID: \"2957ef21-9f30-4c81-8c6a-4a7f9d7315db\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-72bt6" Jan 21 10:58:30 crc kubenswrapper[4881]: I0121 10:58:30.766646 4881 request.go:700] Waited for 1.002050161s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver-operator/secrets?fieldSelector=metadata.name%3Dkube-apiserver-operator-dockercfg-x57mr&limit=500&resourceVersion=0 Jan 21 10:58:30 crc kubenswrapper[4881]: I0121 10:58:30.768521 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 21 10:58:30 crc kubenswrapper[4881]: I0121 10:58:30.788545 4881 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 21 10:58:30 crc kubenswrapper[4881]: I0121 10:58:30.793895 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c510b795-d750-4f94-bc9a-88ba625bd556-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-cfw2n\" (UID: \"c510b795-d750-4f94-bc9a-88ba625bd556\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-cfw2n" Jan 21 10:58:30 crc kubenswrapper[4881]: I0121 10:58:30.808856 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 21 10:58:30 crc kubenswrapper[4881]: I0121 10:58:30.828557 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 21 10:58:30 crc kubenswrapper[4881]: I0121 10:58:30.848025 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 21 10:58:30 crc kubenswrapper[4881]: I0121 10:58:30.853570 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c510b795-d750-4f94-bc9a-88ba625bd556-config\") pod \"kube-apiserver-operator-766d6c64bb-cfw2n\" (UID: \"c510b795-d750-4f94-bc9a-88ba625bd556\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-cfw2n" Jan 21 10:58:30 crc kubenswrapper[4881]: I0121 10:58:30.868192 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 21 10:58:30 crc kubenswrapper[4881]: I0121 10:58:30.872923 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/e94f1e92-21b2-44c9-b499-b879850c288d-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-xmq82\" (UID: \"e94f1e92-21b2-44c9-b499-b879850c288d\") " pod="openshift-marketplace/marketplace-operator-79b997595-xmq82" Jan 21 10:58:30 crc kubenswrapper[4881]: I0121 10:58:30.889188 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 21 10:58:30 crc kubenswrapper[4881]: I0121 10:58:30.914781 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 21 10:58:30 crc kubenswrapper[4881]: I0121 10:58:30.924484 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e94f1e92-21b2-44c9-b499-b879850c288d-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-xmq82\" (UID: \"e94f1e92-21b2-44c9-b499-b879850c288d\") " pod="openshift-marketplace/marketplace-operator-79b997595-xmq82" Jan 21 10:58:30 crc kubenswrapper[4881]: I0121 10:58:30.928457 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 21 10:58:30 crc kubenswrapper[4881]: I0121 10:58:30.932311 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/c56c4a24-e6c6-4aa0-8a62-94d3179dfe54-profile-collector-cert\") pod \"catalog-operator-68c6474976-7gdkq\" (UID: \"c56c4a24-e6c6-4aa0-8a62-94d3179dfe54\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7gdkq" Jan 21 10:58:30 crc 
kubenswrapper[4881]: I0121 10:58:30.935411 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/0007a585-5b17-44bd-89b8-2d1d233a03d4-profile-collector-cert\") pod \"olm-operator-6b444d44fb-zkkpc\" (UID: \"0007a585-5b17-44bd-89b8-2d1d233a03d4\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zkkpc" Jan 21 10:58:30 crc kubenswrapper[4881]: I0121 10:58:30.948271 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 21 10:58:30 crc kubenswrapper[4881]: E0121 10:58:30.960475 4881 secret.go:188] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition Jan 21 10:58:30 crc kubenswrapper[4881]: E0121 10:58:30.960808 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c56c4a24-e6c6-4aa0-8a62-94d3179dfe54-srv-cert podName:c56c4a24-e6c6-4aa0-8a62-94d3179dfe54 nodeName:}" failed. No retries permitted until 2026-01-21 10:58:31.460766809 +0000 UTC m=+98.720723278 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/c56c4a24-e6c6-4aa0-8a62-94d3179dfe54-srv-cert") pod "catalog-operator-68c6474976-7gdkq" (UID: "c56c4a24-e6c6-4aa0-8a62-94d3179dfe54") : failed to sync secret cache: timed out waiting for the condition Jan 21 10:58:30 crc kubenswrapper[4881]: E0121 10:58:30.962676 4881 configmap.go:193] Couldn't get configMap openshift-service-ca/signing-cabundle: failed to sync configmap cache: timed out waiting for the condition Jan 21 10:58:30 crc kubenswrapper[4881]: E0121 10:58:30.962774 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5f2944a8-8d91-4461-aa64-8908ca17f59e-signing-cabundle podName:5f2944a8-8d91-4461-aa64-8908ca17f59e nodeName:}" failed. No retries permitted until 2026-01-21 10:58:31.462755787 +0000 UTC m=+98.722712246 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "signing-cabundle" (UniqueName: "kubernetes.io/configmap/5f2944a8-8d91-4461-aa64-8908ca17f59e-signing-cabundle") pod "service-ca-9c57cc56f-llgd7" (UID: "5f2944a8-8d91-4461-aa64-8908ca17f59e") : failed to sync configmap cache: timed out waiting for the condition Jan 21 10:58:30 crc kubenswrapper[4881]: E0121 10:58:30.964941 4881 secret.go:188] Couldn't get secret openshift-service-ca/signing-key: failed to sync secret cache: timed out waiting for the condition Jan 21 10:58:30 crc kubenswrapper[4881]: E0121 10:58:30.964981 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5f2944a8-8d91-4461-aa64-8908ca17f59e-signing-key podName:5f2944a8-8d91-4461-aa64-8908ca17f59e nodeName:}" failed. No retries permitted until 2026-01-21 10:58:31.464972072 +0000 UTC m=+98.724928541 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "signing-key" (UniqueName: "kubernetes.io/secret/5f2944a8-8d91-4461-aa64-8908ca17f59e-signing-key") pod "service-ca-9c57cc56f-llgd7" (UID: "5f2944a8-8d91-4461-aa64-8908ca17f59e") : failed to sync secret cache: timed out waiting for the condition Jan 21 10:58:30 crc kubenswrapper[4881]: E0121 10:58:30.964997 4881 secret.go:188] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition Jan 21 10:58:30 crc kubenswrapper[4881]: E0121 10:58:30.965077 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0007a585-5b17-44bd-89b8-2d1d233a03d4-srv-cert podName:0007a585-5b17-44bd-89b8-2d1d233a03d4 nodeName:}" failed. No retries permitted until 2026-01-21 10:58:31.465055454 +0000 UTC m=+98.725011923 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/0007a585-5b17-44bd-89b8-2d1d233a03d4-srv-cert") pod "olm-operator-6b444d44fb-zkkpc" (UID: "0007a585-5b17-44bd-89b8-2d1d233a03d4") : failed to sync secret cache: timed out waiting for the condition Jan 21 10:58:30 crc kubenswrapper[4881]: I0121 10:58:30.967699 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 21 10:58:30 crc kubenswrapper[4881]: I0121 10:58:30.987656 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 21 10:58:31 crc kubenswrapper[4881]: I0121 10:58:31.008123 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 21 10:58:31 crc kubenswrapper[4881]: I0121 10:58:31.027818 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 21 10:58:31 crc kubenswrapper[4881]: I0121 10:58:31.048372 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 21 10:58:31 crc kubenswrapper[4881]: I0121 10:58:31.068714 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 21 10:58:31 crc kubenswrapper[4881]: I0121 10:58:31.089038 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 21 10:58:31 crc kubenswrapper[4881]: I0121 10:58:31.108616 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 21 10:58:31 crc kubenswrapper[4881]: I0121 10:58:31.129705 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 21 10:58:31 crc kubenswrapper[4881]: I0121 10:58:31.148779 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 21 10:58:31 crc kubenswrapper[4881]: I0121 10:58:31.187977 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 21 10:58:31 crc kubenswrapper[4881]: I0121 10:58:31.208734 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 21 10:58:31 crc kubenswrapper[4881]: I0121 10:58:31.229094 4881 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 21 10:58:31 crc kubenswrapper[4881]: I0121 10:58:31.248164 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 21 10:58:31 crc kubenswrapper[4881]: I0121 10:58:31.268911 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 21 10:58:31 crc kubenswrapper[4881]: I0121 10:58:31.288442 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 21 10:58:31 crc kubenswrapper[4881]: I0121 10:58:31.308565 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 21 10:58:31 crc kubenswrapper[4881]: I0121 10:58:31.310432 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 10:58:31 crc kubenswrapper[4881]: I0121 10:58:31.328989 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 21 10:58:31 crc kubenswrapper[4881]: I0121 10:58:31.348336 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 21 10:58:31 crc kubenswrapper[4881]: I0121 10:58:31.370410 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 21 10:58:31 crc kubenswrapper[4881]: I0121 10:58:31.389111 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 21 10:58:31 crc kubenswrapper[4881]: I0121 10:58:31.408070 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 21 10:58:31 crc kubenswrapper[4881]: I0121 10:58:31.427895 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 21 10:58:31 crc kubenswrapper[4881]: I0121 10:58:31.448163 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 21 10:58:31 crc kubenswrapper[4881]: I0121 10:58:31.467658 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 21 10:58:31 crc kubenswrapper[4881]: I0121 10:58:31.489494 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 21 10:58:31 crc kubenswrapper[4881]: I0121 10:58:31.503689 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/5f2944a8-8d91-4461-aa64-8908ca17f59e-signing-cabundle\") pod \"service-ca-9c57cc56f-llgd7\" (UID: \"5f2944a8-8d91-4461-aa64-8908ca17f59e\") " pod="openshift-service-ca/service-ca-9c57cc56f-llgd7" Jan 21 10:58:31 crc kubenswrapper[4881]: I0121 10:58:31.503952 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/0007a585-5b17-44bd-89b8-2d1d233a03d4-srv-cert\") pod \"olm-operator-6b444d44fb-zkkpc\" (UID: \"0007a585-5b17-44bd-89b8-2d1d233a03d4\") " 
pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zkkpc" Jan 21 10:58:31 crc kubenswrapper[4881]: I0121 10:58:31.504003 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/5f2944a8-8d91-4461-aa64-8908ca17f59e-signing-key\") pod \"service-ca-9c57cc56f-llgd7\" (UID: \"5f2944a8-8d91-4461-aa64-8908ca17f59e\") " pod="openshift-service-ca/service-ca-9c57cc56f-llgd7" Jan 21 10:58:31 crc kubenswrapper[4881]: I0121 10:58:31.504149 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/c56c4a24-e6c6-4aa0-8a62-94d3179dfe54-srv-cert\") pod \"catalog-operator-68c6474976-7gdkq\" (UID: \"c56c4a24-e6c6-4aa0-8a62-94d3179dfe54\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7gdkq" Jan 21 10:58:31 crc kubenswrapper[4881]: I0121 10:58:31.505484 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/5f2944a8-8d91-4461-aa64-8908ca17f59e-signing-cabundle\") pod \"service-ca-9c57cc56f-llgd7\" (UID: \"5f2944a8-8d91-4461-aa64-8908ca17f59e\") " pod="openshift-service-ca/service-ca-9c57cc56f-llgd7" Jan 21 10:58:31 crc kubenswrapper[4881]: I0121 10:58:31.509408 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/0007a585-5b17-44bd-89b8-2d1d233a03d4-srv-cert\") pod \"olm-operator-6b444d44fb-zkkpc\" (UID: \"0007a585-5b17-44bd-89b8-2d1d233a03d4\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zkkpc" Jan 21 10:58:31 crc kubenswrapper[4881]: I0121 10:58:31.511613 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/c56c4a24-e6c6-4aa0-8a62-94d3179dfe54-srv-cert\") pod \"catalog-operator-68c6474976-7gdkq\" (UID: \"c56c4a24-e6c6-4aa0-8a62-94d3179dfe54\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7gdkq" Jan 21 10:58:31 crc kubenswrapper[4881]: I0121 10:58:31.511758 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/5f2944a8-8d91-4461-aa64-8908ca17f59e-signing-key\") pod \"service-ca-9c57cc56f-llgd7\" (UID: \"5f2944a8-8d91-4461-aa64-8908ca17f59e\") " pod="openshift-service-ca/service-ca-9c57cc56f-llgd7" Jan 21 10:58:31 crc kubenswrapper[4881]: I0121 10:58:31.535542 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-czg99\" (UniqueName: \"kubernetes.io/projected/0ceebcd8-2c53-4e4d-97bb-5d81008a6442-kube-api-access-czg99\") pod \"ingress-operator-5b745b69d9-w5l6w\" (UID: \"0ceebcd8-2c53-4e4d-97bb-5d81008a6442\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-w5l6w" Jan 21 10:58:31 crc kubenswrapper[4881]: I0121 10:58:31.549694 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-72phf\" (UniqueName: \"kubernetes.io/projected/29dca8bf-7bce-455b-812f-fca8861518ca-kube-api-access-72phf\") pod \"openshift-apiserver-operator-796bbdcf4f-vfcd9\" (UID: \"29dca8bf-7bce-455b-812f-fca8861518ca\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vfcd9" Jan 21 10:58:31 crc kubenswrapper[4881]: I0121 10:58:31.565239 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qshkt\" (UniqueName: 
\"kubernetes.io/projected/f1f74368-89f6-44fb-aaa2-9159a217b4d7-kube-api-access-qshkt\") pod \"console-operator-58897d9998-zjqz6\" (UID: \"f1f74368-89f6-44fb-aaa2-9159a217b4d7\") " pod="openshift-console-operator/console-operator-58897d9998-zjqz6" Jan 21 10:58:31 crc kubenswrapper[4881]: I0121 10:58:31.587664 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vn8zf\" (UniqueName: \"kubernetes.io/projected/002a39eb-e2e0-4d3e-8f61-89a539a653a9-kube-api-access-vn8zf\") pod \"controller-manager-879f6c89f-wjlxh\" (UID: \"002a39eb-e2e0-4d3e-8f61-89a539a653a9\") " pod="openshift-controller-manager/controller-manager-879f6c89f-wjlxh" Jan 21 10:58:31 crc kubenswrapper[4881]: I0121 10:58:31.595549 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-zjqz6" Jan 21 10:58:31 crc kubenswrapper[4881]: I0121 10:58:31.605687 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ghfkh\" (UniqueName: \"kubernetes.io/projected/3201b51c-af63-40e7-8037-9e581791d495-kube-api-access-ghfkh\") pod \"etcd-operator-b45778765-h97cd\" (UID: \"3201b51c-af63-40e7-8037-9e581791d495\") " pod="openshift-etcd-operator/etcd-operator-b45778765-h97cd" Jan 21 10:58:31 crc kubenswrapper[4881]: I0121 10:58:31.620382 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vfcd9" Jan 21 10:58:31 crc kubenswrapper[4881]: I0121 10:58:31.627573 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gck6q\" (UniqueName: \"kubernetes.io/projected/3d8b2de9-20f2-4d9b-ba38-d2e6649b1a57-kube-api-access-gck6q\") pod \"apiserver-76f77b778f-svmbc\" (UID: \"3d8b2de9-20f2-4d9b-ba38-d2e6649b1a57\") " pod="openshift-apiserver/apiserver-76f77b778f-svmbc" Jan 21 10:58:31 crc kubenswrapper[4881]: I0121 10:58:31.648098 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mhrlb\" (UniqueName: \"kubernetes.io/projected/1d6b8080-9c3f-4f6e-bcb4-3d1d0edaaa7c-kube-api-access-mhrlb\") pod \"machine-approver-56656f9798-ntqvz\" (UID: \"1d6b8080-9c3f-4f6e-bcb4-3d1d0edaaa7c\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ntqvz" Jan 21 10:58:31 crc kubenswrapper[4881]: I0121 10:58:31.731258 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0ceebcd8-2c53-4e4d-97bb-5d81008a6442-bound-sa-token\") pod \"ingress-operator-5b745b69d9-w5l6w\" (UID: \"0ceebcd8-2c53-4e4d-97bb-5d81008a6442\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-w5l6w" Jan 21 10:58:31 crc kubenswrapper[4881]: I0121 10:58:31.786639 4881 request.go:700] Waited for 1.918170143s due to client-side throttling, not priority and fairness, request: POST:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-api/serviceaccounts/machine-api-operator/token Jan 21 10:58:31 crc kubenswrapper[4881]: I0121 10:58:31.811291 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d4f8p\" (UniqueName: \"kubernetes.io/projected/8465162e-dd9f-45b4-83a6-94666ac2b87b-kube-api-access-d4f8p\") pod \"machine-api-operator-5694c8668f-cclnc\" (UID: \"8465162e-dd9f-45b4-83a6-94666ac2b87b\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-cclnc" Jan 21 10:58:31 crc kubenswrapper[4881]: I0121 10:58:31.834202 4881 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cldhz\" (UniqueName: \"kubernetes.io/projected/628cb8f4-a587-498f-9398-403e0af5eec4-kube-api-access-cldhz\") pod \"downloads-7954f5f757-wrqpb\" (UID: \"628cb8f4-a587-498f-9398-403e0af5eec4\") " pod="openshift-console/downloads-7954f5f757-wrqpb" Jan 21 10:58:31 crc kubenswrapper[4881]: I0121 10:58:31.861348 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-h97cd" Jan 21 10:58:31 crc kubenswrapper[4881]: I0121 10:58:31.861568 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ntqvz" Jan 21 10:58:31 crc kubenswrapper[4881]: I0121 10:58:31.862233 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-w5l6w" Jan 21 10:58:31 crc kubenswrapper[4881]: I0121 10:58:31.863656 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-cclnc" Jan 21 10:58:31 crc kubenswrapper[4881]: I0121 10:58:31.865069 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-wrqpb" Jan 21 10:58:31 crc kubenswrapper[4881]: I0121 10:58:31.865403 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-svmbc" Jan 21 10:58:31 crc kubenswrapper[4881]: I0121 10:58:31.885173 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-wjlxh" Jan 21 10:58:31 crc kubenswrapper[4881]: I0121 10:58:31.887384 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8hxmk\" (UniqueName: \"kubernetes.io/projected/863eda44-9a47-42de-b2de-49234ac647f0-kube-api-access-8hxmk\") pod \"dns-operator-744455d44c-n2h44\" (UID: \"863eda44-9a47-42de-b2de-49234ac647f0\") " pod="openshift-dns-operator/dns-operator-744455d44c-n2h44" Jan 21 10:58:31 crc kubenswrapper[4881]: I0121 10:58:31.893755 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1e960def-7bc7-4041-94dc-8ccea63f8bb8-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-7cs59\" (UID: \"1e960def-7bc7-4041-94dc-8ccea63f8bb8\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-7cs59" Jan 21 10:58:31 crc kubenswrapper[4881]: I0121 10:58:31.896916 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e6d131df-3eb3-4bb1-a45a-ff6ae44b5ecb-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-pjbh7\" (UID: \"e6d131df-3eb3-4bb1-a45a-ff6ae44b5ecb\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-pjbh7" Jan 21 10:58:31 crc kubenswrapper[4881]: I0121 10:58:31.901183 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-blg69\" (UniqueName: \"kubernetes.io/projected/bb8fc8b3-9818-40e2-afb2-860e2d1efae1-kube-api-access-blg69\") pod \"console-f9d7485db-qxzd9\" (UID: \"bb8fc8b3-9818-40e2-afb2-860e2d1efae1\") " pod="openshift-console/console-f9d7485db-qxzd9" Jan 21 10:58:31 crc kubenswrapper[4881]: I0121 10:58:31.901907 4881 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-qxzd9" Jan 21 10:58:31 crc kubenswrapper[4881]: I0121 10:58:31.903063 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2n2nt\" (UniqueName: \"kubernetes.io/projected/52d94566-7844-4414-bf48-9122c2207dd6-kube-api-access-2n2nt\") pod \"router-default-5444994796-v7wnh\" (UID: \"52d94566-7844-4414-bf48-9122c2207dd6\") " pod="openshift-ingress/router-default-5444994796-v7wnh" Jan 21 10:58:31 crc kubenswrapper[4881]: I0121 10:58:31.909457 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9kgjc\" (UniqueName: \"kubernetes.io/projected/706c6a3b-823b-4ea3-b7a8-e20d571d3ace-kube-api-access-9kgjc\") pod \"route-controller-manager-6576b87f9c-5xwk8\" (UID: \"706c6a3b-823b-4ea3-b7a8-e20d571d3ace\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5xwk8" Jan 21 10:58:31 crc kubenswrapper[4881]: I0121 10:58:31.912814 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5477x\" (UniqueName: \"kubernetes.io/projected/146cbde4-d891-47d8-a09f-d4f4b50bfe6d-kube-api-access-5477x\") pod \"apiserver-7bbb656c7d-769kz\" (UID: \"146cbde4-d891-47d8-a09f-d4f4b50bfe6d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-769kz" Jan 21 10:58:31 crc kubenswrapper[4881]: I0121 10:58:31.989678 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.009207 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4z962\" (UniqueName: \"kubernetes.io/projected/537a87a4-8f58-441f-9199-62c5849c693c-kube-api-access-4z962\") pod \"openshift-config-operator-7777fb866f-rslv2\" (UID: \"537a87a4-8f58-441f-9199-62c5849c693c\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-rslv2" Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.021680 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r6mtd\" (UniqueName: \"kubernetes.io/projected/5d68a50c-6a38-4aba-bb02-9a25712d2212-kube-api-access-r6mtd\") pod \"cluster-image-registry-operator-dc59b4c8b-8kvzw\" (UID: \"5d68a50c-6a38-4aba-bb02-9a25712d2212\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8kvzw" Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.026661 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p7rfj\" (UniqueName: \"kubernetes.io/projected/b745a377-4575-45fb-a206-ea4754ecff76-kube-api-access-p7rfj\") pod \"cluster-samples-operator-665b6dd947-phm68\" (UID: \"b745a377-4575-45fb-a206-ea4754ecff76\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-phm68" Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.031534 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5d68a50c-6a38-4aba-bb02-9a25712d2212-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-8kvzw\" (UID: \"5d68a50c-6a38-4aba-bb02-9a25712d2212\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8kvzw" Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.032156 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5xwk8" Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.032531 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-n2h44" Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.032542 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-rslv2" Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.057772 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z4vrg\" (UniqueName: \"kubernetes.io/projected/27c4b3cb-57d3-4282-93fe-16cfab039277-kube-api-access-z4vrg\") pod \"openshift-controller-manager-operator-756b6f6bc6-lm4k2\" (UID: \"27c4b3cb-57d3-4282-93fe-16cfab039277\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lm4k2" Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.060018 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-pjbh7" Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.075228 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-7cs59" Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.075430 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.075799 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.075851 4881 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.076112 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7ppts\" (UniqueName: \"kubernetes.io/projected/96e1443d-dd18-4343-b200-756f9511c163-kube-api-access-7ppts\") pod \"authentication-operator-69f744f599-jvxv4\" (UID: \"96e1443d-dd18-4343-b200-756f9511c163\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-jvxv4" Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.078231 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.088199 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.097107 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-769kz" Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.109936 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-v7wnh" Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.259210 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-phm68" Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.259498 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8kvzw" Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.259587 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lm4k2" Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.262989 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lbqhc\" (UniqueName: \"kubernetes.io/projected/2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad-kube-api-access-lbqhc\") pod \"oauth-openshift-558db77b4-whh46\" (UID: \"2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad\") " pod="openshift-authentication/oauth-openshift-558db77b4-whh46" Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.278255 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-995tp\" (UniqueName: \"kubernetes.io/projected/e94f1e92-21b2-44c9-b499-b879850c288d-kube-api-access-995tp\") pod \"marketplace-operator-79b997595-xmq82\" (UID: \"e94f1e92-21b2-44c9-b499-b879850c288d\") " pod="openshift-marketplace/marketplace-operator-79b997595-xmq82" Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.285363 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wgqgf\" (UniqueName: \"kubernetes.io/projected/86ac2c23-01e6-4a22-a79d-77a3269fb5a0-kube-api-access-wgqgf\") pod \"migrator-59844c95c7-qpdx4\" (UID: \"86ac2c23-01e6-4a22-a79d-77a3269fb5a0\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-qpdx4" Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.288593 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l8jwm\" (UniqueName: \"kubernetes.io/projected/c56c4a24-e6c6-4aa0-8a62-94d3179dfe54-kube-api-access-l8jwm\") pod \"catalog-operator-68c6474976-7gdkq\" (UID: \"c56c4a24-e6c6-4aa0-8a62-94d3179dfe54\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7gdkq" Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.292115 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n5qlp\" (UniqueName: \"kubernetes.io/projected/f997bb38-4f6e-495f-acb8-e8e0d1730947-kube-api-access-n5qlp\") pod \"kube-storage-version-migrator-operator-b67b599dd-vp6qk\" (UID: \"f997bb38-4f6e-495f-acb8-e8e0d1730947\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-vp6qk" Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.296618 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c9cg8\" (UniqueName: \"kubernetes.io/projected/6742e18f-a187-4a77-a734-bdec89bd89e0-kube-api-access-c9cg8\") pod \"multus-admission-controller-857f4d67dd-j4s5w\" (UID: \"6742e18f-a187-4a77-a734-bdec89bd89e0\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-j4s5w" Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.342668 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dr9fr\" (UniqueName: \"kubernetes.io/projected/5f2944a8-8d91-4461-aa64-8908ca17f59e-kube-api-access-dr9fr\") pod \"service-ca-9c57cc56f-llgd7\" (UID: \"5f2944a8-8d91-4461-aa64-8908ca17f59e\") " pod="openshift-service-ca/service-ca-9c57cc56f-llgd7" Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.343542 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gzkzm\" (UniqueName: 
\"kubernetes.io/projected/0007a585-5b17-44bd-89b8-2d1d233a03d4-kube-api-access-gzkzm\") pod \"olm-operator-6b444d44fb-zkkpc\" (UID: \"0007a585-5b17-44bd-89b8-2d1d233a03d4\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zkkpc" Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.344636 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c510b795-d750-4f94-bc9a-88ba625bd556-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-cfw2n\" (UID: \"c510b795-d750-4f94-bc9a-88ba625bd556\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-cfw2n" Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.362694 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hn8zr\" (UniqueName: \"kubernetes.io/projected/7470431a-2a31-41ae-b021-510ae5e3c505-kube-api-access-hn8zr\") pod \"machine-config-controller-84d6567774-vwqwb\" (UID: \"7470431a-2a31-41ae-b021-510ae5e3c505\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-vwqwb" Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.363338 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-jvxv4" Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.363685 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9pkjt\" (UniqueName: \"kubernetes.io/projected/2957ef21-9f30-4c81-8c6a-4a7f9d7315db-kube-api-access-9pkjt\") pod \"package-server-manager-789f6589d5-72bt6\" (UID: \"2957ef21-9f30-4c81-8c6a-4a7f9d7315db\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-72bt6" Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.364615 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ntqvz" event={"ID":"1d6b8080-9c3f-4f6e-bcb4-3d1d0edaaa7c","Type":"ContainerStarted","Data":"d402858a5ef5514fec0754a973317b2de9ad2aaad9b3baa96045e00080574752"} Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.365388 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.366880 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v5h6z\" (UniqueName: \"kubernetes.io/projected/5c8e7010-8b57-47ed-9270-417650a2a7c5-kube-api-access-v5h6z\") pod \"machine-config-operator-74547568cd-hqjnl\" (UID: \"5c8e7010-8b57-47ed-9270-417650a2a7c5\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-hqjnl" Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.367172 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.398362 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.414324 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.425165 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-vwqwb" Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.434340 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-whh46" Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.440293 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-j4s5w" Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.452945 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.455849 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-hqjnl" Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.472044 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-vp6qk" Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.473103 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/ec369bed-0b60-48b0-9de0-fcfd6ca7776d-ca-trust-extracted\") pod \"image-registry-697d97f7c8-n98tz\" (UID: \"ec369bed-0b60-48b0-9de0-fcfd6ca7776d\") " pod="openshift-image-registry/image-registry-697d97f7c8-n98tz" Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.473450 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z6ljz\" (UniqueName: \"kubernetes.io/projected/ec369bed-0b60-48b0-9de0-fcfd6ca7776d-kube-api-access-z6ljz\") pod \"image-registry-697d97f7c8-n98tz\" (UID: \"ec369bed-0b60-48b0-9de0-fcfd6ca7776d\") " pod="openshift-image-registry/image-registry-697d97f7c8-n98tz" Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.473490 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/ec369bed-0b60-48b0-9de0-fcfd6ca7776d-registry-certificates\") pod \"image-registry-697d97f7c8-n98tz\" (UID: \"ec369bed-0b60-48b0-9de0-fcfd6ca7776d\") " pod="openshift-image-registry/image-registry-697d97f7c8-n98tz" Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.473516 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/ec369bed-0b60-48b0-9de0-fcfd6ca7776d-registry-tls\") pod \"image-registry-697d97f7c8-n98tz\" (UID: \"ec369bed-0b60-48b0-9de0-fcfd6ca7776d\") " pod="openshift-image-registry/image-registry-697d97f7c8-n98tz" Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.473568 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7f30da15-7c75-4c87-9dc4-78653d6f84cd-apiservice-cert\") pod \"packageserver-d55dfcdfc-rdgn6\" (UID: \"7f30da15-7c75-4c87-9dc4-78653d6f84cd\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rdgn6" Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.473592 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" 
(UniqueName: \"kubernetes.io/projected/ec369bed-0b60-48b0-9de0-fcfd6ca7776d-bound-sa-token\") pod \"image-registry-697d97f7c8-n98tz\" (UID: \"ec369bed-0b60-48b0-9de0-fcfd6ca7776d\") " pod="openshift-image-registry/image-registry-697d97f7c8-n98tz" Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.473609 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7f30da15-7c75-4c87-9dc4-78653d6f84cd-webhook-cert\") pod \"packageserver-d55dfcdfc-rdgn6\" (UID: \"7f30da15-7c75-4c87-9dc4-78653d6f84cd\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rdgn6" Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.473710 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/7f30da15-7c75-4c87-9dc4-78653d6f84cd-tmpfs\") pod \"packageserver-d55dfcdfc-rdgn6\" (UID: \"7f30da15-7c75-4c87-9dc4-78653d6f84cd\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rdgn6" Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.473809 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cnzz9\" (UniqueName: \"kubernetes.io/projected/7f30da15-7c75-4c87-9dc4-78653d6f84cd-kube-api-access-cnzz9\") pod \"packageserver-d55dfcdfc-rdgn6\" (UID: \"7f30da15-7c75-4c87-9dc4-78653d6f84cd\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rdgn6" Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.473835 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/ec369bed-0b60-48b0-9de0-fcfd6ca7776d-installation-pull-secrets\") pod \"image-registry-697d97f7c8-n98tz\" (UID: \"ec369bed-0b60-48b0-9de0-fcfd6ca7776d\") " pod="openshift-image-registry/image-registry-697d97f7c8-n98tz" Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.473894 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ec369bed-0b60-48b0-9de0-fcfd6ca7776d-trusted-ca\") pod \"image-registry-697d97f7c8-n98tz\" (UID: \"ec369bed-0b60-48b0-9de0-fcfd6ca7776d\") " pod="openshift-image-registry/image-registry-697d97f7c8-n98tz" Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.473922 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n98tz\" (UID: \"ec369bed-0b60-48b0-9de0-fcfd6ca7776d\") " pod="openshift-image-registry/image-registry-697d97f7c8-n98tz" Jan 21 10:58:32 crc kubenswrapper[4881]: E0121 10:58:32.477216 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:58:32.977198664 +0000 UTC m=+100.237155123 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n98tz" (UID: "ec369bed-0b60-48b0-9de0-fcfd6ca7776d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.478423 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.539931 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7gdkq" Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.543278 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-qpdx4" Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.543941 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-72bt6" Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.544453 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-cfw2n" Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.544743 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zkkpc" Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.544927 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-llgd7" Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.545027 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-xmq82" Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.580257 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.580561 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/7f30da15-7c75-4c87-9dc4-78653d6f84cd-tmpfs\") pod \"packageserver-d55dfcdfc-rdgn6\" (UID: \"7f30da15-7c75-4c87-9dc4-78653d6f84cd\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rdgn6" Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.580594 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/303bdbe6-3bb4-4ace-86b1-f489c795580f-config-volume\") pod \"collect-profiles-29483205-527gk\" (UID: \"303bdbe6-3bb4-4ace-86b1-f489c795580f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483205-527gk" Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.580660 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d53ea19f-eb9b-43d6-bab3-3fc7d6fa196f-config\") pod \"service-ca-operator-777779d784-f877x\" (UID: \"d53ea19f-eb9b-43d6-bab3-3fc7d6fa196f\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-f877x" Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.580679 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b7qxx\" (UniqueName: \"kubernetes.io/projected/409e44ed-8f6d-4321-9620-d8da23cf0fec-kube-api-access-b7qxx\") pod \"csi-hostpathplugin-42f9f\" (UID: \"409e44ed-8f6d-4321-9620-d8da23cf0fec\") " pod="hostpath-provisioner/csi-hostpathplugin-42f9f" Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.580724 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/409e44ed-8f6d-4321-9620-d8da23cf0fec-csi-data-dir\") pod \"csi-hostpathplugin-42f9f\" (UID: \"409e44ed-8f6d-4321-9620-d8da23cf0fec\") " pod="hostpath-provisioner/csi-hostpathplugin-42f9f" Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.580747 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cnzz9\" (UniqueName: \"kubernetes.io/projected/7f30da15-7c75-4c87-9dc4-78653d6f84cd-kube-api-access-cnzz9\") pod \"packageserver-d55dfcdfc-rdgn6\" (UID: \"7f30da15-7c75-4c87-9dc4-78653d6f84cd\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rdgn6" Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.580762 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pqf99\" (UniqueName: \"kubernetes.io/projected/d53ea19f-eb9b-43d6-bab3-3fc7d6fa196f-kube-api-access-pqf99\") pod \"service-ca-operator-777779d784-f877x\" (UID: \"d53ea19f-eb9b-43d6-bab3-3fc7d6fa196f\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-f877x" Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.580860 4881 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/ec369bed-0b60-48b0-9de0-fcfd6ca7776d-installation-pull-secrets\") pod \"image-registry-697d97f7c8-n98tz\" (UID: \"ec369bed-0b60-48b0-9de0-fcfd6ca7776d\") " pod="openshift-image-registry/image-registry-697d97f7c8-n98tz" Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.580923 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ec369bed-0b60-48b0-9de0-fcfd6ca7776d-trusted-ca\") pod \"image-registry-697d97f7c8-n98tz\" (UID: \"ec369bed-0b60-48b0-9de0-fcfd6ca7776d\") " pod="openshift-image-registry/image-registry-697d97f7c8-n98tz" Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.580984 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d53ea19f-eb9b-43d6-bab3-3fc7d6fa196f-serving-cert\") pod \"service-ca-operator-777779d784-f877x\" (UID: \"d53ea19f-eb9b-43d6-bab3-3fc7d6fa196f\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-f877x" Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.581024 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/86acb693-c0d9-41f4-b33c-4716963ce268-cert\") pod \"ingress-canary-kl9j4\" (UID: \"86acb693-c0d9-41f4-b33c-4716963ce268\") " pod="openshift-ingress-canary/ingress-canary-kl9j4" Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.581038 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/dc0d7d08-d133-4880-a391-e8750932d507-node-bootstrap-token\") pod \"machine-config-server-468h5\" (UID: \"dc0d7d08-d133-4880-a391-e8750932d507\") " pod="openshift-machine-config-operator/machine-config-server-468h5" Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.581088 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pkcp2\" (UniqueName: \"kubernetes.io/projected/bc38f0b5-944c-40ae-aed0-50ca39ea2627-kube-api-access-pkcp2\") pod \"control-plane-machine-set-operator-78cbb6b69f-hfc8p\" (UID: \"bc38f0b5-944c-40ae-aed0-50ca39ea2627\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-hfc8p" Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.581129 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/303bdbe6-3bb4-4ace-86b1-f489c795580f-secret-volume\") pod \"collect-profiles-29483205-527gk\" (UID: \"303bdbe6-3bb4-4ace-86b1-f489c795580f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483205-527gk" Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.581180 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/ec369bed-0b60-48b0-9de0-fcfd6ca7776d-ca-trust-extracted\") pod \"image-registry-697d97f7c8-n98tz\" (UID: \"ec369bed-0b60-48b0-9de0-fcfd6ca7776d\") " pod="openshift-image-registry/image-registry-697d97f7c8-n98tz" Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.581248 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/b7e58845-f0a1-4320-b879-0765b6d57988-config-volume\") pod \"dns-default-znm6j\" (UID: \"b7e58845-f0a1-4320-b879-0765b6d57988\") " pod="openshift-dns/dns-default-znm6j" Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.581264 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l45nv\" (UniqueName: \"kubernetes.io/projected/303bdbe6-3bb4-4ace-86b1-f489c795580f-kube-api-access-l45nv\") pod \"collect-profiles-29483205-527gk\" (UID: \"303bdbe6-3bb4-4ace-86b1-f489c795580f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483205-527gk" Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.581357 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z6ljz\" (UniqueName: \"kubernetes.io/projected/ec369bed-0b60-48b0-9de0-fcfd6ca7776d-kube-api-access-z6ljz\") pod \"image-registry-697d97f7c8-n98tz\" (UID: \"ec369bed-0b60-48b0-9de0-fcfd6ca7776d\") " pod="openshift-image-registry/image-registry-697d97f7c8-n98tz" Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.581374 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/ec369bed-0b60-48b0-9de0-fcfd6ca7776d-registry-certificates\") pod \"image-registry-697d97f7c8-n98tz\" (UID: \"ec369bed-0b60-48b0-9de0-fcfd6ca7776d\") " pod="openshift-image-registry/image-registry-697d97f7c8-n98tz" Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.581389 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/409e44ed-8f6d-4321-9620-d8da23cf0fec-plugins-dir\") pod \"csi-hostpathplugin-42f9f\" (UID: \"409e44ed-8f6d-4321-9620-d8da23cf0fec\") " pod="hostpath-provisioner/csi-hostpathplugin-42f9f" Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.581457 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/409e44ed-8f6d-4321-9620-d8da23cf0fec-socket-dir\") pod \"csi-hostpathplugin-42f9f\" (UID: \"409e44ed-8f6d-4321-9620-d8da23cf0fec\") " pod="hostpath-provisioner/csi-hostpathplugin-42f9f" Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.581482 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/ec369bed-0b60-48b0-9de0-fcfd6ca7776d-registry-tls\") pod \"image-registry-697d97f7c8-n98tz\" (UID: \"ec369bed-0b60-48b0-9de0-fcfd6ca7776d\") " pod="openshift-image-registry/image-registry-697d97f7c8-n98tz" Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.581513 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vc5l6\" (UniqueName: \"kubernetes.io/projected/b7e58845-f0a1-4320-b879-0765b6d57988-kube-api-access-vc5l6\") pod \"dns-default-znm6j\" (UID: \"b7e58845-f0a1-4320-b879-0765b6d57988\") " pod="openshift-dns/dns-default-znm6j" Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.581538 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sqvwd\" (UniqueName: \"kubernetes.io/projected/dc0d7d08-d133-4880-a391-e8750932d507-kube-api-access-sqvwd\") pod \"machine-config-server-468h5\" (UID: \"dc0d7d08-d133-4880-a391-e8750932d507\") " 
pod="openshift-machine-config-operator/machine-config-server-468h5" Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.581612 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7f30da15-7c75-4c87-9dc4-78653d6f84cd-apiservice-cert\") pod \"packageserver-d55dfcdfc-rdgn6\" (UID: \"7f30da15-7c75-4c87-9dc4-78653d6f84cd\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rdgn6" Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.581637 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ec369bed-0b60-48b0-9de0-fcfd6ca7776d-bound-sa-token\") pod \"image-registry-697d97f7c8-n98tz\" (UID: \"ec369bed-0b60-48b0-9de0-fcfd6ca7776d\") " pod="openshift-image-registry/image-registry-697d97f7c8-n98tz" Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.581660 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7f30da15-7c75-4c87-9dc4-78653d6f84cd-webhook-cert\") pod \"packageserver-d55dfcdfc-rdgn6\" (UID: \"7f30da15-7c75-4c87-9dc4-78653d6f84cd\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rdgn6" Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.581692 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6gc7m\" (UniqueName: \"kubernetes.io/projected/86acb693-c0d9-41f4-b33c-4716963ce268-kube-api-access-6gc7m\") pod \"ingress-canary-kl9j4\" (UID: \"86acb693-c0d9-41f4-b33c-4716963ce268\") " pod="openshift-ingress-canary/ingress-canary-kl9j4" Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.581800 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/409e44ed-8f6d-4321-9620-d8da23cf0fec-registration-dir\") pod \"csi-hostpathplugin-42f9f\" (UID: \"409e44ed-8f6d-4321-9620-d8da23cf0fec\") " pod="hostpath-provisioner/csi-hostpathplugin-42f9f" Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.581828 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/bc38f0b5-944c-40ae-aed0-50ca39ea2627-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-hfc8p\" (UID: \"bc38f0b5-944c-40ae-aed0-50ca39ea2627\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-hfc8p" Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.581902 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/dc0d7d08-d133-4880-a391-e8750932d507-certs\") pod \"machine-config-server-468h5\" (UID: \"dc0d7d08-d133-4880-a391-e8750932d507\") " pod="openshift-machine-config-operator/machine-config-server-468h5" Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.581920 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/409e44ed-8f6d-4321-9620-d8da23cf0fec-mountpoint-dir\") pod \"csi-hostpathplugin-42f9f\" (UID: \"409e44ed-8f6d-4321-9620-d8da23cf0fec\") " pod="hostpath-provisioner/csi-hostpathplugin-42f9f" Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.581935 4881 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/b7e58845-f0a1-4320-b879-0765b6d57988-metrics-tls\") pod \"dns-default-znm6j\" (UID: \"b7e58845-f0a1-4320-b879-0765b6d57988\") " pod="openshift-dns/dns-default-znm6j" Jan 21 10:58:32 crc kubenswrapper[4881]: E0121 10:58:32.582489 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:58:33.08246584 +0000 UTC m=+100.342422299 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.584521 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/ec369bed-0b60-48b0-9de0-fcfd6ca7776d-ca-trust-extracted\") pod \"image-registry-697d97f7c8-n98tz\" (UID: \"ec369bed-0b60-48b0-9de0-fcfd6ca7776d\") " pod="openshift-image-registry/image-registry-697d97f7c8-n98tz" Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.584605 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/7f30da15-7c75-4c87-9dc4-78653d6f84cd-tmpfs\") pod \"packageserver-d55dfcdfc-rdgn6\" (UID: \"7f30da15-7c75-4c87-9dc4-78653d6f84cd\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rdgn6" Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.587755 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/ec369bed-0b60-48b0-9de0-fcfd6ca7776d-registry-certificates\") pod \"image-registry-697d97f7c8-n98tz\" (UID: \"ec369bed-0b60-48b0-9de0-fcfd6ca7776d\") " pod="openshift-image-registry/image-registry-697d97f7c8-n98tz" Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.594628 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ec369bed-0b60-48b0-9de0-fcfd6ca7776d-trusted-ca\") pod \"image-registry-697d97f7c8-n98tz\" (UID: \"ec369bed-0b60-48b0-9de0-fcfd6ca7776d\") " pod="openshift-image-registry/image-registry-697d97f7c8-n98tz" Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.599983 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/ec369bed-0b60-48b0-9de0-fcfd6ca7776d-registry-tls\") pod \"image-registry-697d97f7c8-n98tz\" (UID: \"ec369bed-0b60-48b0-9de0-fcfd6ca7776d\") " pod="openshift-image-registry/image-registry-697d97f7c8-n98tz" Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.611682 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7f30da15-7c75-4c87-9dc4-78653d6f84cd-apiservice-cert\") pod \"packageserver-d55dfcdfc-rdgn6\" (UID: \"7f30da15-7c75-4c87-9dc4-78653d6f84cd\") " 
pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rdgn6" Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.613654 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7f30da15-7c75-4c87-9dc4-78653d6f84cd-webhook-cert\") pod \"packageserver-d55dfcdfc-rdgn6\" (UID: \"7f30da15-7c75-4c87-9dc4-78653d6f84cd\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rdgn6" Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.627260 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z6ljz\" (UniqueName: \"kubernetes.io/projected/ec369bed-0b60-48b0-9de0-fcfd6ca7776d-kube-api-access-z6ljz\") pod \"image-registry-697d97f7c8-n98tz\" (UID: \"ec369bed-0b60-48b0-9de0-fcfd6ca7776d\") " pod="openshift-image-registry/image-registry-697d97f7c8-n98tz" Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.638570 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/ec369bed-0b60-48b0-9de0-fcfd6ca7776d-installation-pull-secrets\") pod \"image-registry-697d97f7c8-n98tz\" (UID: \"ec369bed-0b60-48b0-9de0-fcfd6ca7776d\") " pod="openshift-image-registry/image-registry-697d97f7c8-n98tz" Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.661183 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ec369bed-0b60-48b0-9de0-fcfd6ca7776d-bound-sa-token\") pod \"image-registry-697d97f7c8-n98tz\" (UID: \"ec369bed-0b60-48b0-9de0-fcfd6ca7776d\") " pod="openshift-image-registry/image-registry-697d97f7c8-n98tz" Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.675421 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cnzz9\" (UniqueName: \"kubernetes.io/projected/7f30da15-7c75-4c87-9dc4-78653d6f84cd-kube-api-access-cnzz9\") pod \"packageserver-d55dfcdfc-rdgn6\" (UID: \"7f30da15-7c75-4c87-9dc4-78653d6f84cd\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rdgn6" Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.683686 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6gc7m\" (UniqueName: \"kubernetes.io/projected/86acb693-c0d9-41f4-b33c-4716963ce268-kube-api-access-6gc7m\") pod \"ingress-canary-kl9j4\" (UID: \"86acb693-c0d9-41f4-b33c-4716963ce268\") " pod="openshift-ingress-canary/ingress-canary-kl9j4" Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.683757 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/409e44ed-8f6d-4321-9620-d8da23cf0fec-registration-dir\") pod \"csi-hostpathplugin-42f9f\" (UID: \"409e44ed-8f6d-4321-9620-d8da23cf0fec\") " pod="hostpath-provisioner/csi-hostpathplugin-42f9f" Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.683794 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/bc38f0b5-944c-40ae-aed0-50ca39ea2627-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-hfc8p\" (UID: \"bc38f0b5-944c-40ae-aed0-50ca39ea2627\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-hfc8p" Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.683864 4881 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/dc0d7d08-d133-4880-a391-e8750932d507-certs\") pod \"machine-config-server-468h5\" (UID: \"dc0d7d08-d133-4880-a391-e8750932d507\") " pod="openshift-machine-config-operator/machine-config-server-468h5" Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.683881 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/409e44ed-8f6d-4321-9620-d8da23cf0fec-mountpoint-dir\") pod \"csi-hostpathplugin-42f9f\" (UID: \"409e44ed-8f6d-4321-9620-d8da23cf0fec\") " pod="hostpath-provisioner/csi-hostpathplugin-42f9f" Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.683899 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/b7e58845-f0a1-4320-b879-0765b6d57988-metrics-tls\") pod \"dns-default-znm6j\" (UID: \"b7e58845-f0a1-4320-b879-0765b6d57988\") " pod="openshift-dns/dns-default-znm6j" Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.683933 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/303bdbe6-3bb4-4ace-86b1-f489c795580f-config-volume\") pod \"collect-profiles-29483205-527gk\" (UID: \"303bdbe6-3bb4-4ace-86b1-f489c795580f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483205-527gk" Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.683952 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d53ea19f-eb9b-43d6-bab3-3fc7d6fa196f-config\") pod \"service-ca-operator-777779d784-f877x\" (UID: \"d53ea19f-eb9b-43d6-bab3-3fc7d6fa196f\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-f877x" Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.683972 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b7qxx\" (UniqueName: \"kubernetes.io/projected/409e44ed-8f6d-4321-9620-d8da23cf0fec-kube-api-access-b7qxx\") pod \"csi-hostpathplugin-42f9f\" (UID: \"409e44ed-8f6d-4321-9620-d8da23cf0fec\") " pod="hostpath-provisioner/csi-hostpathplugin-42f9f" Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.683990 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/409e44ed-8f6d-4321-9620-d8da23cf0fec-csi-data-dir\") pod \"csi-hostpathplugin-42f9f\" (UID: \"409e44ed-8f6d-4321-9620-d8da23cf0fec\") " pod="hostpath-provisioner/csi-hostpathplugin-42f9f" Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.684008 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pqf99\" (UniqueName: \"kubernetes.io/projected/d53ea19f-eb9b-43d6-bab3-3fc7d6fa196f-kube-api-access-pqf99\") pod \"service-ca-operator-777779d784-f877x\" (UID: \"d53ea19f-eb9b-43d6-bab3-3fc7d6fa196f\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-f877x" Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.684044 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n98tz\" (UID: \"ec369bed-0b60-48b0-9de0-fcfd6ca7776d\") " pod="openshift-image-registry/image-registry-697d97f7c8-n98tz" Jan 21 10:58:32 crc 
kubenswrapper[4881]: I0121 10:58:32.684075 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d53ea19f-eb9b-43d6-bab3-3fc7d6fa196f-serving-cert\") pod \"service-ca-operator-777779d784-f877x\" (UID: \"d53ea19f-eb9b-43d6-bab3-3fc7d6fa196f\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-f877x" Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.684090 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/86acb693-c0d9-41f4-b33c-4716963ce268-cert\") pod \"ingress-canary-kl9j4\" (UID: \"86acb693-c0d9-41f4-b33c-4716963ce268\") " pod="openshift-ingress-canary/ingress-canary-kl9j4" Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.684106 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/dc0d7d08-d133-4880-a391-e8750932d507-node-bootstrap-token\") pod \"machine-config-server-468h5\" (UID: \"dc0d7d08-d133-4880-a391-e8750932d507\") " pod="openshift-machine-config-operator/machine-config-server-468h5" Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.684127 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pkcp2\" (UniqueName: \"kubernetes.io/projected/bc38f0b5-944c-40ae-aed0-50ca39ea2627-kube-api-access-pkcp2\") pod \"control-plane-machine-set-operator-78cbb6b69f-hfc8p\" (UID: \"bc38f0b5-944c-40ae-aed0-50ca39ea2627\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-hfc8p" Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.684145 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/303bdbe6-3bb4-4ace-86b1-f489c795580f-secret-volume\") pod \"collect-profiles-29483205-527gk\" (UID: \"303bdbe6-3bb4-4ace-86b1-f489c795580f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483205-527gk" Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.684168 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b7e58845-f0a1-4320-b879-0765b6d57988-config-volume\") pod \"dns-default-znm6j\" (UID: \"b7e58845-f0a1-4320-b879-0765b6d57988\") " pod="openshift-dns/dns-default-znm6j" Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.684164 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/409e44ed-8f6d-4321-9620-d8da23cf0fec-registration-dir\") pod \"csi-hostpathplugin-42f9f\" (UID: \"409e44ed-8f6d-4321-9620-d8da23cf0fec\") " pod="hostpath-provisioner/csi-hostpathplugin-42f9f" Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.684183 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l45nv\" (UniqueName: \"kubernetes.io/projected/303bdbe6-3bb4-4ace-86b1-f489c795580f-kube-api-access-l45nv\") pod \"collect-profiles-29483205-527gk\" (UID: \"303bdbe6-3bb4-4ace-86b1-f489c795580f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483205-527gk" Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.684295 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/409e44ed-8f6d-4321-9620-d8da23cf0fec-plugins-dir\") pod \"csi-hostpathplugin-42f9f\" (UID: 
\"409e44ed-8f6d-4321-9620-d8da23cf0fec\") " pod="hostpath-provisioner/csi-hostpathplugin-42f9f" Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.684354 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/409e44ed-8f6d-4321-9620-d8da23cf0fec-socket-dir\") pod \"csi-hostpathplugin-42f9f\" (UID: \"409e44ed-8f6d-4321-9620-d8da23cf0fec\") " pod="hostpath-provisioner/csi-hostpathplugin-42f9f" Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.684400 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vc5l6\" (UniqueName: \"kubernetes.io/projected/b7e58845-f0a1-4320-b879-0765b6d57988-kube-api-access-vc5l6\") pod \"dns-default-znm6j\" (UID: \"b7e58845-f0a1-4320-b879-0765b6d57988\") " pod="openshift-dns/dns-default-znm6j" Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.684432 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sqvwd\" (UniqueName: \"kubernetes.io/projected/dc0d7d08-d133-4880-a391-e8750932d507-kube-api-access-sqvwd\") pod \"machine-config-server-468h5\" (UID: \"dc0d7d08-d133-4880-a391-e8750932d507\") " pod="openshift-machine-config-operator/machine-config-server-468h5" Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.684709 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/409e44ed-8f6d-4321-9620-d8da23cf0fec-plugins-dir\") pod \"csi-hostpathplugin-42f9f\" (UID: \"409e44ed-8f6d-4321-9620-d8da23cf0fec\") " pod="hostpath-provisioner/csi-hostpathplugin-42f9f" Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.684825 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/409e44ed-8f6d-4321-9620-d8da23cf0fec-socket-dir\") pod \"csi-hostpathplugin-42f9f\" (UID: \"409e44ed-8f6d-4321-9620-d8da23cf0fec\") " pod="hostpath-provisioner/csi-hostpathplugin-42f9f" Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.685094 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/409e44ed-8f6d-4321-9620-d8da23cf0fec-mountpoint-dir\") pod \"csi-hostpathplugin-42f9f\" (UID: \"409e44ed-8f6d-4321-9620-d8da23cf0fec\") " pod="hostpath-provisioner/csi-hostpathplugin-42f9f" Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.685129 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/303bdbe6-3bb4-4ace-86b1-f489c795580f-config-volume\") pod \"collect-profiles-29483205-527gk\" (UID: \"303bdbe6-3bb4-4ace-86b1-f489c795580f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483205-527gk" Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.685326 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/409e44ed-8f6d-4321-9620-d8da23cf0fec-csi-data-dir\") pod \"csi-hostpathplugin-42f9f\" (UID: \"409e44ed-8f6d-4321-9620-d8da23cf0fec\") " pod="hostpath-provisioner/csi-hostpathplugin-42f9f" Jan 21 10:58:32 crc kubenswrapper[4881]: E0121 10:58:32.686955 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-21 10:58:33.186939266 +0000 UTC m=+100.446895915 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n98tz" (UID: "ec369bed-0b60-48b0-9de0-fcfd6ca7776d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.687165 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d53ea19f-eb9b-43d6-bab3-3fc7d6fa196f-config\") pod \"service-ca-operator-777779d784-f877x\" (UID: \"d53ea19f-eb9b-43d6-bab3-3fc7d6fa196f\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-f877x" Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.688136 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b7e58845-f0a1-4320-b879-0765b6d57988-config-volume\") pod \"dns-default-znm6j\" (UID: \"b7e58845-f0a1-4320-b879-0765b6d57988\") " pod="openshift-dns/dns-default-znm6j" Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.707216 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d53ea19f-eb9b-43d6-bab3-3fc7d6fa196f-serving-cert\") pod \"service-ca-operator-777779d784-f877x\" (UID: \"d53ea19f-eb9b-43d6-bab3-3fc7d6fa196f\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-f877x" Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.707668 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/dc0d7d08-d133-4880-a391-e8750932d507-node-bootstrap-token\") pod \"machine-config-server-468h5\" (UID: \"dc0d7d08-d133-4880-a391-e8750932d507\") " pod="openshift-machine-config-operator/machine-config-server-468h5" Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.709859 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/dc0d7d08-d133-4880-a391-e8750932d507-certs\") pod \"machine-config-server-468h5\" (UID: \"dc0d7d08-d133-4880-a391-e8750932d507\") " pod="openshift-machine-config-operator/machine-config-server-468h5" Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.709961 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/b7e58845-f0a1-4320-b879-0765b6d57988-metrics-tls\") pod \"dns-default-znm6j\" (UID: \"b7e58845-f0a1-4320-b879-0765b6d57988\") " pod="openshift-dns/dns-default-znm6j" Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.711809 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/bc38f0b5-944c-40ae-aed0-50ca39ea2627-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-hfc8p\" (UID: \"bc38f0b5-944c-40ae-aed0-50ca39ea2627\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-hfc8p" Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.713674 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: 
\"kubernetes.io/secret/86acb693-c0d9-41f4-b33c-4716963ce268-cert\") pod \"ingress-canary-kl9j4\" (UID: \"86acb693-c0d9-41f4-b33c-4716963ce268\") " pod="openshift-ingress-canary/ingress-canary-kl9j4" Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.714673 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/303bdbe6-3bb4-4ace-86b1-f489c795580f-secret-volume\") pod \"collect-profiles-29483205-527gk\" (UID: \"303bdbe6-3bb4-4ace-86b1-f489c795580f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483205-527gk" Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.734346 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6gc7m\" (UniqueName: \"kubernetes.io/projected/86acb693-c0d9-41f4-b33c-4716963ce268-kube-api-access-6gc7m\") pod \"ingress-canary-kl9j4\" (UID: \"86acb693-c0d9-41f4-b33c-4716963ce268\") " pod="openshift-ingress-canary/ingress-canary-kl9j4" Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.779941 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rdgn6" Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.785078 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 10:58:32 crc kubenswrapper[4881]: E0121 10:58:32.785293 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:58:33.28526018 +0000 UTC m=+100.545216659 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.785539 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n98tz\" (UID: \"ec369bed-0b60-48b0-9de0-fcfd6ca7776d\") " pod="openshift-image-registry/image-registry-697d97f7c8-n98tz" Jan 21 10:58:32 crc kubenswrapper[4881]: E0121 10:58:32.786226 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:58:33.286209674 +0000 UTC m=+100.546166143 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n98tz" (UID: "ec369bed-0b60-48b0-9de0-fcfd6ca7776d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.806398 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vc5l6\" (UniqueName: \"kubernetes.io/projected/b7e58845-f0a1-4320-b879-0765b6d57988-kube-api-access-vc5l6\") pod \"dns-default-znm6j\" (UID: \"b7e58845-f0a1-4320-b879-0765b6d57988\") " pod="openshift-dns/dns-default-znm6j" Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.817130 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sqvwd\" (UniqueName: \"kubernetes.io/projected/dc0d7d08-d133-4880-a391-e8750932d507-kube-api-access-sqvwd\") pod \"machine-config-server-468h5\" (UID: \"dc0d7d08-d133-4880-a391-e8750932d507\") " pod="openshift-machine-config-operator/machine-config-server-468h5" Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.820579 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l45nv\" (UniqueName: \"kubernetes.io/projected/303bdbe6-3bb4-4ace-86b1-f489c795580f-kube-api-access-l45nv\") pod \"collect-profiles-29483205-527gk\" (UID: \"303bdbe6-3bb4-4ace-86b1-f489c795580f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483205-527gk" Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.828956 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pkcp2\" (UniqueName: \"kubernetes.io/projected/bc38f0b5-944c-40ae-aed0-50ca39ea2627-kube-api-access-pkcp2\") pod \"control-plane-machine-set-operator-78cbb6b69f-hfc8p\" (UID: \"bc38f0b5-944c-40ae-aed0-50ca39ea2627\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-hfc8p" Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.833000 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b7qxx\" (UniqueName: \"kubernetes.io/projected/409e44ed-8f6d-4321-9620-d8da23cf0fec-kube-api-access-b7qxx\") pod \"csi-hostpathplugin-42f9f\" (UID: \"409e44ed-8f6d-4321-9620-d8da23cf0fec\") " pod="hostpath-provisioner/csi-hostpathplugin-42f9f" Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.845654 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483205-527gk" Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.849382 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pqf99\" (UniqueName: \"kubernetes.io/projected/d53ea19f-eb9b-43d6-bab3-3fc7d6fa196f-kube-api-access-pqf99\") pod \"service-ca-operator-777779d784-f877x\" (UID: \"d53ea19f-eb9b-43d6-bab3-3fc7d6fa196f\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-f877x" Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.863110 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-kl9j4" Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.863642 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-hfc8p" Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.875638 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-468h5" Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.887057 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 10:58:32 crc kubenswrapper[4881]: E0121 10:58:32.887393 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:58:33.387377639 +0000 UTC m=+100.647334108 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.908284 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-42f9f" Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.917556 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-znm6j" Jan 21 10:58:32 crc kubenswrapper[4881]: I0121 10:58:32.988116 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n98tz\" (UID: \"ec369bed-0b60-48b0-9de0-fcfd6ca7776d\") " pod="openshift-image-registry/image-registry-697d97f7c8-n98tz" Jan 21 10:58:32 crc kubenswrapper[4881]: E0121 10:58:32.988669 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:58:33.488654716 +0000 UTC m=+100.748611185 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n98tz" (UID: "ec369bed-0b60-48b0-9de0-fcfd6ca7776d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:58:33 crc kubenswrapper[4881]: I0121 10:58:33.090422 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 10:58:33 crc kubenswrapper[4881]: E0121 10:58:33.092057 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:58:33.592025365 +0000 UTC m=+100.851981834 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:58:33 crc kubenswrapper[4881]: I0121 10:58:33.093830 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n98tz\" (UID: \"ec369bed-0b60-48b0-9de0-fcfd6ca7776d\") " pod="openshift-image-registry/image-registry-697d97f7c8-n98tz" Jan 21 10:58:33 crc kubenswrapper[4881]: E0121 10:58:33.097141 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:58:33.59711702 +0000 UTC m=+100.857073499 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n98tz" (UID: "ec369bed-0b60-48b0-9de0-fcfd6ca7776d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:58:33 crc kubenswrapper[4881]: I0121 10:58:33.136584 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-f877x" Jan 21 10:58:33 crc kubenswrapper[4881]: I0121 10:58:33.196453 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 10:58:33 crc kubenswrapper[4881]: E0121 10:58:33.196918 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:58:33.696901881 +0000 UTC m=+100.956858340 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:58:33 crc kubenswrapper[4881]: I0121 10:58:33.297736 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n98tz\" (UID: \"ec369bed-0b60-48b0-9de0-fcfd6ca7776d\") " pod="openshift-image-registry/image-registry-697d97f7c8-n98tz" Jan 21 10:58:33 crc kubenswrapper[4881]: E0121 10:58:33.298098 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:58:33.798084956 +0000 UTC m=+101.058041425 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n98tz" (UID: "ec369bed-0b60-48b0-9de0-fcfd6ca7776d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:58:33 crc kubenswrapper[4881]: I0121 10:58:33.369228 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ntqvz" event={"ID":"1d6b8080-9c3f-4f6e-bcb4-3d1d0edaaa7c","Type":"ContainerStarted","Data":"b612ece999ac387cc8c5c1776465ef7f8d185dabd4a70b1869b7f4b1da0a539e"} Jan 21 10:58:33 crc kubenswrapper[4881]: I0121 10:58:33.370588 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-468h5" event={"ID":"dc0d7d08-d133-4880-a391-e8750932d507","Type":"ContainerStarted","Data":"969abce82a4756be549f93f591a3a1570c4abc95cb16b5c762a08c96568626b5"} Jan 21 10:58:33 crc kubenswrapper[4881]: I0121 10:58:33.370749 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-468h5" event={"ID":"dc0d7d08-d133-4880-a391-e8750932d507","Type":"ContainerStarted","Data":"c21ffc83923832013770136f03a6bcebbad73c3ba8141faa9374f398252e99d1"} Jan 21 10:58:33 crc kubenswrapper[4881]: I0121 10:58:33.372043 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-v7wnh" event={"ID":"52d94566-7844-4414-bf48-9122c2207dd6","Type":"ContainerStarted","Data":"7b763d882cbb654ffa22e465972973b093a08e49c3b49a08597217f1665401de"} Jan 21 10:58:33 crc kubenswrapper[4881]: I0121 10:58:33.372090 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-v7wnh" event={"ID":"52d94566-7844-4414-bf48-9122c2207dd6","Type":"ContainerStarted","Data":"c675755f41e28c775bdb8abb860df6e5c252ec3742596b9c9d30f78cad4f1d8e"} Jan 21 10:58:33 crc kubenswrapper[4881]: I0121 10:58:33.398314 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 10:58:33 crc kubenswrapper[4881]: E0121 10:58:33.399392 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:58:33.899362424 +0000 UTC m=+101.159318923 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:58:33 crc kubenswrapper[4881]: I0121 10:58:33.571432 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n98tz\" (UID: \"ec369bed-0b60-48b0-9de0-fcfd6ca7776d\") " pod="openshift-image-registry/image-registry-697d97f7c8-n98tz" Jan 21 10:58:33 crc kubenswrapper[4881]: E0121 10:58:33.571935 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:58:34.071918322 +0000 UTC m=+101.331874791 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n98tz" (UID: "ec369bed-0b60-48b0-9de0-fcfd6ca7776d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:58:33 crc kubenswrapper[4881]: I0121 10:58:33.681429 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 10:58:33 crc kubenswrapper[4881]: E0121 10:58:33.681912 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:58:34.181888872 +0000 UTC m=+101.441845351 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:58:33 crc kubenswrapper[4881]: I0121 10:58:33.786822 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n98tz\" (UID: \"ec369bed-0b60-48b0-9de0-fcfd6ca7776d\") " pod="openshift-image-registry/image-registry-697d97f7c8-n98tz" Jan 21 10:58:33 crc kubenswrapper[4881]: E0121 10:58:33.787313 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:58:34.287291312 +0000 UTC m=+101.547247801 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n98tz" (UID: "ec369bed-0b60-48b0-9de0-fcfd6ca7776d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:58:33 crc kubenswrapper[4881]: I0121 10:58:33.902881 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 10:58:33 crc kubenswrapper[4881]: E0121 10:58:33.903011 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:58:34.402971702 +0000 UTC m=+101.662928171 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:58:33 crc kubenswrapper[4881]: I0121 10:58:33.904924 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n98tz\" (UID: \"ec369bed-0b60-48b0-9de0-fcfd6ca7776d\") " pod="openshift-image-registry/image-registry-697d97f7c8-n98tz" Jan 21 10:58:33 crc kubenswrapper[4881]: E0121 10:58:33.905527 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:58:34.405510025 +0000 UTC m=+101.665466494 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n98tz" (UID: "ec369bed-0b60-48b0-9de0-fcfd6ca7776d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:58:34 crc kubenswrapper[4881]: I0121 10:58:34.006023 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 10:58:34 crc kubenswrapper[4881]: E0121 10:58:34.009569 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:58:34.50954433 +0000 UTC m=+101.769500799 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:58:34 crc kubenswrapper[4881]: I0121 10:58:34.118068 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n98tz\" (UID: \"ec369bed-0b60-48b0-9de0-fcfd6ca7776d\") " pod="openshift-image-registry/image-registry-697d97f7c8-n98tz" Jan 21 10:58:34 crc kubenswrapper[4881]: E0121 10:58:34.118721 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:58:34.618701831 +0000 UTC m=+101.878658290 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n98tz" (UID: "ec369bed-0b60-48b0-9de0-fcfd6ca7776d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:58:34 crc kubenswrapper[4881]: I0121 10:58:34.119271 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-v7wnh" Jan 21 10:58:34 crc kubenswrapper[4881]: I0121 10:58:34.219778 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 10:58:34 crc kubenswrapper[4881]: E0121 10:58:34.220180 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:58:34.720163303 +0000 UTC m=+101.980119772 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:58:34 crc kubenswrapper[4881]: I0121 10:58:34.335258 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-468h5" podStartSLOduration=5.335238189 podStartE2EDuration="5.335238189s" podCreationTimestamp="2026-01-21 10:58:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:58:34.334994604 +0000 UTC m=+101.594951073" watchObservedRunningTime="2026-01-21 10:58:34.335238189 +0000 UTC m=+101.595194658" Jan 21 10:58:34 crc kubenswrapper[4881]: I0121 10:58:34.337401 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-v7wnh" podStartSLOduration=79.337392393 podStartE2EDuration="1m19.337392393s" podCreationTimestamp="2026-01-21 10:57:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:58:34.304500244 +0000 UTC m=+101.564456713" watchObservedRunningTime="2026-01-21 10:58:34.337392393 +0000 UTC m=+101.597348862" Jan 21 10:58:34 crc kubenswrapper[4881]: I0121 10:58:34.425840 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n98tz\" (UID: \"ec369bed-0b60-48b0-9de0-fcfd6ca7776d\") " pod="openshift-image-registry/image-registry-697d97f7c8-n98tz" Jan 21 10:58:34 crc kubenswrapper[4881]: E0121 10:58:34.426239 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:58:34.926227474 +0000 UTC m=+102.186183943 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n98tz" (UID: "ec369bed-0b60-48b0-9de0-fcfd6ca7776d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:58:34 crc kubenswrapper[4881]: I0121 10:58:34.527641 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 10:58:34 crc kubenswrapper[4881]: E0121 10:58:34.528057 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:58:35.028026505 +0000 UTC m=+102.287982974 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:58:34 crc kubenswrapper[4881]: I0121 10:58:34.629352 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n98tz\" (UID: \"ec369bed-0b60-48b0-9de0-fcfd6ca7776d\") " pod="openshift-image-registry/image-registry-697d97f7c8-n98tz" Jan 21 10:58:34 crc kubenswrapper[4881]: E0121 10:58:34.629925 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:58:35.129907087 +0000 UTC m=+102.389863556 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n98tz" (UID: "ec369bed-0b60-48b0-9de0-fcfd6ca7776d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:58:34 crc kubenswrapper[4881]: I0121 10:58:34.688262 4881 patch_prober.go:28] interesting pod/router-default-5444994796-v7wnh container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= Jan 21 10:58:34 crc kubenswrapper[4881]: I0121 10:58:34.688337 4881 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-v7wnh" podUID="52d94566-7844-4414-bf48-9122c2207dd6" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Jan 21 10:58:34 crc kubenswrapper[4881]: I0121 10:58:34.730502 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 10:58:34 crc kubenswrapper[4881]: E0121 10:58:34.730652 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:58:35.23062426 +0000 UTC m=+102.490580739 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:58:34 crc kubenswrapper[4881]: I0121 10:58:34.730718 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n98tz\" (UID: \"ec369bed-0b60-48b0-9de0-fcfd6ca7776d\") " pod="openshift-image-registry/image-registry-697d97f7c8-n98tz" Jan 21 10:58:34 crc kubenswrapper[4881]: E0121 10:58:34.731098 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:58:35.231087992 +0000 UTC m=+102.491044461 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n98tz" (UID: "ec369bed-0b60-48b0-9de0-fcfd6ca7776d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:58:34 crc kubenswrapper[4881]: I0121 10:58:34.831923 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 10:58:34 crc kubenswrapper[4881]: E0121 10:58:34.832137 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:58:35.332110783 +0000 UTC m=+102.592067252 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:58:34 crc kubenswrapper[4881]: I0121 10:58:34.832208 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n98tz\" (UID: \"ec369bed-0b60-48b0-9de0-fcfd6ca7776d\") " pod="openshift-image-registry/image-registry-697d97f7c8-n98tz" Jan 21 10:58:34 crc kubenswrapper[4881]: E0121 10:58:34.832540 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:58:35.332525873 +0000 UTC m=+102.592482332 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n98tz" (UID: "ec369bed-0b60-48b0-9de0-fcfd6ca7776d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:58:34 crc kubenswrapper[4881]: I0121 10:58:34.932999 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 10:58:34 crc kubenswrapper[4881]: E0121 10:58:34.933209 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:58:35.433182075 +0000 UTC m=+102.693138544 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:58:34 crc kubenswrapper[4881]: I0121 10:58:34.933439 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n98tz\" (UID: \"ec369bed-0b60-48b0-9de0-fcfd6ca7776d\") " pod="openshift-image-registry/image-registry-697d97f7c8-n98tz" Jan 21 10:58:34 crc kubenswrapper[4881]: E0121 10:58:34.933809 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:58:35.433796441 +0000 UTC m=+102.693752910 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n98tz" (UID: "ec369bed-0b60-48b0-9de0-fcfd6ca7776d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:58:35 crc kubenswrapper[4881]: I0121 10:58:35.034441 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 10:58:35 crc kubenswrapper[4881]: E0121 10:58:35.034604 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:58:35.534579846 +0000 UTC m=+102.794536315 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:58:35 crc kubenswrapper[4881]: I0121 10:58:35.034719 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n98tz\" (UID: \"ec369bed-0b60-48b0-9de0-fcfd6ca7776d\") " pod="openshift-image-registry/image-registry-697d97f7c8-n98tz" Jan 21 10:58:35 crc kubenswrapper[4881]: E0121 10:58:35.035347 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:58:35.535325884 +0000 UTC m=+102.795282393 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n98tz" (UID: "ec369bed-0b60-48b0-9de0-fcfd6ca7776d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:58:35 crc kubenswrapper[4881]: I0121 10:58:35.114623 4881 patch_prober.go:28] interesting pod/router-default-5444994796-v7wnh container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 10:58:35 crc kubenswrapper[4881]: [-]has-synced failed: reason withheld Jan 21 10:58:35 crc kubenswrapper[4881]: [+]process-running ok Jan 21 10:58:35 crc kubenswrapper[4881]: healthz check failed Jan 21 10:58:35 crc kubenswrapper[4881]: I0121 10:58:35.115107 4881 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-v7wnh" podUID="52d94566-7844-4414-bf48-9122c2207dd6" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 10:58:35 crc kubenswrapper[4881]: I0121 10:58:35.135548 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 10:58:35 crc kubenswrapper[4881]: E0121 10:58:35.135764 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:58:35.635734151 +0000 UTC m=+102.895690660 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:58:35 crc kubenswrapper[4881]: I0121 10:58:35.136437 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n98tz\" (UID: \"ec369bed-0b60-48b0-9de0-fcfd6ca7776d\") " pod="openshift-image-registry/image-registry-697d97f7c8-n98tz" Jan 21 10:58:35 crc kubenswrapper[4881]: E0121 10:58:35.136956 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:58:35.6369342 +0000 UTC m=+102.896890709 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n98tz" (UID: "ec369bed-0b60-48b0-9de0-fcfd6ca7776d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:58:35 crc kubenswrapper[4881]: I0121 10:58:35.240540 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 10:58:35 crc kubenswrapper[4881]: E0121 10:58:35.241520 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:58:35.741500658 +0000 UTC m=+103.001457127 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:58:35 crc kubenswrapper[4881]: I0121 10:58:35.343315 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n98tz\" (UID: \"ec369bed-0b60-48b0-9de0-fcfd6ca7776d\") " pod="openshift-image-registry/image-registry-697d97f7c8-n98tz" Jan 21 10:58:35 crc kubenswrapper[4881]: E0121 10:58:35.343719 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:58:35.843703718 +0000 UTC m=+103.103660197 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n98tz" (UID: "ec369bed-0b60-48b0-9de0-fcfd6ca7776d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:58:35 crc kubenswrapper[4881]: I0121 10:58:35.444827 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 10:58:35 crc kubenswrapper[4881]: E0121 10:58:35.446266 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:58:35.946245787 +0000 UTC m=+103.206202256 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:58:35 crc kubenswrapper[4881]: I0121 10:58:35.448755 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ntqvz" event={"ID":"1d6b8080-9c3f-4f6e-bcb4-3d1d0edaaa7c","Type":"ContainerStarted","Data":"11112e84ed0dda9f2ed7f2f8fa157e44126b69816a49cff9a91f43262ef2598d"} Jan 21 10:58:35 crc kubenswrapper[4881]: I0121 10:58:35.548001 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n98tz\" (UID: \"ec369bed-0b60-48b0-9de0-fcfd6ca7776d\") " pod="openshift-image-registry/image-registry-697d97f7c8-n98tz" Jan 21 10:58:35 crc kubenswrapper[4881]: E0121 10:58:35.548448 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:58:36.048431177 +0000 UTC m=+103.308387646 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n98tz" (UID: "ec369bed-0b60-48b0-9de0-fcfd6ca7776d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:58:35 crc kubenswrapper[4881]: I0121 10:58:35.628006 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-ntqvz" podStartSLOduration=81.62798424 podStartE2EDuration="1m21.62798424s" podCreationTimestamp="2026-01-21 10:57:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:58:35.472275446 +0000 UTC m=+102.732231915" watchObservedRunningTime="2026-01-21 10:58:35.62798424 +0000 UTC m=+102.887940709" Jan 21 10:58:35 crc kubenswrapper[4881]: I0121 10:58:35.628372 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-qxzd9"] Jan 21 10:58:35 crc kubenswrapper[4881]: I0121 10:58:35.633224 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-wjlxh"] Jan 21 10:58:35 crc kubenswrapper[4881]: I0121 10:58:35.635592 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-n2h44"] Jan 21 10:58:35 crc kubenswrapper[4881]: I0121 10:58:35.646446 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-w5l6w"] Jan 21 10:58:35 crc kubenswrapper[4881]: I0121 10:58:35.648964 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-svmbc"] Jan 21 10:58:35 crc kubenswrapper[4881]: I0121 10:58:35.649029 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 10:58:35 crc kubenswrapper[4881]: E0121 10:58:35.650822 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:58:36.150791381 +0000 UTC m=+103.410747940 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:58:35 crc kubenswrapper[4881]: I0121 10:58:35.752805 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n98tz\" (UID: \"ec369bed-0b60-48b0-9de0-fcfd6ca7776d\") " pod="openshift-image-registry/image-registry-697d97f7c8-n98tz" Jan 21 10:58:35 crc kubenswrapper[4881]: E0121 10:58:35.753164 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:58:36.253151235 +0000 UTC m=+103.513107694 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n98tz" (UID: "ec369bed-0b60-48b0-9de0-fcfd6ca7776d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:58:35 crc kubenswrapper[4881]: I0121 10:58:35.853730 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 10:58:35 crc kubenswrapper[4881]: E0121 10:58:35.853898 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:58:36.353880069 +0000 UTC m=+103.613836538 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:58:35 crc kubenswrapper[4881]: I0121 10:58:35.854007 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n98tz\" (UID: \"ec369bed-0b60-48b0-9de0-fcfd6ca7776d\") " pod="openshift-image-registry/image-registry-697d97f7c8-n98tz" Jan 21 10:58:35 crc kubenswrapper[4881]: E0121 10:58:35.854255 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:58:36.354247728 +0000 UTC m=+103.614204197 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n98tz" (UID: "ec369bed-0b60-48b0-9de0-fcfd6ca7776d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:58:35 crc kubenswrapper[4881]: I0121 10:58:35.957431 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 10:58:35 crc kubenswrapper[4881]: E0121 10:58:35.957619 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:58:36.457596476 +0000 UTC m=+103.717552945 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:58:35 crc kubenswrapper[4881]: I0121 10:58:35.957896 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n98tz\" (UID: \"ec369bed-0b60-48b0-9de0-fcfd6ca7776d\") " pod="openshift-image-registry/image-registry-697d97f7c8-n98tz" Jan 21 10:58:35 crc kubenswrapper[4881]: E0121 10:58:35.958270 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:58:36.458257432 +0000 UTC m=+103.718213901 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n98tz" (UID: "ec369bed-0b60-48b0-9de0-fcfd6ca7776d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:58:36 crc kubenswrapper[4881]: I0121 10:58:36.059024 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 10:58:36 crc kubenswrapper[4881]: E0121 10:58:36.059260 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:58:36.559228252 +0000 UTC m=+103.819184731 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:58:36 crc kubenswrapper[4881]: I0121 10:58:36.059598 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n98tz\" (UID: \"ec369bed-0b60-48b0-9de0-fcfd6ca7776d\") " pod="openshift-image-registry/image-registry-697d97f7c8-n98tz" Jan 21 10:58:36 crc kubenswrapper[4881]: E0121 10:58:36.060192 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:58:36.560164525 +0000 UTC m=+103.820121034 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n98tz" (UID: "ec369bed-0b60-48b0-9de0-fcfd6ca7776d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:58:36 crc kubenswrapper[4881]: I0121 10:58:36.067988 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483205-527gk"] Jan 21 10:58:36 crc kubenswrapper[4881]: I0121 10:58:36.070868 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-rslv2"] Jan 21 10:58:36 crc kubenswrapper[4881]: I0121 10:58:36.087187 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-7cs59"] Jan 21 10:58:36 crc kubenswrapper[4881]: I0121 10:58:36.103068 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8kvzw"] Jan 21 10:58:36 crc kubenswrapper[4881]: I0121 10:58:36.104939 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-hqjnl"] Jan 21 10:58:36 crc kubenswrapper[4881]: I0121 10:58:36.115440 4881 patch_prober.go:28] interesting pod/router-default-5444994796-v7wnh container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 10:58:36 crc kubenswrapper[4881]: [-]has-synced failed: reason withheld Jan 21 10:58:36 crc kubenswrapper[4881]: [+]process-running ok Jan 21 10:58:36 crc kubenswrapper[4881]: healthz check failed Jan 21 10:58:36 crc kubenswrapper[4881]: I0121 10:58:36.115491 4881 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-v7wnh" podUID="52d94566-7844-4414-bf48-9122c2207dd6" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 
10:58:36 crc kubenswrapper[4881]: I0121 10:58:36.119384 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-qpdx4"] Jan 21 10:58:36 crc kubenswrapper[4881]: I0121 10:58:36.142268 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-jvxv4"] Jan 21 10:58:36 crc kubenswrapper[4881]: I0121 10:58:36.144707 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-zjqz6"] Jan 21 10:58:36 crc kubenswrapper[4881]: I0121 10:58:36.153863 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-h97cd"] Jan 21 10:58:36 crc kubenswrapper[4881]: I0121 10:58:36.153922 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-whh46"] Jan 21 10:58:36 crc kubenswrapper[4881]: I0121 10:58:36.154175 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lm4k2"] Jan 21 10:58:36 crc kubenswrapper[4881]: I0121 10:58:36.160612 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 10:58:36 crc kubenswrapper[4881]: I0121 10:58:36.160884 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3552adbd-011f-4552-9e69-233b92c554c8-metrics-certs\") pod \"network-metrics-daemon-dtv4t\" (UID: \"3552adbd-011f-4552-9e69-233b92c554c8\") " pod="openshift-multus/network-metrics-daemon-dtv4t" Jan 21 10:58:36 crc kubenswrapper[4881]: E0121 10:58:36.161041 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:58:36.661022153 +0000 UTC m=+103.920978622 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:58:36 crc kubenswrapper[4881]: I0121 10:58:36.163754 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-pjbh7"] Jan 21 10:58:36 crc kubenswrapper[4881]: I0121 10:58:36.163837 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vfcd9"] Jan 21 10:58:36 crc kubenswrapper[4881]: I0121 10:58:36.168551 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3552adbd-011f-4552-9e69-233b92c554c8-metrics-certs\") pod \"network-metrics-daemon-dtv4t\" (UID: \"3552adbd-011f-4552-9e69-233b92c554c8\") " pod="openshift-multus/network-metrics-daemon-dtv4t" Jan 21 10:58:36 crc kubenswrapper[4881]: W0121 10:58:36.171527 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1e960def_7bc7_4041_94dc_8ccea63f8bb8.slice/crio-0790a402c93806fd2f05db80cba862f512e12e5dd1ae94ff92722face7b15059 WatchSource:0}: Error finding container 0790a402c93806fd2f05db80cba862f512e12e5dd1ae94ff92722face7b15059: Status 404 returned error can't find the container with id 0790a402c93806fd2f05db80cba862f512e12e5dd1ae94ff92722face7b15059 Jan 21 10:58:36 crc kubenswrapper[4881]: I0121 10:58:36.176727 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-5xwk8"] Jan 21 10:58:36 crc kubenswrapper[4881]: I0121 10:58:36.183268 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-wrqpb"] Jan 21 10:58:36 crc kubenswrapper[4881]: I0121 10:58:36.185279 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-cclnc"] Jan 21 10:58:36 crc kubenswrapper[4881]: I0121 10:58:36.187137 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zkkpc"] Jan 21 10:58:36 crc kubenswrapper[4881]: I0121 10:58:36.197930 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-f877x"] Jan 21 10:58:36 crc kubenswrapper[4881]: W0121 10:58:36.206404 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod86ac2c23_01e6_4a22_a79d_77a3269fb5a0.slice/crio-79fceb069012ae79a981dcdc297ad76c1e3189b6f4784ea3791d374fc4482001 WatchSource:0}: Error finding container 79fceb069012ae79a981dcdc297ad76c1e3189b6f4784ea3791d374fc4482001: Status 404 returned error can't find the container with id 79fceb069012ae79a981dcdc297ad76c1e3189b6f4784ea3791d374fc4482001 Jan 21 10:58:36 crc kubenswrapper[4881]: I0121 10:58:36.218354 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-znm6j"] Jan 21 10:58:36 crc kubenswrapper[4881]: I0121 10:58:36.231035 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-phm68"] Jan 21 10:58:36 crc kubenswrapper[4881]: I0121 10:58:36.234199 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-cfw2n"] Jan 21 10:58:36 crc kubenswrapper[4881]: I0121 10:58:36.240304 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-769kz"] Jan 21 10:58:36 crc kubenswrapper[4881]: I0121 10:58:36.243249 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-j4s5w"] Jan 21 10:58:36 crc kubenswrapper[4881]: I0121 10:58:36.244985 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-vp6qk"] Jan 21 10:58:36 crc kubenswrapper[4881]: I0121 10:58:36.261916 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n98tz\" (UID: \"ec369bed-0b60-48b0-9de0-fcfd6ca7776d\") " pod="openshift-image-registry/image-registry-697d97f7c8-n98tz" Jan 21 10:58:36 crc kubenswrapper[4881]: E0121 10:58:36.262193 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:58:36.762179717 +0000 UTC m=+104.022136186 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n98tz" (UID: "ec369bed-0b60-48b0-9de0-fcfd6ca7776d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:58:36 crc kubenswrapper[4881]: I0121 10:58:36.359483 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dtv4t" Jan 21 10:58:36 crc kubenswrapper[4881]: I0121 10:58:36.363514 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 10:58:36 crc kubenswrapper[4881]: E0121 10:58:36.363693 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:58:36.86366538 +0000 UTC m=+104.123621849 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:58:36 crc kubenswrapper[4881]: I0121 10:58:36.363871 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n98tz\" (UID: \"ec369bed-0b60-48b0-9de0-fcfd6ca7776d\") " pod="openshift-image-registry/image-registry-697d97f7c8-n98tz" Jan 21 10:58:36 crc kubenswrapper[4881]: E0121 10:58:36.364225 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:58:36.864215393 +0000 UTC m=+104.124172052 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n98tz" (UID: "ec369bed-0b60-48b0-9de0-fcfd6ca7776d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:58:36 crc kubenswrapper[4881]: I0121 10:58:36.460590 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-kl9j4"] Jan 21 10:58:36 crc kubenswrapper[4881]: I0121 10:58:36.464816 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 10:58:36 crc kubenswrapper[4881]: E0121 10:58:36.464967 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:58:36.964944117 +0000 UTC m=+104.224900586 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:58:36 crc kubenswrapper[4881]: I0121 10:58:36.465726 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n98tz\" (UID: \"ec369bed-0b60-48b0-9de0-fcfd6ca7776d\") " pod="openshift-image-registry/image-registry-697d97f7c8-n98tz" Jan 21 10:58:36 crc kubenswrapper[4881]: E0121 10:58:36.466232 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:58:36.966216668 +0000 UTC m=+104.226173317 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n98tz" (UID: "ec369bed-0b60-48b0-9de0-fcfd6ca7776d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:58:36 crc kubenswrapper[4881]: I0121 10:58:36.470102 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-cfw2n" event={"ID":"c510b795-d750-4f94-bc9a-88ba625bd556","Type":"ContainerStarted","Data":"8edb718f49287ee1e5992d45a6b5d6efe3fc50ba77f6eae4e83b19f6c3c44a42"} Jan 21 10:58:36 crc kubenswrapper[4881]: I0121 10:58:36.471439 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8kvzw" event={"ID":"5d68a50c-6a38-4aba-bb02-9a25712d2212","Type":"ContainerStarted","Data":"75ffb299185e9e6d371ecbdb7eb473f4b6ff637b2eba3dc8b863fdf20d1ae25c"} Jan 21 10:58:36 crc kubenswrapper[4881]: I0121 10:58:36.476076 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-wrqpb" event={"ID":"628cb8f4-a587-498f-9398-403e0af5eec4","Type":"ContainerStarted","Data":"763ae62f18116f1fe4593545b01b2553ad3792b2e87cdae45827fd67eae883d2"} Jan 21 10:58:36 crc kubenswrapper[4881]: I0121 10:58:36.478585 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zkkpc" event={"ID":"0007a585-5b17-44bd-89b8-2d1d233a03d4","Type":"ContainerStarted","Data":"4f564ff03cbbaa0f8042cde333f5b5b3cea9b7169727da8459685bf907581ef3"} Jan 21 10:58:36 crc kubenswrapper[4881]: I0121 10:58:36.479364 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-vwqwb"] Jan 21 10:58:36 crc kubenswrapper[4881]: I0121 10:58:36.482325 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-whh46" 
event={"ID":"2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad","Type":"ContainerStarted","Data":"216606908c8b27d34a9f3f57e132945839e5bd3eae4f856f2671c9e8308d7423"} Jan 21 10:58:36 crc kubenswrapper[4881]: I0121 10:58:36.499847 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-hfc8p"] Jan 21 10:58:36 crc kubenswrapper[4881]: I0121 10:58:36.521534 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-42f9f"] Jan 21 10:58:36 crc kubenswrapper[4881]: I0121 10:58:36.539406 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-xmq82"] Jan 21 10:58:36 crc kubenswrapper[4881]: I0121 10:58:36.557276 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-llgd7"] Jan 21 10:58:36 crc kubenswrapper[4881]: I0121 10:58:36.559566 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-hqjnl" event={"ID":"5c8e7010-8b57-47ed-9270-417650a2a7c5","Type":"ContainerStarted","Data":"ea296fe97b057f6f0df6ff84011de8b3bc8a0c8c0c89e26121d88777e3751daa"} Jan 21 10:58:36 crc kubenswrapper[4881]: I0121 10:58:36.566521 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 10:58:36 crc kubenswrapper[4881]: E0121 10:58:36.567277 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:58:37.06724363 +0000 UTC m=+104.327200099 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:58:36 crc kubenswrapper[4881]: I0121 10:58:36.588916 4881 csr.go:261] certificate signing request csr-d74tp is approved, waiting to be issued Jan 21 10:58:36 crc kubenswrapper[4881]: I0121 10:58:36.591578 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-svmbc" event={"ID":"3d8b2de9-20f2-4d9b-ba38-d2e6649b1a57","Type":"ContainerStarted","Data":"c802ddbd8e9b079a0a6e4ee9d0dd87824bf3cb502a0912f44e02f0cca256b8e4"} Jan 21 10:58:36 crc kubenswrapper[4881]: I0121 10:58:36.593386 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-72bt6"] Jan 21 10:58:36 crc kubenswrapper[4881]: I0121 10:58:36.595444 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-rslv2" event={"ID":"537a87a4-8f58-441f-9199-62c5849c693c","Type":"ContainerStarted","Data":"80b7b3ce063567cc1fbf487ef2d0e5ee3c9f8664a2046c9a8fae503691d6d224"} Jan 21 10:58:36 crc kubenswrapper[4881]: I0121 10:58:36.596941 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5xwk8" event={"ID":"706c6a3b-823b-4ea3-b7a8-e20d571d3ace","Type":"ContainerStarted","Data":"22d022e22752b1a845c64ff7297933c2f9f91e223d3640540e2ab737fe1ace78"} Jan 21 10:58:36 crc kubenswrapper[4881]: I0121 10:58:36.598289 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-w5l6w" event={"ID":"0ceebcd8-2c53-4e4d-97bb-5d81008a6442","Type":"ContainerStarted","Data":"447a68b7525d82522d86c9766479b34dac564e482edb660c3decf67342a91ca6"} Jan 21 10:58:36 crc kubenswrapper[4881]: I0121 10:58:36.600693 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vfcd9" event={"ID":"29dca8bf-7bce-455b-812f-fca8861518ca","Type":"ContainerStarted","Data":"665263747c5f8cab9e1f53a92fa637a838a92a1c9eff3ee375c09f2912a7f3ff"} Jan 21 10:58:36 crc kubenswrapper[4881]: I0121 10:58:36.602630 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-7cs59" event={"ID":"1e960def-7bc7-4041-94dc-8ccea63f8bb8","Type":"ContainerStarted","Data":"0790a402c93806fd2f05db80cba862f512e12e5dd1ae94ff92722face7b15059"} Jan 21 10:58:36 crc kubenswrapper[4881]: I0121 10:58:36.604210 4881 csr.go:257] certificate signing request csr-d74tp is issued Jan 21 10:58:36 crc kubenswrapper[4881]: I0121 10:58:36.614605 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-769kz" event={"ID":"146cbde4-d891-47d8-a09f-d4f4b50bfe6d","Type":"ContainerStarted","Data":"d68cad796c69f936cad4980c773067b142f355f7552d6b0961feb10ece906af6"} Jan 21 10:58:36 crc kubenswrapper[4881]: I0121 10:58:36.618068 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-qxzd9" 
event={"ID":"bb8fc8b3-9818-40e2-afb2-860e2d1efae1","Type":"ContainerStarted","Data":"d060bd9f87ed03936c0be9ee17418f9087722140490e6ad49375f3c789b2e023"} Jan 21 10:58:36 crc kubenswrapper[4881]: I0121 10:58:36.619638 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rdgn6"] Jan 21 10:58:36 crc kubenswrapper[4881]: I0121 10:58:36.623679 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-j4s5w" event={"ID":"6742e18f-a187-4a77-a734-bdec89bd89e0","Type":"ContainerStarted","Data":"d24e8a3dde9ad4c180d564caa8a04bc0ccde594c7182df9414c0736c020bf2cf"} Jan 21 10:58:36 crc kubenswrapper[4881]: I0121 10:58:36.634820 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483205-527gk" event={"ID":"303bdbe6-3bb4-4ace-86b1-f489c795580f","Type":"ContainerStarted","Data":"b3d019b82236dd15b24f4a31ba5ebc67107e80ee3f592acc46c51b2bbe16aba5"} Jan 21 10:58:36 crc kubenswrapper[4881]: I0121 10:58:36.635951 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-vp6qk" event={"ID":"f997bb38-4f6e-495f-acb8-e8e0d1730947","Type":"ContainerStarted","Data":"befe8bd3ce126f78f32908d2279e0d5e1763ebfdf011f99e818f13ef4ab1771f"} Jan 21 10:58:36 crc kubenswrapper[4881]: I0121 10:58:36.637654 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-jvxv4" event={"ID":"96e1443d-dd18-4343-b200-756f9511c163","Type":"ContainerStarted","Data":"109869b853f39c175423d29e72a66cb9bb0801e9b6b3b8a0e533ada32404b37e"} Jan 21 10:58:36 crc kubenswrapper[4881]: I0121 10:58:36.639638 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lm4k2" event={"ID":"27c4b3cb-57d3-4282-93fe-16cfab039277","Type":"ContainerStarted","Data":"546be78d7fa80cb5217f9ec956561952bcb0ad7e720be5961027598bb51fa46c"} Jan 21 10:58:36 crc kubenswrapper[4881]: I0121 10:58:36.641639 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-qpdx4" event={"ID":"86ac2c23-01e6-4a22-a79d-77a3269fb5a0","Type":"ContainerStarted","Data":"79fceb069012ae79a981dcdc297ad76c1e3189b6f4784ea3791d374fc4482001"} Jan 21 10:58:36 crc kubenswrapper[4881]: I0121 10:58:36.642311 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7gdkq"] Jan 21 10:58:36 crc kubenswrapper[4881]: I0121 10:58:36.644439 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-pjbh7" event={"ID":"e6d131df-3eb3-4bb1-a45a-ff6ae44b5ecb","Type":"ContainerStarted","Data":"45f19fd34c35f1237d72f2fec0fc6c65d58ffab5dace1b67d0280f650700ba1e"} Jan 21 10:58:36 crc kubenswrapper[4881]: I0121 10:58:36.668066 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n98tz\" (UID: \"ec369bed-0b60-48b0-9de0-fcfd6ca7776d\") " pod="openshift-image-registry/image-registry-697d97f7c8-n98tz" Jan 21 10:58:36 crc kubenswrapper[4881]: E0121 10:58:36.669175 4881 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:58:37.169155483 +0000 UTC m=+104.429111952 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n98tz" (UID: "ec369bed-0b60-48b0-9de0-fcfd6ca7776d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:58:36 crc kubenswrapper[4881]: I0121 10:58:36.674370 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-znm6j" event={"ID":"b7e58845-f0a1-4320-b879-0765b6d57988","Type":"ContainerStarted","Data":"64cb4c239b87efb7cd9b98d2d413218f385bd070aff7cdefce602a2185c738ce"} Jan 21 10:58:36 crc kubenswrapper[4881]: I0121 10:58:36.676614 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-zjqz6" event={"ID":"f1f74368-89f6-44fb-aaa2-9159a217b4d7","Type":"ContainerStarted","Data":"0ab556548ff44637ee5a7cefce9e8d6aecb22153bf70df9fc5dadbbc343f7eec"} Jan 21 10:58:36 crc kubenswrapper[4881]: I0121 10:58:36.678267 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-zjqz6" Jan 21 10:58:36 crc kubenswrapper[4881]: I0121 10:58:36.681879 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-n2h44" event={"ID":"863eda44-9a47-42de-b2de-49234ac647f0","Type":"ContainerStarted","Data":"b72f810c040ef84ae1cad3cba480a5966669a8f0f0c8fbf4634e0daffff50f1e"} Jan 21 10:58:36 crc kubenswrapper[4881]: I0121 10:58:36.683014 4881 patch_prober.go:28] interesting pod/console-operator-58897d9998-zjqz6 container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.23:8443/readyz\": dial tcp 10.217.0.23:8443: connect: connection refused" start-of-body= Jan 21 10:58:36 crc kubenswrapper[4881]: I0121 10:58:36.683052 4881 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-zjqz6" podUID="f1f74368-89f6-44fb-aaa2-9159a217b4d7" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.23:8443/readyz\": dial tcp 10.217.0.23:8443: connect: connection refused" Jan 21 10:58:36 crc kubenswrapper[4881]: I0121 10:58:36.683932 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-f877x" event={"ID":"d53ea19f-eb9b-43d6-bab3-3fc7d6fa196f","Type":"ContainerStarted","Data":"82c7d32520d2436d4f7a9663e687243c491257f7dc62b2e72e1981db2f9c8144"} Jan 21 10:58:36 crc kubenswrapper[4881]: I0121 10:58:36.687459 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-cclnc" event={"ID":"8465162e-dd9f-45b4-83a6-94666ac2b87b","Type":"ContainerStarted","Data":"a33b3cb1960cd9728cac6829f5670abf510f7506478b90d9f1a890f442173bb0"} Jan 21 10:58:36 crc kubenswrapper[4881]: I0121 10:58:36.693521 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-h97cd" 
event={"ID":"3201b51c-af63-40e7-8037-9e581791d495","Type":"ContainerStarted","Data":"7b5feb131a5e4a06103b5280c54ad0837a19c465a0aa933409bc7c15f7f0734f"} Jan 21 10:58:36 crc kubenswrapper[4881]: W0121 10:58:36.746840 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod86acb693_c0d9_41f4_b33c_4716963ce268.slice/crio-758110cf0b46064de00bb150d4a98573f91f8fdf43e0f8ade86d25a387cec9db WatchSource:0}: Error finding container 758110cf0b46064de00bb150d4a98573f91f8fdf43e0f8ade86d25a387cec9db: Status 404 returned error can't find the container with id 758110cf0b46064de00bb150d4a98573f91f8fdf43e0f8ade86d25a387cec9db Jan 21 10:58:36 crc kubenswrapper[4881]: I0121 10:58:36.754607 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-wjlxh" event={"ID":"002a39eb-e2e0-4d3e-8f61-89a539a653a9","Type":"ContainerStarted","Data":"fec206b72c4648e66af3adcacd7cb5106e2766bcb34d529fae1cd757bd777535"} Jan 21 10:58:36 crc kubenswrapper[4881]: I0121 10:58:36.754811 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-wjlxh" Jan 21 10:58:36 crc kubenswrapper[4881]: I0121 10:58:36.758047 4881 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-wjlxh container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.12:8443/healthz\": dial tcp 10.217.0.12:8443: connect: connection refused" start-of-body= Jan 21 10:58:36 crc kubenswrapper[4881]: I0121 10:58:36.758099 4881 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-wjlxh" podUID="002a39eb-e2e0-4d3e-8f61-89a539a653a9" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.12:8443/healthz\": dial tcp 10.217.0.12:8443: connect: connection refused" Jan 21 10:58:36 crc kubenswrapper[4881]: I0121 10:58:36.769566 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 10:58:36 crc kubenswrapper[4881]: E0121 10:58:36.772143 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:58:37.271918146 +0000 UTC m=+104.531874635 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:58:36 crc kubenswrapper[4881]: W0121 10:58:36.775026 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbc38f0b5_944c_40ae_aed0_50ca39ea2627.slice/crio-a91a58002d4d6f4f72bda9c7484e2bb65cd6b6f5f5601a84f2427afb828fb570 WatchSource:0}: Error finding container a91a58002d4d6f4f72bda9c7484e2bb65cd6b6f5f5601a84f2427afb828fb570: Status 404 returned error can't find the container with id a91a58002d4d6f4f72bda9c7484e2bb65cd6b6f5f5601a84f2427afb828fb570 Jan 21 10:58:36 crc kubenswrapper[4881]: I0121 10:58:36.783276 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-zjqz6" podStartSLOduration=81.783255605 podStartE2EDuration="1m21.783255605s" podCreationTimestamp="2026-01-21 10:57:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:58:36.694436424 +0000 UTC m=+103.954392903" watchObservedRunningTime="2026-01-21 10:58:36.783255605 +0000 UTC m=+104.043212064" Jan 21 10:58:36 crc kubenswrapper[4881]: I0121 10:58:36.904088 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n98tz\" (UID: \"ec369bed-0b60-48b0-9de0-fcfd6ca7776d\") " pod="openshift-image-registry/image-registry-697d97f7c8-n98tz" Jan 21 10:58:36 crc kubenswrapper[4881]: E0121 10:58:36.904532 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:58:37.404517774 +0000 UTC m=+104.664474243 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n98tz" (UID: "ec369bed-0b60-48b0-9de0-fcfd6ca7776d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:58:37 crc kubenswrapper[4881]: I0121 10:58:37.004670 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 10:58:37 crc kubenswrapper[4881]: E0121 10:58:37.005470 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-21 10:58:37.505451723 +0000 UTC m=+104.765408192 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:58:37 crc kubenswrapper[4881]: I0121 10:58:37.109175 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n98tz\" (UID: \"ec369bed-0b60-48b0-9de0-fcfd6ca7776d\") " pod="openshift-image-registry/image-registry-697d97f7c8-n98tz" Jan 21 10:58:37 crc kubenswrapper[4881]: E0121 10:58:37.110250 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:58:37.610237086 +0000 UTC m=+104.870193555 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n98tz" (UID: "ec369bed-0b60-48b0-9de0-fcfd6ca7776d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:58:37 crc kubenswrapper[4881]: I0121 10:58:37.118605 4881 patch_prober.go:28] interesting pod/router-default-5444994796-v7wnh container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 10:58:37 crc kubenswrapper[4881]: [-]has-synced failed: reason withheld Jan 21 10:58:37 crc kubenswrapper[4881]: [+]process-running ok Jan 21 10:58:37 crc kubenswrapper[4881]: healthz check failed Jan 21 10:58:37 crc kubenswrapper[4881]: I0121 10:58:37.118645 4881 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-v7wnh" podUID="52d94566-7844-4414-bf48-9122c2207dd6" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 10:58:37 crc kubenswrapper[4881]: I0121 10:58:37.210568 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 10:58:37 crc kubenswrapper[4881]: E0121 10:58:37.210731 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:58:37.710702854 +0000 UTC m=+104.970659313 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:58:37 crc kubenswrapper[4881]: I0121 10:58:37.211691 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n98tz\" (UID: \"ec369bed-0b60-48b0-9de0-fcfd6ca7776d\") " pod="openshift-image-registry/image-registry-697d97f7c8-n98tz" Jan 21 10:58:37 crc kubenswrapper[4881]: E0121 10:58:37.212398 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:58:37.712383105 +0000 UTC m=+104.972339574 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n98tz" (UID: "ec369bed-0b60-48b0-9de0-fcfd6ca7776d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:58:37 crc kubenswrapper[4881]: I0121 10:58:37.349990 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 10:58:37 crc kubenswrapper[4881]: E0121 10:58:37.350766 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:58:37.850739394 +0000 UTC m=+105.110695863 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:58:37 crc kubenswrapper[4881]: I0121 10:58:37.351364 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n98tz\" (UID: \"ec369bed-0b60-48b0-9de0-fcfd6ca7776d\") " pod="openshift-image-registry/image-registry-697d97f7c8-n98tz" Jan 21 10:58:37 crc kubenswrapper[4881]: E0121 10:58:37.351852 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:58:37.851838171 +0000 UTC m=+105.111794640 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n98tz" (UID: "ec369bed-0b60-48b0-9de0-fcfd6ca7776d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:58:37 crc kubenswrapper[4881]: I0121 10:58:37.459282 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 10:58:37 crc kubenswrapper[4881]: E0121 10:58:37.459664 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:58:37.959647719 +0000 UTC m=+105.219604188 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:58:37 crc kubenswrapper[4881]: I0121 10:58:37.562032 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n98tz\" (UID: \"ec369bed-0b60-48b0-9de0-fcfd6ca7776d\") " pod="openshift-image-registry/image-registry-697d97f7c8-n98tz" Jan 21 10:58:37 crc kubenswrapper[4881]: E0121 10:58:37.562306 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:58:38.062293679 +0000 UTC m=+105.322250148 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n98tz" (UID: "ec369bed-0b60-48b0-9de0-fcfd6ca7776d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:58:37 crc kubenswrapper[4881]: I0121 10:58:37.611850 4881 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-01-21 10:53:36 +0000 UTC, rotation deadline is 2026-10-24 06:14:37.839131868 +0000 UTC Jan 21 10:58:37 crc kubenswrapper[4881]: I0121 10:58:37.611884 4881 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 6619h16m0.227250451s for next certificate rotation Jan 21 10:58:37 crc kubenswrapper[4881]: I0121 10:58:37.667640 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 10:58:37 crc kubenswrapper[4881]: E0121 10:58:37.668300 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:58:38.168277663 +0000 UTC m=+105.428234132 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:58:37 crc kubenswrapper[4881]: I0121 10:58:37.668853 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n98tz\" (UID: \"ec369bed-0b60-48b0-9de0-fcfd6ca7776d\") " pod="openshift-image-registry/image-registry-697d97f7c8-n98tz" Jan 21 10:58:37 crc kubenswrapper[4881]: E0121 10:58:37.669190 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:58:38.169182734 +0000 UTC m=+105.429139203 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n98tz" (UID: "ec369bed-0b60-48b0-9de0-fcfd6ca7776d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:58:37 crc kubenswrapper[4881]: I0121 10:58:37.770290 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 10:58:37 crc kubenswrapper[4881]: E0121 10:58:37.770919 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:58:38.270896803 +0000 UTC m=+105.530853272 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:58:37 crc kubenswrapper[4881]: I0121 10:58:37.798347 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rdgn6" event={"ID":"7f30da15-7c75-4c87-9dc4-78653d6f84cd","Type":"ContainerStarted","Data":"a5013592e5c35fc53140f3477485624f58e610e910f930df124104e361b84262"} Jan 21 10:58:37 crc kubenswrapper[4881]: I0121 10:58:37.805654 4881 generic.go:334] "Generic (PLEG): container finished" podID="537a87a4-8f58-441f-9199-62c5849c693c" containerID="0a0a4a7159c4ae5e1ca01e1e58266bb2b9687170b75097cbf61c3f3b4f8bda14" exitCode=0 Jan 21 10:58:37 crc kubenswrapper[4881]: I0121 10:58:37.805736 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-rslv2" event={"ID":"537a87a4-8f58-441f-9199-62c5849c693c","Type":"ContainerDied","Data":"0a0a4a7159c4ae5e1ca01e1e58266bb2b9687170b75097cbf61c3f3b4f8bda14"} Jan 21 10:58:37 crc kubenswrapper[4881]: I0121 10:58:37.809568 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lm4k2" event={"ID":"27c4b3cb-57d3-4282-93fe-16cfab039277","Type":"ContainerStarted","Data":"b60d23e022ddf3d5f79a677eec9a91d2de918a75469e7207637b9578d8a94ec8"} Jan 21 10:58:37 crc kubenswrapper[4881]: I0121 10:58:37.811362 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483205-527gk" event={"ID":"303bdbe6-3bb4-4ace-86b1-f489c795580f","Type":"ContainerStarted","Data":"2f6a1a1e4268540ee682b58127eb41126b116ba4e30186b584ee325d0961ebec"} Jan 21 10:58:37 crc kubenswrapper[4881]: I0121 10:58:37.812959 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5xwk8" event={"ID":"706c6a3b-823b-4ea3-b7a8-e20d571d3ace","Type":"ContainerStarted","Data":"9c8c8d93509d2a29c183d63351f0748ec6e60414dbb285df980924884b598111"} Jan 21 10:58:37 crc kubenswrapper[4881]: I0121 10:58:37.813662 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5xwk8" Jan 21 10:58:37 crc kubenswrapper[4881]: I0121 10:58:37.825400 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-wjlxh" event={"ID":"002a39eb-e2e0-4d3e-8f61-89a539a653a9","Type":"ContainerStarted","Data":"6b8fc2aac0518f9de92cee69b4b59a05f08ed2161c480a5655d85171be0e5a8b"} Jan 21 10:58:37 crc kubenswrapper[4881]: I0121 10:58:37.927155 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n98tz\" (UID: \"ec369bed-0b60-48b0-9de0-fcfd6ca7776d\") " pod="openshift-image-registry/image-registry-697d97f7c8-n98tz" Jan 21 10:58:37 crc kubenswrapper[4881]: E0121 10:58:37.928316 4881 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:58:38.428297329 +0000 UTC m=+105.688254008 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n98tz" (UID: "ec369bed-0b60-48b0-9de0-fcfd6ca7776d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:58:37 crc kubenswrapper[4881]: I0121 10:58:37.930235 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vfcd9" event={"ID":"29dca8bf-7bce-455b-812f-fca8861518ca","Type":"ContainerStarted","Data":"b85b87998c08018dbb35f00249d9602951f94a6261f25f4978adba90ce64b127"} Jan 21 10:58:37 crc kubenswrapper[4881]: I0121 10:58:37.937707 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-wjlxh" podStartSLOduration=82.937673889 podStartE2EDuration="1m22.937673889s" podCreationTimestamp="2026-01-21 10:57:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:58:36.785187073 +0000 UTC m=+104.045143542" watchObservedRunningTime="2026-01-21 10:58:37.937673889 +0000 UTC m=+105.197630358" Jan 21 10:58:37 crc kubenswrapper[4881]: I0121 10:58:37.948206 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-h97cd" event={"ID":"3201b51c-af63-40e7-8037-9e581791d495","Type":"ContainerStarted","Data":"b653068c58321173ed5dbd8e4e933839f3338650924c22f27cb139db0b90ffe4"} Jan 21 10:58:37 crc kubenswrapper[4881]: I0121 10:58:37.950438 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-jvxv4" event={"ID":"96e1443d-dd18-4343-b200-756f9511c163","Type":"ContainerStarted","Data":"d11972dea06114e95feae0748a3287e910f887f3cb8603d81723a20528613969"} Jan 21 10:58:37 crc kubenswrapper[4881]: I0121 10:58:37.951896 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-72bt6" event={"ID":"2957ef21-9f30-4c81-8c6a-4a7f9d7315db","Type":"ContainerStarted","Data":"e8cb541f96d7a4ec14d9a3260ed76cf4f3c8fd2e5d5a593d5d8bec92ed22c9a9"} Jan 21 10:58:37 crc kubenswrapper[4881]: I0121 10:58:37.952958 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-hqjnl" event={"ID":"5c8e7010-8b57-47ed-9270-417650a2a7c5","Type":"ContainerStarted","Data":"28f8e69023156cf8f9966f5f6a94a44ae9e681e64b2e5ebc4c585613bcd6eea4"} Jan 21 10:58:37 crc kubenswrapper[4881]: I0121 10:58:37.954022 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-42f9f" event={"ID":"409e44ed-8f6d-4321-9620-d8da23cf0fec","Type":"ContainerStarted","Data":"0a1c334b89e7e575b7c043f32c04a7431a6ac04ac6256966d01bbc7cc00aad26"} Jan 21 10:58:37 crc kubenswrapper[4881]: I0121 10:58:37.955188 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7gdkq" event={"ID":"c56c4a24-e6c6-4aa0-8a62-94d3179dfe54","Type":"ContainerStarted","Data":"6a35c30526df04d0205ad662fc3bb9f352a26dfd4273236fe9c24b4ffbe74b53"} Jan 21 10:58:37 crc kubenswrapper[4881]: I0121 10:58:37.962509 4881 generic.go:334] "Generic (PLEG): container finished" podID="3d8b2de9-20f2-4d9b-ba38-d2e6649b1a57" containerID="6f04d5d4e813545e106e07923bb6b0e2a0341cba5339d2bf5c5d9a0d6610f808" exitCode=0 Jan 21 10:58:37 crc kubenswrapper[4881]: I0121 10:58:37.962561 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-svmbc" event={"ID":"3d8b2de9-20f2-4d9b-ba38-d2e6649b1a57","Type":"ContainerDied","Data":"6f04d5d4e813545e106e07923bb6b0e2a0341cba5339d2bf5c5d9a0d6610f808"} Jan 21 10:58:37 crc kubenswrapper[4881]: I0121 10:58:37.978331 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-w5l6w" event={"ID":"0ceebcd8-2c53-4e4d-97bb-5d81008a6442","Type":"ContainerStarted","Data":"efc58c3509ff202fa895654e7d0ac50244b04c0ab623a0b41ed93222292364c4"} Jan 21 10:58:37 crc kubenswrapper[4881]: I0121 10:58:37.979516 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-wrqpb" event={"ID":"628cb8f4-a587-498f-9398-403e0af5eec4","Type":"ContainerStarted","Data":"8ac6e934bf2c65c273e37127eb78e3c49f6ab743027f68c7c31810cbe67f929a"} Jan 21 10:58:37 crc kubenswrapper[4881]: I0121 10:58:37.980377 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-wrqpb" Jan 21 10:58:37 crc kubenswrapper[4881]: I0121 10:58:37.981596 4881 patch_prober.go:28] interesting pod/downloads-7954f5f757-wrqpb container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused" start-of-body= Jan 21 10:58:37 crc kubenswrapper[4881]: I0121 10:58:37.981640 4881 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-wrqpb" podUID="628cb8f4-a587-498f-9398-403e0af5eec4" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused" Jan 21 10:58:37 crc kubenswrapper[4881]: I0121 10:58:37.983600 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-zjqz6" event={"ID":"f1f74368-89f6-44fb-aaa2-9159a217b4d7","Type":"ContainerStarted","Data":"ab69032099ffb0c7c07dfa25b0cc882b8ffc1cd68bb960103922cd624933ac71"} Jan 21 10:58:37 crc kubenswrapper[4881]: I0121 10:58:37.985699 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-xmq82" event={"ID":"e94f1e92-21b2-44c9-b499-b879850c288d","Type":"ContainerStarted","Data":"123c57f996d77041997b15262c61902d2eed5d15c9314dac5b070f52214a0ad3"} Jan 21 10:58:37 crc kubenswrapper[4881]: I0121 10:58:37.986663 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-vwqwb" event={"ID":"7470431a-2a31-41ae-b021-510ae5e3c505","Type":"ContainerStarted","Data":"be3a5105ee1882e177d05b3246339ad51f6d68a1328dbb49c8d87d096b42f33b"} Jan 21 10:58:37 crc kubenswrapper[4881]: I0121 10:58:37.988599 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-vp6qk" event={"ID":"f997bb38-4f6e-495f-acb8-e8e0d1730947","Type":"ContainerStarted","Data":"3feb878b277a04b1568c451615458cb131092ba8b5c93591ab07f1fc8b5f5092"} Jan 21 10:58:37 crc kubenswrapper[4881]: I0121 10:58:37.991312 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8kvzw" event={"ID":"5d68a50c-6a38-4aba-bb02-9a25712d2212","Type":"ContainerStarted","Data":"5dbcbaf778250d0b18bb2f19dab01c57165692f21a151850181f8e36142ee2e4"} Jan 21 10:58:37 crc kubenswrapper[4881]: I0121 10:58:37.994060 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-whh46" event={"ID":"2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad","Type":"ContainerStarted","Data":"35ce5ecabc873c14d35cf37aa4dd5c20723f513985dbc4caa43cffafe43e41fe"} Jan 21 10:58:37 crc kubenswrapper[4881]: I0121 10:58:37.995091 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-whh46" Jan 21 10:58:38 crc kubenswrapper[4881]: I0121 10:58:38.007394 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-kl9j4" event={"ID":"86acb693-c0d9-41f4-b33c-4716963ce268","Type":"ContainerStarted","Data":"758110cf0b46064de00bb150d4a98573f91f8fdf43e0f8ade86d25a387cec9db"} Jan 21 10:58:38 crc kubenswrapper[4881]: I0121 10:58:38.009604 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-qpdx4" event={"ID":"86ac2c23-01e6-4a22-a79d-77a3269fb5a0","Type":"ContainerStarted","Data":"c422ab28c4be18f85caf1bf6a22eb9b1707b5f01f7e10b067205620fb2baacb7"} Jan 21 10:58:38 crc kubenswrapper[4881]: I0121 10:58:38.011657 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-pjbh7" event={"ID":"e6d131df-3eb3-4bb1-a45a-ff6ae44b5ecb","Type":"ContainerStarted","Data":"341235e6ea2901d1c63a118152a9dc368ad288a306e3bbde5a5f5fe867756e78"} Jan 21 10:58:38 crc kubenswrapper[4881]: I0121 10:58:38.032069 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-phm68" event={"ID":"b745a377-4575-45fb-a206-ea4754ecff76","Type":"ContainerStarted","Data":"0b0d5a92ec2b9f828ddc94573c379732dac2871a26ee02d0fa250bd34f099f95"} Jan 21 10:58:38 crc kubenswrapper[4881]: I0121 10:58:38.036889 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-f877x" event={"ID":"d53ea19f-eb9b-43d6-bab3-3fc7d6fa196f","Type":"ContainerStarted","Data":"d01d1b820366e91afc7ff04d0a7a94c20c13e1911f2c5ca9eee7fe90727f6d77"} Jan 21 10:58:38 crc kubenswrapper[4881]: I0121 10:58:38.038773 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 10:58:38 crc kubenswrapper[4881]: E0121 10:58:38.039066 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-21 10:58:38.539047279 +0000 UTC m=+105.799003738 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:58:38 crc kubenswrapper[4881]: I0121 10:58:38.039598 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-j4s5w" event={"ID":"6742e18f-a187-4a77-a734-bdec89bd89e0","Type":"ContainerStarted","Data":"14286972e56053dbcb9d0135891d1dba55e7082cb155d960d866368fa5f331be"} Jan 21 10:58:38 crc kubenswrapper[4881]: I0121 10:58:38.041038 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-qxzd9" event={"ID":"bb8fc8b3-9818-40e2-afb2-860e2d1efae1","Type":"ContainerStarted","Data":"8f2ac82a3ce8ce5983172b3cbd1e9a6aa27d2f48fa81d54ee2ef2ad283fa8d47"} Jan 21 10:58:38 crc kubenswrapper[4881]: I0121 10:58:38.042544 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-hfc8p" event={"ID":"bc38f0b5-944c-40ae-aed0-50ca39ea2627","Type":"ContainerStarted","Data":"a91a58002d4d6f4f72bda9c7484e2bb65cd6b6f5f5601a84f2427afb828fb570"} Jan 21 10:58:38 crc kubenswrapper[4881]: I0121 10:58:38.043753 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-n2h44" event={"ID":"863eda44-9a47-42de-b2de-49234ac647f0","Type":"ContainerStarted","Data":"35bc198bea517d29ac125c74b3ad165d16a4bb617772670696d3229c208e4dec"} Jan 21 10:58:38 crc kubenswrapper[4881]: I0121 10:58:38.044118 4881 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-whh46 container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.33:6443/healthz\": dial tcp 10.217.0.33:6443: connect: connection refused" start-of-body= Jan 21 10:58:38 crc kubenswrapper[4881]: I0121 10:58:38.044168 4881 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-whh46" podUID="2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.33:6443/healthz\": dial tcp 10.217.0.33:6443: connect: connection refused" Jan 21 10:58:38 crc kubenswrapper[4881]: I0121 10:58:38.053472 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-llgd7" event={"ID":"5f2944a8-8d91-4461-aa64-8908ca17f59e","Type":"ContainerStarted","Data":"1dce8a5c711904a4576ee0efa99a5227bad6330604f6039e5268181ffa724e4f"} Jan 21 10:58:38 crc kubenswrapper[4881]: I0121 10:58:38.181336 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lm4k2" podStartSLOduration=83.181307663 podStartE2EDuration="1m23.181307663s" podCreationTimestamp="2026-01-21 10:57:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:58:38.179881498 +0000 UTC m=+105.439837977" watchObservedRunningTime="2026-01-21 
10:58:38.181307663 +0000 UTC m=+105.441264132" Jan 21 10:58:38 crc kubenswrapper[4881]: I0121 10:58:38.183063 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-wjlxh" Jan 21 10:58:38 crc kubenswrapper[4881]: I0121 10:58:38.186695 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n98tz\" (UID: \"ec369bed-0b60-48b0-9de0-fcfd6ca7776d\") " pod="openshift-image-registry/image-registry-697d97f7c8-n98tz" Jan 21 10:58:38 crc kubenswrapper[4881]: E0121 10:58:38.201955 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:58:38.70193728 +0000 UTC m=+105.961893939 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n98tz" (UID: "ec369bed-0b60-48b0-9de0-fcfd6ca7776d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:58:38 crc kubenswrapper[4881]: I0121 10:58:38.209110 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-cclnc" event={"ID":"8465162e-dd9f-45b4-83a6-94666ac2b87b","Type":"ContainerStarted","Data":"fefa0e429b0c82f9f54c61490c4c91d30aeebb41b0d2233b56bd40f1ebf61528"} Jan 21 10:58:38 crc kubenswrapper[4881]: I0121 10:58:38.232150 4881 patch_prober.go:28] interesting pod/router-default-5444994796-v7wnh container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 10:58:38 crc kubenswrapper[4881]: [-]has-synced failed: reason withheld Jan 21 10:58:38 crc kubenswrapper[4881]: [+]process-running ok Jan 21 10:58:38 crc kubenswrapper[4881]: healthz check failed Jan 21 10:58:38 crc kubenswrapper[4881]: I0121 10:58:38.232219 4881 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-v7wnh" podUID="52d94566-7844-4414-bf48-9122c2207dd6" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 10:58:38 crc kubenswrapper[4881]: I0121 10:58:38.249800 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29483205-527gk" podStartSLOduration=84.249759344 podStartE2EDuration="1m24.249759344s" podCreationTimestamp="2026-01-21 10:57:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:58:38.23247161 +0000 UTC m=+105.492428079" watchObservedRunningTime="2026-01-21 10:58:38.249759344 +0000 UTC m=+105.509715813" Jan 21 10:58:38 crc kubenswrapper[4881]: I0121 10:58:38.255463 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-zjqz6" Jan 21 10:58:38 crc kubenswrapper[4881]: I0121 
10:58:38.296553 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 10:58:38 crc kubenswrapper[4881]: I0121 10:58:38.297682 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-dtv4t"] Jan 21 10:58:38 crc kubenswrapper[4881]: I0121 10:58:38.298562 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5xwk8" podStartSLOduration=82.298538793 podStartE2EDuration="1m22.298538793s" podCreationTimestamp="2026-01-21 10:57:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:58:38.297494317 +0000 UTC m=+105.557450796" watchObservedRunningTime="2026-01-21 10:58:38.298538793 +0000 UTC m=+105.558495262" Jan 21 10:58:38 crc kubenswrapper[4881]: E0121 10:58:38.300137 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:58:38.798398109 +0000 UTC m=+106.058354588 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:58:38 crc kubenswrapper[4881]: I0121 10:58:38.477010 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n98tz\" (UID: \"ec369bed-0b60-48b0-9de0-fcfd6ca7776d\") " pod="openshift-image-registry/image-registry-697d97f7c8-n98tz" Jan 21 10:58:38 crc kubenswrapper[4881]: E0121 10:58:38.477851 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:58:38.977841006 +0000 UTC m=+106.237797475 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n98tz" (UID: "ec369bed-0b60-48b0-9de0-fcfd6ca7776d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:58:38 crc kubenswrapper[4881]: I0121 10:58:38.479311 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vfcd9" podStartSLOduration=83.479302002 podStartE2EDuration="1m23.479302002s" podCreationTimestamp="2026-01-21 10:57:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:58:38.47391316 +0000 UTC m=+105.733869629" watchObservedRunningTime="2026-01-21 10:58:38.479302002 +0000 UTC m=+105.739258471" Jan 21 10:58:38 crc kubenswrapper[4881]: I0121 10:58:38.536573 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-wrqpb" podStartSLOduration=83.536556069 podStartE2EDuration="1m23.536556069s" podCreationTimestamp="2026-01-21 10:57:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:58:38.529739341 +0000 UTC m=+105.789695830" watchObservedRunningTime="2026-01-21 10:58:38.536556069 +0000 UTC m=+105.796512528" Jan 21 10:58:38 crc kubenswrapper[4881]: I0121 10:58:38.565065 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-f877x" podStartSLOduration=82.565049818 podStartE2EDuration="1m22.565049818s" podCreationTimestamp="2026-01-21 10:57:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:58:38.564079164 +0000 UTC m=+105.824035633" watchObservedRunningTime="2026-01-21 10:58:38.565049818 +0000 UTC m=+105.825006287" Jan 21 10:58:38 crc kubenswrapper[4881]: I0121 10:58:38.578302 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 10:58:38 crc kubenswrapper[4881]: E0121 10:58:38.578988 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:58:39.07897283 +0000 UTC m=+106.338929299 (durationBeforeRetry 500ms). 
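
[editor's note] The pod_startup_latency_tracker records above are plain timestamp arithmetic: podStartSLOduration is watchObservedRunningTime minus podCreationTimestamp. For openshift-apiserver-operator-796bbdcf4f-vfcd9, 2026-01-21 10:58:38.479302002 − 2026-01-21 10:57:15 = 83.479302002s, exactly the logged value. A short Go check of that arithmetic, with timestamps copied from the record (the layout string is Go's default time.Time format, which these fields appear to use):

    package main

    import (
    	"fmt"
    	"time"
    )

    // layout matches how the tracker prints timestamps, e.g.
    // "2026-01-21 10:57:15 +0000 UTC" (Go's default time formatting).
    const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

    func mustParse(s string) time.Time {
    	t, err := time.Parse(layout, s)
    	if err != nil {
    		panic(err)
    	}
    	return t
    }

    func main() {
    	created := mustParse("2026-01-21 10:57:15 +0000 UTC")
    	observed := mustParse("2026-01-21 10:58:38.479302002 +0000 UTC")

    	// podStartSLOduration = watchObservedRunningTime - podCreationTimestamp
    	fmt.Printf("%.9fs\n", observed.Sub(created).Seconds()) // 83.479302002s
    }
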
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:58:38 crc kubenswrapper[4881]: I0121 10:58:38.588927 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-jvxv4" podStartSLOduration=83.588910194 podStartE2EDuration="1m23.588910194s" podCreationTimestamp="2026-01-21 10:57:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:58:38.588375972 +0000 UTC m=+105.848332441" watchObservedRunningTime="2026-01-21 10:58:38.588910194 +0000 UTC m=+105.848866663" Jan 21 10:58:38 crc kubenswrapper[4881]: I0121 10:58:38.658767 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5xwk8" Jan 21 10:58:38 crc kubenswrapper[4881]: I0121 10:58:38.680823 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n98tz\" (UID: \"ec369bed-0b60-48b0-9de0-fcfd6ca7776d\") " pod="openshift-image-registry/image-registry-697d97f7c8-n98tz" Jan 21 10:58:38 crc kubenswrapper[4881]: E0121 10:58:38.681143 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:58:39.181131189 +0000 UTC m=+106.441087658 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n98tz" (UID: "ec369bed-0b60-48b0-9de0-fcfd6ca7776d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:58:38 crc kubenswrapper[4881]: W0121 10:58:38.690288 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3552adbd_011f_4552_9e69_233b92c554c8.slice/crio-cb037e397d3c2f6ee7a3ec761c68c5d0ce2c3eb79704e242f2c5186055512710 WatchSource:0}: Error finding container cb037e397d3c2f6ee7a3ec761c68c5d0ce2c3eb79704e242f2c5186055512710: Status 404 returned error can't find the container with id cb037e397d3c2f6ee7a3ec761c68c5d0ce2c3eb79704e242f2c5186055512710 Jan 21 10:58:38 crc kubenswrapper[4881]: I0121 10:58:38.782384 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 10:58:38 crc kubenswrapper[4881]: E0121 10:58:38.782712 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:58:39.282694274 +0000 UTC m=+106.542650743 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:58:38 crc kubenswrapper[4881]: I0121 10:58:38.784011 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-whh46" podStartSLOduration=83.783986865 podStartE2EDuration="1m23.783986865s" podCreationTimestamp="2026-01-21 10:57:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:58:38.783148375 +0000 UTC m=+106.043104854" watchObservedRunningTime="2026-01-21 10:58:38.783986865 +0000 UTC m=+106.043943334" Jan 21 10:58:39 crc kubenswrapper[4881]: I0121 10:58:39.060080 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n98tz\" (UID: \"ec369bed-0b60-48b0-9de0-fcfd6ca7776d\") " pod="openshift-image-registry/image-registry-697d97f7c8-n98tz" Jan 21 10:58:39 crc kubenswrapper[4881]: E0121 10:58:39.060452 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-21 10:58:39.560438536 +0000 UTC m=+106.820395005 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n98tz" (UID: "ec369bed-0b60-48b0-9de0-fcfd6ca7776d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:58:39 crc kubenswrapper[4881]: I0121 10:58:39.168051 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 10:58:39 crc kubenswrapper[4881]: E0121 10:58:39.168609 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:58:39.668585562 +0000 UTC m=+106.928542021 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:58:39 crc kubenswrapper[4881]: I0121 10:58:39.168833 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n98tz\" (UID: \"ec369bed-0b60-48b0-9de0-fcfd6ca7776d\") " pod="openshift-image-registry/image-registry-697d97f7c8-n98tz" Jan 21 10:58:39 crc kubenswrapper[4881]: E0121 10:58:39.169432 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:58:39.669415672 +0000 UTC m=+106.929372141 (durationBeforeRetry 500ms). 
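
[editor's note] Every mount and unmount attempt in this stretch fails with "driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers": the hostpath-provisioner plugin pod (csi-hostpathplugin-42f9f, whose container start is logged earlier) has not yet registered with kubelet's plugin watcher. One way to see what kubelet has registered is the node's CSINode object, which mirrors the per-node driver registrations. A client-go sketch follows; the kubeconfig path is an assumption, and the node name "crc" is taken from the log prefix.

    package main

    import (
    	"context"
    	"fmt"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	// Assumed kubeconfig location; adjust for your environment.
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    	if err != nil {
    		panic(err)
    	}
    	client := kubernetes.NewForConfigOrDie(cfg)

    	// The CSINode object lists the drivers registered with kubelet on
    	// this node ("crc" is the node name in the log prefix).
    	csiNode, err := client.StorageV1().CSINodes().Get(context.TODO(), "crc", metav1.GetOptions{})
    	if err != nil {
    		panic(err)
    	}
    	for _, d := range csiNode.Spec.Drivers {
    		fmt.Println("registered CSI driver:", d.Name)
    	}
    	// Until kubevirt.io.hostpath-provisioner appears here, MountDevice and
    	// TearDownAt keep failing as in the log and retrying on backoff.
    }
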
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n98tz" (UID: "ec369bed-0b60-48b0-9de0-fcfd6ca7776d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:58:39 crc kubenswrapper[4881]: I0121 10:58:39.291048 4881 patch_prober.go:28] interesting pod/router-default-5444994796-v7wnh container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 10:58:39 crc kubenswrapper[4881]: [-]has-synced failed: reason withheld Jan 21 10:58:39 crc kubenswrapper[4881]: [+]process-running ok Jan 21 10:58:39 crc kubenswrapper[4881]: healthz check failed Jan 21 10:58:39 crc kubenswrapper[4881]: I0121 10:58:39.291407 4881 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-v7wnh" podUID="52d94566-7844-4414-bf48-9122c2207dd6" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 10:58:39 crc kubenswrapper[4881]: I0121 10:58:39.297002 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 10:58:39 crc kubenswrapper[4881]: E0121 10:58:39.297424 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:58:39.797405086 +0000 UTC m=+107.057361555 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:58:39 crc kubenswrapper[4881]: I0121 10:58:39.364029 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-8kvzw" podStartSLOduration=84.364010482 podStartE2EDuration="1m24.364010482s" podCreationTimestamp="2026-01-21 10:57:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:58:39.363316554 +0000 UTC m=+106.623273023" watchObservedRunningTime="2026-01-21 10:58:39.364010482 +0000 UTC m=+106.623966951" Jan 21 10:58:39 crc kubenswrapper[4881]: I0121 10:58:39.393562 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-qpdx4" event={"ID":"86ac2c23-01e6-4a22-a79d-77a3269fb5a0","Type":"ContainerStarted","Data":"8b4cad29766b29072bceaa5b7cc7191e97f805f94716f7bca9f31541f11c8cd4"} Jan 21 10:58:39 crc kubenswrapper[4881]: I0121 10:58:39.404159 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-znm6j" event={"ID":"b7e58845-f0a1-4320-b879-0765b6d57988","Type":"ContainerStarted","Data":"185130ada6207293ab7deb8a704c133c5228de3f527054bde6ae9d2ee08f16c1"} Jan 21 10:58:39 crc kubenswrapper[4881]: I0121 10:58:39.405420 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-dtv4t" event={"ID":"3552adbd-011f-4552-9e69-233b92c554c8","Type":"ContainerStarted","Data":"cb037e397d3c2f6ee7a3ec761c68c5d0ce2c3eb79704e242f2c5186055512710"} Jan 21 10:58:39 crc kubenswrapper[4881]: I0121 10:58:39.407412 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-7cs59" event={"ID":"1e960def-7bc7-4041-94dc-8ccea63f8bb8","Type":"ContainerStarted","Data":"b44ff908b2f8dfd966c3bd6b0812f139b916876a910951331a7b4443a147daf2"} Jan 21 10:58:39 crc kubenswrapper[4881]: I0121 10:58:39.410197 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-cfw2n" event={"ID":"c510b795-d750-4f94-bc9a-88ba625bd556","Type":"ContainerStarted","Data":"0c2e39fe292a484df8ff829c890024ee05cda0266b24499047cb35459ba7adc5"} Jan 21 10:58:39 crc kubenswrapper[4881]: I0121 10:58:39.423256 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-w5l6w" event={"ID":"0ceebcd8-2c53-4e4d-97bb-5d81008a6442","Type":"ContainerStarted","Data":"0703ffdd4428db1abfe60d34b9c956929891c819f94b8271bf04da222464da4b"} Jan 21 10:58:39 crc kubenswrapper[4881]: I0121 10:58:39.429930 4881 generic.go:334] "Generic (PLEG): container finished" podID="146cbde4-d891-47d8-a09f-d4f4b50bfe6d" containerID="baaf7152a0f657da62f5788c917e44ec25680b9897914479e48a0d080a327e47" exitCode=0 Jan 21 10:58:39 crc kubenswrapper[4881]: I0121 10:58:39.430065 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-769kz" 
event={"ID":"146cbde4-d891-47d8-a09f-d4f4b50bfe6d","Type":"ContainerDied","Data":"baaf7152a0f657da62f5788c917e44ec25680b9897914479e48a0d080a327e47"} Jan 21 10:58:39 crc kubenswrapper[4881]: I0121 10:58:39.432641 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n98tz\" (UID: \"ec369bed-0b60-48b0-9de0-fcfd6ca7776d\") " pod="openshift-image-registry/image-registry-697d97f7c8-n98tz" Jan 21 10:58:39 crc kubenswrapper[4881]: E0121 10:58:39.432987 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:58:39.932973995 +0000 UTC m=+107.192930464 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n98tz" (UID: "ec369bed-0b60-48b0-9de0-fcfd6ca7776d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:58:39 crc kubenswrapper[4881]: I0121 10:58:39.436173 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zkkpc" event={"ID":"0007a585-5b17-44bd-89b8-2d1d233a03d4","Type":"ContainerStarted","Data":"1fea7f326694ba0a7adc23fea091401d4b3aa9790d8492a890004e28df288843"} Jan 21 10:58:39 crc kubenswrapper[4881]: I0121 10:58:39.436321 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zkkpc" Jan 21 10:58:39 crc kubenswrapper[4881]: I0121 10:58:39.442233 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-qxzd9" podStartSLOduration=84.442223843 podStartE2EDuration="1m24.442223843s" podCreationTimestamp="2026-01-21 10:57:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:58:39.399235487 +0000 UTC m=+106.659191996" watchObservedRunningTime="2026-01-21 10:58:39.442223843 +0000 UTC m=+106.702180312" Jan 21 10:58:39 crc kubenswrapper[4881]: I0121 10:58:39.443000 4881 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-whh46 container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.33:6443/healthz\": dial tcp 10.217.0.33:6443: connect: connection refused" start-of-body= Jan 21 10:58:39 crc kubenswrapper[4881]: I0121 10:58:39.443104 4881 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-whh46" podUID="2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.33:6443/healthz\": dial tcp 10.217.0.33:6443: connect: connection refused" Jan 21 10:58:39 crc kubenswrapper[4881]: I0121 10:58:39.443886 4881 patch_prober.go:28] interesting pod/downloads-7954f5f757-wrqpb container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.27:8080/\": dial tcp 
10.217.0.27:8080: connect: connection refused" start-of-body= Jan 21 10:58:39 crc kubenswrapper[4881]: I0121 10:58:39.443982 4881 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-wrqpb" podUID="628cb8f4-a587-498f-9398-403e0af5eec4" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused" Jan 21 10:58:39 crc kubenswrapper[4881]: I0121 10:58:39.476050 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-pjbh7" podStartSLOduration=84.476028803 podStartE2EDuration="1m24.476028803s" podCreationTimestamp="2026-01-21 10:57:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:58:39.473718886 +0000 UTC m=+106.733675355" watchObservedRunningTime="2026-01-21 10:58:39.476028803 +0000 UTC m=+106.735985272" Jan 21 10:58:39 crc kubenswrapper[4881]: I0121 10:58:39.534344 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 10:58:39 crc kubenswrapper[4881]: E0121 10:58:39.537889 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:58:40.037868592 +0000 UTC m=+107.297825061 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:58:39 crc kubenswrapper[4881]: I0121 10:58:39.638484 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n98tz\" (UID: \"ec369bed-0b60-48b0-9de0-fcfd6ca7776d\") " pod="openshift-image-registry/image-registry-697d97f7c8-n98tz" Jan 21 10:58:39 crc kubenswrapper[4881]: E0121 10:58:39.691883 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:58:40.191857074 +0000 UTC m=+107.451813543 (durationBeforeRetry 500ms). 
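
[editor's note] The prober output interleaved above ("Get \"http://10.217.0.27:8080/\": dial tcp ...: connect: connection refused") is an ordinary HTTP GET whose transport error is reported as a readiness failure while the container is still coming up. A standalone Go approximation of such an HTTP readiness check; the URL is copied from the downloads-7954f5f757-wrqpb record, and the success criterion (2xx/3xx) follows kubelet's documented HTTP-probe behavior, though the sketch is simplified relative to the real prober.

    package main

    import (
    	"fmt"
    	"net/http"
    	"time"
    )

    // probe performs one HTTP readiness check, the way the log's prober
    // messages read: any transport error or non-2xx/3xx status is a failure.
    func probe(url string) (ok bool, detail string) {
    	client := &http.Client{Timeout: 1 * time.Second}
    	resp, err := client.Get(url)
    	if err != nil {
    		return false, err.Error() // e.g. "dial tcp ...: connect: connection refused"
    	}
    	defer resp.Body.Close()
    	if resp.StatusCode >= 200 && resp.StatusCode < 400 {
    		return true, resp.Status
    	}
    	return false, fmt.Sprintf("HTTP probe failed with statuscode: %d", resp.StatusCode)
    }

    func main() {
    	// Endpoint copied from the downloads pod's readiness probe above.
    	ok, detail := probe("http://10.217.0.27:8080/")
    	fmt.Printf("probeResult=%v output=%q\n", ok, detail)
    }
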
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n98tz" (UID: "ec369bed-0b60-48b0-9de0-fcfd6ca7776d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:58:39 crc kubenswrapper[4881]: I0121 10:58:39.708304 4881 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-zkkpc container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.21:8443/healthz\": dial tcp 10.217.0.21:8443: connect: connection refused" start-of-body= Jan 21 10:58:39 crc kubenswrapper[4881]: I0121 10:58:39.708427 4881 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zkkpc" podUID="0007a585-5b17-44bd-89b8-2d1d233a03d4" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.21:8443/healthz\": dial tcp 10.217.0.21:8443: connect: connection refused" Jan 21 10:58:39 crc kubenswrapper[4881]: I0121 10:58:39.781704 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 10:58:39 crc kubenswrapper[4881]: E0121 10:58:39.782482 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:58:40.282450019 +0000 UTC m=+107.542406488 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:58:39 crc kubenswrapper[4881]: I0121 10:58:39.895611 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n98tz\" (UID: \"ec369bed-0b60-48b0-9de0-fcfd6ca7776d\") " pod="openshift-image-registry/image-registry-697d97f7c8-n98tz" Jan 21 10:58:39 crc kubenswrapper[4881]: E0121 10:58:39.896193 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:58:40.396175342 +0000 UTC m=+107.656131811 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n98tz" (UID: "ec369bed-0b60-48b0-9de0-fcfd6ca7776d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:58:39 crc kubenswrapper[4881]: I0121 10:58:39.896863 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-h97cd" podStartSLOduration=84.896840928 podStartE2EDuration="1m24.896840928s" podCreationTimestamp="2026-01-21 10:57:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:58:39.697515873 +0000 UTC m=+106.957472342" watchObservedRunningTime="2026-01-21 10:58:39.896840928 +0000 UTC m=+107.156797397" Jan 21 10:58:39 crc kubenswrapper[4881]: I0121 10:58:39.898593 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-vp6qk" podStartSLOduration=84.898585431 podStartE2EDuration="1m24.898585431s" podCreationTimestamp="2026-01-21 10:57:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:58:39.519521391 +0000 UTC m=+106.779477870" watchObservedRunningTime="2026-01-21 10:58:39.898585431 +0000 UTC m=+107.158541900" Jan 21 10:58:39 crc kubenswrapper[4881]: I0121 10:58:39.951468 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-w5l6w" podStartSLOduration=84.951450449 podStartE2EDuration="1m24.951450449s" podCreationTimestamp="2026-01-21 10:57:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:58:39.949286747 +0000 UTC m=+107.209243216" watchObservedRunningTime="2026-01-21 10:58:39.951450449 +0000 UTC m=+107.211406918" Jan 21 10:58:40 crc kubenswrapper[4881]: I0121 10:58:40.002036 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 10:58:40 crc kubenswrapper[4881]: E0121 10:58:40.002391 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:58:40.502375281 +0000 UTC m=+107.762331760 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:58:40 crc kubenswrapper[4881]: I0121 10:58:40.021996 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zkkpc" podStartSLOduration=84.021966362 podStartE2EDuration="1m24.021966362s" podCreationTimestamp="2026-01-21 10:57:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:58:39.979480258 +0000 UTC m=+107.239436717" watchObservedRunningTime="2026-01-21 10:58:40.021966362 +0000 UTC m=+107.281922831" Jan 21 10:58:40 crc kubenswrapper[4881]: I0121 10:58:40.022610 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-qpdx4" podStartSLOduration=85.022602807 podStartE2EDuration="1m25.022602807s" podCreationTimestamp="2026-01-21 10:57:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:58:40.004702027 +0000 UTC m=+107.264658526" watchObservedRunningTime="2026-01-21 10:58:40.022602807 +0000 UTC m=+107.282559276" Jan 21 10:58:40 crc kubenswrapper[4881]: I0121 10:58:40.112578 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-7cs59" podStartSLOduration=85.112546716 podStartE2EDuration="1m25.112546716s" podCreationTimestamp="2026-01-21 10:57:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:58:40.104142359 +0000 UTC m=+107.364098828" watchObservedRunningTime="2026-01-21 10:58:40.112546716 +0000 UTC m=+107.372503185" Jan 21 10:58:40 crc kubenswrapper[4881]: I0121 10:58:40.116033 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n98tz\" (UID: \"ec369bed-0b60-48b0-9de0-fcfd6ca7776d\") " pod="openshift-image-registry/image-registry-697d97f7c8-n98tz" Jan 21 10:58:40 crc kubenswrapper[4881]: E0121 10:58:40.116532 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:58:40.616514124 +0000 UTC m=+107.876470593 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n98tz" (UID: "ec369bed-0b60-48b0-9de0-fcfd6ca7776d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:58:40 crc kubenswrapper[4881]: I0121 10:58:40.122801 4881 patch_prober.go:28] interesting pod/router-default-5444994796-v7wnh container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 10:58:40 crc kubenswrapper[4881]: [-]has-synced failed: reason withheld Jan 21 10:58:40 crc kubenswrapper[4881]: [+]process-running ok Jan 21 10:58:40 crc kubenswrapper[4881]: healthz check failed Jan 21 10:58:40 crc kubenswrapper[4881]: I0121 10:58:40.123024 4881 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-v7wnh" podUID="52d94566-7844-4414-bf48-9122c2207dd6" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 10:58:40 crc kubenswrapper[4881]: I0121 10:58:40.159313 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-cfw2n" podStartSLOduration=85.159294455 podStartE2EDuration="1m25.159294455s" podCreationTimestamp="2026-01-21 10:57:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:58:40.141000406 +0000 UTC m=+107.400956875" watchObservedRunningTime="2026-01-21 10:58:40.159294455 +0000 UTC m=+107.419250924" Jan 21 10:58:40 crc kubenswrapper[4881]: I0121 10:58:40.216687 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 10:58:40 crc kubenswrapper[4881]: E0121 10:58:40.217571 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:58:40.717525495 +0000 UTC m=+107.977481964 (durationBeforeRetry 500ms). 
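
[editor's note] The router-default startup probe above fails with statuscode 500 plus per-check lines ("[-]backend-http failed", "[-]has-synced failed", "[+]process-running ok", "healthz check failed"): the probed healthz endpoint aggregates named sub-checks and returns 500 if any one fails. A minimal net/http sketch of that aggregation pattern, with check names copied from the log; this is an illustration of the pattern, not the OpenShift router's implementation, and unlike the real endpoint it does not print checks in a stable order.

    package main

    import (
    	"fmt"
    	"log"
    	"net/http"
    )

    // healthz aggregates named checks the way the probe output reads: each
    // check prints "[+]name ok" or "[-]name failed", and any failure turns
    // the whole endpoint into a 500.
    func healthz(checks map[string]func() error) http.HandlerFunc {
    	return func(w http.ResponseWriter, r *http.Request) {
    		failed := false
    		body := ""
    		for name, check := range checks {
    			if err := check(); err != nil {
    				failed = true
    				body += fmt.Sprintf("[-]%s failed: reason withheld\n", name)
    			} else {
    				body += fmt.Sprintf("[+]%s ok\n", name)
    			}
    		}
    		if failed {
    			w.WriteHeader(http.StatusInternalServerError) // probe sees 500
    			body += "healthz check failed\n"
    		}
    		fmt.Fprint(w, body)
    	}
    }

    func main() {
    	// Failing stubs stand in for checks that pass once the router syncs.
    	http.HandleFunc("/healthz", healthz(map[string]func() error{
    		"backend-http":    func() error { return fmt.Errorf("not ready") },
    		"has-synced":      func() error { return fmt.Errorf("not ready") },
    		"process-running": func() error { return nil },
    	}))
    	log.Fatal(http.ListenAndServe(":8080", nil))
    }
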
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:58:40 crc kubenswrapper[4881]: I0121 10:58:40.349975 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n98tz\" (UID: \"ec369bed-0b60-48b0-9de0-fcfd6ca7776d\") " pod="openshift-image-registry/image-registry-697d97f7c8-n98tz" Jan 21 10:58:40 crc kubenswrapper[4881]: E0121 10:58:40.350604 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:58:40.850586303 +0000 UTC m=+108.110542772 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n98tz" (UID: "ec369bed-0b60-48b0-9de0-fcfd6ca7776d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:58:40 crc kubenswrapper[4881]: I0121 10:58:40.452208 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 10:58:40 crc kubenswrapper[4881]: E0121 10:58:40.452531 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:58:40.952500876 +0000 UTC m=+108.212457345 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 10:58:40 crc kubenswrapper[4881]: I0121 10:58:40.452739    4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n98tz\" (UID: \"ec369bed-0b60-48b0-9de0-fcfd6ca7776d\") " pod="openshift-image-registry/image-registry-697d97f7c8-n98tz"
Jan 21 10:58:40 crc kubenswrapper[4881]: E0121 10:58:40.453297    4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:58:40.953279386 +0000 UTC m=+108.213235855 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n98tz" (UID: "ec369bed-0b60-48b0-9de0-fcfd6ca7776d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 10:58:40 crc kubenswrapper[4881]: I0121 10:58:40.512040    4881 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-whh46 container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.33:6443/healthz\": dial tcp 10.217.0.33:6443: connect: connection refused" start-of-body=
Jan 21 10:58:40 crc kubenswrapper[4881]: I0121 10:58:40.512099    4881 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-whh46" podUID="2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.33:6443/healthz\": dial tcp 10.217.0.33:6443: connect: connection refused"
Jan 21 10:58:40 crc kubenswrapper[4881]: I0121 10:58:40.517038    4881 patch_prober.go:28] interesting pod/downloads-7954f5f757-wrqpb container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused" start-of-body=
Jan 21 10:58:40 crc kubenswrapper[4881]: I0121 10:58:40.517114    4881 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-wrqpb" podUID="628cb8f4-a587-498f-9398-403e0af5eec4" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused"
Jan 21 10:58:40 crc kubenswrapper[4881]: I0121 10:58:40.517218    4881 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-zkkpc container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.21:8443/healthz\": dial tcp 10.217.0.21:8443: connect: connection refused" start-of-body=
Jan 21 10:58:40 crc kubenswrapper[4881]: I0121 10:58:40.517237    4881 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zkkpc" podUID="0007a585-5b17-44bd-89b8-2d1d233a03d4" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.21:8443/healthz\": dial tcp 10.217.0.21:8443: connect: connection refused"
Jan 21 10:58:40 crc kubenswrapper[4881]: I0121 10:58:40.555203    4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 10:58:40 crc kubenswrapper[4881]: E0121 10:58:40.555614    4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:58:41.055574018 +0000 UTC m=+108.315530527 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 10:58:40 crc kubenswrapper[4881]: I0121 10:58:40.556385    4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n98tz\" (UID: \"ec369bed-0b60-48b0-9de0-fcfd6ca7776d\") " pod="openshift-image-registry/image-registry-697d97f7c8-n98tz"
Jan 21 10:58:40 crc kubenswrapper[4881]: E0121 10:58:40.561673    4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:58:41.061645076 +0000 UTC m=+108.321601755 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n98tz" (UID: "ec369bed-0b60-48b0-9de0-fcfd6ca7776d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 10:58:40 crc kubenswrapper[4881]: I0121 10:58:40.657820    4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 10:58:40 crc kubenswrapper[4881]: E0121 10:58:40.657964    4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:58:41.157924801 +0000 UTC m=+108.417881270 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 10:58:40 crc kubenswrapper[4881]: I0121 10:58:40.658451    4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n98tz\" (UID: \"ec369bed-0b60-48b0-9de0-fcfd6ca7776d\") " pod="openshift-image-registry/image-registry-697d97f7c8-n98tz"
Jan 21 10:58:40 crc kubenswrapper[4881]: E0121 10:58:40.659007    4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:58:41.158988208 +0000 UTC m=+108.418944677 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n98tz" (UID: "ec369bed-0b60-48b0-9de0-fcfd6ca7776d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 10:58:40 crc kubenswrapper[4881]: I0121 10:58:40.860265    4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 10:58:40 crc kubenswrapper[4881]: E0121 10:58:40.860422    4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:58:41.360398224 +0000 UTC m=+108.620354693 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 10:58:40 crc kubenswrapper[4881]: I0121 10:58:40.860884    4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n98tz\" (UID: \"ec369bed-0b60-48b0-9de0-fcfd6ca7776d\") " pod="openshift-image-registry/image-registry-697d97f7c8-n98tz"
Jan 21 10:58:40 crc kubenswrapper[4881]: E0121 10:58:40.861511    4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:58:41.361483761 +0000 UTC m=+108.621440270 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n98tz" (UID: "ec369bed-0b60-48b0-9de0-fcfd6ca7776d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 10:58:41 crc kubenswrapper[4881]: I0121 10:58:41.031446    4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 10:58:41 crc kubenswrapper[4881]: E0121 10:58:41.032009    4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:58:41.531982569 +0000 UTC m=+108.791939038 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 10:58:41 crc kubenswrapper[4881]: I0121 10:58:41.135537    4881 patch_prober.go:28] interesting pod/router-default-5444994796-v7wnh container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 21 10:58:41 crc kubenswrapper[4881]: [-]has-synced failed: reason withheld
Jan 21 10:58:41 crc kubenswrapper[4881]: [+]process-running ok
Jan 21 10:58:41 crc kubenswrapper[4881]: healthz check failed
Jan 21 10:58:41 crc kubenswrapper[4881]: I0121 10:58:41.135613    4881 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-v7wnh" podUID="52d94566-7844-4414-bf48-9122c2207dd6" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 21 10:58:41 crc kubenswrapper[4881]: I0121 10:58:41.136873    4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n98tz\" (UID: \"ec369bed-0b60-48b0-9de0-fcfd6ca7776d\") " pod="openshift-image-registry/image-registry-697d97f7c8-n98tz"
Jan 21 10:58:41 crc kubenswrapper[4881]: E0121 10:58:41.137279    4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:58:41.637264954 +0000 UTC m=+108.897221423 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n98tz" (UID: "ec369bed-0b60-48b0-9de0-fcfd6ca7776d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 10:58:41 crc kubenswrapper[4881]: I0121 10:58:41.237829    4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 10:58:41 crc kubenswrapper[4881]: E0121 10:58:41.238202    4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:58:41.738185203 +0000 UTC m=+108.998141672 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 10:58:41 crc kubenswrapper[4881]: I0121 10:58:41.339514    4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n98tz\" (UID: \"ec369bed-0b60-48b0-9de0-fcfd6ca7776d\") " pod="openshift-image-registry/image-registry-697d97f7c8-n98tz"
Jan 21 10:58:41 crc kubenswrapper[4881]: E0121 10:58:41.340947    4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:58:41.840936557 +0000 UTC m=+109.100893026 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n98tz" (UID: "ec369bed-0b60-48b0-9de0-fcfd6ca7776d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 10:58:41 crc kubenswrapper[4881]: I0121 10:58:41.443764    4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 10:58:41 crc kubenswrapper[4881]: E0121 10:58:41.444296    4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:58:41.944273894 +0000 UTC m=+109.204230363 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 10:58:41 crc kubenswrapper[4881]: I0121 10:58:41.555051    4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n98tz\" (UID: \"ec369bed-0b60-48b0-9de0-fcfd6ca7776d\") " pod="openshift-image-registry/image-registry-697d97f7c8-n98tz"
Jan 21 10:58:41 crc kubenswrapper[4881]: E0121 10:58:41.555499    4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:58:42.055472806 +0000 UTC m=+109.315429275 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n98tz" (UID: "ec369bed-0b60-48b0-9de0-fcfd6ca7776d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 10:58:41 crc kubenswrapper[4881]: I0121 10:58:41.564079    4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-72bt6" event={"ID":"2957ef21-9f30-4c81-8c6a-4a7f9d7315db","Type":"ContainerStarted","Data":"274060c852c28f0aa96e0ad4d532d1dcb9096dd3bcdb95eb1c0a740452bc99e2"}
Jan 21 10:58:41 crc kubenswrapper[4881]: I0121 10:58:41.580027    4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-vwqwb" event={"ID":"7470431a-2a31-41ae-b021-510ae5e3c505","Type":"ContainerStarted","Data":"bbc24598d39fe0e64db70dc4aacf9d02d6d8b03d34f37bffa5a9aa3ec6f35658"}
Jan 21 10:58:41 crc kubenswrapper[4881]: I0121 10:58:41.599646    4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-phm68" event={"ID":"b745a377-4575-45fb-a206-ea4754ecff76","Type":"ContainerStarted","Data":"b41967d3bdb4370227d82839dc1862e1f74b1c61b2e573915f3a2a8ab7402fa8"}
Jan 21 10:58:41 crc kubenswrapper[4881]: I0121 10:58:41.617442    4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-n2h44" event={"ID":"863eda44-9a47-42de-b2de-49234ac647f0","Type":"ContainerStarted","Data":"7f1e8826c38ff99f84057f36a7902286122626aa449227eefad6555a07039a2e"}
Jan 21 10:58:41 crc kubenswrapper[4881]: I0121 10:58:41.672266    4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 10:58:41 crc kubenswrapper[4881]: E0121 10:58:41.672422    4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:58:42.172390177 +0000 UTC m=+109.432346646 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 10:58:41 crc kubenswrapper[4881]: I0121 10:58:41.672768    4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n98tz\" (UID: \"ec369bed-0b60-48b0-9de0-fcfd6ca7776d\") " pod="openshift-image-registry/image-registry-697d97f7c8-n98tz"
Jan 21 10:58:41 crc kubenswrapper[4881]: I0121 10:58:41.673007    4881 generic.go:334] "Generic (PLEG): container finished" podID="303bdbe6-3bb4-4ace-86b1-f489c795580f" containerID="2f6a1a1e4268540ee682b58127eb41126b116ba4e30186b584ee325d0961ebec" exitCode=0
Jan 21 10:58:41 crc kubenswrapper[4881]: I0121 10:58:41.673082    4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483205-527gk" event={"ID":"303bdbe6-3bb4-4ace-86b1-f489c795580f","Type":"ContainerDied","Data":"2f6a1a1e4268540ee682b58127eb41126b116ba4e30186b584ee325d0961ebec"}
Jan 21 10:58:41 crc kubenswrapper[4881]: I0121 10:58:41.673154    4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-n2h44" podStartSLOduration=86.673143306 podStartE2EDuration="1m26.673143306s" podCreationTimestamp="2026-01-21 10:57:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:58:41.672437859 +0000 UTC m=+108.932394318" watchObservedRunningTime="2026-01-21 10:58:41.673143306 +0000 UTC m=+108.933099775"
Jan 21 10:58:41 crc kubenswrapper[4881]: E0121 10:58:41.673646    4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:58:42.173633348 +0000 UTC m=+109.433589817 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n98tz" (UID: "ec369bed-0b60-48b0-9de0-fcfd6ca7776d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 10:58:41 crc kubenswrapper[4881]: I0121 10:58:41.685107    4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-xmq82" event={"ID":"e94f1e92-21b2-44c9-b499-b879850c288d","Type":"ContainerStarted","Data":"814fc7d7b657d30002e0169875973f3d65029d02d56ac8702f4d08fa12940079"}
Jan 21 10:58:41 crc kubenswrapper[4881]: I0121 10:58:41.686067    4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-xmq82"
Jan 21 10:58:41 crc kubenswrapper[4881]: I0121 10:58:41.689384    4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-hfc8p" event={"ID":"bc38f0b5-944c-40ae-aed0-50ca39ea2627","Type":"ContainerStarted","Data":"6503778a0e40497db90ff5d56281380f9d5aa7132b164aeb728970c4ece7f655"}
Jan 21 10:58:41 crc kubenswrapper[4881]: I0121 10:58:41.750268    4881 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-xmq82 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.36:8080/healthz\": dial tcp 10.217.0.36:8080: connect: connection refused" start-of-body=
Jan 21 10:58:41 crc kubenswrapper[4881]: I0121 10:58:41.750318    4881 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-xmq82" podUID="e94f1e92-21b2-44c9-b499-b879850c288d" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.36:8080/healthz\": dial tcp 10.217.0.36:8080: connect: connection refused"
Jan 21 10:58:41 crc kubenswrapper[4881]: I0121 10:58:41.755747    4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-hqjnl" event={"ID":"5c8e7010-8b57-47ed-9270-417650a2a7c5","Type":"ContainerStarted","Data":"2b0d06dde0904501ce111fd57e37adca846e4da2eb029ea2a8db58ed1417d15d"}
Jan 21 10:58:41 crc kubenswrapper[4881]: I0121 10:58:41.757690    4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7gdkq" event={"ID":"c56c4a24-e6c6-4aa0-8a62-94d3179dfe54","Type":"ContainerStarted","Data":"61ebc8fd525d43c2fed8d3c5eb147049c107d40ccc8ed9533e7103a63058c427"}
Jan 21 10:58:41 crc kubenswrapper[4881]: I0121 10:58:41.758417    4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7gdkq"
Jan 21 10:58:41 crc kubenswrapper[4881]: I0121 10:58:41.763383    4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-kl9j4" event={"ID":"86acb693-c0d9-41f4-b33c-4716963ce268","Type":"ContainerStarted","Data":"35c9fd04d6545158e671724659b662ae119dc9bf1e2056a673d83d4e2c182473"}
Jan 21 10:58:41 crc kubenswrapper[4881]: I0121 10:58:41.765169    4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-llgd7" event={"ID":"5f2944a8-8d91-4461-aa64-8908ca17f59e","Type":"ContainerStarted","Data":"57795c377416a2b444b6643cd056439aca4bebab0c719d95342ddf54bfc67891"}
event={"ID":"5f2944a8-8d91-4461-aa64-8908ca17f59e","Type":"ContainerStarted","Data":"57795c377416a2b444b6643cd056439aca4bebab0c719d95342ddf54bfc67891"} Jan 21 10:58:41 crc kubenswrapper[4881]: I0121 10:58:41.773984 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 10:58:41 crc kubenswrapper[4881]: E0121 10:58:41.775560 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:58:42.275540711 +0000 UTC m=+109.535497180 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:58:41 crc kubenswrapper[4881]: I0121 10:58:41.833806 4881 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-7gdkq container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.35:8443/healthz\": dial tcp 10.217.0.35:8443: connect: connection refused" start-of-body= Jan 21 10:58:41 crc kubenswrapper[4881]: I0121 10:58:41.834165 4881 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7gdkq" podUID="c56c4a24-e6c6-4aa0-8a62-94d3179dfe54" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.35:8443/healthz\": dial tcp 10.217.0.35:8443: connect: connection refused" Jan 21 10:58:41 crc kubenswrapper[4881]: I0121 10:58:41.873130 4881 patch_prober.go:28] interesting pod/downloads-7954f5f757-wrqpb container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused" start-of-body= Jan 21 10:58:41 crc kubenswrapper[4881]: I0121 10:58:41.873183 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-wrqpb" podUID="628cb8f4-a587-498f-9398-403e0af5eec4" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused" Jan 21 10:58:41 crc kubenswrapper[4881]: I0121 10:58:41.873140 4881 patch_prober.go:28] interesting pod/downloads-7954f5f757-wrqpb container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused" start-of-body= Jan 21 10:58:41 crc kubenswrapper[4881]: I0121 10:58:41.873233 4881 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-wrqpb" podUID="628cb8f4-a587-498f-9398-403e0af5eec4" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection 
refused" Jan 21 10:58:41 crc kubenswrapper[4881]: I0121 10:58:41.874922 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n98tz\" (UID: \"ec369bed-0b60-48b0-9de0-fcfd6ca7776d\") " pod="openshift-image-registry/image-registry-697d97f7c8-n98tz" Jan 21 10:58:41 crc kubenswrapper[4881]: E0121 10:58:41.875276 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:58:42.37526174 +0000 UTC m=+109.635218209 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n98tz" (UID: "ec369bed-0b60-48b0-9de0-fcfd6ca7776d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:58:41 crc kubenswrapper[4881]: I0121 10:58:41.875265 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-xmq82" podStartSLOduration=85.87525475 podStartE2EDuration="1m25.87525475s" podCreationTimestamp="2026-01-21 10:57:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:58:41.833572176 +0000 UTC m=+109.093528645" watchObservedRunningTime="2026-01-21 10:58:41.87525475 +0000 UTC m=+109.135211219" Jan 21 10:58:41 crc kubenswrapper[4881]: I0121 10:58:41.875581 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-hfc8p" podStartSLOduration=85.875567267 podStartE2EDuration="1m25.875567267s" podCreationTimestamp="2026-01-21 10:57:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:58:41.875381593 +0000 UTC m=+109.135338062" watchObservedRunningTime="2026-01-21 10:58:41.875567267 +0000 UTC m=+109.135523736" Jan 21 10:58:41 crc kubenswrapper[4881]: I0121 10:58:41.902855 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-qxzd9" Jan 21 10:58:41 crc kubenswrapper[4881]: I0121 10:58:41.902908 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-qxzd9" Jan 21 10:58:41 crc kubenswrapper[4881]: I0121 10:58:41.908070 4881 patch_prober.go:28] interesting pod/console-f9d7485db-qxzd9 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.10:8443/health\": dial tcp 10.217.0.10:8443: connect: connection refused" start-of-body= Jan 21 10:58:41 crc kubenswrapper[4881]: I0121 10:58:41.908130 4881 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-qxzd9" podUID="bb8fc8b3-9818-40e2-afb2-860e2d1efae1" containerName="console" probeResult="failure" output="Get \"https://10.217.0.10:8443/health\": dial tcp 10.217.0.10:8443: connect: connection refused" Jan 21 10:58:41 crc 
kubenswrapper[4881]: I0121 10:58:41.976453 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 10:58:41 crc kubenswrapper[4881]: E0121 10:58:41.976950 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:58:42.476933188 +0000 UTC m=+109.736889657 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:58:41 crc kubenswrapper[4881]: I0121 10:58:41.977082 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n98tz\" (UID: \"ec369bed-0b60-48b0-9de0-fcfd6ca7776d\") " pod="openshift-image-registry/image-registry-697d97f7c8-n98tz" Jan 21 10:58:41 crc kubenswrapper[4881]: E0121 10:58:41.978887 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:58:42.478878705 +0000 UTC m=+109.738835174 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n98tz" (UID: "ec369bed-0b60-48b0-9de0-fcfd6ca7776d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:58:42 crc kubenswrapper[4881]: I0121 10:58:42.078177 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 10:58:42 crc kubenswrapper[4881]: E0121 10:58:42.078505 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:58:42.578490641 +0000 UTC m=+109.838447110 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:58:42 crc kubenswrapper[4881]: I0121 10:58:42.087470 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-llgd7" podStartSLOduration=86.087448402 podStartE2EDuration="1m26.087448402s" podCreationTimestamp="2026-01-21 10:57:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:58:41.933212034 +0000 UTC m=+109.193168503" watchObservedRunningTime="2026-01-21 10:58:42.087448402 +0000 UTC m=+109.347404871" Jan 21 10:58:42 crc kubenswrapper[4881]: I0121 10:58:42.132276 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-hqjnl" podStartSLOduration=87.132257153 podStartE2EDuration="1m27.132257153s" podCreationTimestamp="2026-01-21 10:57:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:58:42.086092279 +0000 UTC m=+109.346048748" watchObservedRunningTime="2026-01-21 10:58:42.132257153 +0000 UTC m=+109.392213622" Jan 21 10:58:42 crc kubenswrapper[4881]: I0121 10:58:42.134397 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-v7wnh" Jan 21 10:58:42 crc kubenswrapper[4881]: I0121 10:58:42.146001 4881 patch_prober.go:28] interesting pod/router-default-5444994796-v7wnh container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 10:58:42 crc kubenswrapper[4881]: [-]has-synced failed: reason withheld Jan 21 10:58:42 crc kubenswrapper[4881]: [+]process-running ok Jan 21 10:58:42 crc kubenswrapper[4881]: healthz check failed Jan 21 10:58:42 crc kubenswrapper[4881]: I0121 10:58:42.146060 4881 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-v7wnh" podUID="52d94566-7844-4414-bf48-9122c2207dd6" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 10:58:42 crc kubenswrapper[4881]: I0121 10:58:42.181295 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7gdkq" podStartSLOduration=86.181277637 podStartE2EDuration="1m26.181277637s" podCreationTimestamp="2026-01-21 10:57:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:58:42.134264502 +0000 UTC m=+109.394220971" watchObservedRunningTime="2026-01-21 10:58:42.181277637 +0000 UTC m=+109.441234106" Jan 21 10:58:42 crc kubenswrapper[4881]: I0121 10:58:42.246425 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n98tz\" (UID: \"ec369bed-0b60-48b0-9de0-fcfd6ca7776d\") " pod="openshift-image-registry/image-registry-697d97f7c8-n98tz" Jan 21 10:58:42 crc kubenswrapper[4881]: E0121 10:58:42.246757 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:58:42.746742045 +0000 UTC m=+110.006698514 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n98tz" (UID: "ec369bed-0b60-48b0-9de0-fcfd6ca7776d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:58:42 crc kubenswrapper[4881]: I0121 10:58:42.348287 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 10:58:42 crc kubenswrapper[4881]: E0121 10:58:42.348422 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:58:42.848399811 +0000 UTC m=+110.108356280 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:58:42 crc kubenswrapper[4881]: I0121 10:58:42.348517 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n98tz\" (UID: \"ec369bed-0b60-48b0-9de0-fcfd6ca7776d\") " pod="openshift-image-registry/image-registry-697d97f7c8-n98tz" Jan 21 10:58:42 crc kubenswrapper[4881]: E0121 10:58:42.348867 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:58:42.848852882 +0000 UTC m=+110.108809351 (durationBeforeRetry 500ms). 
Jan 21 10:58:42 crc kubenswrapper[4881]: I0121 10:58:42.449624    4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-whh46"
Jan 21 10:58:42 crc kubenswrapper[4881]: I0121 10:58:42.450610    4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 10:58:42 crc kubenswrapper[4881]: E0121 10:58:42.450803    4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:58:42.950769245 +0000 UTC m=+110.210725714 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 10:58:42 crc kubenswrapper[4881]: I0121 10:58:42.450919    4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n98tz\" (UID: \"ec369bed-0b60-48b0-9de0-fcfd6ca7776d\") " pod="openshift-image-registry/image-registry-697d97f7c8-n98tz"
Jan 21 10:58:42 crc kubenswrapper[4881]: E0121 10:58:42.451274    4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:58:42.951267268 +0000 UTC m=+110.211223737 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n98tz" (UID: "ec369bed-0b60-48b0-9de0-fcfd6ca7776d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 10:58:42 crc kubenswrapper[4881]: I0121 10:58:42.551800    4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 10:58:42 crc kubenswrapper[4881]: E0121 10:58:42.552678    4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:58:43.052661668 +0000 UTC m=+110.312618137 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 10:58:42 crc kubenswrapper[4881]: I0121 10:58:42.619055    4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-kl9j4" podStartSLOduration=13.619036268 podStartE2EDuration="13.619036268s" podCreationTimestamp="2026-01-21 10:58:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:58:42.188566635 +0000 UTC m=+109.448523104" watchObservedRunningTime="2026-01-21 10:58:42.619036268 +0000 UTC m=+109.878992737"
Jan 21 10:58:42 crc kubenswrapper[4881]: I0121 10:58:42.653040    4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n98tz\" (UID: \"ec369bed-0b60-48b0-9de0-fcfd6ca7776d\") " pod="openshift-image-registry/image-registry-697d97f7c8-n98tz"
Jan 21 10:58:42 crc kubenswrapper[4881]: E0121 10:58:42.653366    4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:58:43.153354671 +0000 UTC m=+110.413311140 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n98tz" (UID: "ec369bed-0b60-48b0-9de0-fcfd6ca7776d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 10:58:42 crc kubenswrapper[4881]: I0121 10:58:42.680811    4881 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-xmq82 container/marketplace-operator namespace/openshift-marketplace: Liveness probe status=failure output="Get \"http://10.217.0.36:8080/healthz\": dial tcp 10.217.0.36:8080: connect: connection refused" start-of-body=
Jan 21 10:58:42 crc kubenswrapper[4881]: I0121 10:58:42.680885    4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/marketplace-operator-79b997595-xmq82" podUID="e94f1e92-21b2-44c9-b499-b879850c288d" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.36:8080/healthz\": dial tcp 10.217.0.36:8080: connect: connection refused"
Jan 21 10:58:42 crc kubenswrapper[4881]: I0121 10:58:42.687586    4881 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-7gdkq container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.35:8443/healthz\": dial tcp 10.217.0.35:8443: connect: connection refused" start-of-body=
Jan 21 10:58:42 crc kubenswrapper[4881]: I0121 10:58:42.687639    4881 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7gdkq" podUID="c56c4a24-e6c6-4aa0-8a62-94d3179dfe54" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.35:8443/healthz\": dial tcp 10.217.0.35:8443: connect: connection refused"
Jan 21 10:58:42 crc kubenswrapper[4881]: I0121 10:58:42.688074    4881 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-7gdkq container/catalog-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.35:8443/healthz\": dial tcp 10.217.0.35:8443: connect: connection refused" start-of-body=
Jan 21 10:58:42 crc kubenswrapper[4881]: I0121 10:58:42.688103    4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7gdkq" podUID="c56c4a24-e6c6-4aa0-8a62-94d3179dfe54" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.35:8443/healthz\": dial tcp 10.217.0.35:8443: connect: connection refused"
Jan 21 10:58:42 crc kubenswrapper[4881]: I0121 10:58:42.688324    4881 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-xmq82 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.36:8080/healthz\": dial tcp 10.217.0.36:8080: connect: connection refused" start-of-body=
Jan 21 10:58:42 crc kubenswrapper[4881]: I0121 10:58:42.688352    4881 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-xmq82" podUID="e94f1e92-21b2-44c9-b499-b879850c288d" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.36:8080/healthz\": dial tcp 10.217.0.36:8080: connect: connection refused"
Jan 21 10:58:42 crc kubenswrapper[4881]: I0121 10:58:42.688879    4881 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-zkkpc container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.21:8443/healthz\": dial tcp 10.217.0.21:8443: connect: connection refused" start-of-body=
Jan 21 10:58:42 crc kubenswrapper[4881]: I0121 10:58:42.688907    4881 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zkkpc" podUID="0007a585-5b17-44bd-89b8-2d1d233a03d4" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.21:8443/healthz\": dial tcp 10.217.0.21:8443: connect: connection refused"
Jan 21 10:58:42 crc kubenswrapper[4881]: I0121 10:58:42.688972    4881 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-zkkpc container/olm-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.21:8443/healthz\": dial tcp 10.217.0.21:8443: connect: connection refused" start-of-body=
Jan 21 10:58:42 crc kubenswrapper[4881]: I0121 10:58:42.688990    4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zkkpc" podUID="0007a585-5b17-44bd-89b8-2d1d233a03d4" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.21:8443/healthz\": dial tcp 10.217.0.21:8443: connect: connection refused"
Jan 21 10:58:42 crc kubenswrapper[4881]: I0121 10:58:42.804261    4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 10:58:42 crc kubenswrapper[4881]: E0121 10:58:42.804822    4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:58:43.304802741 +0000 UTC m=+110.564759210 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 10:58:42 crc kubenswrapper[4881]: I0121 10:58:42.881240    4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-j4s5w" event={"ID":"6742e18f-a187-4a77-a734-bdec89bd89e0","Type":"ContainerStarted","Data":"a55860c76dea8cc83448f2c5a84a34699b18e04ed2bd2c673062b583a1fe43b9"}
Jan 21 10:58:42 crc kubenswrapper[4881]: I0121 10:58:42.908011    4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-znm6j" event={"ID":"b7e58845-f0a1-4320-b879-0765b6d57988","Type":"ContainerStarted","Data":"b74f6f2753b51931c8c7886efc96ad27509b6327f8db826907947ae3fa7e5941"}
Jan 21 10:58:42 crc kubenswrapper[4881]: I0121 10:58:42.908046    4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-znm6j"
Jan 21 10:58:42 crc kubenswrapper[4881]: I0121 10:58:42.910571    4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n98tz\" (UID: \"ec369bed-0b60-48b0-9de0-fcfd6ca7776d\") " pod="openshift-image-registry/image-registry-697d97f7c8-n98tz"
Jan 21 10:58:42 crc kubenswrapper[4881]: E0121 10:58:42.910923    4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:58:43.410911647 +0000 UTC m=+110.670868116 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n98tz" (UID: "ec369bed-0b60-48b0-9de0-fcfd6ca7776d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 10:58:42 crc kubenswrapper[4881]: I0121 10:58:42.911301    4881 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-xmq82 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.36:8080/healthz\": dial tcp 10.217.0.36:8080: connect: connection refused" start-of-body=
Jan 21 10:58:42 crc kubenswrapper[4881]: I0121 10:58:42.911343    4881 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-xmq82" podUID="e94f1e92-21b2-44c9-b499-b879850c288d" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.36:8080/healthz\": dial tcp 10.217.0.36:8080: connect: connection refused"
Jan 21 10:58:42 crc kubenswrapper[4881]: I0121 10:58:42.911747    4881 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-7gdkq container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.35:8443/healthz\": dial tcp 10.217.0.35:8443: connect: connection refused" start-of-body=
Jan 21 10:58:42 crc kubenswrapper[4881]: I0121 10:58:42.911771    4881 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7gdkq" podUID="c56c4a24-e6c6-4aa0-8a62-94d3179dfe54" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.35:8443/healthz\": dial tcp 10.217.0.35:8443: connect: connection refused"
Jan 21 10:58:43 crc kubenswrapper[4881]: I0121 10:58:43.014871    4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 10:58:43 crc kubenswrapper[4881]: E0121 10:58:43.016393    4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:58:43.516373037 +0000 UTC m=+110.776329516 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 10:58:43 crc kubenswrapper[4881]: I0121 10:58:43.117775    4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n98tz\" (UID: \"ec369bed-0b60-48b0-9de0-fcfd6ca7776d\") " pod="openshift-image-registry/image-registry-697d97f7c8-n98tz"
Jan 21 10:58:43 crc kubenswrapper[4881]: E0121 10:58:43.118104    4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:58:43.618092666 +0000 UTC m=+110.878049135 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n98tz" (UID: "ec369bed-0b60-48b0-9de0-fcfd6ca7776d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 10:58:43 crc kubenswrapper[4881]: I0121 10:58:43.134506    4881 patch_prober.go:28] interesting pod/router-default-5444994796-v7wnh container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 21 10:58:43 crc kubenswrapper[4881]: [-]has-synced failed: reason withheld
Jan 21 10:58:43 crc kubenswrapper[4881]: [+]process-running ok
Jan 21 10:58:43 crc kubenswrapper[4881]: healthz check failed
Jan 21 10:58:43 crc kubenswrapper[4881]: I0121 10:58:43.134570    4881 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-v7wnh" podUID="52d94566-7844-4414-bf48-9122c2207dd6" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 21 10:58:43 crc kubenswrapper[4881]: I0121 10:58:43.220740    4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 10:58:43 crc kubenswrapper[4881]: E0121 10:58:43.221479    4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:58:43.721460905 +0000 UTC m=+110.981417374 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 10:58:43 crc kubenswrapper[4881]: I0121 10:58:43.261296    4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-j4s5w" podStartSLOduration=88.261273872 podStartE2EDuration="1m28.261273872s" podCreationTimestamp="2026-01-21 10:57:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:58:43.067583685 +0000 UTC m=+110.327540154" watchObservedRunningTime="2026-01-21 10:58:43.261273872 +0000 UTC m=+110.521230341"
Jan 21 10:58:43 crc kubenswrapper[4881]: I0121 10:58:43.322933    4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n98tz\" (UID: \"ec369bed-0b60-48b0-9de0-fcfd6ca7776d\") " pod="openshift-image-registry/image-registry-697d97f7c8-n98tz"
Jan 21 10:58:43 crc kubenswrapper[4881]: E0121 10:58:43.323707    4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:58:43.823692455 +0000 UTC m=+111.083648924 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n98tz" (UID: "ec369bed-0b60-48b0-9de0-fcfd6ca7776d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 10:58:43 crc kubenswrapper[4881]: I0121 10:58:43.345389    4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-znm6j" podStartSLOduration=14.345370437 podStartE2EDuration="14.345370437s" podCreationTimestamp="2026-01-21 10:58:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:58:43.262195705 +0000 UTC m=+110.522152184" watchObservedRunningTime="2026-01-21 10:58:43.345370437 +0000 UTC m=+110.605326906"
Jan 21 10:58:43 crc kubenswrapper[4881]: I0121 10:58:43.423815    4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 10:58:43 crc kubenswrapper[4881]: E0121 10:58:43.424235    4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:58:43.924216815 +0000 UTC m=+111.184173284 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 10:58:43 crc kubenswrapper[4881]: I0121 10:58:43.525138    4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n98tz\" (UID: \"ec369bed-0b60-48b0-9de0-fcfd6ca7776d\") " pod="openshift-image-registry/image-registry-697d97f7c8-n98tz"
Jan 21 10:58:43 crc kubenswrapper[4881]: E0121 10:58:43.525549    4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:58:44.025532643 +0000 UTC m=+111.285489112 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n98tz" (UID: "ec369bed-0b60-48b0-9de0-fcfd6ca7776d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:58:43 crc kubenswrapper[4881]: I0121 10:58:43.636303 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 10:58:43 crc kubenswrapper[4881]: E0121 10:58:43.636579 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:58:44.136542079 +0000 UTC m=+111.396498548 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:58:43 crc kubenswrapper[4881]: I0121 10:58:43.636864 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n98tz\" (UID: \"ec369bed-0b60-48b0-9de0-fcfd6ca7776d\") " pod="openshift-image-registry/image-registry-697d97f7c8-n98tz" Jan 21 10:58:43 crc kubenswrapper[4881]: E0121 10:58:43.637416 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:58:44.137404071 +0000 UTC m=+111.397360540 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n98tz" (UID: "ec369bed-0b60-48b0-9de0-fcfd6ca7776d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:58:43 crc kubenswrapper[4881]: I0121 10:58:43.737963 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 10:58:43 crc kubenswrapper[4881]: E0121 10:58:43.738141 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:58:44.238116204 +0000 UTC m=+111.498072673 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:58:43 crc kubenswrapper[4881]: I0121 10:58:43.738572 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n98tz\" (UID: \"ec369bed-0b60-48b0-9de0-fcfd6ca7776d\") " pod="openshift-image-registry/image-registry-697d97f7c8-n98tz" Jan 21 10:58:43 crc kubenswrapper[4881]: E0121 10:58:43.738878 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:58:44.238865892 +0000 UTC m=+111.498822361 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n98tz" (UID: "ec369bed-0b60-48b0-9de0-fcfd6ca7776d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:58:43 crc kubenswrapper[4881]: I0121 10:58:43.840555 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 10:58:43 crc kubenswrapper[4881]: E0121 10:58:43.841864 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:58:44.341829562 +0000 UTC m=+111.601786041 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:58:43 crc kubenswrapper[4881]: I0121 10:58:43.922512 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-dtv4t" event={"ID":"3552adbd-011f-4552-9e69-233b92c554c8","Type":"ContainerStarted","Data":"2f70c26dd006302ba39fd20f4edc424c87daa3fb0cb961652a77e27d4c4c5f81"} Jan 21 10:58:43 crc kubenswrapper[4881]: I0121 10:58:43.924262 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-42f9f" event={"ID":"409e44ed-8f6d-4321-9620-d8da23cf0fec","Type":"ContainerStarted","Data":"bf0cd8f2e1a07f1495e8b5070edd36bdf049cee20ed91cae8e65491224ad9404"} Jan 21 10:58:43 crc kubenswrapper[4881]: I0121 10:58:43.926838 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-svmbc" event={"ID":"3d8b2de9-20f2-4d9b-ba38-d2e6649b1a57","Type":"ContainerStarted","Data":"aa95308cf74bd69f9dd89eda71c93fb0b953f4273db045079776216eba82ac6c"} Jan 21 10:58:43 crc kubenswrapper[4881]: I0121 10:58:43.930277 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-cclnc" event={"ID":"8465162e-dd9f-45b4-83a6-94666ac2b87b","Type":"ContainerStarted","Data":"5346a63af1f87f1840ae91c7e61204fd86101b16375b797b124a38d2d1a4d526"} Jan 21 10:58:43 crc kubenswrapper[4881]: I0121 10:58:43.934133 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rdgn6" event={"ID":"7f30da15-7c75-4c87-9dc4-78653d6f84cd","Type":"ContainerStarted","Data":"bf541b970161b08f2e69d709b38c1f8215e1e67f2b3172fe3c3545b6f18c8d31"} Jan 21 10:58:43 crc kubenswrapper[4881]: I0121 10:58:43.935253 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rdgn6" Jan 
21 10:58:43 crc kubenswrapper[4881]: I0121 10:58:43.949909 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n98tz\" (UID: \"ec369bed-0b60-48b0-9de0-fcfd6ca7776d\") " pod="openshift-image-registry/image-registry-697d97f7c8-n98tz" Jan 21 10:58:43 crc kubenswrapper[4881]: E0121 10:58:43.950455 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:58:44.450440749 +0000 UTC m=+111.710397218 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n98tz" (UID: "ec369bed-0b60-48b0-9de0-fcfd6ca7776d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:58:43 crc kubenswrapper[4881]: I0121 10:58:43.966614 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-cclnc" podStartSLOduration=87.966595786 podStartE2EDuration="1m27.966595786s" podCreationTimestamp="2026-01-21 10:57:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:58:43.965498769 +0000 UTC m=+111.225455238" watchObservedRunningTime="2026-01-21 10:58:43.966595786 +0000 UTC m=+111.226552255" Jan 21 10:58:43 crc kubenswrapper[4881]: I0121 10:58:43.974872 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-769kz" event={"ID":"146cbde4-d891-47d8-a09f-d4f4b50bfe6d","Type":"ContainerStarted","Data":"7623aa552682368b5ab7546c7abf5426a9fc54a24390c180b2fd1c52a1fc3c59"} Jan 21 10:58:44 crc kubenswrapper[4881]: I0121 10:58:44.006723 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rdgn6" podStartSLOduration=88.006704741 podStartE2EDuration="1m28.006704741s" podCreationTimestamp="2026-01-21 10:57:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:58:44.004446365 +0000 UTC m=+111.264402834" watchObservedRunningTime="2026-01-21 10:58:44.006704741 +0000 UTC m=+111.266661210" Jan 21 10:58:44 crc kubenswrapper[4881]: I0121 10:58:44.008649 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-72bt6" event={"ID":"2957ef21-9f30-4c81-8c6a-4a7f9d7315db","Type":"ContainerStarted","Data":"82cff7c637ca9ea34404cbbdd6bb09a799782c323f2954c300f85111c45a2087"} Jan 21 10:58:44 crc kubenswrapper[4881]: I0121 10:58:44.009419 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-72bt6" Jan 21 10:58:44 crc kubenswrapper[4881]: I0121 10:58:44.024994 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-controller-84d6567774-vwqwb" event={"ID":"7470431a-2a31-41ae-b021-510ae5e3c505","Type":"ContainerStarted","Data":"d8d9070bb71902da921f2644b474d206bf23dae6634bd4a1926be15aaa2266a2"} Jan 21 10:58:44 crc kubenswrapper[4881]: I0121 10:58:44.051365 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 10:58:44 crc kubenswrapper[4881]: E0121 10:58:44.051932 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:58:44.551913811 +0000 UTC m=+111.811870280 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:58:44 crc kubenswrapper[4881]: I0121 10:58:44.058072 4881 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-rdgn6 container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.29:5443/healthz\": dial tcp 10.217.0.29:5443: connect: connection refused" start-of-body= Jan 21 10:58:44 crc kubenswrapper[4881]: I0121 10:58:44.058137 4881 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rdgn6" podUID="7f30da15-7c75-4c87-9dc4-78653d6f84cd" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.29:5443/healthz\": dial tcp 10.217.0.29:5443: connect: connection refused" Jan 21 10:58:44 crc kubenswrapper[4881]: I0121 10:58:44.075236 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-phm68" event={"ID":"b745a377-4575-45fb-a206-ea4754ecff76","Type":"ContainerStarted","Data":"3033d149a930a00978fa1ff937f61c5442e5512fd3248aab1dddf52694995bdd"} Jan 21 10:58:44 crc kubenswrapper[4881]: I0121 10:58:44.081353 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-rslv2" event={"ID":"537a87a4-8f58-441f-9199-62c5849c693c","Type":"ContainerStarted","Data":"f49722c43dfa54ea40a3a717b4d9f4d1e23fd65e4ceaaf2c1d50a6e52c41eba1"} Jan 21 10:58:44 crc kubenswrapper[4881]: I0121 10:58:44.082024 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-rslv2" Jan 21 10:58:44 crc kubenswrapper[4881]: I0121 10:58:44.082247 4881 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-xmq82 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.36:8080/healthz\": dial tcp 10.217.0.36:8080: connect: connection refused" start-of-body= Jan 21 10:58:44 crc kubenswrapper[4881]: I0121 
10:58:44.082294 4881 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-xmq82" podUID="e94f1e92-21b2-44c9-b499-b879850c288d" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.36:8080/healthz\": dial tcp 10.217.0.36:8080: connect: connection refused" Jan 21 10:58:44 crc kubenswrapper[4881]: I0121 10:58:44.083241 4881 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-7gdkq container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.35:8443/healthz\": dial tcp 10.217.0.35:8443: connect: connection refused" start-of-body= Jan 21 10:58:44 crc kubenswrapper[4881]: I0121 10:58:44.083287 4881 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7gdkq" podUID="c56c4a24-e6c6-4aa0-8a62-94d3179dfe54" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.35:8443/healthz\": dial tcp 10.217.0.35:8443: connect: connection refused" Jan 21 10:58:44 crc kubenswrapper[4881]: I0121 10:58:44.153720 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n98tz\" (UID: \"ec369bed-0b60-48b0-9de0-fcfd6ca7776d\") " pod="openshift-image-registry/image-registry-697d97f7c8-n98tz" Jan 21 10:58:44 crc kubenswrapper[4881]: E0121 10:58:44.171560 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:58:44.671522969 +0000 UTC m=+111.931479478 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n98tz" (UID: "ec369bed-0b60-48b0-9de0-fcfd6ca7776d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:58:44 crc kubenswrapper[4881]: I0121 10:58:44.244642 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-72bt6" podStartSLOduration=88.244614634 podStartE2EDuration="1m28.244614634s" podCreationTimestamp="2026-01-21 10:57:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:58:44.170222247 +0000 UTC m=+111.430178726" watchObservedRunningTime="2026-01-21 10:58:44.244614634 +0000 UTC m=+111.504571103" Jan 21 10:58:44 crc kubenswrapper[4881]: I0121 10:58:44.244930 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-phm68" podStartSLOduration=90.244924162 podStartE2EDuration="1m30.244924162s" podCreationTimestamp="2026-01-21 10:57:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:58:44.244061511 +0000 UTC m=+111.504017980" watchObservedRunningTime="2026-01-21 10:58:44.244924162 +0000 UTC m=+111.504880631" Jan 21 10:58:44 crc kubenswrapper[4881]: I0121 10:58:44.257778 4881 patch_prober.go:28] interesting pod/router-default-5444994796-v7wnh container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 10:58:44 crc kubenswrapper[4881]: [-]has-synced failed: reason withheld Jan 21 10:58:44 crc kubenswrapper[4881]: [+]process-running ok Jan 21 10:58:44 crc kubenswrapper[4881]: healthz check failed Jan 21 10:58:44 crc kubenswrapper[4881]: I0121 10:58:44.257845 4881 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-v7wnh" podUID="52d94566-7844-4414-bf48-9122c2207dd6" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 10:58:44 crc kubenswrapper[4881]: I0121 10:58:44.258266 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 10:58:44 crc kubenswrapper[4881]: E0121 10:58:44.259532 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:58:44.75951845 +0000 UTC m=+112.019474909 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:58:44 crc kubenswrapper[4881]: I0121 10:58:44.361223 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n98tz\" (UID: \"ec369bed-0b60-48b0-9de0-fcfd6ca7776d\") " pod="openshift-image-registry/image-registry-697d97f7c8-n98tz" Jan 21 10:58:44 crc kubenswrapper[4881]: E0121 10:58:44.362250 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:58:44.862232333 +0000 UTC m=+112.122188802 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n98tz" (UID: "ec369bed-0b60-48b0-9de0-fcfd6ca7776d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:58:44 crc kubenswrapper[4881]: I0121 10:58:44.464246 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 10:58:44 crc kubenswrapper[4881]: E0121 10:58:44.464649 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:58:44.964633888 +0000 UTC m=+112.224590357 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:58:44 crc kubenswrapper[4881]: I0121 10:58:44.480134 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-vwqwb" podStartSLOduration=89.480108418 podStartE2EDuration="1m29.480108418s" podCreationTimestamp="2026-01-21 10:57:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:58:44.308730479 +0000 UTC m=+111.568686968" watchObservedRunningTime="2026-01-21 10:58:44.480108418 +0000 UTC m=+111.740064877" Jan 21 10:58:44 crc kubenswrapper[4881]: I0121 10:58:44.580042 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n98tz\" (UID: \"ec369bed-0b60-48b0-9de0-fcfd6ca7776d\") " pod="openshift-image-registry/image-registry-697d97f7c8-n98tz" Jan 21 10:58:44 crc kubenswrapper[4881]: E0121 10:58:44.580430 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:58:45.080412381 +0000 UTC m=+112.340368850 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n98tz" (UID: "ec369bed-0b60-48b0-9de0-fcfd6ca7776d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:58:44 crc kubenswrapper[4881]: I0121 10:58:44.632373 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483205-527gk" Jan 21 10:58:44 crc kubenswrapper[4881]: I0121 10:58:44.655119 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-rslv2" podStartSLOduration=89.655103087 podStartE2EDuration="1m29.655103087s" podCreationTimestamp="2026-01-21 10:57:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:58:44.485220223 +0000 UTC m=+111.745176692" watchObservedRunningTime="2026-01-21 10:58:44.655103087 +0000 UTC m=+111.915059556" Jan 21 10:58:44 crc kubenswrapper[4881]: I0121 10:58:44.685146 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 10:58:44 crc kubenswrapper[4881]: E0121 10:58:44.685546 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:58:45.185529163 +0000 UTC m=+112.445485632 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:58:44 crc kubenswrapper[4881]: I0121 10:58:44.786215 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/303bdbe6-3bb4-4ace-86b1-f489c795580f-secret-volume\") pod \"303bdbe6-3bb4-4ace-86b1-f489c795580f\" (UID: \"303bdbe6-3bb4-4ace-86b1-f489c795580f\") " Jan 21 10:58:44 crc kubenswrapper[4881]: I0121 10:58:44.786714 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l45nv\" (UniqueName: \"kubernetes.io/projected/303bdbe6-3bb4-4ace-86b1-f489c795580f-kube-api-access-l45nv\") pod \"303bdbe6-3bb4-4ace-86b1-f489c795580f\" (UID: \"303bdbe6-3bb4-4ace-86b1-f489c795580f\") " Jan 21 10:58:44 crc kubenswrapper[4881]: I0121 10:58:44.786977 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/303bdbe6-3bb4-4ace-86b1-f489c795580f-config-volume\") pod \"303bdbe6-3bb4-4ace-86b1-f489c795580f\" (UID: \"303bdbe6-3bb4-4ace-86b1-f489c795580f\") " Jan 21 10:58:44 crc kubenswrapper[4881]: I0121 10:58:44.787190 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n98tz\" (UID: \"ec369bed-0b60-48b0-9de0-fcfd6ca7776d\") " pod="openshift-image-registry/image-registry-697d97f7c8-n98tz" Jan 21 10:58:44 crc kubenswrapper[4881]: E0121 10:58:44.787653 4881 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:58:45.287637451 +0000 UTC m=+112.547593930 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n98tz" (UID: "ec369bed-0b60-48b0-9de0-fcfd6ca7776d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:58:44 crc kubenswrapper[4881]: I0121 10:58:44.794751 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/303bdbe6-3bb4-4ace-86b1-f489c795580f-config-volume" (OuterVolumeSpecName: "config-volume") pod "303bdbe6-3bb4-4ace-86b1-f489c795580f" (UID: "303bdbe6-3bb4-4ace-86b1-f489c795580f"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:58:44 crc kubenswrapper[4881]: I0121 10:58:44.821422 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/303bdbe6-3bb4-4ace-86b1-f489c795580f-kube-api-access-l45nv" (OuterVolumeSpecName: "kube-api-access-l45nv") pod "303bdbe6-3bb4-4ace-86b1-f489c795580f" (UID: "303bdbe6-3bb4-4ace-86b1-f489c795580f"). InnerVolumeSpecName "kube-api-access-l45nv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:58:44 crc kubenswrapper[4881]: I0121 10:58:44.821844 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/303bdbe6-3bb4-4ace-86b1-f489c795580f-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "303bdbe6-3bb4-4ace-86b1-f489c795580f" (UID: "303bdbe6-3bb4-4ace-86b1-f489c795580f"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:58:44 crc kubenswrapper[4881]: I0121 10:58:44.894045 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 10:58:44 crc kubenswrapper[4881]: I0121 10:58:44.894417 4881 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/303bdbe6-3bb4-4ace-86b1-f489c795580f-config-volume\") on node \"crc\" DevicePath \"\"" Jan 21 10:58:44 crc kubenswrapper[4881]: I0121 10:58:44.894432 4881 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/303bdbe6-3bb4-4ace-86b1-f489c795580f-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 21 10:58:44 crc kubenswrapper[4881]: I0121 10:58:44.894442 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l45nv\" (UniqueName: \"kubernetes.io/projected/303bdbe6-3bb4-4ace-86b1-f489c795580f-kube-api-access-l45nv\") on node \"crc\" DevicePath \"\"" Jan 21 10:58:44 crc kubenswrapper[4881]: E0121 10:58:44.894509 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:58:45.394494586 +0000 UTC m=+112.654451055 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:58:44 crc kubenswrapper[4881]: I0121 10:58:44.995192 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n98tz\" (UID: \"ec369bed-0b60-48b0-9de0-fcfd6ca7776d\") " pod="openshift-image-registry/image-registry-697d97f7c8-n98tz" Jan 21 10:58:44 crc kubenswrapper[4881]: E0121 10:58:44.995478 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:58:45.495467116 +0000 UTC m=+112.755423585 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n98tz" (UID: "ec369bed-0b60-48b0-9de0-fcfd6ca7776d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:58:45 crc kubenswrapper[4881]: I0121 10:58:45.096390 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 10:58:45 crc kubenswrapper[4881]: E0121 10:58:45.096578 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:58:45.596552909 +0000 UTC m=+112.856509378 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:58:45 crc kubenswrapper[4881]: I0121 10:58:45.096622 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n98tz\" (UID: \"ec369bed-0b60-48b0-9de0-fcfd6ca7776d\") " pod="openshift-image-registry/image-registry-697d97f7c8-n98tz" Jan 21 10:58:45 crc kubenswrapper[4881]: E0121 10:58:45.097123 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:58:45.597107962 +0000 UTC m=+112.857064431 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n98tz" (UID: "ec369bed-0b60-48b0-9de0-fcfd6ca7776d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:58:45 crc kubenswrapper[4881]: I0121 10:58:45.122355 4881 patch_prober.go:28] interesting pod/router-default-5444994796-v7wnh container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 10:58:45 crc kubenswrapper[4881]: [-]has-synced failed: reason withheld Jan 21 10:58:45 crc kubenswrapper[4881]: [+]process-running ok Jan 21 10:58:45 crc kubenswrapper[4881]: healthz check failed Jan 21 10:58:45 crc kubenswrapper[4881]: I0121 10:58:45.122425 4881 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-v7wnh" podUID="52d94566-7844-4414-bf48-9122c2207dd6" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 10:58:45 crc kubenswrapper[4881]: I0121 10:58:45.122502 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483205-527gk" event={"ID":"303bdbe6-3bb4-4ace-86b1-f489c795580f","Type":"ContainerDied","Data":"b3d019b82236dd15b24f4a31ba5ebc67107e80ee3f592acc46c51b2bbe16aba5"} Jan 21 10:58:45 crc kubenswrapper[4881]: I0121 10:58:45.122539 4881 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b3d019b82236dd15b24f4a31ba5ebc67107e80ee3f592acc46c51b2bbe16aba5" Jan 21 10:58:45 crc kubenswrapper[4881]: I0121 10:58:45.122609 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483205-527gk" Jan 21 10:58:45 crc kubenswrapper[4881]: I0121 10:58:45.136772 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-svmbc" event={"ID":"3d8b2de9-20f2-4d9b-ba38-d2e6649b1a57","Type":"ContainerStarted","Data":"f1c07b5b1a05d1bf9768ec195ee3a2c9acc9824cfa685e9f6db9da31ab9c0a77"} Jan 21 10:58:45 crc kubenswrapper[4881]: I0121 10:58:45.146366 4881 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-rdgn6 container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.29:5443/healthz\": dial tcp 10.217.0.29:5443: connect: connection refused" start-of-body= Jan 21 10:58:45 crc kubenswrapper[4881]: I0121 10:58:45.146658 4881 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rdgn6" podUID="7f30da15-7c75-4c87-9dc4-78653d6f84cd" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.29:5443/healthz\": dial tcp 10.217.0.29:5443: connect: connection refused" Jan 21 10:58:45 crc kubenswrapper[4881]: I0121 10:58:45.147059 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-dtv4t" event={"ID":"3552adbd-011f-4552-9e69-233b92c554c8","Type":"ContainerStarted","Data":"4bb2b3e87d7c25e84c22d640a23b187cb954c20cb8555c8fc9006393fea81bd7"} Jan 21 10:58:45 crc kubenswrapper[4881]: I0121 10:58:45.150206 4881 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-rslv2 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body= Jan 21 10:58:45 crc kubenswrapper[4881]: I0121 10:58:45.150331 4881 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-rslv2" podUID="537a87a4-8f58-441f-9199-62c5849c693c" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" Jan 21 10:58:45 crc kubenswrapper[4881]: I0121 10:58:45.185602 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-dtv4t" podStartSLOduration=90.185584546 podStartE2EDuration="1m30.185584546s" podCreationTimestamp="2026-01-21 10:57:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:58:45.182077369 +0000 UTC m=+112.442033848" watchObservedRunningTime="2026-01-21 10:58:45.185584546 +0000 UTC m=+112.445541015" Jan 21 10:58:45 crc kubenswrapper[4881]: I0121 10:58:45.201438 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 10:58:45 crc kubenswrapper[4881]: E0121 10:58:45.201893 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-21 10:58:45.701876795 +0000 UTC m=+112.961833264 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:58:45 crc kubenswrapper[4881]: I0121 10:58:45.227316 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 21 10:58:45 crc kubenswrapper[4881]: E0121 10:58:45.227503 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="303bdbe6-3bb4-4ace-86b1-f489c795580f" containerName="collect-profiles" Jan 21 10:58:45 crc kubenswrapper[4881]: I0121 10:58:45.227514 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="303bdbe6-3bb4-4ace-86b1-f489c795580f" containerName="collect-profiles" Jan 21 10:58:45 crc kubenswrapper[4881]: I0121 10:58:45.227623 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="303bdbe6-3bb4-4ace-86b1-f489c795580f" containerName="collect-profiles" Jan 21 10:58:45 crc kubenswrapper[4881]: I0121 10:58:45.227974 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 21 10:58:45 crc kubenswrapper[4881]: I0121 10:58:45.233567 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Jan 21 10:58:45 crc kubenswrapper[4881]: I0121 10:58:45.233735 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Jan 21 10:58:45 crc kubenswrapper[4881]: I0121 10:58:45.248812 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-769kz" podStartSLOduration=89.248771318 podStartE2EDuration="1m29.248771318s" podCreationTimestamp="2026-01-21 10:57:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:58:45.233042311 +0000 UTC m=+112.492998770" watchObservedRunningTime="2026-01-21 10:58:45.248771318 +0000 UTC m=+112.508727817" Jan 21 10:58:45 crc kubenswrapper[4881]: I0121 10:58:45.251513 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 21 10:58:45 crc kubenswrapper[4881]: I0121 10:58:45.260453 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-svmbc" podStartSLOduration=90.260431574 podStartE2EDuration="1m30.260431574s" podCreationTimestamp="2026-01-21 10:57:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:58:45.259271495 +0000 UTC m=+112.519227964" watchObservedRunningTime="2026-01-21 10:58:45.260431574 +0000 UTC m=+112.520388043" Jan 21 10:58:45 crc kubenswrapper[4881]: I0121 10:58:45.321975 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bac3c741-e8bc-4059-8914-a6f834cee8dd-kube-api-access\") 
pod \"revision-pruner-9-crc\" (UID: \"bac3c741-e8bc-4059-8914-a6f834cee8dd\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 21 10:58:45 crc kubenswrapper[4881]: I0121 10:58:45.322516 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bac3c741-e8bc-4059-8914-a6f834cee8dd-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"bac3c741-e8bc-4059-8914-a6f834cee8dd\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 21 10:58:45 crc kubenswrapper[4881]: I0121 10:58:45.322672 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n98tz\" (UID: \"ec369bed-0b60-48b0-9de0-fcfd6ca7776d\") " pod="openshift-image-registry/image-registry-697d97f7c8-n98tz" Jan 21 10:58:45 crc kubenswrapper[4881]: E0121 10:58:45.332340 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:58:45.832325489 +0000 UTC m=+113.092281958 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n98tz" (UID: "ec369bed-0b60-48b0-9de0-fcfd6ca7776d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:58:45 crc kubenswrapper[4881]: I0121 10:58:45.423512 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 10:58:45 crc kubenswrapper[4881]: E0121 10:58:45.424070 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:58:45.924037852 +0000 UTC m=+113.183994311 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 10:58:45 crc kubenswrapper[4881]: I0121 10:58:45.424407 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bac3c741-e8bc-4059-8914-a6f834cee8dd-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"bac3c741-e8bc-4059-8914-a6f834cee8dd\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 21 10:58:45 crc kubenswrapper[4881]: I0121 10:58:45.424473 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bac3c741-e8bc-4059-8914-a6f834cee8dd-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"bac3c741-e8bc-4059-8914-a6f834cee8dd\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 21 10:58:45 crc kubenswrapper[4881]: I0121 10:58:45.424508 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n98tz\" (UID: \"ec369bed-0b60-48b0-9de0-fcfd6ca7776d\") " pod="openshift-image-registry/image-registry-697d97f7c8-n98tz"
Jan 21 10:58:45 crc kubenswrapper[4881]: E0121 10:58:45.424936 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:58:45.924918813 +0000 UTC m=+113.184875282 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n98tz" (UID: "ec369bed-0b60-48b0-9de0-fcfd6ca7776d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 10:58:45 crc kubenswrapper[4881]: I0121 10:58:45.425261 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bac3c741-e8bc-4059-8914-a6f834cee8dd-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"bac3c741-e8bc-4059-8914-a6f834cee8dd\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 21 10:58:45 crc kubenswrapper[4881]: I0121 10:58:45.474951 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bac3c741-e8bc-4059-8914-a6f834cee8dd-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"bac3c741-e8bc-4059-8914-a6f834cee8dd\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 21 10:58:45 crc kubenswrapper[4881]: I0121 10:58:45.544037 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 10:58:45 crc kubenswrapper[4881]: E0121 10:58:45.544434 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:58:46.044412519 +0000 UTC m=+113.304368988 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 10:58:45 crc kubenswrapper[4881]: I0121 10:58:45.562050 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 21 10:58:45 crc kubenswrapper[4881]: I0121 10:58:45.646944 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n98tz\" (UID: \"ec369bed-0b60-48b0-9de0-fcfd6ca7776d\") " pod="openshift-image-registry/image-registry-697d97f7c8-n98tz"
Jan 21 10:58:45 crc kubenswrapper[4881]: E0121 10:58:45.647331 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:58:46.147316266 +0000 UTC m=+113.407272735 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n98tz" (UID: "ec369bed-0b60-48b0-9de0-fcfd6ca7776d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 10:58:45 crc kubenswrapper[4881]: I0121 10:58:45.748089 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 10:58:45 crc kubenswrapper[4881]: E0121 10:58:45.749744 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:58:46.249722701 +0000 UTC m=+113.509679180 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 10:58:45 crc kubenswrapper[4881]: I0121 10:58:45.850760 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n98tz\" (UID: \"ec369bed-0b60-48b0-9de0-fcfd6ca7776d\") " pod="openshift-image-registry/image-registry-697d97f7c8-n98tz"
Jan 21 10:58:45 crc kubenswrapper[4881]: E0121 10:58:45.851305 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:58:46.351280196 +0000 UTC m=+113.611236665 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n98tz" (UID: "ec369bed-0b60-48b0-9de0-fcfd6ca7776d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 10:58:45 crc kubenswrapper[4881]: I0121 10:58:45.916520 4881 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock"
Jan 21 10:58:45 crc kubenswrapper[4881]: I0121 10:58:45.957160 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 10:58:45 crc kubenswrapper[4881]: E0121 10:58:45.957250 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:58:46.457221888 +0000 UTC m=+113.717178357 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 10:58:45 crc kubenswrapper[4881]: I0121 10:58:45.957873 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n98tz\" (UID: \"ec369bed-0b60-48b0-9de0-fcfd6ca7776d\") " pod="openshift-image-registry/image-registry-697d97f7c8-n98tz"
Jan 21 10:58:45 crc kubenswrapper[4881]: E0121 10:58:45.958344 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:58:46.458327135 +0000 UTC m=+113.718283604 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n98tz" (UID: "ec369bed-0b60-48b0-9de0-fcfd6ca7776d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 10:58:46 crc kubenswrapper[4881]: I0121 10:58:46.059395 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 10:58:46 crc kubenswrapper[4881]: E0121 10:58:46.059560 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:58:46.559528611 +0000 UTC m=+113.819485080 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 10:58:46 crc kubenswrapper[4881]: I0121 10:58:46.059746 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n98tz\" (UID: \"ec369bed-0b60-48b0-9de0-fcfd6ca7776d\") " pod="openshift-image-registry/image-registry-697d97f7c8-n98tz"
Jan 21 10:58:46 crc kubenswrapper[4881]: E0121 10:58:46.060080 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:58:46.560073224 +0000 UTC m=+113.820029693 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n98tz" (UID: "ec369bed-0b60-48b0-9de0-fcfd6ca7776d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 10:58:46 crc kubenswrapper[4881]: I0121 10:58:46.129902 4881 patch_prober.go:28] interesting pod/router-default-5444994796-v7wnh container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 21 10:58:46 crc kubenswrapper[4881]: [-]has-synced failed: reason withheld
Jan 21 10:58:46 crc kubenswrapper[4881]: [+]process-running ok
Jan 21 10:58:46 crc kubenswrapper[4881]: healthz check failed
Jan 21 10:58:46 crc kubenswrapper[4881]: I0121 10:58:46.130021 4881 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-v7wnh" podUID="52d94566-7844-4414-bf48-9122c2207dd6" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 21 10:58:46 crc kubenswrapper[4881]: I0121 10:58:46.138578 4881 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-01-21T10:58:45.91657925Z","Handler":null,"Name":""}
Jan 21 10:58:46 crc kubenswrapper[4881]: I0121 10:58:46.140494 4881 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0
Jan 21 10:58:46 crc kubenswrapper[4881]: I0121 10:58:46.140532 4881 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock
Jan 21 10:58:46 crc kubenswrapper[4881]: I0121 10:58:46.160627 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 10:58:46 crc kubenswrapper[4881]: I0121 10:58:46.175556 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue ""
Jan 21 10:58:46 crc kubenswrapper[4881]: I0121 10:58:46.231846 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-42f9f" event={"ID":"409e44ed-8f6d-4321-9620-d8da23cf0fec","Type":"ContainerStarted","Data":"c4b0f42b255ce85c83eb57dee5cfd3b3f516049e2da8fe43c690d7827b428eb3"}
Jan 21 10:58:46 crc kubenswrapper[4881]: I0121 10:58:46.232460 4881 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-rslv2 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body=
Jan 21 10:58:46 crc kubenswrapper[4881]: I0121 10:58:46.232529 4881 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-rslv2" podUID="537a87a4-8f58-441f-9199-62c5849c693c" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused"
Jan 21 10:58:46 crc kubenswrapper[4881]: I0121 10:58:46.297834 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n98tz\" (UID: \"ec369bed-0b60-48b0-9de0-fcfd6ca7776d\") " pod="openshift-image-registry/image-registry-697d97f7c8-n98tz"
Jan 21 10:58:46 crc kubenswrapper[4881]: I0121 10:58:46.310444 4881 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Jan 21 10:58:46 crc kubenswrapper[4881]: I0121 10:58:46.310517 4881 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n98tz\" (UID: \"ec369bed-0b60-48b0-9de0-fcfd6ca7776d\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-n98tz"
Jan 21 10:58:46 crc kubenswrapper[4881]: I0121 10:58:46.401437 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-bx64f"
Jan 21 10:58:46 crc kubenswrapper[4881]: I0121 10:58:46.541140 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n98tz\" (UID: \"ec369bed-0b60-48b0-9de0-fcfd6ca7776d\") " pod="openshift-image-registry/image-registry-697d97f7c8-n98tz"
Jan 21 10:58:46 crc kubenswrapper[4881]: I0121 10:58:46.551341 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-n98tz"
Jan 21 10:58:46 crc kubenswrapper[4881]: I0121 10:58:46.796519 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"]
Jan 21 10:58:46 crc kubenswrapper[4881]: W0121 10:58:46.817617 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podbac3c741_e8bc_4059_8914_a6f834cee8dd.slice/crio-d000f23cdd5d4f1ece21017d747b89cc98e096184b532595e3b8592df18c9c55 WatchSource:0}: Error finding container d000f23cdd5d4f1ece21017d747b89cc98e096184b532595e3b8592df18c9c55: Status 404 returned error can't find the container with id d000f23cdd5d4f1ece21017d747b89cc98e096184b532595e3b8592df18c9c55
Jan 21 10:58:46 crc kubenswrapper[4881]: I0121 10:58:46.862923 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-q6dn5"]
Jan 21 10:58:46 crc kubenswrapper[4881]: I0121 10:58:46.864309 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-q6dn5"
Jan 21 10:58:46 crc kubenswrapper[4881]: I0121 10:58:46.868015 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-svmbc"
Jan 21 10:58:46 crc kubenswrapper[4881]: I0121 10:58:46.868204 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g"
Jan 21 10:58:46 crc kubenswrapper[4881]: I0121 10:58:46.868375 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-svmbc"
Jan 21 10:58:46 crc kubenswrapper[4881]: I0121 10:58:46.870577 4881 patch_prober.go:28] interesting pod/apiserver-76f77b778f-svmbc container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="Get \"https://10.217.0.8:8443/livez\": dial tcp 10.217.0.8:8443: connect: connection refused" start-of-body=
Jan 21 10:58:46 crc kubenswrapper[4881]: I0121 10:58:46.870621 4881 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-svmbc" podUID="3d8b2de9-20f2-4d9b-ba38-d2e6649b1a57" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.217.0.8:8443/livez\": dial tcp 10.217.0.8:8443: connect: connection refused"
Jan 21 10:58:46 crc kubenswrapper[4881]: I0121 10:58:46.900966 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-q6dn5"]
Jan 21 10:58:47 crc kubenswrapper[4881]: I0121 10:58:47.007100 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rdgn6"
Jan 21 10:58:47 crc kubenswrapper[4881]: I0121 10:58:47.007319 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e002e57-13ab-477a-9e16-980e13b5e47f-catalog-content\") pod \"certified-operators-q6dn5\" (UID: \"8e002e57-13ab-477a-9e16-980e13b5e47f\") " pod="openshift-marketplace/certified-operators-q6dn5"
Jan 21 10:58:47 crc kubenswrapper[4881]: I0121 10:58:47.007485 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e002e57-13ab-477a-9e16-980e13b5e47f-utilities\") pod \"certified-operators-q6dn5\" (UID: \"8e002e57-13ab-477a-9e16-980e13b5e47f\") " pod="openshift-marketplace/certified-operators-q6dn5"
Jan 21 10:58:47 crc kubenswrapper[4881]: I0121 10:58:47.007634 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g42w8\" (UniqueName: \"kubernetes.io/projected/8e002e57-13ab-477a-9e16-980e13b5e47f-kube-api-access-g42w8\") pod \"certified-operators-q6dn5\" (UID: \"8e002e57-13ab-477a-9e16-980e13b5e47f\") " pod="openshift-marketplace/certified-operators-q6dn5"
Jan 21 10:58:47 crc kubenswrapper[4881]: I0121 10:58:47.036576 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-v5n2s"]
Jan 21 10:58:47 crc kubenswrapper[4881]: I0121 10:58:47.038226 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-v5n2s"
Jan 21 10:58:47 crc kubenswrapper[4881]: I0121 10:58:47.041057 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl"
Jan 21 10:58:47 crc kubenswrapper[4881]: I0121 10:58:47.063413 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-v5n2s"]
Jan 21 10:58:47 crc kubenswrapper[4881]: I0121 10:58:47.100464 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-769kz"
Jan 21 10:58:47 crc kubenswrapper[4881]: I0121 10:58:47.104372 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-769kz"
Jan 21 10:58:47 crc kubenswrapper[4881]: I0121 10:58:47.116640 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e002e57-13ab-477a-9e16-980e13b5e47f-catalog-content\") pod \"certified-operators-q6dn5\" (UID: \"8e002e57-13ab-477a-9e16-980e13b5e47f\") " pod="openshift-marketplace/certified-operators-q6dn5"
Jan 21 10:58:47 crc kubenswrapper[4881]: I0121 10:58:47.117028 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e002e57-13ab-477a-9e16-980e13b5e47f-utilities\") pod \"certified-operators-q6dn5\" (UID: \"8e002e57-13ab-477a-9e16-980e13b5e47f\") " pod="openshift-marketplace/certified-operators-q6dn5"
Jan 21 10:58:47 crc kubenswrapper[4881]: I0121 10:58:47.117243 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g42w8\" (UniqueName: \"kubernetes.io/projected/8e002e57-13ab-477a-9e16-980e13b5e47f-kube-api-access-g42w8\") pod \"certified-operators-q6dn5\" (UID: \"8e002e57-13ab-477a-9e16-980e13b5e47f\") " pod="openshift-marketplace/certified-operators-q6dn5"
Jan 21 10:58:47 crc kubenswrapper[4881]: I0121 10:58:47.118071 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e002e57-13ab-477a-9e16-980e13b5e47f-catalog-content\") pod \"certified-operators-q6dn5\" (UID: \"8e002e57-13ab-477a-9e16-980e13b5e47f\") " pod="openshift-marketplace/certified-operators-q6dn5"
Jan 21 10:58:47 crc kubenswrapper[4881]: I0121 10:58:47.118252 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e002e57-13ab-477a-9e16-980e13b5e47f-utilities\") pod \"certified-operators-q6dn5\" (UID: \"8e002e57-13ab-477a-9e16-980e13b5e47f\") " pod="openshift-marketplace/certified-operators-q6dn5"
Jan 21 10:58:47 crc kubenswrapper[4881]: I0121 10:58:47.123116 4881 patch_prober.go:28] interesting pod/router-default-5444994796-v7wnh container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 21 10:58:47 crc kubenswrapper[4881]: [-]has-synced failed: reason withheld
Jan 21 10:58:47 crc kubenswrapper[4881]: [+]process-running ok
Jan 21 10:58:47 crc kubenswrapper[4881]: healthz check failed
Jan 21 10:58:47 crc kubenswrapper[4881]: I0121 10:58:47.123172 4881 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-v7wnh" podUID="52d94566-7844-4414-bf48-9122c2207dd6" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 21 10:58:47 crc kubenswrapper[4881]: I0121 10:58:47.136981 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-769kz"
Jan 21 10:58:47 crc kubenswrapper[4881]: I0121 10:58:47.161726 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g42w8\" (UniqueName: \"kubernetes.io/projected/8e002e57-13ab-477a-9e16-980e13b5e47f-kube-api-access-g42w8\") pod \"certified-operators-q6dn5\" (UID: \"8e002e57-13ab-477a-9e16-980e13b5e47f\") " pod="openshift-marketplace/certified-operators-q6dn5"
Jan 21 10:58:47 crc kubenswrapper[4881]: I0121 10:58:47.195121 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-q6dn5"
Jan 21 10:58:47 crc kubenswrapper[4881]: I0121 10:58:47.219160 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e034cbba-e6a2-4b62-94e1-7bd2d3c5ae8a-utilities\") pod \"community-operators-v5n2s\" (UID: \"e034cbba-e6a2-4b62-94e1-7bd2d3c5ae8a\") " pod="openshift-marketplace/community-operators-v5n2s"
Jan 21 10:58:47 crc kubenswrapper[4881]: I0121 10:58:47.219230 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e034cbba-e6a2-4b62-94e1-7bd2d3c5ae8a-catalog-content\") pod \"community-operators-v5n2s\" (UID: \"e034cbba-e6a2-4b62-94e1-7bd2d3c5ae8a\") " pod="openshift-marketplace/community-operators-v5n2s"
Jan 21 10:58:47 crc kubenswrapper[4881]: I0121 10:58:47.219318 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mf89m\" (UniqueName: \"kubernetes.io/projected/e034cbba-e6a2-4b62-94e1-7bd2d3c5ae8a-kube-api-access-mf89m\") pod \"community-operators-v5n2s\" (UID: \"e034cbba-e6a2-4b62-94e1-7bd2d3c5ae8a\") " pod="openshift-marketplace/community-operators-v5n2s"
Jan 21 10:58:47 crc kubenswrapper[4881]: I0121 10:58:47.239519 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-2sqlm"]
Jan 21 10:58:47 crc kubenswrapper[4881]: I0121 10:58:47.240661 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-2sqlm"
Jan 21 10:58:47 crc kubenswrapper[4881]: I0121 10:58:47.298860 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-2sqlm"]
Jan 21 10:58:47 crc kubenswrapper[4881]: I0121 10:58:47.303504 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"bac3c741-e8bc-4059-8914-a6f834cee8dd","Type":"ContainerStarted","Data":"d000f23cdd5d4f1ece21017d747b89cc98e096184b532595e3b8592df18c9c55"}
Jan 21 10:58:47 crc kubenswrapper[4881]: I0121 10:58:47.305957 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-42f9f" event={"ID":"409e44ed-8f6d-4321-9620-d8da23cf0fec","Type":"ContainerStarted","Data":"8245115cf5cd1ff0788aba3d223fbe0052e99f64b818eb3fccd9c5e9e87ad2e4"}
Jan 21 10:58:47 crc kubenswrapper[4881]: I0121 10:58:47.320631 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mf89m\" (UniqueName: \"kubernetes.io/projected/e034cbba-e6a2-4b62-94e1-7bd2d3c5ae8a-kube-api-access-mf89m\") pod \"community-operators-v5n2s\" (UID: \"e034cbba-e6a2-4b62-94e1-7bd2d3c5ae8a\") " pod="openshift-marketplace/community-operators-v5n2s"
Jan 21 10:58:47 crc kubenswrapper[4881]: I0121 10:58:47.320986 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e034cbba-e6a2-4b62-94e1-7bd2d3c5ae8a-utilities\") pod \"community-operators-v5n2s\" (UID: \"e034cbba-e6a2-4b62-94e1-7bd2d3c5ae8a\") " pod="openshift-marketplace/community-operators-v5n2s"
Jan 21 10:58:47 crc kubenswrapper[4881]: I0121 10:58:47.321111 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e034cbba-e6a2-4b62-94e1-7bd2d3c5ae8a-catalog-content\") pod \"community-operators-v5n2s\" (UID: \"e034cbba-e6a2-4b62-94e1-7bd2d3c5ae8a\") " pod="openshift-marketplace/community-operators-v5n2s"
Jan 21 10:58:47 crc kubenswrapper[4881]: I0121 10:58:47.321832 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e034cbba-e6a2-4b62-94e1-7bd2d3c5ae8a-catalog-content\") pod \"community-operators-v5n2s\" (UID: \"e034cbba-e6a2-4b62-94e1-7bd2d3c5ae8a\") " pod="openshift-marketplace/community-operators-v5n2s"
Jan 21 10:58:47 crc kubenswrapper[4881]: I0121 10:58:47.322944 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e034cbba-e6a2-4b62-94e1-7bd2d3c5ae8a-utilities\") pod \"community-operators-v5n2s\" (UID: \"e034cbba-e6a2-4b62-94e1-7bd2d3c5ae8a\") " pod="openshift-marketplace/community-operators-v5n2s"
Jan 21 10:58:47 crc kubenswrapper[4881]: I0121 10:58:47.357232 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes"
Jan 21 10:58:47 crc kubenswrapper[4881]: I0121 10:58:47.422564 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b12596d-1f5f-4d81-b664-d0ddee72552c-catalog-content\") pod \"certified-operators-2sqlm\" (UID: \"5b12596d-1f5f-4d81-b664-d0ddee72552c\") " pod="openshift-marketplace/certified-operators-2sqlm"
Jan 21 10:58:47 crc kubenswrapper[4881]: I0121 10:58:47.422701 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b12596d-1f5f-4d81-b664-d0ddee72552c-utilities\") pod \"certified-operators-2sqlm\" (UID: \"5b12596d-1f5f-4d81-b664-d0ddee72552c\") " pod="openshift-marketplace/certified-operators-2sqlm"
Jan 21 10:58:47 crc kubenswrapper[4881]: I0121 10:58:47.422768 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lrsm4\" (UniqueName: \"kubernetes.io/projected/5b12596d-1f5f-4d81-b664-d0ddee72552c-kube-api-access-lrsm4\") pod \"certified-operators-2sqlm\" (UID: \"5b12596d-1f5f-4d81-b664-d0ddee72552c\") " pod="openshift-marketplace/certified-operators-2sqlm"
Jan 21 10:58:47 crc kubenswrapper[4881]: I0121 10:58:47.434262 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-769kz"
Jan 21 10:58:47 crc kubenswrapper[4881]: I0121 10:58:47.446986 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mf89m\" (UniqueName: \"kubernetes.io/projected/e034cbba-e6a2-4b62-94e1-7bd2d3c5ae8a-kube-api-access-mf89m\") pod \"community-operators-v5n2s\" (UID: \"e034cbba-e6a2-4b62-94e1-7bd2d3c5ae8a\") " pod="openshift-marketplace/community-operators-v5n2s"
Jan 21 10:58:47 crc kubenswrapper[4881]: I0121 10:58:47.451413 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-6rmvm"]
Jan 21 10:58:47 crc kubenswrapper[4881]: I0121 10:58:47.452670 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6rmvm"
Jan 21 10:58:47 crc kubenswrapper[4881]: I0121 10:58:47.462525 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6rmvm"]
Jan 21 10:58:47 crc kubenswrapper[4881]: I0121 10:58:47.523486 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b12596d-1f5f-4d81-b664-d0ddee72552c-utilities\") pod \"certified-operators-2sqlm\" (UID: \"5b12596d-1f5f-4d81-b664-d0ddee72552c\") " pod="openshift-marketplace/certified-operators-2sqlm"
Jan 21 10:58:47 crc kubenswrapper[4881]: I0121 10:58:47.523546 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p2dkc\" (UniqueName: \"kubernetes.io/projected/2c460bf5-05a1-4977-b889-1a5c3263df33-kube-api-access-p2dkc\") pod \"community-operators-6rmvm\" (UID: \"2c460bf5-05a1-4977-b889-1a5c3263df33\") " pod="openshift-marketplace/community-operators-6rmvm"
Jan 21 10:58:47 crc kubenswrapper[4881]: I0121 10:58:47.523603 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lrsm4\" (UniqueName: \"kubernetes.io/projected/5b12596d-1f5f-4d81-b664-d0ddee72552c-kube-api-access-lrsm4\") pod \"certified-operators-2sqlm\" (UID: \"5b12596d-1f5f-4d81-b664-d0ddee72552c\") " pod="openshift-marketplace/certified-operators-2sqlm"
Jan 21 10:58:47 crc kubenswrapper[4881]: I0121 10:58:47.523629 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2c460bf5-05a1-4977-b889-1a5c3263df33-catalog-content\") pod \"community-operators-6rmvm\" (UID: \"2c460bf5-05a1-4977-b889-1a5c3263df33\") " pod="openshift-marketplace/community-operators-6rmvm"
Jan 21 10:58:47 crc kubenswrapper[4881]: I0121 10:58:47.523678 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2c460bf5-05a1-4977-b889-1a5c3263df33-utilities\") pod \"community-operators-6rmvm\" (UID: \"2c460bf5-05a1-4977-b889-1a5c3263df33\") " pod="openshift-marketplace/community-operators-6rmvm"
Jan 21 10:58:47 crc kubenswrapper[4881]: I0121 10:58:47.523740 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b12596d-1f5f-4d81-b664-d0ddee72552c-catalog-content\") pod \"certified-operators-2sqlm\" (UID: \"5b12596d-1f5f-4d81-b664-d0ddee72552c\") " pod="openshift-marketplace/certified-operators-2sqlm"
Jan 21 10:58:47 crc kubenswrapper[4881]: I0121 10:58:47.524272 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b12596d-1f5f-4d81-b664-d0ddee72552c-catalog-content\") pod \"certified-operators-2sqlm\" (UID: \"5b12596d-1f5f-4d81-b664-d0ddee72552c\") " pod="openshift-marketplace/certified-operators-2sqlm"
Jan 21 10:58:47 crc kubenswrapper[4881]: I0121 10:58:47.524543 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b12596d-1f5f-4d81-b664-d0ddee72552c-utilities\") pod \"certified-operators-2sqlm\" (UID: \"5b12596d-1f5f-4d81-b664-d0ddee72552c\") " pod="openshift-marketplace/certified-operators-2sqlm"
Jan 21 10:58:47 crc kubenswrapper[4881]: I0121 10:58:47.537113 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-rslv2"
Jan 21 10:58:47 crc kubenswrapper[4881]: I0121 10:58:47.586211 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lrsm4\" (UniqueName: \"kubernetes.io/projected/5b12596d-1f5f-4d81-b664-d0ddee72552c-kube-api-access-lrsm4\") pod \"certified-operators-2sqlm\" (UID: \"5b12596d-1f5f-4d81-b664-d0ddee72552c\") " pod="openshift-marketplace/certified-operators-2sqlm"
Jan 21 10:58:47 crc kubenswrapper[4881]: I0121 10:58:47.607179 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-n98tz"]
Jan 21 10:58:47 crc kubenswrapper[4881]: I0121 10:58:47.625630 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p2dkc\" (UniqueName: \"kubernetes.io/projected/2c460bf5-05a1-4977-b889-1a5c3263df33-kube-api-access-p2dkc\") pod \"community-operators-6rmvm\" (UID: \"2c460bf5-05a1-4977-b889-1a5c3263df33\") " pod="openshift-marketplace/community-operators-6rmvm"
Jan 21 10:58:47 crc kubenswrapper[4881]: I0121 10:58:47.625712 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2c460bf5-05a1-4977-b889-1a5c3263df33-catalog-content\") pod \"community-operators-6rmvm\" (UID: \"2c460bf5-05a1-4977-b889-1a5c3263df33\") " pod="openshift-marketplace/community-operators-6rmvm"
Jan 21 10:58:47 crc kubenswrapper[4881]: I0121 10:58:47.625765 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2c460bf5-05a1-4977-b889-1a5c3263df33-utilities\") pod \"community-operators-6rmvm\" (UID: \"2c460bf5-05a1-4977-b889-1a5c3263df33\") " pod="openshift-marketplace/community-operators-6rmvm"
Jan 21 10:58:47 crc kubenswrapper[4881]: I0121 10:58:47.627594 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2c460bf5-05a1-4977-b889-1a5c3263df33-catalog-content\") pod \"community-operators-6rmvm\" (UID: \"2c460bf5-05a1-4977-b889-1a5c3263df33\") " pod="openshift-marketplace/community-operators-6rmvm"
Jan 21 10:58:47 crc kubenswrapper[4881]: I0121 10:58:47.638082 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2c460bf5-05a1-4977-b889-1a5c3263df33-utilities\") pod \"community-operators-6rmvm\" (UID: \"2c460bf5-05a1-4977-b889-1a5c3263df33\") " pod="openshift-marketplace/community-operators-6rmvm"
Jan 21 10:58:47 crc kubenswrapper[4881]: I0121 10:58:47.665327 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-v5n2s"
Jan 21 10:58:47 crc kubenswrapper[4881]: I0121 10:58:47.686660 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p2dkc\" (UniqueName: \"kubernetes.io/projected/2c460bf5-05a1-4977-b889-1a5c3263df33-kube-api-access-p2dkc\") pod \"community-operators-6rmvm\" (UID: \"2c460bf5-05a1-4977-b889-1a5c3263df33\") " pod="openshift-marketplace/community-operators-6rmvm"
Jan 21 10:58:47 crc kubenswrapper[4881]: I0121 10:58:47.727014 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-2sqlm"
Jan 21 10:58:47 crc kubenswrapper[4881]: I0121 10:58:47.814686 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6rmvm"
Jan 21 10:58:47 crc kubenswrapper[4881]: I0121 10:58:47.835746 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-q6dn5"]
Jan 21 10:58:48 crc kubenswrapper[4881]: I0121 10:58:48.170003 4881 patch_prober.go:28] interesting pod/router-default-5444994796-v7wnh container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 21 10:58:48 crc kubenswrapper[4881]: [-]has-synced failed: reason withheld
Jan 21 10:58:48 crc kubenswrapper[4881]: [+]process-running ok
Jan 21 10:58:48 crc kubenswrapper[4881]: healthz check failed
Jan 21 10:58:48 crc kubenswrapper[4881]: I0121 10:58:48.170071 4881 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-v7wnh" podUID="52d94566-7844-4414-bf48-9122c2207dd6" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 21 10:58:48 crc kubenswrapper[4881]: I0121 10:58:48.332243 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-42f9f" event={"ID":"409e44ed-8f6d-4321-9620-d8da23cf0fec","Type":"ContainerStarted","Data":"9f7a161cdf8f6dfa4d2425914e51e1e5b1421a4f039da2cadabde7c7bee8b711"}
Jan 21 10:58:48 crc kubenswrapper[4881]: I0121 10:58:48.334753 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"bac3c741-e8bc-4059-8914-a6f834cee8dd","Type":"ContainerStarted","Data":"18ebf1075ea1988b5e7d28c03859275c513980ea48c6783e2ceaeba7f10417b0"}
Jan 21 10:58:48 crc kubenswrapper[4881]: I0121 10:58:48.337885 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-n98tz" event={"ID":"ec369bed-0b60-48b0-9de0-fcfd6ca7776d","Type":"ContainerStarted","Data":"2afb4777e26b8b9ed3649e0224c3ebc4424187c098e907f770d7f03bdea5704c"}
Jan 21 10:58:48 crc kubenswrapper[4881]: I0121 10:58:48.337922 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-n98tz" event={"ID":"ec369bed-0b60-48b0-9de0-fcfd6ca7776d","Type":"ContainerStarted","Data":"5474c3ee513cde1d48c15d56d09e1c7f705a56319c7e90c496d397eeca80a458"}
Jan 21 10:58:48 crc kubenswrapper[4881]: I0121 10:58:48.338031 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-n98tz"
Jan 21 10:58:48 crc kubenswrapper[4881]: I0121 10:58:48.339723 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-q6dn5" event={"ID":"8e002e57-13ab-477a-9e16-980e13b5e47f","Type":"ContainerStarted","Data":"a5c87f9c9c2e9ea53443d498b2b01400a8b6111456d79eeb2d2d4b28aa714ca1"}
Jan 21 10:58:48 crc kubenswrapper[4881]: I0121 10:58:48.389249 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-42f9f" podStartSLOduration=19.389233491 podStartE2EDuration="19.389233491s" podCreationTimestamp="2026-01-21 10:58:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:58:48.384960676 +0000 UTC m=+115.644917155" watchObservedRunningTime="2026-01-21 10:58:48.389233491 +0000 UTC m=+115.649189960"
Jan 21 10:58:48 crc kubenswrapper[4881]: I0121 10:58:48.412330 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-n98tz" podStartSLOduration=93.412315938 podStartE2EDuration="1m33.412315938s" podCreationTimestamp="2026-01-21 10:57:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:58:48.410998545 +0000 UTC m=+115.670955004" watchObservedRunningTime="2026-01-21 10:58:48.412315938 +0000 UTC m=+115.672272407"
Jan 21 10:58:48 crc kubenswrapper[4881]: I0121 10:58:48.434397 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=3.43437378 podStartE2EDuration="3.43437378s" podCreationTimestamp="2026-01-21 10:58:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:58:48.43030344 +0000 UTC m=+115.690259909" watchObservedRunningTime="2026-01-21 10:58:48.43437378 +0000 UTC m=+115.694330249"
Jan 21 10:58:48 crc kubenswrapper[4881]: I0121 10:58:48.518908 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-2sqlm"]
Jan 21 10:58:48 crc kubenswrapper[4881]: I0121 10:58:48.543296 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-v5n2s"]
Jan 21 10:58:48 crc kubenswrapper[4881]: I0121 10:58:48.586064 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6rmvm"]
Jan 21 10:58:48 crc kubenswrapper[4881]: I0121 10:58:48.824655 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-89m75"]
Jan 21 10:58:48 crc kubenswrapper[4881]: I0121 10:58:48.827147 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-89m75"
Jan 21 10:58:48 crc kubenswrapper[4881]: I0121 10:58:48.832032 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb"
Jan 21 10:58:48 crc kubenswrapper[4881]: I0121 10:58:48.853006 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-89m75"]
Jan 21 10:58:48 crc kubenswrapper[4881]: I0121 10:58:48.910355 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q2qtc\" (UniqueName: \"kubernetes.io/projected/075db786-6ad0-4982-b70e-bd05d4f240ec-kube-api-access-q2qtc\") pod \"redhat-marketplace-89m75\" (UID: \"075db786-6ad0-4982-b70e-bd05d4f240ec\") " pod="openshift-marketplace/redhat-marketplace-89m75"
Jan 21 10:58:48 crc kubenswrapper[4881]: I0121 10:58:48.910445 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/075db786-6ad0-4982-b70e-bd05d4f240ec-utilities\") pod \"redhat-marketplace-89m75\" (UID: \"075db786-6ad0-4982-b70e-bd05d4f240ec\") " pod="openshift-marketplace/redhat-marketplace-89m75"
Jan 21 10:58:48 crc kubenswrapper[4881]: I0121 10:58:48.910541 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/075db786-6ad0-4982-b70e-bd05d4f240ec-catalog-content\") pod \"redhat-marketplace-89m75\" (UID: \"075db786-6ad0-4982-b70e-bd05d4f240ec\") " pod="openshift-marketplace/redhat-marketplace-89m75"
Jan 21 10:58:49 crc kubenswrapper[4881]: I0121 10:58:49.011104 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/075db786-6ad0-4982-b70e-bd05d4f240ec-catalog-content\") pod \"redhat-marketplace-89m75\" (UID: \"075db786-6ad0-4982-b70e-bd05d4f240ec\") " pod="openshift-marketplace/redhat-marketplace-89m75"
Jan 21 10:58:49 crc kubenswrapper[4881]: I0121 10:58:49.011225 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q2qtc\" (UniqueName: \"kubernetes.io/projected/075db786-6ad0-4982-b70e-bd05d4f240ec-kube-api-access-q2qtc\") pod \"redhat-marketplace-89m75\" (UID: \"075db786-6ad0-4982-b70e-bd05d4f240ec\") " pod="openshift-marketplace/redhat-marketplace-89m75"
Jan 21 10:58:49 crc kubenswrapper[4881]: I0121 10:58:49.011255 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/075db786-6ad0-4982-b70e-bd05d4f240ec-utilities\") pod \"redhat-marketplace-89m75\" (UID: \"075db786-6ad0-4982-b70e-bd05d4f240ec\") " pod="openshift-marketplace/redhat-marketplace-89m75"
Jan 21 10:58:49 crc kubenswrapper[4881]: I0121 10:58:49.011965 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/075db786-6ad0-4982-b70e-bd05d4f240ec-utilities\") pod \"redhat-marketplace-89m75\" (UID: \"075db786-6ad0-4982-b70e-bd05d4f240ec\") " pod="openshift-marketplace/redhat-marketplace-89m75"
Jan 21 10:58:49 crc kubenswrapper[4881]: I0121 10:58:49.011991 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/075db786-6ad0-4982-b70e-bd05d4f240ec-catalog-content\") pod \"redhat-marketplace-89m75\" (UID: \"075db786-6ad0-4982-b70e-bd05d4f240ec\") " pod="openshift-marketplace/redhat-marketplace-89m75"
Jan 21 10:58:49 crc kubenswrapper[4881]: I0121 10:58:49.035281 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q2qtc\" (UniqueName: \"kubernetes.io/projected/075db786-6ad0-4982-b70e-bd05d4f240ec-kube-api-access-q2qtc\") pod \"redhat-marketplace-89m75\" (UID: \"075db786-6ad0-4982-b70e-bd05d4f240ec\") " pod="openshift-marketplace/redhat-marketplace-89m75"
Jan 21 10:58:49 crc kubenswrapper[4881]: I0121 10:58:49.114917 4881 patch_prober.go:28] interesting pod/router-default-5444994796-v7wnh container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 21 10:58:49 crc kubenswrapper[4881]: [-]has-synced failed: reason withheld
Jan 21 10:58:49 crc kubenswrapper[4881]: [+]process-running ok
Jan 21 10:58:49 crc kubenswrapper[4881]: healthz check failed
Jan 21 10:58:49 crc kubenswrapper[4881]: I0121 10:58:49.115040 4881 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-v7wnh" podUID="52d94566-7844-4414-bf48-9122c2207dd6" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 21 10:58:49 crc kubenswrapper[4881]: I0121 10:58:49.232343 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-vljfh"]
Jan 21 10:58:49 crc kubenswrapper[4881]: I0121 10:58:49.234816 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vljfh"
Jan 21 10:58:49 crc kubenswrapper[4881]: I0121 10:58:49.259947 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vljfh"]
Jan 21 10:58:49 crc kubenswrapper[4881]: I0121 10:58:49.373293 4881 generic.go:334] "Generic (PLEG): container finished" podID="bac3c741-e8bc-4059-8914-a6f834cee8dd" containerID="18ebf1075ea1988b5e7d28c03859275c513980ea48c6783e2ceaeba7f10417b0" exitCode=0
Jan 21 10:58:49 crc kubenswrapper[4881]: I0121 10:58:49.373373 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"bac3c741-e8bc-4059-8914-a6f834cee8dd","Type":"ContainerDied","Data":"18ebf1075ea1988b5e7d28c03859275c513980ea48c6783e2ceaeba7f10417b0"}
Jan 21 10:58:49 crc kubenswrapper[4881]: I0121 10:58:49.384684 4881 generic.go:334] "Generic (PLEG): container finished" podID="2c460bf5-05a1-4977-b889-1a5c3263df33" containerID="21ab48233ffe1978a9c9e6217e5905832c0304da6f07fa2e19daa5ca75ac0da7" exitCode=0
Jan 21 10:58:49 crc kubenswrapper[4881]: I0121 10:58:49.384818 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6rmvm" event={"ID":"2c460bf5-05a1-4977-b889-1a5c3263df33","Type":"ContainerDied","Data":"21ab48233ffe1978a9c9e6217e5905832c0304da6f07fa2e19daa5ca75ac0da7"}
Jan 21 10:58:49 crc kubenswrapper[4881]: I0121 10:58:49.384869 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6rmvm" event={"ID":"2c460bf5-05a1-4977-b889-1a5c3263df33","Type":"ContainerStarted","Data":"c3a0b0298aa8ab878f3e521eb0f166ff0e56c334391018119468d1c2b03f0be9"}
Jan 21 10:58:49 crc kubenswrapper[4881]: I0121 10:58:49.388216 4881 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 21 10:58:49 crc kubenswrapper[4881]: I0121 10:58:49.393481 4881 generic.go:334] "Generic (PLEG): container finished" podID="e034cbba-e6a2-4b62-94e1-7bd2d3c5ae8a" containerID="8f66d538b15eac6e19eeb1b6e73b0917e7cb4600d289674a11496b4ddb805259" exitCode=0
Jan 21 10:58:49 crc kubenswrapper[4881]: I0121 10:58:49.403832 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v5n2s" event={"ID":"e034cbba-e6a2-4b62-94e1-7bd2d3c5ae8a","Type":"ContainerDied","Data":"8f66d538b15eac6e19eeb1b6e73b0917e7cb4600d289674a11496b4ddb805259"}
Jan 21 10:58:49 crc kubenswrapper[4881]: I0121 10:58:49.403904 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v5n2s" event={"ID":"e034cbba-e6a2-4b62-94e1-7bd2d3c5ae8a","Type":"ContainerStarted","Data":"79b5df43169324987a329525742a5078ed6a8e75640eab433d3baf2cf413407f"}
Jan 21 10:58:49 crc kubenswrapper[4881]: I0121 10:58:49.404667 4881 generic.go:334] "Generic (PLEG): container finished" podID="5b12596d-1f5f-4d81-b664-d0ddee72552c" containerID="5aed93291404e255299931c1a9f3a011b1cb4d3b3ce796db1f1b3e7ec12c142e" exitCode=0
Jan 21 10:58:49 crc kubenswrapper[4881]: I0121 10:58:49.404766 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2sqlm" event={"ID":"5b12596d-1f5f-4d81-b664-d0ddee72552c","Type":"ContainerDied","Data":"5aed93291404e255299931c1a9f3a011b1cb4d3b3ce796db1f1b3e7ec12c142e"}
Jan 21 10:58:49 crc kubenswrapper[4881]: I0121 10:58:49.404809 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2sqlm" event={"ID":"5b12596d-1f5f-4d81-b664-d0ddee72552c","Type":"ContainerStarted","Data":"06bab0b00f0f71fd0a092b84dfd550234e778896541edbd10dbb4f1a0cb5d5b8"}
Jan 21 10:58:49 crc kubenswrapper[4881]: I0121 10:58:49.411015 4881 generic.go:334] "Generic (PLEG): container finished" podID="8e002e57-13ab-477a-9e16-980e13b5e47f" containerID="1ccb96495e693b437b8f3969fa58a55b9e7011c267f14a44820d1cfd34daabf3" exitCode=0
Jan 21 10:58:49 crc kubenswrapper[4881]: I0121 10:58:49.412255 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-q6dn5" event={"ID":"8e002e57-13ab-477a-9e16-980e13b5e47f","Type":"ContainerDied","Data":"1ccb96495e693b437b8f3969fa58a55b9e7011c267f14a44820d1cfd34daabf3"}
Jan 21 10:58:49 crc kubenswrapper[4881]: I0121 10:58:49.518466 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"]
Jan 21 10:58:49 crc kubenswrapper[4881]: I0121 10:58:49.519233 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 21 10:58:49 crc kubenswrapper[4881]: I0121 10:58:49.534866 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n"
Jan 21 10:58:49 crc kubenswrapper[4881]: I0121 10:58:49.538886 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"]
Jan 21 10:58:49 crc kubenswrapper[4881]: I0121 10:58:49.555833 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt"
Jan 21 10:58:49 crc kubenswrapper[4881]: I0121 10:58:49.628645 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d66b837-f7b1-4795-895f-08cdabe48b37-catalog-content\") pod \"redhat-marketplace-vljfh\" (UID: \"1d66b837-f7b1-4795-895f-08cdabe48b37\") " pod="openshift-marketplace/redhat-marketplace-vljfh"
Jan 21 10:58:49 crc kubenswrapper[4881]: I0121 10:58:49.628845 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d66b837-f7b1-4795-895f-08cdabe48b37-utilities\") pod \"redhat-marketplace-vljfh\" (UID: \"1d66b837-f7b1-4795-895f-08cdabe48b37\") " pod="openshift-marketplace/redhat-marketplace-vljfh"
Jan 21 10:58:49 crc kubenswrapper[4881]: I0121 10:58:49.628925 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b56ld\" (UniqueName: \"kubernetes.io/projected/1d66b837-f7b1-4795-895f-08cdabe48b37-kube-api-access-b56ld\") pod \"redhat-marketplace-vljfh\" (UID: \"1d66b837-f7b1-4795-895f-08cdabe48b37\") " pod="openshift-marketplace/redhat-marketplace-vljfh"
Jan 21 10:58:49 crc kubenswrapper[4881]: I0121 10:58:49.692302 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-89m75"
Jan 21 10:58:49 crc kubenswrapper[4881]: I0121 10:58:49.730460 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d66b837-f7b1-4795-895f-08cdabe48b37-catalog-content\") pod \"redhat-marketplace-vljfh\" (UID: \"1d66b837-f7b1-4795-895f-08cdabe48b37\") " pod="openshift-marketplace/redhat-marketplace-vljfh"
Jan 21 10:58:49 crc kubenswrapper[4881]: I0121 10:58:49.730884 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/82118904-aa61-43ac-968f-283dc807d0c9-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"82118904-aa61-43ac-968f-283dc807d0c9\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 21 10:58:49 crc kubenswrapper[4881]: I0121 10:58:49.730906 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/82118904-aa61-43ac-968f-283dc807d0c9-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"82118904-aa61-43ac-968f-283dc807d0c9\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 21 10:58:49 crc kubenswrapper[4881]: I0121 10:58:49.730930 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d66b837-f7b1-4795-895f-08cdabe48b37-utilities\") pod \"redhat-marketplace-vljfh\" (UID: \"1d66b837-f7b1-4795-895f-08cdabe48b37\") " pod="openshift-marketplace/redhat-marketplace-vljfh"
Jan 21 10:58:49 crc kubenswrapper[4881]: I0121 10:58:49.730983 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b56ld\" (UniqueName: \"kubernetes.io/projected/1d66b837-f7b1-4795-895f-08cdabe48b37-kube-api-access-b56ld\") pod \"redhat-marketplace-vljfh\" (UID: \"1d66b837-f7b1-4795-895f-08cdabe48b37\") " pod="openshift-marketplace/redhat-marketplace-vljfh"
Jan 21 10:58:49 crc kubenswrapper[4881]: I0121 10:58:49.731648 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d66b837-f7b1-4795-895f-08cdabe48b37-catalog-content\") pod \"redhat-marketplace-vljfh\" (UID: \"1d66b837-f7b1-4795-895f-08cdabe48b37\") " pod="openshift-marketplace/redhat-marketplace-vljfh"
Jan 21 10:58:49 crc kubenswrapper[4881]: I0121 10:58:49.733722 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d66b837-f7b1-4795-895f-08cdabe48b37-utilities\") pod \"redhat-marketplace-vljfh\" (UID: \"1d66b837-f7b1-4795-895f-08cdabe48b37\") " pod="openshift-marketplace/redhat-marketplace-vljfh"
Jan 21 10:58:49 crc kubenswrapper[4881]: I0121 10:58:49.793587 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b56ld\" (UniqueName: \"kubernetes.io/projected/1d66b837-f7b1-4795-895f-08cdabe48b37-kube-api-access-b56ld\") pod \"redhat-marketplace-vljfh\" (UID: \"1d66b837-f7b1-4795-895f-08cdabe48b37\") " pod="openshift-marketplace/redhat-marketplace-vljfh"
Jan 21 10:58:49 crc kubenswrapper[4881]: I0121 10:58:49.831867 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/82118904-aa61-43ac-968f-283dc807d0c9-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"82118904-aa61-43ac-968f-283dc807d0c9\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 21 10:58:49 crc kubenswrapper[4881]: I0121 10:58:49.831913 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/82118904-aa61-43ac-968f-283dc807d0c9-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"82118904-aa61-43ac-968f-283dc807d0c9\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 21 10:58:49 crc kubenswrapper[4881]: I0121 10:58:49.832056 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/82118904-aa61-43ac-968f-283dc807d0c9-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"82118904-aa61-43ac-968f-283dc807d0c9\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 21 10:58:49 crc kubenswrapper[4881]: I0121 10:58:49.853180 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/82118904-aa61-43ac-968f-283dc807d0c9-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"82118904-aa61-43ac-968f-283dc807d0c9\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 21 10:58:50 crc kubenswrapper[4881]: I0121 10:58:50.027484 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-kfmhs"]
Jan 21 10:58:50 crc kubenswrapper[4881]: I0121 10:58:50.028755 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-kfmhs"
Jan 21 10:58:50 crc kubenswrapper[4881]: I0121 10:58:50.041771 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh"
Jan 21 10:58:50 crc kubenswrapper[4881]: I0121 10:58:50.075607 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 21 10:58:50 crc kubenswrapper[4881]: I0121 10:58:50.075962 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vljfh"
Jan 21 10:58:50 crc kubenswrapper[4881]: I0121 10:58:50.120941 4881 patch_prober.go:28] interesting pod/router-default-5444994796-v7wnh container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 21 10:58:50 crc kubenswrapper[4881]: [-]has-synced failed: reason withheld
Jan 21 10:58:50 crc kubenswrapper[4881]: [+]process-running ok
Jan 21 10:58:50 crc kubenswrapper[4881]: healthz check failed
Jan 21 10:58:50 crc kubenswrapper[4881]: I0121 10:58:50.121026 4881 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-v7wnh" podUID="52d94566-7844-4414-bf48-9122c2207dd6" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 21 10:58:50 crc kubenswrapper[4881]: I0121 10:58:50.144640 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fc6f2\" (UniqueName: \"kubernetes.io/projected/d318e830-067f-4722-9d74-a45fcefc939d-kube-api-access-fc6f2\") pod \"redhat-operators-kfmhs\" (UID: \"d318e830-067f-4722-9d74-a45fcefc939d\") " pod="openshift-marketplace/redhat-operators-kfmhs"
Jan 21 10:58:50 crc kubenswrapper[4881]: I0121 10:58:50.144742 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d318e830-067f-4722-9d74-a45fcefc939d-catalog-content\") pod \"redhat-operators-kfmhs\" (UID: \"d318e830-067f-4722-9d74-a45fcefc939d\") " pod="openshift-marketplace/redhat-operators-kfmhs"
Jan 21 10:58:50 crc kubenswrapper[4881]: I0121 10:58:50.145005 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d318e830-067f-4722-9d74-a45fcefc939d-utilities\") pod \"redhat-operators-kfmhs\" (UID: \"d318e830-067f-4722-9d74-a45fcefc939d\") " pod="openshift-marketplace/redhat-operators-kfmhs"
Jan 21 10:58:50 crc kubenswrapper[4881]: I0121 10:58:50.159979 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-kfmhs"]
Jan 21 10:58:50 crc kubenswrapper[4881]: I0121 10:58:50.246376 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fc6f2\" (UniqueName: \"kubernetes.io/projected/d318e830-067f-4722-9d74-a45fcefc939d-kube-api-access-fc6f2\") pod \"redhat-operators-kfmhs\" (UID: \"d318e830-067f-4722-9d74-a45fcefc939d\") " pod="openshift-marketplace/redhat-operators-kfmhs"
Jan 21 10:58:50 crc kubenswrapper[4881]: I0121 10:58:50.246436 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d318e830-067f-4722-9d74-a45fcefc939d-catalog-content\") pod \"redhat-operators-kfmhs\" (UID: \"d318e830-067f-4722-9d74-a45fcefc939d\") " pod="openshift-marketplace/redhat-operators-kfmhs"
Jan 21 10:58:50 crc kubenswrapper[4881]: I0121 10:58:50.246469 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d318e830-067f-4722-9d74-a45fcefc939d-utilities\") pod \"redhat-operators-kfmhs\" (UID: \"d318e830-067f-4722-9d74-a45fcefc939d\") " pod="openshift-marketplace/redhat-operators-kfmhs"
Jan 21 10:58:50 crc kubenswrapper[4881]: I0121 10:58:50.246914 4881
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d318e830-067f-4722-9d74-a45fcefc939d-utilities\") pod \"redhat-operators-kfmhs\" (UID: \"d318e830-067f-4722-9d74-a45fcefc939d\") " pod="openshift-marketplace/redhat-operators-kfmhs" Jan 21 10:58:50 crc kubenswrapper[4881]: I0121 10:58:50.247378 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d318e830-067f-4722-9d74-a45fcefc939d-catalog-content\") pod \"redhat-operators-kfmhs\" (UID: \"d318e830-067f-4722-9d74-a45fcefc939d\") " pod="openshift-marketplace/redhat-operators-kfmhs" Jan 21 10:58:50 crc kubenswrapper[4881]: I0121 10:58:50.288032 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fc6f2\" (UniqueName: \"kubernetes.io/projected/d318e830-067f-4722-9d74-a45fcefc939d-kube-api-access-fc6f2\") pod \"redhat-operators-kfmhs\" (UID: \"d318e830-067f-4722-9d74-a45fcefc939d\") " pod="openshift-marketplace/redhat-operators-kfmhs" Jan 21 10:58:50 crc kubenswrapper[4881]: I0121 10:58:50.361393 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-kfmhs" Jan 21 10:58:50 crc kubenswrapper[4881]: I0121 10:58:50.429420 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-t4zlb"] Jan 21 10:58:50 crc kubenswrapper[4881]: I0121 10:58:50.430487 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-t4zlb" Jan 21 10:58:50 crc kubenswrapper[4881]: I0121 10:58:50.469925 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-t4zlb"] Jan 21 10:58:50 crc kubenswrapper[4881]: I0121 10:58:50.558491 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sn5jn\" (UniqueName: \"kubernetes.io/projected/b83e71f8-970c-4afc-ac31-264c7ca6625a-kube-api-access-sn5jn\") pod \"redhat-operators-t4zlb\" (UID: \"b83e71f8-970c-4afc-ac31-264c7ca6625a\") " pod="openshift-marketplace/redhat-operators-t4zlb" Jan 21 10:58:50 crc kubenswrapper[4881]: I0121 10:58:50.558558 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b83e71f8-970c-4afc-ac31-264c7ca6625a-utilities\") pod \"redhat-operators-t4zlb\" (UID: \"b83e71f8-970c-4afc-ac31-264c7ca6625a\") " pod="openshift-marketplace/redhat-operators-t4zlb" Jan 21 10:58:50 crc kubenswrapper[4881]: I0121 10:58:50.558611 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b83e71f8-970c-4afc-ac31-264c7ca6625a-catalog-content\") pod \"redhat-operators-t4zlb\" (UID: \"b83e71f8-970c-4afc-ac31-264c7ca6625a\") " pod="openshift-marketplace/redhat-operators-t4zlb" Jan 21 10:58:50 crc kubenswrapper[4881]: I0121 10:58:50.659687 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sn5jn\" (UniqueName: \"kubernetes.io/projected/b83e71f8-970c-4afc-ac31-264c7ca6625a-kube-api-access-sn5jn\") pod \"redhat-operators-t4zlb\" (UID: \"b83e71f8-970c-4afc-ac31-264c7ca6625a\") " pod="openshift-marketplace/redhat-operators-t4zlb" Jan 21 10:58:50 crc kubenswrapper[4881]: I0121 10:58:50.660004 4881 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b83e71f8-970c-4afc-ac31-264c7ca6625a-utilities\") pod \"redhat-operators-t4zlb\" (UID: \"b83e71f8-970c-4afc-ac31-264c7ca6625a\") " pod="openshift-marketplace/redhat-operators-t4zlb" Jan 21 10:58:50 crc kubenswrapper[4881]: I0121 10:58:50.660042 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b83e71f8-970c-4afc-ac31-264c7ca6625a-catalog-content\") pod \"redhat-operators-t4zlb\" (UID: \"b83e71f8-970c-4afc-ac31-264c7ca6625a\") " pod="openshift-marketplace/redhat-operators-t4zlb" Jan 21 10:58:50 crc kubenswrapper[4881]: I0121 10:58:50.660762 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b83e71f8-970c-4afc-ac31-264c7ca6625a-catalog-content\") pod \"redhat-operators-t4zlb\" (UID: \"b83e71f8-970c-4afc-ac31-264c7ca6625a\") " pod="openshift-marketplace/redhat-operators-t4zlb" Jan 21 10:58:50 crc kubenswrapper[4881]: I0121 10:58:50.661300 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b83e71f8-970c-4afc-ac31-264c7ca6625a-utilities\") pod \"redhat-operators-t4zlb\" (UID: \"b83e71f8-970c-4afc-ac31-264c7ca6625a\") " pod="openshift-marketplace/redhat-operators-t4zlb" Jan 21 10:58:50 crc kubenswrapper[4881]: I0121 10:58:50.665634 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-89m75"] Jan 21 10:58:50 crc kubenswrapper[4881]: I0121 10:58:50.669720 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vljfh"] Jan 21 10:58:50 crc kubenswrapper[4881]: I0121 10:58:50.691798 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sn5jn\" (UniqueName: \"kubernetes.io/projected/b83e71f8-970c-4afc-ac31-264c7ca6625a-kube-api-access-sn5jn\") pod \"redhat-operators-t4zlb\" (UID: \"b83e71f8-970c-4afc-ac31-264c7ca6625a\") " pod="openshift-marketplace/redhat-operators-t4zlb" Jan 21 10:58:50 crc kubenswrapper[4881]: I0121 10:58:50.772470 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-t4zlb" Jan 21 10:58:50 crc kubenswrapper[4881]: I0121 10:58:50.928925 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-znm6j" Jan 21 10:58:50 crc kubenswrapper[4881]: I0121 10:58:50.987033 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 21 10:58:51 crc kubenswrapper[4881]: I0121 10:58:51.114753 4881 patch_prober.go:28] interesting pod/router-default-5444994796-v7wnh container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 10:58:51 crc kubenswrapper[4881]: [-]has-synced failed: reason withheld Jan 21 10:58:51 crc kubenswrapper[4881]: [+]process-running ok Jan 21 10:58:51 crc kubenswrapper[4881]: healthz check failed Jan 21 10:58:51 crc kubenswrapper[4881]: I0121 10:58:51.115358 4881 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-v7wnh" podUID="52d94566-7844-4414-bf48-9122c2207dd6" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 10:58:51 crc kubenswrapper[4881]: I0121 10:58:51.175710 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 21 10:58:51 crc kubenswrapper[4881]: I0121 10:58:51.176618 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-kfmhs"] Jan 21 10:58:51 crc kubenswrapper[4881]: W0121 10:58:51.186380 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd318e830_067f_4722_9d74_a45fcefc939d.slice/crio-b87ddedd309d60e82b2425e90c86377b7db5b6d93701316fb318e5a216d01095 WatchSource:0}: Error finding container b87ddedd309d60e82b2425e90c86377b7db5b6d93701316fb318e5a216d01095: Status 404 returned error can't find the container with id b87ddedd309d60e82b2425e90c86377b7db5b6d93701316fb318e5a216d01095 Jan 21 10:58:51 crc kubenswrapper[4881]: I0121 10:58:51.323298 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bac3c741-e8bc-4059-8914-a6f834cee8dd-kube-api-access\") pod \"bac3c741-e8bc-4059-8914-a6f834cee8dd\" (UID: \"bac3c741-e8bc-4059-8914-a6f834cee8dd\") " Jan 21 10:58:51 crc kubenswrapper[4881]: I0121 10:58:51.323978 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bac3c741-e8bc-4059-8914-a6f834cee8dd-kubelet-dir\") pod \"bac3c741-e8bc-4059-8914-a6f834cee8dd\" (UID: \"bac3c741-e8bc-4059-8914-a6f834cee8dd\") " Jan 21 10:58:51 crc kubenswrapper[4881]: I0121 10:58:51.324301 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bac3c741-e8bc-4059-8914-a6f834cee8dd-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "bac3c741-e8bc-4059-8914-a6f834cee8dd" (UID: "bac3c741-e8bc-4059-8914-a6f834cee8dd"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 10:58:51 crc kubenswrapper[4881]: I0121 10:58:51.374649 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bac3c741-e8bc-4059-8914-a6f834cee8dd-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "bac3c741-e8bc-4059-8914-a6f834cee8dd" (UID: "bac3c741-e8bc-4059-8914-a6f834cee8dd"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:58:51 crc kubenswrapper[4881]: I0121 10:58:51.427498 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bac3c741-e8bc-4059-8914-a6f834cee8dd-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 21 10:58:51 crc kubenswrapper[4881]: I0121 10:58:51.427607 4881 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bac3c741-e8bc-4059-8914-a6f834cee8dd-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 21 10:58:51 crc kubenswrapper[4881]: I0121 10:58:51.480034 4881 generic.go:334] "Generic (PLEG): container finished" podID="1d66b837-f7b1-4795-895f-08cdabe48b37" containerID="ec4a8cdf9092080c2fbbc3ac32eca21f15705f2f8424796b41499693e29b4095" exitCode=0 Jan 21 10:58:51 crc kubenswrapper[4881]: I0121 10:58:51.521933 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vljfh" event={"ID":"1d66b837-f7b1-4795-895f-08cdabe48b37","Type":"ContainerDied","Data":"ec4a8cdf9092080c2fbbc3ac32eca21f15705f2f8424796b41499693e29b4095"} Jan 21 10:58:51 crc kubenswrapper[4881]: I0121 10:58:51.521970 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vljfh" event={"ID":"1d66b837-f7b1-4795-895f-08cdabe48b37","Type":"ContainerStarted","Data":"eb22a93b2892f0c51c953eb6eb827724775592dd8224db01464d1014b0260e0e"} Jan 21 10:58:51 crc kubenswrapper[4881]: I0121 10:58:51.521984 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kfmhs" event={"ID":"d318e830-067f-4722-9d74-a45fcefc939d","Type":"ContainerStarted","Data":"b87ddedd309d60e82b2425e90c86377b7db5b6d93701316fb318e5a216d01095"} Jan 21 10:58:51 crc kubenswrapper[4881]: I0121 10:58:51.521995 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"82118904-aa61-43ac-968f-283dc807d0c9","Type":"ContainerStarted","Data":"b26c5cdd64634480b84bf6f21afe37c6fbfc185f021cc85c79dec71325038fa3"} Jan 21 10:58:51 crc kubenswrapper[4881]: I0121 10:58:51.524439 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 21 10:58:51 crc kubenswrapper[4881]: I0121 10:58:51.524480 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"bac3c741-e8bc-4059-8914-a6f834cee8dd","Type":"ContainerDied","Data":"d000f23cdd5d4f1ece21017d747b89cc98e096184b532595e3b8592df18c9c55"} Jan 21 10:58:51 crc kubenswrapper[4881]: I0121 10:58:51.524531 4881 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d000f23cdd5d4f1ece21017d747b89cc98e096184b532595e3b8592df18c9c55" Jan 21 10:58:51 crc kubenswrapper[4881]: I0121 10:58:51.627510 4881 generic.go:334] "Generic (PLEG): container finished" podID="075db786-6ad0-4982-b70e-bd05d4f240ec" containerID="aa990b30489b423fbac7484510b784c9211e2f63bd3366b894aa031bc0754115" exitCode=0 Jan 21 10:58:51 crc kubenswrapper[4881]: I0121 10:58:51.627631 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-89m75" event={"ID":"075db786-6ad0-4982-b70e-bd05d4f240ec","Type":"ContainerDied","Data":"aa990b30489b423fbac7484510b784c9211e2f63bd3366b894aa031bc0754115"} Jan 21 10:58:51 crc kubenswrapper[4881]: I0121 10:58:51.627721 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-89m75" event={"ID":"075db786-6ad0-4982-b70e-bd05d4f240ec","Type":"ContainerStarted","Data":"97ca6fad994e892affd0e053e6d3515afda4b44ce01474758415dca871d6c00b"} Jan 21 10:58:51 crc kubenswrapper[4881]: I0121 10:58:51.707819 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-t4zlb"] Jan 21 10:58:51 crc kubenswrapper[4881]: W0121 10:58:51.741065 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb83e71f8_970c_4afc_ac31_264c7ca6625a.slice/crio-16d7bf5b9f969471865c2f6c0d0043006c1b79484bd1c97e826d3a03374ea542 WatchSource:0}: Error finding container 16d7bf5b9f969471865c2f6c0d0043006c1b79484bd1c97e826d3a03374ea542: Status 404 returned error can't find the container with id 16d7bf5b9f969471865c2f6c0d0043006c1b79484bd1c97e826d3a03374ea542 Jan 21 10:58:51 crc kubenswrapper[4881]: I0121 10:58:51.870599 4881 patch_prober.go:28] interesting pod/downloads-7954f5f757-wrqpb container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused" start-of-body= Jan 21 10:58:51 crc kubenswrapper[4881]: I0121 10:58:51.870670 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-wrqpb" podUID="628cb8f4-a587-498f-9398-403e0af5eec4" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused" Jan 21 10:58:51 crc kubenswrapper[4881]: I0121 10:58:51.870822 4881 patch_prober.go:28] interesting pod/downloads-7954f5f757-wrqpb container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused" start-of-body= Jan 21 10:58:51 crc kubenswrapper[4881]: I0121 10:58:51.870899 4881 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-wrqpb" podUID="628cb8f4-a587-498f-9398-403e0af5eec4" containerName="download-server" probeResult="failure" output="Get 
\"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused" Jan 21 10:58:51 crc kubenswrapper[4881]: I0121 10:58:51.877485 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-svmbc" Jan 21 10:58:51 crc kubenswrapper[4881]: I0121 10:58:51.882594 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-svmbc" Jan 21 10:58:51 crc kubenswrapper[4881]: I0121 10:58:51.933064 4881 patch_prober.go:28] interesting pod/console-f9d7485db-qxzd9 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.10:8443/health\": dial tcp 10.217.0.10:8443: connect: connection refused" start-of-body= Jan 21 10:58:51 crc kubenswrapper[4881]: I0121 10:58:51.933163 4881 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-qxzd9" podUID="bb8fc8b3-9818-40e2-afb2-860e2d1efae1" containerName="console" probeResult="failure" output="Get \"https://10.217.0.10:8443/health\": dial tcp 10.217.0.10:8443: connect: connection refused" Jan 21 10:58:52 crc kubenswrapper[4881]: I0121 10:58:52.134218 4881 patch_prober.go:28] interesting pod/router-default-5444994796-v7wnh container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 10:58:52 crc kubenswrapper[4881]: [-]has-synced failed: reason withheld Jan 21 10:58:52 crc kubenswrapper[4881]: [+]process-running ok Jan 21 10:58:52 crc kubenswrapper[4881]: healthz check failed Jan 21 10:58:52 crc kubenswrapper[4881]: I0121 10:58:52.134270 4881 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-v7wnh" podUID="52d94566-7844-4414-bf48-9122c2207dd6" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 10:58:52 crc kubenswrapper[4881]: I0121 10:58:52.557479 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7gdkq" Jan 21 10:58:52 crc kubenswrapper[4881]: I0121 10:58:52.561793 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-xmq82" Jan 21 10:58:52 crc kubenswrapper[4881]: I0121 10:58:52.562570 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-zkkpc" Jan 21 10:58:52 crc kubenswrapper[4881]: I0121 10:58:52.666390 4881 generic.go:334] "Generic (PLEG): container finished" podID="b83e71f8-970c-4afc-ac31-264c7ca6625a" containerID="ae4974769900e5c543fbbb2d217e3f9cdfc7b9998621c36ae6d12bcf65b9b593" exitCode=0 Jan 21 10:58:52 crc kubenswrapper[4881]: I0121 10:58:52.666490 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t4zlb" event={"ID":"b83e71f8-970c-4afc-ac31-264c7ca6625a","Type":"ContainerDied","Data":"ae4974769900e5c543fbbb2d217e3f9cdfc7b9998621c36ae6d12bcf65b9b593"} Jan 21 10:58:52 crc kubenswrapper[4881]: I0121 10:58:52.666528 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t4zlb" event={"ID":"b83e71f8-970c-4afc-ac31-264c7ca6625a","Type":"ContainerStarted","Data":"16d7bf5b9f969471865c2f6c0d0043006c1b79484bd1c97e826d3a03374ea542"} Jan 21 10:58:52 crc kubenswrapper[4881]: I0121 
10:58:52.683400 4881 generic.go:334] "Generic (PLEG): container finished" podID="d318e830-067f-4722-9d74-a45fcefc939d" containerID="b9a009384ba81492213bce1a87a61e1b83f262354a9aea725ad849bc0749a5f7" exitCode=0 Jan 21 10:58:52 crc kubenswrapper[4881]: I0121 10:58:52.683546 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kfmhs" event={"ID":"d318e830-067f-4722-9d74-a45fcefc939d","Type":"ContainerDied","Data":"b9a009384ba81492213bce1a87a61e1b83f262354a9aea725ad849bc0749a5f7"} Jan 21 10:58:52 crc kubenswrapper[4881]: I0121 10:58:52.699596 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"82118904-aa61-43ac-968f-283dc807d0c9","Type":"ContainerStarted","Data":"d835f915c7d824c5b21ac8719e0140dad6bbeb5334b91bb6f7250e0eba251ba9"} Jan 21 10:58:53 crc kubenswrapper[4881]: I0121 10:58:53.250220 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-v7wnh" Jan 21 10:58:53 crc kubenswrapper[4881]: I0121 10:58:53.538890 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-v7wnh" Jan 21 10:58:53 crc kubenswrapper[4881]: I0121 10:58:53.557029 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=4.557002337 podStartE2EDuration="4.557002337s" podCreationTimestamp="2026-01-21 10:58:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:58:52.784435442 +0000 UTC m=+120.044391911" watchObservedRunningTime="2026-01-21 10:58:53.557002337 +0000 UTC m=+120.816958806" Jan 21 10:58:54 crc kubenswrapper[4881]: I0121 10:58:54.936769 4881 generic.go:334] "Generic (PLEG): container finished" podID="82118904-aa61-43ac-968f-283dc807d0c9" containerID="d835f915c7d824c5b21ac8719e0140dad6bbeb5334b91bb6f7250e0eba251ba9" exitCode=0 Jan 21 10:58:54 crc kubenswrapper[4881]: I0121 10:58:54.936907 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"82118904-aa61-43ac-968f-283dc807d0c9","Type":"ContainerDied","Data":"d835f915c7d824c5b21ac8719e0140dad6bbeb5334b91bb6f7250e0eba251ba9"} Jan 21 10:58:55 crc kubenswrapper[4881]: I0121 10:58:55.978829 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-samples-operator_cluster-samples-operator-665b6dd947-phm68_b745a377-4575-45fb-a206-ea4754ecff76/cluster-samples-operator/0.log" Jan 21 10:58:55 crc kubenswrapper[4881]: I0121 10:58:55.979107 4881 generic.go:334] "Generic (PLEG): container finished" podID="b745a377-4575-45fb-a206-ea4754ecff76" containerID="b41967d3bdb4370227d82839dc1862e1f74b1c61b2e573915f3a2a8ab7402fa8" exitCode=2 Jan 21 10:58:55 crc kubenswrapper[4881]: I0121 10:58:55.979316 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-phm68" event={"ID":"b745a377-4575-45fb-a206-ea4754ecff76","Type":"ContainerDied","Data":"b41967d3bdb4370227d82839dc1862e1f74b1c61b2e573915f3a2a8ab7402fa8"} Jan 21 10:58:55 crc kubenswrapper[4881]: I0121 10:58:55.980134 4881 scope.go:117] "RemoveContainer" containerID="b41967d3bdb4370227d82839dc1862e1f74b1c61b2e573915f3a2a8ab7402fa8" Jan 21 10:58:57 crc kubenswrapper[4881]: I0121 10:58:57.005135 4881 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-cluster-samples-operator_cluster-samples-operator-665b6dd947-phm68_b745a377-4575-45fb-a206-ea4754ecff76/cluster-samples-operator/0.log" Jan 21 10:58:57 crc kubenswrapper[4881]: I0121 10:58:57.005753 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-phm68" event={"ID":"b745a377-4575-45fb-a206-ea4754ecff76","Type":"ContainerStarted","Data":"a88d091e94ff32e45195f85298f3f39e99eee297d0dc561dddf06b5b92b18ab6"} Jan 21 10:58:57 crc kubenswrapper[4881]: I0121 10:58:57.494338 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 21 10:58:57 crc kubenswrapper[4881]: I0121 10:58:57.625291 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/82118904-aa61-43ac-968f-283dc807d0c9-kube-api-access\") pod \"82118904-aa61-43ac-968f-283dc807d0c9\" (UID: \"82118904-aa61-43ac-968f-283dc807d0c9\") " Jan 21 10:58:57 crc kubenswrapper[4881]: I0121 10:58:57.625498 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/82118904-aa61-43ac-968f-283dc807d0c9-kubelet-dir\") pod \"82118904-aa61-43ac-968f-283dc807d0c9\" (UID: \"82118904-aa61-43ac-968f-283dc807d0c9\") " Jan 21 10:58:57 crc kubenswrapper[4881]: I0121 10:58:57.625894 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/82118904-aa61-43ac-968f-283dc807d0c9-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "82118904-aa61-43ac-968f-283dc807d0c9" (UID: "82118904-aa61-43ac-968f-283dc807d0c9"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 10:58:57 crc kubenswrapper[4881]: I0121 10:58:57.718799 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/82118904-aa61-43ac-968f-283dc807d0c9-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "82118904-aa61-43ac-968f-283dc807d0c9" (UID: "82118904-aa61-43ac-968f-283dc807d0c9"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:58:57 crc kubenswrapper[4881]: I0121 10:58:57.745646 4881 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/82118904-aa61-43ac-968f-283dc807d0c9-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 21 10:58:57 crc kubenswrapper[4881]: I0121 10:58:57.745712 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/82118904-aa61-43ac-968f-283dc807d0c9-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 21 10:58:58 crc kubenswrapper[4881]: I0121 10:58:58.053941 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"82118904-aa61-43ac-968f-283dc807d0c9","Type":"ContainerDied","Data":"b26c5cdd64634480b84bf6f21afe37c6fbfc185f021cc85c79dec71325038fa3"} Jan 21 10:58:58 crc kubenswrapper[4881]: I0121 10:58:58.054402 4881 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b26c5cdd64634480b84bf6f21afe37c6fbfc185f021cc85c79dec71325038fa3" Jan 21 10:58:58 crc kubenswrapper[4881]: I0121 10:58:58.054117 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 21 10:59:01 crc kubenswrapper[4881]: I0121 10:59:01.866867 4881 patch_prober.go:28] interesting pod/downloads-7954f5f757-wrqpb container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused" start-of-body= Jan 21 10:59:01 crc kubenswrapper[4881]: I0121 10:59:01.867222 4881 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-wrqpb" podUID="628cb8f4-a587-498f-9398-403e0af5eec4" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused" Jan 21 10:59:01 crc kubenswrapper[4881]: I0121 10:59:01.867278 4881 patch_prober.go:28] interesting pod/downloads-7954f5f757-wrqpb container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused" start-of-body= Jan 21 10:59:01 crc kubenswrapper[4881]: I0121 10:59:01.867351 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-wrqpb" podUID="628cb8f4-a587-498f-9398-403e0af5eec4" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused" Jan 21 10:59:01 crc kubenswrapper[4881]: I0121 10:59:01.867400 4881 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-console/downloads-7954f5f757-wrqpb" Jan 21 10:59:01 crc kubenswrapper[4881]: I0121 10:59:01.868455 4881 patch_prober.go:28] interesting pod/downloads-7954f5f757-wrqpb container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused" start-of-body= Jan 21 10:59:01 crc kubenswrapper[4881]: I0121 10:59:01.868480 4881 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-wrqpb" podUID="628cb8f4-a587-498f-9398-403e0af5eec4" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused" Jan 21 10:59:01 crc kubenswrapper[4881]: I0121 10:59:01.869097 4881 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="download-server" containerStatusID={"Type":"cri-o","ID":"8ac6e934bf2c65c273e37127eb78e3c49f6ab743027f68c7c31810cbe67f929a"} pod="openshift-console/downloads-7954f5f757-wrqpb" containerMessage="Container download-server failed liveness probe, will be restarted" Jan 21 10:59:01 crc kubenswrapper[4881]: I0121 10:59:01.869195 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/downloads-7954f5f757-wrqpb" podUID="628cb8f4-a587-498f-9398-403e0af5eec4" containerName="download-server" containerID="cri-o://8ac6e934bf2c65c273e37127eb78e3c49f6ab743027f68c7c31810cbe67f929a" gracePeriod=2 Jan 21 10:59:01 crc kubenswrapper[4881]: I0121 10:59:01.993554 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-qxzd9" Jan 21 10:59:01 crc kubenswrapper[4881]: I0121 10:59:01.998504 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-qxzd9" Jan 21 10:59:02 crc kubenswrapper[4881]: I0121 
10:59:02.250750 4881 generic.go:334] "Generic (PLEG): container finished" podID="628cb8f4-a587-498f-9398-403e0af5eec4" containerID="8ac6e934bf2c65c273e37127eb78e3c49f6ab743027f68c7c31810cbe67f929a" exitCode=0 Jan 21 10:59:02 crc kubenswrapper[4881]: I0121 10:59:02.250990 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-wrqpb" event={"ID":"628cb8f4-a587-498f-9398-403e0af5eec4","Type":"ContainerDied","Data":"8ac6e934bf2c65c273e37127eb78e3c49f6ab743027f68c7c31810cbe67f929a"} Jan 21 10:59:06 crc kubenswrapper[4881]: I0121 10:59:06.681982 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-n98tz" Jan 21 10:59:11 crc kubenswrapper[4881]: I0121 10:59:11.877261 4881 patch_prober.go:28] interesting pod/downloads-7954f5f757-wrqpb container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused" start-of-body= Jan 21 10:59:11 crc kubenswrapper[4881]: I0121 10:59:11.877678 4881 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-wrqpb" podUID="628cb8f4-a587-498f-9398-403e0af5eec4" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused" Jan 21 10:59:21 crc kubenswrapper[4881]: I0121 10:59:21.866673 4881 patch_prober.go:28] interesting pod/downloads-7954f5f757-wrqpb container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused" start-of-body= Jan 21 10:59:21 crc kubenswrapper[4881]: I0121 10:59:21.867803 4881 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-wrqpb" podUID="628cb8f4-a587-498f-9398-403e0af5eec4" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused" Jan 21 10:59:22 crc kubenswrapper[4881]: I0121 10:59:22.551268 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-72bt6" Jan 21 10:59:23 crc kubenswrapper[4881]: I0121 10:59:23.069754 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 10:59:23 crc kubenswrapper[4881]: I0121 10:59:23.070334 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 10:59:23 crc kubenswrapper[4881]: I0121 10:59:23.072770 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 21 10:59:23 crc kubenswrapper[4881]: I0121 10:59:23.073110 4881 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-network-console"/"networking-console-plugin-cert" Jan 21 10:59:23 crc kubenswrapper[4881]: I0121 10:59:23.081353 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 10:59:23 crc kubenswrapper[4881]: I0121 10:59:23.089468 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 10:59:23 crc kubenswrapper[4881]: I0121 10:59:23.171821 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 10:59:23 crc kubenswrapper[4881]: I0121 10:59:23.171922 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 10:59:23 crc kubenswrapper[4881]: I0121 10:59:23.174599 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 21 10:59:23 crc kubenswrapper[4881]: I0121 10:59:23.185513 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 21 10:59:23 crc kubenswrapper[4881]: I0121 10:59:23.199880 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 10:59:23 crc kubenswrapper[4881]: I0121 10:59:23.203140 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 10:59:23 crc kubenswrapper[4881]: I0121 10:59:23.228226 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 10:59:23 crc kubenswrapper[4881]: I0121 10:59:23.430347 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 10:59:23 crc kubenswrapper[4881]: I0121 10:59:23.469196 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 10:59:27 crc kubenswrapper[4881]: I0121 10:59:27.120887 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 21 10:59:27 crc kubenswrapper[4881]: E0121 10:59:27.122169 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82118904-aa61-43ac-968f-283dc807d0c9" containerName="pruner" Jan 21 10:59:27 crc kubenswrapper[4881]: I0121 10:59:27.122189 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="82118904-aa61-43ac-968f-283dc807d0c9" containerName="pruner" Jan 21 10:59:27 crc kubenswrapper[4881]: E0121 10:59:27.122207 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bac3c741-e8bc-4059-8914-a6f834cee8dd" containerName="pruner" Jan 21 10:59:27 crc kubenswrapper[4881]: I0121 10:59:27.122214 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="bac3c741-e8bc-4059-8914-a6f834cee8dd" containerName="pruner" Jan 21 10:59:27 crc kubenswrapper[4881]: I0121 10:59:27.122482 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="82118904-aa61-43ac-968f-283dc807d0c9" containerName="pruner" Jan 21 10:59:27 crc kubenswrapper[4881]: I0121 10:59:27.122499 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="bac3c741-e8bc-4059-8914-a6f834cee8dd" containerName="pruner" Jan 21 10:59:27 crc kubenswrapper[4881]: I0121 10:59:27.124820 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 21 10:59:27 crc kubenswrapper[4881]: I0121 10:59:27.130840 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 21 10:59:27 crc kubenswrapper[4881]: I0121 10:59:27.131436 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 21 10:59:27 crc kubenswrapper[4881]: I0121 10:59:27.131604 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 21 10:59:27 crc kubenswrapper[4881]: I0121 10:59:27.174258 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/297f4cbb-3661-40d1-bfe7-518b3f934f71-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"297f4cbb-3661-40d1-bfe7-518b3f934f71\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 21 10:59:27 crc kubenswrapper[4881]: I0121 10:59:27.174369 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/297f4cbb-3661-40d1-bfe7-518b3f934f71-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"297f4cbb-3661-40d1-bfe7-518b3f934f71\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 21 10:59:27 crc kubenswrapper[4881]: I0121 10:59:27.275561 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/297f4cbb-3661-40d1-bfe7-518b3f934f71-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"297f4cbb-3661-40d1-bfe7-518b3f934f71\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 21 
10:59:27 crc kubenswrapper[4881]: I0121 10:59:27.275639 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/297f4cbb-3661-40d1-bfe7-518b3f934f71-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"297f4cbb-3661-40d1-bfe7-518b3f934f71\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 21 10:59:27 crc kubenswrapper[4881]: I0121 10:59:27.276377 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/297f4cbb-3661-40d1-bfe7-518b3f934f71-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"297f4cbb-3661-40d1-bfe7-518b3f934f71\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 21 10:59:27 crc kubenswrapper[4881]: I0121 10:59:27.305595 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/297f4cbb-3661-40d1-bfe7-518b3f934f71-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"297f4cbb-3661-40d1-bfe7-518b3f934f71\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 21 10:59:27 crc kubenswrapper[4881]: I0121 10:59:27.455043 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 21 10:59:29 crc kubenswrapper[4881]: I0121 10:59:29.851451 4881 patch_prober.go:28] interesting pod/machine-config-daemon-fb4fr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 10:59:29 crc kubenswrapper[4881]: I0121 10:59:29.852189 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 10:59:31 crc kubenswrapper[4881]: I0121 10:59:31.868392 4881 patch_prober.go:28] interesting pod/downloads-7954f5f757-wrqpb container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused" start-of-body= Jan 21 10:59:31 crc kubenswrapper[4881]: I0121 10:59:31.868501 4881 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-wrqpb" podUID="628cb8f4-a587-498f-9398-403e0af5eec4" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused" Jan 21 10:59:32 crc kubenswrapper[4881]: I0121 10:59:32.526212 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 21 10:59:32 crc kubenswrapper[4881]: I0121 10:59:32.527466 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 21 10:59:32 crc kubenswrapper[4881]: I0121 10:59:32.539973 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 21 10:59:32 crc kubenswrapper[4881]: I0121 10:59:32.649228 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/41bc4c78-71b2-4ca1-b593-410715cb877b-kubelet-dir\") pod \"installer-9-crc\" (UID: \"41bc4c78-71b2-4ca1-b593-410715cb877b\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 21 10:59:32 crc kubenswrapper[4881]: I0121 10:59:32.649481 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/41bc4c78-71b2-4ca1-b593-410715cb877b-var-lock\") pod \"installer-9-crc\" (UID: \"41bc4c78-71b2-4ca1-b593-410715cb877b\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 21 10:59:32 crc kubenswrapper[4881]: I0121 10:59:32.649548 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/41bc4c78-71b2-4ca1-b593-410715cb877b-kube-api-access\") pod \"installer-9-crc\" (UID: \"41bc4c78-71b2-4ca1-b593-410715cb877b\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 21 10:59:32 crc kubenswrapper[4881]: I0121 10:59:32.750557 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/41bc4c78-71b2-4ca1-b593-410715cb877b-var-lock\") pod \"installer-9-crc\" (UID: \"41bc4c78-71b2-4ca1-b593-410715cb877b\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 21 10:59:32 crc kubenswrapper[4881]: I0121 10:59:32.750930 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/41bc4c78-71b2-4ca1-b593-410715cb877b-kube-api-access\") pod \"installer-9-crc\" (UID: \"41bc4c78-71b2-4ca1-b593-410715cb877b\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 21 10:59:32 crc kubenswrapper[4881]: I0121 10:59:32.751053 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/41bc4c78-71b2-4ca1-b593-410715cb877b-kubelet-dir\") pod \"installer-9-crc\" (UID: \"41bc4c78-71b2-4ca1-b593-410715cb877b\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 21 10:59:32 crc kubenswrapper[4881]: I0121 10:59:32.751202 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/41bc4c78-71b2-4ca1-b593-410715cb877b-kubelet-dir\") pod \"installer-9-crc\" (UID: \"41bc4c78-71b2-4ca1-b593-410715cb877b\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 21 10:59:32 crc kubenswrapper[4881]: I0121 10:59:32.751235 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/41bc4c78-71b2-4ca1-b593-410715cb877b-var-lock\") pod \"installer-9-crc\" (UID: \"41bc4c78-71b2-4ca1-b593-410715cb877b\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 21 10:59:32 crc kubenswrapper[4881]: I0121 10:59:32.801261 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/41bc4c78-71b2-4ca1-b593-410715cb877b-kube-api-access\") pod \"installer-9-crc\" (UID: 
\"41bc4c78-71b2-4ca1-b593-410715cb877b\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 21 10:59:32 crc kubenswrapper[4881]: I0121 10:59:32.866160 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 21 10:59:34 crc kubenswrapper[4881]: E0121 10:59:34.437211 4881 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 21 10:59:34 crc kubenswrapper[4881]: E0121 10:59:34.439056 4881 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mf89m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-v5n2s_openshift-marketplace(e034cbba-e6a2-4b62-94e1-7bd2d3c5ae8a): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 21 10:59:34 crc kubenswrapper[4881]: E0121 10:59:34.440340 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-v5n2s" podUID="e034cbba-e6a2-4b62-94e1-7bd2d3c5ae8a" Jan 21 10:59:35 crc kubenswrapper[4881]: E0121 10:59:35.824165 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-v5n2s" podUID="e034cbba-e6a2-4b62-94e1-7bd2d3c5ae8a" Jan 21 10:59:35 crc kubenswrapper[4881]: E0121 10:59:35.917016 4881 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" 
image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 21 10:59:35 crc kubenswrapper[4881]: E0121 10:59:35.917565 4881 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-b56ld,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-vljfh_openshift-marketplace(1d66b837-f7b1-4795-895f-08cdabe48b37): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 21 10:59:35 crc kubenswrapper[4881]: E0121 10:59:35.918766 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-vljfh" podUID="1d66b837-f7b1-4795-895f-08cdabe48b37" Jan 21 10:59:35 crc kubenswrapper[4881]: E0121 10:59:35.929603 4881 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 21 10:59:35 crc kubenswrapper[4881]: E0121 10:59:35.929826 4881 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-p2dkc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-6rmvm_openshift-marketplace(2c460bf5-05a1-4977-b889-1a5c3263df33): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 21 10:59:35 crc kubenswrapper[4881]: E0121 10:59:35.931021 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-6rmvm" podUID="2c460bf5-05a1-4977-b889-1a5c3263df33" Jan 21 10:59:39 crc kubenswrapper[4881]: E0121 10:59:39.805364 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-6rmvm" podUID="2c460bf5-05a1-4977-b889-1a5c3263df33" Jan 21 10:59:39 crc kubenswrapper[4881]: E0121 10:59:39.805845 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-vljfh" podUID="1d66b837-f7b1-4795-895f-08cdabe48b37" Jan 21 10:59:39 crc kubenswrapper[4881]: E0121 10:59:39.896059 4881 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 21 10:59:39 crc kubenswrapper[4881]: E0121 10:59:39.896769 4881 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sn5jn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-t4zlb_openshift-marketplace(b83e71f8-970c-4afc-ac31-264c7ca6625a): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 21 10:59:39 crc kubenswrapper[4881]: E0121 10:59:39.898028 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-t4zlb" podUID="b83e71f8-970c-4afc-ac31-264c7ca6625a" Jan 21 10:59:41 crc kubenswrapper[4881]: E0121 10:59:41.263857 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-t4zlb" podUID="b83e71f8-970c-4afc-ac31-264c7ca6625a" Jan 21 10:59:41 crc kubenswrapper[4881]: E0121 10:59:41.360924 4881 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 21 10:59:41 crc kubenswrapper[4881]: E0121 10:59:41.361206 4881 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-q2qtc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-89m75_openshift-marketplace(075db786-6ad0-4982-b70e-bd05d4f240ec): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 21 10:59:41 crc kubenswrapper[4881]: E0121 10:59:41.362633 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-89m75" podUID="075db786-6ad0-4982-b70e-bd05d4f240ec" Jan 21 10:59:41 crc kubenswrapper[4881]: E0121 10:59:41.368414 4881 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 21 10:59:41 crc kubenswrapper[4881]: E0121 10:59:41.369221 4881 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lrsm4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-2sqlm_openshift-marketplace(5b12596d-1f5f-4d81-b664-d0ddee72552c): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 21 10:59:41 crc kubenswrapper[4881]: E0121 10:59:41.370534 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-2sqlm" podUID="5b12596d-1f5f-4d81-b664-d0ddee72552c" Jan 21 10:59:41 crc kubenswrapper[4881]: E0121 10:59:41.374138 4881 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 21 10:59:41 crc kubenswrapper[4881]: E0121 10:59:41.374525 4881 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fc6f2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-kfmhs_openshift-marketplace(d318e830-067f-4722-9d74-a45fcefc939d): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 21 10:59:41 crc kubenswrapper[4881]: E0121 10:59:41.376752 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-kfmhs" podUID="d318e830-067f-4722-9d74-a45fcefc939d" Jan 21 10:59:41 crc kubenswrapper[4881]: E0121 10:59:41.420297 4881 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 21 10:59:41 crc kubenswrapper[4881]: E0121 10:59:41.420561 4881 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-g42w8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-q6dn5_openshift-marketplace(8e002e57-13ab-477a-9e16-980e13b5e47f): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 21 10:59:41 crc kubenswrapper[4881]: E0121 10:59:41.423812 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-q6dn5" podUID="8e002e57-13ab-477a-9e16-980e13b5e47f" Jan 21 10:59:41 crc kubenswrapper[4881]: I0121 10:59:41.867146 4881 patch_prober.go:28] interesting pod/downloads-7954f5f757-wrqpb container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused" start-of-body= Jan 21 10:59:41 crc kubenswrapper[4881]: I0121 10:59:41.867667 4881 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-wrqpb" podUID="628cb8f4-a587-498f-9398-403e0af5eec4" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused" Jan 21 10:59:41 crc kubenswrapper[4881]: I0121 10:59:41.897475 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-wrqpb" event={"ID":"628cb8f4-a587-498f-9398-403e0af5eec4","Type":"ContainerStarted","Data":"e6fdfddd04f97ac6678436a8d986fc15a9f59365abe393ade8c3fd53ab3ad81b"} Jan 21 10:59:41 crc kubenswrapper[4881]: I0121 10:59:41.897955 4881 patch_prober.go:28] interesting pod/downloads-7954f5f757-wrqpb container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused" start-of-body= Jan 21 10:59:41 crc kubenswrapper[4881]: I0121 10:59:41.897987 4881 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-wrqpb" podUID="628cb8f4-a587-498f-9398-403e0af5eec4" 
containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused" Jan 21 10:59:41 crc kubenswrapper[4881]: I0121 10:59:41.898256 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-wrqpb" Jan 21 10:59:41 crc kubenswrapper[4881]: E0121 10:59:41.898701 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-q6dn5" podUID="8e002e57-13ab-477a-9e16-980e13b5e47f" Jan 21 10:59:41 crc kubenswrapper[4881]: E0121 10:59:41.898887 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-2sqlm" podUID="5b12596d-1f5f-4d81-b664-d0ddee72552c" Jan 21 10:59:41 crc kubenswrapper[4881]: E0121 10:59:41.900614 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-kfmhs" podUID="d318e830-067f-4722-9d74-a45fcefc939d" Jan 21 10:59:41 crc kubenswrapper[4881]: E0121 10:59:41.902306 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-89m75" podUID="075db786-6ad0-4982-b70e-bd05d4f240ec" Jan 21 10:59:42 crc kubenswrapper[4881]: I0121 10:59:42.082064 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 21 10:59:42 crc kubenswrapper[4881]: I0121 10:59:42.086054 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 21 10:59:42 crc kubenswrapper[4881]: W0121 10:59:42.117690 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod297f4cbb_3661_40d1_bfe7_518b3f934f71.slice/crio-63025c330fe1b460c8485833df18772b34861db69d20da6c48f086fa46d98f67 WatchSource:0}: Error finding container 63025c330fe1b460c8485833df18772b34861db69d20da6c48f086fa46d98f67: Status 404 returned error can't find the container with id 63025c330fe1b460c8485833df18772b34861db69d20da6c48f086fa46d98f67 Jan 21 10:59:42 crc kubenswrapper[4881]: W0121 10:59:42.245893 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d751cbb_f2e2_430d_9754_c882a5e924a5.slice/crio-0b0bdf164de368a9f532c9b0db44e3257336e81a051a6283ac30c242895ceccc WatchSource:0}: Error finding container 0b0bdf164de368a9f532c9b0db44e3257336e81a051a6283ac30c242895ceccc: Status 404 returned error can't find the container with id 0b0bdf164de368a9f532c9b0db44e3257336e81a051a6283ac30c242895ceccc Jan 21 10:59:42 crc kubenswrapper[4881]: I0121 10:59:42.904525 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" 
event={"ID":"297f4cbb-3661-40d1-bfe7-518b3f934f71","Type":"ContainerStarted","Data":"63025c330fe1b460c8485833df18772b34861db69d20da6c48f086fa46d98f67"} Jan 21 10:59:42 crc kubenswrapper[4881]: I0121 10:59:42.912105 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"1b7183540119c8b9eee168945b8926646499506cc41c32a7e3cafc30f0b2a739"} Jan 21 10:59:42 crc kubenswrapper[4881]: I0121 10:59:42.912368 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"599b13afa3de1c32ea39de784508c7665fb436ae053e169463ce8f7cfbb59252"} Jan 21 10:59:42 crc kubenswrapper[4881]: I0121 10:59:42.916039 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"1813207fad30a5540c33f13fba6fda53d19e46ec4d3fa140eb5d8aadc76e5e13"} Jan 21 10:59:42 crc kubenswrapper[4881]: I0121 10:59:42.916150 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"d5cae0b10345945d3ec1ab0c087a08e8a2a69d10408227202319ed641a01f0d5"} Jan 21 10:59:42 crc kubenswrapper[4881]: I0121 10:59:42.916620 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 10:59:42 crc kubenswrapper[4881]: I0121 10:59:42.922231 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"41bc4c78-71b2-4ca1-b593-410715cb877b","Type":"ContainerStarted","Data":"442a9d2a13a72bc50a93b9b5088365fc2ff7f17c8a181731060f8bf93fd639fd"} Jan 21 10:59:42 crc kubenswrapper[4881]: I0121 10:59:42.924879 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"1d58c5397e8729c1268d44dec4fc932a9d2409e8f205f79d1712c41ff66ce64d"} Jan 21 10:59:42 crc kubenswrapper[4881]: I0121 10:59:42.924959 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"0b0bdf164de368a9f532c9b0db44e3257336e81a051a6283ac30c242895ceccc"} Jan 21 10:59:42 crc kubenswrapper[4881]: I0121 10:59:42.925921 4881 patch_prober.go:28] interesting pod/downloads-7954f5f757-wrqpb container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused" start-of-body= Jan 21 10:59:42 crc kubenswrapper[4881]: I0121 10:59:42.926079 4881 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-wrqpb" podUID="628cb8f4-a587-498f-9398-403e0af5eec4" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused" Jan 21 10:59:47 crc kubenswrapper[4881]: I0121 10:59:47.965274 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"41bc4c78-71b2-4ca1-b593-410715cb877b","Type":"ContainerStarted","Data":"891a9148acd513d44e13545e811cd63c09e7d52344359f98044e5a82a847b9a1"} Jan 21 10:59:49 crc kubenswrapper[4881]: I0121 10:59:48.972972 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"297f4cbb-3661-40d1-bfe7-518b3f934f71","Type":"ContainerStarted","Data":"7cf64852b8e94a0c7baefe70b649fd9a1474d6d2a1a6df059f6227f5286ea94e"} Jan 21 10:59:49 crc kubenswrapper[4881]: I0121 10:59:48.992617 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-9-crc" podStartSLOduration=21.992593214 podStartE2EDuration="21.992593214s" podCreationTimestamp="2026-01-21 10:59:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:59:48.989964049 +0000 UTC m=+176.249920518" watchObservedRunningTime="2026-01-21 10:59:48.992593214 +0000 UTC m=+176.252549683" Jan 21 10:59:49 crc kubenswrapper[4881]: I0121 10:59:49.012408 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=17.01198806 podStartE2EDuration="17.01198806s" podCreationTimestamp="2026-01-21 10:59:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:59:49.011582241 +0000 UTC m=+176.271538720" watchObservedRunningTime="2026-01-21 10:59:49.01198806 +0000 UTC m=+176.271944529" Jan 21 10:59:50 crc kubenswrapper[4881]: I0121 10:59:50.991041 4881 generic.go:334] "Generic (PLEG): container finished" podID="297f4cbb-3661-40d1-bfe7-518b3f934f71" containerID="7cf64852b8e94a0c7baefe70b649fd9a1474d6d2a1a6df059f6227f5286ea94e" exitCode=0 Jan 21 10:59:50 crc kubenswrapper[4881]: I0121 10:59:50.991135 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"297f4cbb-3661-40d1-bfe7-518b3f934f71","Type":"ContainerDied","Data":"7cf64852b8e94a0c7baefe70b649fd9a1474d6d2a1a6df059f6227f5286ea94e"} Jan 21 10:59:51 crc kubenswrapper[4881]: I0121 10:59:51.868581 4881 patch_prober.go:28] interesting pod/downloads-7954f5f757-wrqpb container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused" start-of-body= Jan 21 10:59:51 crc kubenswrapper[4881]: I0121 10:59:51.869145 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-wrqpb" podUID="628cb8f4-a587-498f-9398-403e0af5eec4" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused" Jan 21 10:59:51 crc kubenswrapper[4881]: I0121 10:59:51.868694 4881 patch_prober.go:28] interesting pod/downloads-7954f5f757-wrqpb container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused" start-of-body= Jan 21 10:59:51 crc kubenswrapper[4881]: I0121 10:59:51.869331 4881 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-wrqpb" podUID="628cb8f4-a587-498f-9398-403e0af5eec4" containerName="download-server" probeResult="failure" output="Get 
\"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused" Jan 21 10:59:54 crc kubenswrapper[4881]: I0121 10:59:54.314143 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 21 10:59:54 crc kubenswrapper[4881]: I0121 10:59:54.447978 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/297f4cbb-3661-40d1-bfe7-518b3f934f71-kubelet-dir\") pod \"297f4cbb-3661-40d1-bfe7-518b3f934f71\" (UID: \"297f4cbb-3661-40d1-bfe7-518b3f934f71\") " Jan 21 10:59:54 crc kubenswrapper[4881]: I0121 10:59:54.448097 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/297f4cbb-3661-40d1-bfe7-518b3f934f71-kube-api-access\") pod \"297f4cbb-3661-40d1-bfe7-518b3f934f71\" (UID: \"297f4cbb-3661-40d1-bfe7-518b3f934f71\") " Jan 21 10:59:54 crc kubenswrapper[4881]: I0121 10:59:54.448124 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/297f4cbb-3661-40d1-bfe7-518b3f934f71-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "297f4cbb-3661-40d1-bfe7-518b3f934f71" (UID: "297f4cbb-3661-40d1-bfe7-518b3f934f71"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 10:59:54 crc kubenswrapper[4881]: I0121 10:59:54.448758 4881 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/297f4cbb-3661-40d1-bfe7-518b3f934f71-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 21 10:59:54 crc kubenswrapper[4881]: I0121 10:59:54.457004 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/297f4cbb-3661-40d1-bfe7-518b3f934f71-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "297f4cbb-3661-40d1-bfe7-518b3f934f71" (UID: "297f4cbb-3661-40d1-bfe7-518b3f934f71"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:59:54 crc kubenswrapper[4881]: I0121 10:59:54.650424 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/297f4cbb-3661-40d1-bfe7-518b3f934f71-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 21 10:59:55 crc kubenswrapper[4881]: I0121 10:59:55.021962 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"297f4cbb-3661-40d1-bfe7-518b3f934f71","Type":"ContainerDied","Data":"63025c330fe1b460c8485833df18772b34861db69d20da6c48f086fa46d98f67"} Jan 21 10:59:55 crc kubenswrapper[4881]: I0121 10:59:55.022472 4881 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="63025c330fe1b460c8485833df18772b34861db69d20da6c48f086fa46d98f67" Jan 21 10:59:55 crc kubenswrapper[4881]: I0121 10:59:55.022077 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 21 10:59:56 crc kubenswrapper[4881]: I0121 10:59:56.029581 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v5n2s" event={"ID":"e034cbba-e6a2-4b62-94e1-7bd2d3c5ae8a","Type":"ContainerStarted","Data":"af52521bc076413d8e72a4c4cff88c04fc3be6a74567d99416c9a8f9f7a66758"} Jan 21 10:59:59 crc kubenswrapper[4881]: I0121 10:59:59.124414 4881 generic.go:334] "Generic (PLEG): container finished" podID="e034cbba-e6a2-4b62-94e1-7bd2d3c5ae8a" containerID="af52521bc076413d8e72a4c4cff88c04fc3be6a74567d99416c9a8f9f7a66758" exitCode=0 Jan 21 10:59:59 crc kubenswrapper[4881]: I0121 10:59:59.124735 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v5n2s" event={"ID":"e034cbba-e6a2-4b62-94e1-7bd2d3c5ae8a","Type":"ContainerDied","Data":"af52521bc076413d8e72a4c4cff88c04fc3be6a74567d99416c9a8f9f7a66758"} Jan 21 10:59:59 crc kubenswrapper[4881]: I0121 10:59:59.850870 4881 patch_prober.go:28] interesting pod/machine-config-daemon-fb4fr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 10:59:59 crc kubenswrapper[4881]: I0121 10:59:59.851208 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 11:00:00 crc kubenswrapper[4881]: I0121 11:00:00.150598 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483220-2jmrb"] Jan 21 11:00:00 crc kubenswrapper[4881]: E0121 11:00:00.150870 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="297f4cbb-3661-40d1-bfe7-518b3f934f71" containerName="pruner" Jan 21 11:00:00 crc kubenswrapper[4881]: I0121 11:00:00.150883 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="297f4cbb-3661-40d1-bfe7-518b3f934f71" containerName="pruner" Jan 21 11:00:00 crc kubenswrapper[4881]: I0121 11:00:00.150995 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="297f4cbb-3661-40d1-bfe7-518b3f934f71" containerName="pruner" Jan 21 11:00:00 crc kubenswrapper[4881]: I0121 11:00:00.151385 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483220-2jmrb" Jan 21 11:00:00 crc kubenswrapper[4881]: I0121 11:00:00.155699 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 21 11:00:00 crc kubenswrapper[4881]: I0121 11:00:00.156102 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 21 11:00:00 crc kubenswrapper[4881]: I0121 11:00:00.279907 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/65c09a3a-6389-443c-888b-fe83557dd508-secret-volume\") pod \"collect-profiles-29483220-2jmrb\" (UID: \"65c09a3a-6389-443c-888b-fe83557dd508\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483220-2jmrb" Jan 21 11:00:00 crc kubenswrapper[4881]: I0121 11:00:00.280110 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/65c09a3a-6389-443c-888b-fe83557dd508-config-volume\") pod \"collect-profiles-29483220-2jmrb\" (UID: \"65c09a3a-6389-443c-888b-fe83557dd508\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483220-2jmrb" Jan 21 11:00:00 crc kubenswrapper[4881]: I0121 11:00:00.280177 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b2rdm\" (UniqueName: \"kubernetes.io/projected/65c09a3a-6389-443c-888b-fe83557dd508-kube-api-access-b2rdm\") pod \"collect-profiles-29483220-2jmrb\" (UID: \"65c09a3a-6389-443c-888b-fe83557dd508\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483220-2jmrb" Jan 21 11:00:00 crc kubenswrapper[4881]: I0121 11:00:00.308853 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483220-2jmrb"] Jan 21 11:00:00 crc kubenswrapper[4881]: I0121 11:00:00.381777 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b2rdm\" (UniqueName: \"kubernetes.io/projected/65c09a3a-6389-443c-888b-fe83557dd508-kube-api-access-b2rdm\") pod \"collect-profiles-29483220-2jmrb\" (UID: \"65c09a3a-6389-443c-888b-fe83557dd508\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483220-2jmrb" Jan 21 11:00:00 crc kubenswrapper[4881]: I0121 11:00:00.381919 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/65c09a3a-6389-443c-888b-fe83557dd508-secret-volume\") pod \"collect-profiles-29483220-2jmrb\" (UID: \"65c09a3a-6389-443c-888b-fe83557dd508\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483220-2jmrb" Jan 21 11:00:00 crc kubenswrapper[4881]: I0121 11:00:00.381994 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/65c09a3a-6389-443c-888b-fe83557dd508-config-volume\") pod \"collect-profiles-29483220-2jmrb\" (UID: \"65c09a3a-6389-443c-888b-fe83557dd508\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483220-2jmrb" Jan 21 11:00:00 crc kubenswrapper[4881]: I0121 11:00:00.383802 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/65c09a3a-6389-443c-888b-fe83557dd508-config-volume\") pod 
\"collect-profiles-29483220-2jmrb\" (UID: \"65c09a3a-6389-443c-888b-fe83557dd508\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483220-2jmrb" Jan 21 11:00:00 crc kubenswrapper[4881]: I0121 11:00:00.394982 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/65c09a3a-6389-443c-888b-fe83557dd508-secret-volume\") pod \"collect-profiles-29483220-2jmrb\" (UID: \"65c09a3a-6389-443c-888b-fe83557dd508\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483220-2jmrb" Jan 21 11:00:00 crc kubenswrapper[4881]: I0121 11:00:00.405444 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b2rdm\" (UniqueName: \"kubernetes.io/projected/65c09a3a-6389-443c-888b-fe83557dd508-kube-api-access-b2rdm\") pod \"collect-profiles-29483220-2jmrb\" (UID: \"65c09a3a-6389-443c-888b-fe83557dd508\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483220-2jmrb" Jan 21 11:00:00 crc kubenswrapper[4881]: I0121 11:00:00.470013 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483220-2jmrb" Jan 21 11:00:01 crc kubenswrapper[4881]: I0121 11:00:01.356039 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-q6dn5" event={"ID":"8e002e57-13ab-477a-9e16-980e13b5e47f","Type":"ContainerStarted","Data":"cad9f8570b6b7c8359172ebecd350bcad67cfe5e05e5aeca3f0a038ec3357bb5"} Jan 21 11:00:01 crc kubenswrapper[4881]: I0121 11:00:01.359929 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-89m75" event={"ID":"075db786-6ad0-4982-b70e-bd05d4f240ec","Type":"ContainerStarted","Data":"a06c8d6c70785e0e51b0e238072a99f6a50caf04a590fb7ba69cc08788ffee9a"} Jan 21 11:00:01 crc kubenswrapper[4881]: I0121 11:00:01.545354 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vljfh" event={"ID":"1d66b837-f7b1-4795-895f-08cdabe48b37","Type":"ContainerStarted","Data":"87b3da4f38a8247ed7dbb2b11f2ec14c16c71eee1d17657bf85f241bc0e931f6"} Jan 21 11:00:01 crc kubenswrapper[4881]: I0121 11:00:01.644273 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t4zlb" event={"ID":"b83e71f8-970c-4afc-ac31-264c7ca6625a","Type":"ContainerStarted","Data":"d97aa85fa9dba9a5f261efedffb0ffe8efb44a7c0ff638756658eab20e0bacac"} Jan 21 11:00:01 crc kubenswrapper[4881]: I0121 11:00:01.652993 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6rmvm" event={"ID":"2c460bf5-05a1-4977-b889-1a5c3263df33","Type":"ContainerStarted","Data":"db0493653bc30919d4352c24df01a207c2de62ad8f1fa10ff346fcc988a5549e"} Jan 21 11:00:01 crc kubenswrapper[4881]: I0121 11:00:01.655071 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kfmhs" event={"ID":"d318e830-067f-4722-9d74-a45fcefc939d","Type":"ContainerStarted","Data":"456438ece135082aa65a1f9d3e1df54da4ad18d3ac41d1e2ac75d98b61443cef"} Jan 21 11:00:01 crc kubenswrapper[4881]: I0121 11:00:01.902823 4881 patch_prober.go:28] interesting pod/downloads-7954f5f757-wrqpb container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused" start-of-body= Jan 21 11:00:01 crc kubenswrapper[4881]: I0121 11:00:01.902896 4881 
prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-wrqpb" podUID="628cb8f4-a587-498f-9398-403e0af5eec4" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused" Jan 21 11:00:01 crc kubenswrapper[4881]: I0121 11:00:01.903257 4881 patch_prober.go:28] interesting pod/downloads-7954f5f757-wrqpb container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused" start-of-body= Jan 21 11:00:01 crc kubenswrapper[4881]: I0121 11:00:01.903288 4881 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-wrqpb" podUID="628cb8f4-a587-498f-9398-403e0af5eec4" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.27:8080/\": dial tcp 10.217.0.27:8080: connect: connection refused" Jan 21 11:00:02 crc kubenswrapper[4881]: I0121 11:00:02.754549 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v5n2s" event={"ID":"e034cbba-e6a2-4b62-94e1-7bd2d3c5ae8a","Type":"ContainerStarted","Data":"091b8c7421a6daba2d38abc6600200f92a99a9d9fffb2a18673337cc1cab5a28"} Jan 21 11:00:02 crc kubenswrapper[4881]: I0121 11:00:02.868975 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-v5n2s" podStartSLOduration=4.649819084 podStartE2EDuration="1m15.868935322s" podCreationTimestamp="2026-01-21 10:58:47 +0000 UTC" firstStartedPulling="2026-01-21 10:58:49.395434554 +0000 UTC m=+116.655391023" lastFinishedPulling="2026-01-21 11:00:00.614550792 +0000 UTC m=+187.874507261" observedRunningTime="2026-01-21 11:00:02.864735729 +0000 UTC m=+190.124692218" watchObservedRunningTime="2026-01-21 11:00:02.868935322 +0000 UTC m=+190.128891781" Jan 21 11:00:02 crc kubenswrapper[4881]: I0121 11:00:02.891584 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483220-2jmrb"] Jan 21 11:00:04 crc kubenswrapper[4881]: I0121 11:00:04.312772 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483220-2jmrb" event={"ID":"65c09a3a-6389-443c-888b-fe83557dd508","Type":"ContainerStarted","Data":"e7078195838c011ba41af3c83e6d88fadf75d4028c7c8f34237503be20319141"} Jan 21 11:00:05 crc kubenswrapper[4881]: I0121 11:00:05.489849 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483220-2jmrb" event={"ID":"65c09a3a-6389-443c-888b-fe83557dd508","Type":"ContainerStarted","Data":"506baee9263f2e28d3f1ef1ef645da28ead83f7c212d5255ebc44d13c43d15f7"} Jan 21 11:00:05 crc kubenswrapper[4881]: I0121 11:00:05.513428 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29483220-2jmrb" podStartSLOduration=5.513381962 podStartE2EDuration="5.513381962s" podCreationTimestamp="2026-01-21 11:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 11:00:05.512343466 +0000 UTC m=+192.772299935" watchObservedRunningTime="2026-01-21 11:00:05.513381962 +0000 UTC m=+192.773338431" Jan 21 11:00:06 crc kubenswrapper[4881]: I0121 11:00:06.588179 4881 generic.go:334] "Generic (PLEG): container 
finished" podID="075db786-6ad0-4982-b70e-bd05d4f240ec" containerID="a06c8d6c70785e0e51b0e238072a99f6a50caf04a590fb7ba69cc08788ffee9a" exitCode=0 Jan 21 11:00:06 crc kubenswrapper[4881]: I0121 11:00:06.588259 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-89m75" event={"ID":"075db786-6ad0-4982-b70e-bd05d4f240ec","Type":"ContainerDied","Data":"a06c8d6c70785e0e51b0e238072a99f6a50caf04a590fb7ba69cc08788ffee9a"} Jan 21 11:00:06 crc kubenswrapper[4881]: I0121 11:00:06.594996 4881 generic.go:334] "Generic (PLEG): container finished" podID="1d66b837-f7b1-4795-895f-08cdabe48b37" containerID="87b3da4f38a8247ed7dbb2b11f2ec14c16c71eee1d17657bf85f241bc0e931f6" exitCode=0 Jan 21 11:00:06 crc kubenswrapper[4881]: I0121 11:00:06.595910 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vljfh" event={"ID":"1d66b837-f7b1-4795-895f-08cdabe48b37","Type":"ContainerDied","Data":"87b3da4f38a8247ed7dbb2b11f2ec14c16c71eee1d17657bf85f241bc0e931f6"} Jan 21 11:00:07 crc kubenswrapper[4881]: I0121 11:00:07.665933 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-v5n2s" Jan 21 11:00:07 crc kubenswrapper[4881]: I0121 11:00:07.666089 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-v5n2s" Jan 21 11:00:08 crc kubenswrapper[4881]: I0121 11:00:08.782977 4881 generic.go:334] "Generic (PLEG): container finished" podID="8e002e57-13ab-477a-9e16-980e13b5e47f" containerID="cad9f8570b6b7c8359172ebecd350bcad67cfe5e05e5aeca3f0a038ec3357bb5" exitCode=0 Jan 21 11:00:08 crc kubenswrapper[4881]: I0121 11:00:08.783056 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-q6dn5" event={"ID":"8e002e57-13ab-477a-9e16-980e13b5e47f","Type":"ContainerDied","Data":"cad9f8570b6b7c8359172ebecd350bcad67cfe5e05e5aeca3f0a038ec3357bb5"} Jan 21 11:00:08 crc kubenswrapper[4881]: I0121 11:00:08.792165 4881 generic.go:334] "Generic (PLEG): container finished" podID="65c09a3a-6389-443c-888b-fe83557dd508" containerID="506baee9263f2e28d3f1ef1ef645da28ead83f7c212d5255ebc44d13c43d15f7" exitCode=0 Jan 21 11:00:08 crc kubenswrapper[4881]: I0121 11:00:08.792918 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483220-2jmrb" event={"ID":"65c09a3a-6389-443c-888b-fe83557dd508","Type":"ContainerDied","Data":"506baee9263f2e28d3f1ef1ef645da28ead83f7c212d5255ebc44d13c43d15f7"} Jan 21 11:00:09 crc kubenswrapper[4881]: I0121 11:00:09.780989 4881 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-v5n2s" podUID="e034cbba-e6a2-4b62-94e1-7bd2d3c5ae8a" containerName="registry-server" probeResult="failure" output=< Jan 21 11:00:09 crc kubenswrapper[4881]: timeout: failed to connect service ":50051" within 1s Jan 21 11:00:09 crc kubenswrapper[4881]: > Jan 21 11:00:09 crc kubenswrapper[4881]: I0121 11:00:09.798463 4881 generic.go:334] "Generic (PLEG): container finished" podID="2c460bf5-05a1-4977-b889-1a5c3263df33" containerID="db0493653bc30919d4352c24df01a207c2de62ad8f1fa10ff346fcc988a5549e" exitCode=0 Jan 21 11:00:09 crc kubenswrapper[4881]: I0121 11:00:09.798559 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6rmvm" 
event={"ID":"2c460bf5-05a1-4977-b889-1a5c3263df33","Type":"ContainerDied","Data":"db0493653bc30919d4352c24df01a207c2de62ad8f1fa10ff346fcc988a5549e"} Jan 21 11:00:12 crc kubenswrapper[4881]: I0121 11:00:12.462905 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-wrqpb" Jan 21 11:00:13 crc kubenswrapper[4881]: I0121 11:00:13.096908 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483220-2jmrb" Jan 21 11:00:13 crc kubenswrapper[4881]: I0121 11:00:13.204284 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/65c09a3a-6389-443c-888b-fe83557dd508-secret-volume\") pod \"65c09a3a-6389-443c-888b-fe83557dd508\" (UID: \"65c09a3a-6389-443c-888b-fe83557dd508\") " Jan 21 11:00:13 crc kubenswrapper[4881]: I0121 11:00:13.204562 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b2rdm\" (UniqueName: \"kubernetes.io/projected/65c09a3a-6389-443c-888b-fe83557dd508-kube-api-access-b2rdm\") pod \"65c09a3a-6389-443c-888b-fe83557dd508\" (UID: \"65c09a3a-6389-443c-888b-fe83557dd508\") " Jan 21 11:00:13 crc kubenswrapper[4881]: I0121 11:00:13.204650 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/65c09a3a-6389-443c-888b-fe83557dd508-config-volume\") pod \"65c09a3a-6389-443c-888b-fe83557dd508\" (UID: \"65c09a3a-6389-443c-888b-fe83557dd508\") " Jan 21 11:00:13 crc kubenswrapper[4881]: I0121 11:00:13.234773 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/65c09a3a-6389-443c-888b-fe83557dd508-config-volume" (OuterVolumeSpecName: "config-volume") pod "65c09a3a-6389-443c-888b-fe83557dd508" (UID: "65c09a3a-6389-443c-888b-fe83557dd508"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:00:13 crc kubenswrapper[4881]: I0121 11:00:13.307048 4881 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/65c09a3a-6389-443c-888b-fe83557dd508-config-volume\") on node \"crc\" DevicePath \"\"" Jan 21 11:00:13 crc kubenswrapper[4881]: I0121 11:00:13.337449 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/65c09a3a-6389-443c-888b-fe83557dd508-kube-api-access-b2rdm" (OuterVolumeSpecName: "kube-api-access-b2rdm") pod "65c09a3a-6389-443c-888b-fe83557dd508" (UID: "65c09a3a-6389-443c-888b-fe83557dd508"). InnerVolumeSpecName "kube-api-access-b2rdm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:00:13 crc kubenswrapper[4881]: I0121 11:00:13.338052 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65c09a3a-6389-443c-888b-fe83557dd508-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "65c09a3a-6389-443c-888b-fe83557dd508" (UID: "65c09a3a-6389-443c-888b-fe83557dd508"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:00:13 crc kubenswrapper[4881]: I0121 11:00:13.408913 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b2rdm\" (UniqueName: \"kubernetes.io/projected/65c09a3a-6389-443c-888b-fe83557dd508-kube-api-access-b2rdm\") on node \"crc\" DevicePath \"\"" Jan 21 11:00:13 crc kubenswrapper[4881]: I0121 11:00:13.408945 4881 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/65c09a3a-6389-443c-888b-fe83557dd508-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 21 11:00:13 crc kubenswrapper[4881]: I0121 11:00:13.595333 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 11:00:13 crc kubenswrapper[4881]: I0121 11:00:13.952475 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483220-2jmrb" event={"ID":"65c09a3a-6389-443c-888b-fe83557dd508","Type":"ContainerDied","Data":"e7078195838c011ba41af3c83e6d88fadf75d4028c7c8f34237503be20319141"} Jan 21 11:00:13 crc kubenswrapper[4881]: I0121 11:00:13.952526 4881 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e7078195838c011ba41af3c83e6d88fadf75d4028c7c8f34237503be20319141" Jan 21 11:00:13 crc kubenswrapper[4881]: I0121 11:00:13.952590 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483220-2jmrb" Jan 21 11:00:14 crc kubenswrapper[4881]: I0121 11:00:14.960299 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2sqlm" event={"ID":"5b12596d-1f5f-4d81-b664-d0ddee72552c","Type":"ContainerStarted","Data":"8c58e8e6d9f4309fce56e3b043abdb46d3d4af579c4a6d9ae43870620be9634e"} Jan 21 11:00:16 crc kubenswrapper[4881]: E0121 11:00:16.512909 4881 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb83e71f8_970c_4afc_ac31_264c7ca6625a.slice/crio-d97aa85fa9dba9a5f261efedffb0ffe8efb44a7c0ff638756658eab20e0bacac.scope\": RecentStats: unable to find data in memory cache]" Jan 21 11:00:17 crc kubenswrapper[4881]: I0121 11:00:17.259697 4881 generic.go:334] "Generic (PLEG): container finished" podID="b83e71f8-970c-4afc-ac31-264c7ca6625a" containerID="d97aa85fa9dba9a5f261efedffb0ffe8efb44a7c0ff638756658eab20e0bacac" exitCode=0 Jan 21 11:00:17 crc kubenswrapper[4881]: I0121 11:00:17.260028 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t4zlb" event={"ID":"b83e71f8-970c-4afc-ac31-264c7ca6625a","Type":"ContainerDied","Data":"d97aa85fa9dba9a5f261efedffb0ffe8efb44a7c0ff638756658eab20e0bacac"} Jan 21 11:00:17 crc kubenswrapper[4881]: I0121 11:00:17.263110 4881 generic.go:334] "Generic (PLEG): container finished" podID="d318e830-067f-4722-9d74-a45fcefc939d" containerID="456438ece135082aa65a1f9d3e1df54da4ad18d3ac41d1e2ac75d98b61443cef" exitCode=0 Jan 21 11:00:17 crc kubenswrapper[4881]: I0121 11:00:17.263172 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kfmhs" event={"ID":"d318e830-067f-4722-9d74-a45fcefc939d","Type":"ContainerDied","Data":"456438ece135082aa65a1f9d3e1df54da4ad18d3ac41d1e2ac75d98b61443cef"} Jan 21 11:00:17 crc kubenswrapper[4881]: I0121 11:00:17.845714 4881 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-v5n2s" Jan 21 11:00:17 crc kubenswrapper[4881]: I0121 11:00:17.923061 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-v5n2s" Jan 21 11:00:18 crc kubenswrapper[4881]: I0121 11:00:18.272674 4881 generic.go:334] "Generic (PLEG): container finished" podID="5b12596d-1f5f-4d81-b664-d0ddee72552c" containerID="8c58e8e6d9f4309fce56e3b043abdb46d3d4af579c4a6d9ae43870620be9634e" exitCode=0 Jan 21 11:00:18 crc kubenswrapper[4881]: I0121 11:00:18.272745 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2sqlm" event={"ID":"5b12596d-1f5f-4d81-b664-d0ddee72552c","Type":"ContainerDied","Data":"8c58e8e6d9f4309fce56e3b043abdb46d3d4af579c4a6d9ae43870620be9634e"} Jan 21 11:00:19 crc kubenswrapper[4881]: I0121 11:00:19.392496 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-whh46"] Jan 21 11:00:25 crc kubenswrapper[4881]: I0121 11:00:25.325629 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6rmvm" event={"ID":"2c460bf5-05a1-4977-b889-1a5c3263df33","Type":"ContainerStarted","Data":"7e5f304bc82a020e253bc1850121534b947e1ce59d3cde3e998cffd1481389a2"} Jan 21 11:00:25 crc kubenswrapper[4881]: I0121 11:00:25.790386 4881 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 21 11:00:25 crc kubenswrapper[4881]: E0121 11:00:25.790822 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65c09a3a-6389-443c-888b-fe83557dd508" containerName="collect-profiles" Jan 21 11:00:25 crc kubenswrapper[4881]: I0121 11:00:25.790844 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="65c09a3a-6389-443c-888b-fe83557dd508" containerName="collect-profiles" Jan 21 11:00:25 crc kubenswrapper[4881]: I0121 11:00:25.791106 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="65c09a3a-6389-443c-888b-fe83557dd508" containerName="collect-profiles" Jan 21 11:00:25 crc kubenswrapper[4881]: I0121 11:00:25.791659 4881 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 21 11:00:25 crc kubenswrapper[4881]: I0121 11:00:25.791863 4881 util.go:30] "No sandbox for pod can be found. 
Jan 21 11:00:25 crc kubenswrapper[4881]: I0121 11:00:25.792086 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://945977e9e86c32b08a633de6b962033c71982d3a129ac3e5b2705e19b50d6534" gracePeriod=15
Jan 21 11:00:25 crc kubenswrapper[4881]: I0121 11:00:25.792289 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://0e507b4c3c536bdc63360b1386748657584f739e09973ec33c998ac267ca2766" gracePeriod=15
Jan 21 11:00:25 crc kubenswrapper[4881]: I0121 11:00:25.792408 4881 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Jan 21 11:00:25 crc kubenswrapper[4881]: I0121 11:00:25.792396 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://7f32dd767af23c59f021d583a23309b2180db86ae71b4ca4a5f436ac1c77d6e2" gracePeriod=15
Jan 21 11:00:25 crc kubenswrapper[4881]: I0121 11:00:25.792541 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://b00c4c20b4212a307993f164d3f2d23b9b8bd823111bef7a385ae8811e147f3f" gracePeriod=15
Jan 21 11:00:25 crc kubenswrapper[4881]: E0121 11:00:25.792641 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer"
Jan 21 11:00:25 crc kubenswrapper[4881]: I0121 11:00:25.792864 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer"
Jan 21 11:00:25 crc kubenswrapper[4881]: E0121 11:00:25.792902 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Jan 21 11:00:25 crc kubenswrapper[4881]: I0121 11:00:25.792914 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Jan 21 11:00:25 crc kubenswrapper[4881]: I0121 11:00:25.792642 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://7c47dfa90d18e3e30053b0250c7b986bc877ab6bc3a553060f65468416c8105d" gracePeriod=15
Jan 21 11:00:25 crc kubenswrapper[4881]: E0121 11:00:25.792934 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver"
Jan 21 11:00:25 crc kubenswrapper[4881]: I0121 11:00:25.793305 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver"
Jan 21 11:00:25 crc kubenswrapper[4881]: E0121 11:00:25.793350 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Jan 21 11:00:25 crc kubenswrapper[4881]: I0121 11:00:25.793373 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Jan 21 11:00:25 crc kubenswrapper[4881]: E0121 11:00:25.793397 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz"
Jan 21 11:00:25 crc kubenswrapper[4881]: I0121 11:00:25.793404 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz"
Jan 21 11:00:25 crc kubenswrapper[4881]: E0121 11:00:25.793416 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup"
Jan 21 11:00:25 crc kubenswrapper[4881]: I0121 11:00:25.793423 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup"
Jan 21 11:00:25 crc kubenswrapper[4881]: E0121 11:00:25.793508 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller"
Jan 21 11:00:25 crc kubenswrapper[4881]: I0121 11:00:25.793516 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller"
Jan 21 11:00:25 crc kubenswrapper[4881]: I0121 11:00:25.793849 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer"
Jan 21 11:00:25 crc kubenswrapper[4881]: I0121 11:00:25.793863 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver"
Jan 21 11:00:25 crc kubenswrapper[4881]: I0121 11:00:25.793874 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Jan 21 11:00:25 crc kubenswrapper[4881]: I0121 11:00:25.793882 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Jan 21 11:00:25 crc kubenswrapper[4881]: I0121 11:00:25.793890 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller"
Jan 21 11:00:25 crc kubenswrapper[4881]: I0121 11:00:25.793901 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz"
Jan 21 11:00:25 crc kubenswrapper[4881]: I0121 11:00:25.793910 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Jan 21 11:00:25 crc kubenswrapper[4881]: E0121 11:00:25.794069 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Jan 21 11:00:25 crc kubenswrapper[4881]: I0121 11:00:25.794080 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Jan 21 11:00:25 crc kubenswrapper[4881]: I0121 11:00:25.799368 4881 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="f4b27818a5e8e43d0dc095d08835c792" podUID="71bb4a3aecc4ba5b26c4b7318770ce13"
pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="f4b27818a5e8e43d0dc095d08835c792" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" Jan 21 11:00:25 crc kubenswrapper[4881]: I0121 11:00:25.926372 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 11:00:25 crc kubenswrapper[4881]: I0121 11:00:25.926437 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 11:00:25 crc kubenswrapper[4881]: I0121 11:00:25.926476 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 11:00:25 crc kubenswrapper[4881]: I0121 11:00:25.926503 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 11:00:25 crc kubenswrapper[4881]: I0121 11:00:25.926521 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 11:00:25 crc kubenswrapper[4881]: I0121 11:00:25.926554 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 11:00:25 crc kubenswrapper[4881]: I0121 11:00:25.926593 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 11:00:25 crc kubenswrapper[4881]: I0121 11:00:25.926630 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 11:00:26 crc kubenswrapper[4881]: I0121 11:00:26.028694 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 11:00:26 crc kubenswrapper[4881]: I0121 11:00:26.028762 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 11:00:26 crc kubenswrapper[4881]: I0121 11:00:26.028831 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 11:00:26 crc kubenswrapper[4881]: I0121 11:00:26.028874 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 11:00:26 crc kubenswrapper[4881]: I0121 11:00:26.028900 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 11:00:26 crc kubenswrapper[4881]: I0121 11:00:26.028931 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 11:00:26 crc kubenswrapper[4881]: I0121 11:00:26.028958 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 11:00:26 crc kubenswrapper[4881]: I0121 11:00:26.028974 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 11:00:26 crc kubenswrapper[4881]: I0121 11:00:26.028974 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 11:00:26 crc kubenswrapper[4881]: I0121 11:00:26.029100 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 11:00:26 crc kubenswrapper[4881]: I0121 11:00:26.029129 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 11:00:26 crc kubenswrapper[4881]: I0121 11:00:26.029148 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 11:00:26 crc kubenswrapper[4881]: I0121 11:00:26.029164 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 11:00:26 crc kubenswrapper[4881]: I0121 11:00:26.029181 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 11:00:26 crc kubenswrapper[4881]: I0121 11:00:26.029195 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 11:00:26 crc kubenswrapper[4881]: I0121 11:00:26.029210 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 11:00:26 crc kubenswrapper[4881]: E0121 11:00:26.449710 4881 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/events\": dial tcp 38.129.56.4:6443: connect: connection refused" event="&Event{ObjectMeta:{redhat-marketplace-89m75.188cb9f7888c87eb openshift-marketplace 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-marketplace,Name:redhat-marketplace-89m75,UID:075db786-6ad0-4982-b70e-bd05d4f240ec,APIVersion:v1,ResourceVersion:28589,FieldPath:spec.containers{registry-server},},Reason:Created,Message:Created container registry-server,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 11:00:26.448734187 +0000 UTC m=+213.708690656,LastTimestamp:2026-01-21 11:00:26.448734187 +0000 UTC m=+213.708690656,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" 
Jan 21 11:00:27 crc kubenswrapper[4881]: I0121 11:00:27.344109 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-q6dn5" event={"ID":"8e002e57-13ab-477a-9e16-980e13b5e47f","Type":"ContainerStarted","Data":"e42581773a8d4ea1772dd60eaf9071bf2de0cdd39b8e134e5ac5a682d95b642f"}
Jan 21 11:00:27 crc kubenswrapper[4881]: I0121 11:00:27.347186 4881 generic.go:334] "Generic (PLEG): container finished" podID="41bc4c78-71b2-4ca1-b593-410715cb877b" containerID="891a9148acd513d44e13545e811cd63c09e7d52344359f98044e5a82a847b9a1" exitCode=0
Jan 21 11:00:27 crc kubenswrapper[4881]: I0121 11:00:27.347303 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"41bc4c78-71b2-4ca1-b593-410715cb877b","Type":"ContainerDied","Data":"891a9148acd513d44e13545e811cd63c09e7d52344359f98044e5a82a847b9a1"}
Jan 21 11:00:27 crc kubenswrapper[4881]: I0121 11:00:27.347754 4881 status_manager.go:851] "Failed to get status for pod" podUID="8e002e57-13ab-477a-9e16-980e13b5e47f" pod="openshift-marketplace/certified-operators-q6dn5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-q6dn5\": dial tcp 38.129.56.4:6443: connect: connection refused"
Jan 21 11:00:27 crc kubenswrapper[4881]: I0121 11:00:27.348736 4881 status_manager.go:851] "Failed to get status for pod" podUID="8e002e57-13ab-477a-9e16-980e13b5e47f" pod="openshift-marketplace/certified-operators-q6dn5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-q6dn5\": dial tcp 38.129.56.4:6443: connect: connection refused"
Jan 21 11:00:27 crc kubenswrapper[4881]: I0121 11:00:27.348975 4881 status_manager.go:851] "Failed to get status for pod" podUID="41bc4c78-71b2-4ca1-b593-410715cb877b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.4:6443: connect: connection refused"
Jan 21 11:00:27 crc kubenswrapper[4881]: I0121 11:00:27.353274 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-89m75" event={"ID":"075db786-6ad0-4982-b70e-bd05d4f240ec","Type":"ContainerStarted","Data":"d4c87b729f18eaf9f12531e5147374286d6a7a44e910d96df5b3275a242bc490"}
Jan 21 11:00:27 crc kubenswrapper[4881]: I0121 11:00:27.354166 4881 status_manager.go:851] "Failed to get status for pod" podUID="8e002e57-13ab-477a-9e16-980e13b5e47f" pod="openshift-marketplace/certified-operators-q6dn5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-q6dn5\": dial tcp 38.129.56.4:6443: connect: connection refused"
Jan 21 11:00:27 crc kubenswrapper[4881]: I0121 11:00:27.355461 4881 status_manager.go:851] "Failed to get status for pod" podUID="41bc4c78-71b2-4ca1-b593-410715cb877b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.4:6443: connect: connection refused"
Jan 21 11:00:27 crc kubenswrapper[4881]: I0121 11:00:27.355661 4881 status_manager.go:851] "Failed to get status for pod" podUID="075db786-6ad0-4982-b70e-bd05d4f240ec" pod="openshift-marketplace/redhat-marketplace-89m75" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-89m75\": dial tcp 38.129.56.4:6443: connect: connection refused"
Jan 21 11:00:27 crc kubenswrapper[4881]: I0121 11:00:27.371110 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vljfh" event={"ID":"1d66b837-f7b1-4795-895f-08cdabe48b37","Type":"ContainerStarted","Data":"0e3e6281eef028f6cd4f512b5ed4a48f81805bf0232c271e4efbf06a7853a75b"}
Jan 21 11:00:27 crc kubenswrapper[4881]: I0121 11:00:27.372372 4881 status_manager.go:851] "Failed to get status for pod" podUID="8e002e57-13ab-477a-9e16-980e13b5e47f" pod="openshift-marketplace/certified-operators-q6dn5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-q6dn5\": dial tcp 38.129.56.4:6443: connect: connection refused"
Jan 21 11:00:27 crc kubenswrapper[4881]: I0121 11:00:27.372760 4881 status_manager.go:851] "Failed to get status for pod" podUID="41bc4c78-71b2-4ca1-b593-410715cb877b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.4:6443: connect: connection refused"
Jan 21 11:00:27 crc kubenswrapper[4881]: I0121 11:00:27.373119 4881 status_manager.go:851] "Failed to get status for pod" podUID="075db786-6ad0-4982-b70e-bd05d4f240ec" pod="openshift-marketplace/redhat-marketplace-89m75" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-89m75\": dial tcp 38.129.56.4:6443: connect: connection refused"
Jan 21 11:00:27 crc kubenswrapper[4881]: I0121 11:00:27.373432 4881 status_manager.go:851] "Failed to get status for pod" podUID="1d66b837-f7b1-4795-895f-08cdabe48b37" pod="openshift-marketplace/redhat-marketplace-vljfh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-vljfh\": dial tcp 38.129.56.4:6443: connect: connection refused"
Jan 21 11:00:27 crc kubenswrapper[4881]: I0121 11:00:27.373875 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log"
Jan 21 11:00:27 crc kubenswrapper[4881]: I0121 11:00:27.376080 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log"
Jan 21 11:00:27 crc kubenswrapper[4881]: I0121 11:00:27.377072 4881 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="0e507b4c3c536bdc63360b1386748657584f739e09973ec33c998ac267ca2766" exitCode=0
Jan 21 11:00:27 crc kubenswrapper[4881]: I0121 11:00:27.377121 4881 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="b00c4c20b4212a307993f164d3f2d23b9b8bd823111bef7a385ae8811e147f3f" exitCode=0
Jan 21 11:00:27 crc kubenswrapper[4881]: I0121 11:00:27.377132 4881 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="7f32dd767af23c59f021d583a23309b2180db86ae71b4ca4a5f436ac1c77d6e2" exitCode=0
Jan 21 11:00:27 crc kubenswrapper[4881]: I0121 11:00:27.377142 4881 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="7c47dfa90d18e3e30053b0250c7b986bc877ab6bc3a553060f65468416c8105d" exitCode=2
Jan 21 11:00:27 crc kubenswrapper[4881]: I0121 11:00:27.377185 4881 scope.go:117] "RemoveContainer" containerID="676e764186e37083591f2c779b05dcbf5bc065bde85efade1c25a27a9bf74570"
Jan 21 11:00:27 crc kubenswrapper[4881]: I0121 11:00:27.378227 4881 status_manager.go:851] "Failed to get status for pod" podUID="075db786-6ad0-4982-b70e-bd05d4f240ec" pod="openshift-marketplace/redhat-marketplace-89m75" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-89m75\": dial tcp 38.129.56.4:6443: connect: connection refused"
Jan 21 11:00:27 crc kubenswrapper[4881]: I0121 11:00:27.378652 4881 status_manager.go:851] "Failed to get status for pod" podUID="1d66b837-f7b1-4795-895f-08cdabe48b37" pod="openshift-marketplace/redhat-marketplace-vljfh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-vljfh\": dial tcp 38.129.56.4:6443: connect: connection refused"
Jan 21 11:00:27 crc kubenswrapper[4881]: I0121 11:00:27.379086 4881 status_manager.go:851] "Failed to get status for pod" podUID="8e002e57-13ab-477a-9e16-980e13b5e47f" pod="openshift-marketplace/certified-operators-q6dn5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-q6dn5\": dial tcp 38.129.56.4:6443: connect: connection refused"
Jan 21 11:00:27 crc kubenswrapper[4881]: I0121 11:00:27.379382 4881 status_manager.go:851] "Failed to get status for pod" podUID="41bc4c78-71b2-4ca1-b593-410715cb877b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.4:6443: connect: connection refused"
Jan 21 11:00:27 crc kubenswrapper[4881]: I0121 11:00:27.379597 4881 status_manager.go:851] "Failed to get status for pod" podUID="2c460bf5-05a1-4977-b889-1a5c3263df33" pod="openshift-marketplace/community-operators-6rmvm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-6rmvm\": dial tcp 38.129.56.4:6443: connect: connection refused"
Jan 21 11:00:27 crc kubenswrapper[4881]: I0121 11:00:27.816725 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-6rmvm"
Jan 21 11:00:27 crc kubenswrapper[4881]: I0121 11:00:27.817193 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-6rmvm"
Jan 21 11:00:27 crc kubenswrapper[4881]: I0121 11:00:27.864223 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-6rmvm"
Jan 21 11:00:27 crc kubenswrapper[4881]: I0121 11:00:27.865383 4881 status_manager.go:851] "Failed to get status for pod" podUID="41bc4c78-71b2-4ca1-b593-410715cb877b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.4:6443: connect: connection refused"
Jan 21 11:00:27 crc kubenswrapper[4881]: I0121 11:00:27.866167 4881 status_manager.go:851] "Failed to get status for pod" podUID="2c460bf5-05a1-4977-b889-1a5c3263df33" pod="openshift-marketplace/community-operators-6rmvm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-6rmvm\": dial tcp 38.129.56.4:6443: connect: connection refused"
Jan 21 11:00:27 crc kubenswrapper[4881]: I0121 11:00:27.867112 4881 status_manager.go:851] "Failed to get status for pod" podUID="075db786-6ad0-4982-b70e-bd05d4f240ec" pod="openshift-marketplace/redhat-marketplace-89m75" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-89m75\": dial tcp 38.129.56.4:6443: connect: connection refused"
Jan 21 11:00:27 crc kubenswrapper[4881]: I0121 11:00:27.867901 4881 status_manager.go:851] "Failed to get status for pod" podUID="1d66b837-f7b1-4795-895f-08cdabe48b37" pod="openshift-marketplace/redhat-marketplace-vljfh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-vljfh\": dial tcp 38.129.56.4:6443: connect: connection refused"
Jan 21 11:00:27 crc kubenswrapper[4881]: I0121 11:00:27.868338 4881 status_manager.go:851] "Failed to get status for pod" podUID="8e002e57-13ab-477a-9e16-980e13b5e47f" pod="openshift-marketplace/certified-operators-q6dn5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-q6dn5\": dial tcp 38.129.56.4:6443: connect: connection refused"
Jan 21 11:00:28 crc kubenswrapper[4881]: I0121 11:00:28.747435 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc"
Jan 21 11:00:28 crc kubenswrapper[4881]: I0121 11:00:28.748621 4881 status_manager.go:851] "Failed to get status for pod" podUID="1d66b837-f7b1-4795-895f-08cdabe48b37" pod="openshift-marketplace/redhat-marketplace-vljfh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-vljfh\": dial tcp 38.129.56.4:6443: connect: connection refused"
Jan 21 11:00:28 crc kubenswrapper[4881]: I0121 11:00:28.748826 4881 status_manager.go:851] "Failed to get status for pod" podUID="8e002e57-13ab-477a-9e16-980e13b5e47f" pod="openshift-marketplace/certified-operators-q6dn5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-q6dn5\": dial tcp 38.129.56.4:6443: connect: connection refused"
Jan 21 11:00:28 crc kubenswrapper[4881]: I0121 11:00:28.749012 4881 status_manager.go:851] "Failed to get status for pod" podUID="41bc4c78-71b2-4ca1-b593-410715cb877b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.4:6443: connect: connection refused"
Jan 21 11:00:28 crc kubenswrapper[4881]: I0121 11:00:28.749175 4881 status_manager.go:851] "Failed to get status for pod" podUID="2c460bf5-05a1-4977-b889-1a5c3263df33" pod="openshift-marketplace/community-operators-6rmvm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-6rmvm\": dial tcp 38.129.56.4:6443: connect: connection refused"
Jan 21 11:00:28 crc kubenswrapper[4881]: I0121 11:00:28.749336 4881 status_manager.go:851] "Failed to get status for pod" podUID="075db786-6ad0-4982-b70e-bd05d4f240ec" pod="openshift-marketplace/redhat-marketplace-89m75" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-89m75\": dial tcp 38.129.56.4:6443: connect: connection refused"
Jan 21 11:00:28 crc kubenswrapper[4881]: I0121 11:00:28.918718 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/41bc4c78-71b2-4ca1-b593-410715cb877b-var-lock\") pod \"41bc4c78-71b2-4ca1-b593-410715cb877b\" (UID: \"41bc4c78-71b2-4ca1-b593-410715cb877b\") "
\"41bc4c78-71b2-4ca1-b593-410715cb877b\") " Jan 21 11:00:28 crc kubenswrapper[4881]: I0121 11:00:28.918893 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/41bc4c78-71b2-4ca1-b593-410715cb877b-kube-api-access\") pod \"41bc4c78-71b2-4ca1-b593-410715cb877b\" (UID: \"41bc4c78-71b2-4ca1-b593-410715cb877b\") " Jan 21 11:00:28 crc kubenswrapper[4881]: I0121 11:00:28.919081 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/41bc4c78-71b2-4ca1-b593-410715cb877b-kubelet-dir\") pod \"41bc4c78-71b2-4ca1-b593-410715cb877b\" (UID: \"41bc4c78-71b2-4ca1-b593-410715cb877b\") " Jan 21 11:00:28 crc kubenswrapper[4881]: I0121 11:00:28.919456 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/41bc4c78-71b2-4ca1-b593-410715cb877b-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "41bc4c78-71b2-4ca1-b593-410715cb877b" (UID: "41bc4c78-71b2-4ca1-b593-410715cb877b"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 11:00:28 crc kubenswrapper[4881]: I0121 11:00:28.919502 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/41bc4c78-71b2-4ca1-b593-410715cb877b-var-lock" (OuterVolumeSpecName: "var-lock") pod "41bc4c78-71b2-4ca1-b593-410715cb877b" (UID: "41bc4c78-71b2-4ca1-b593-410715cb877b"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 11:00:28 crc kubenswrapper[4881]: I0121 11:00:28.928034 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/41bc4c78-71b2-4ca1-b593-410715cb877b-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "41bc4c78-71b2-4ca1-b593-410715cb877b" (UID: "41bc4c78-71b2-4ca1-b593-410715cb877b"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:00:29 crc kubenswrapper[4881]: I0121 11:00:29.020305 4881 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/41bc4c78-71b2-4ca1-b593-410715cb877b-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 21 11:00:29 crc kubenswrapper[4881]: I0121 11:00:29.020339 4881 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/41bc4c78-71b2-4ca1-b593-410715cb877b-var-lock\") on node \"crc\" DevicePath \"\"" Jan 21 11:00:29 crc kubenswrapper[4881]: I0121 11:00:29.020349 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/41bc4c78-71b2-4ca1-b593-410715cb877b-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 21 11:00:29 crc kubenswrapper[4881]: E0121 11:00:29.298012 4881 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:29 crc kubenswrapper[4881]: E0121 11:00:29.298385 4881 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:29 crc kubenswrapper[4881]: E0121 11:00:29.298806 4881 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:29 crc kubenswrapper[4881]: E0121 11:00:29.299291 4881 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:29 crc kubenswrapper[4881]: E0121 11:00:29.299574 4881 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:29 crc kubenswrapper[4881]: I0121 11:00:29.299611 4881 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Jan 21 11:00:29 crc kubenswrapper[4881]: E0121 11:00:29.299906 4881 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.4:6443: connect: connection refused" interval="200ms" Jan 21 11:00:29 crc kubenswrapper[4881]: I0121 11:00:29.410706 4881 util.go:48] "No ready sandbox for pod can be found. 
Jan 21 11:00:29 crc kubenswrapper[4881]: I0121 11:00:29.410703 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"41bc4c78-71b2-4ca1-b593-410715cb877b","Type":"ContainerDied","Data":"442a9d2a13a72bc50a93b9b5088365fc2ff7f17c8a181731060f8bf93fd639fd"}
Jan 21 11:00:29 crc kubenswrapper[4881]: I0121 11:00:29.411394 4881 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="442a9d2a13a72bc50a93b9b5088365fc2ff7f17c8a181731060f8bf93fd639fd"
Jan 21 11:00:29 crc kubenswrapper[4881]: I0121 11:00:29.416589 4881 status_manager.go:851] "Failed to get status for pod" podUID="2c460bf5-05a1-4977-b889-1a5c3263df33" pod="openshift-marketplace/community-operators-6rmvm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-6rmvm\": dial tcp 38.129.56.4:6443: connect: connection refused"
Jan 21 11:00:29 crc kubenswrapper[4881]: I0121 11:00:29.416997 4881 status_manager.go:851] "Failed to get status for pod" podUID="075db786-6ad0-4982-b70e-bd05d4f240ec" pod="openshift-marketplace/redhat-marketplace-89m75" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-89m75\": dial tcp 38.129.56.4:6443: connect: connection refused"
Jan 21 11:00:29 crc kubenswrapper[4881]: I0121 11:00:29.418154 4881 status_manager.go:851] "Failed to get status for pod" podUID="1d66b837-f7b1-4795-895f-08cdabe48b37" pod="openshift-marketplace/redhat-marketplace-vljfh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-vljfh\": dial tcp 38.129.56.4:6443: connect: connection refused"
Jan 21 11:00:29 crc kubenswrapper[4881]: I0121 11:00:29.418875 4881 status_manager.go:851] "Failed to get status for pod" podUID="8e002e57-13ab-477a-9e16-980e13b5e47f" pod="openshift-marketplace/certified-operators-q6dn5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-q6dn5\": dial tcp 38.129.56.4:6443: connect: connection refused"
Jan 21 11:00:29 crc kubenswrapper[4881]: I0121 11:00:29.419278 4881 status_manager.go:851] "Failed to get status for pod" podUID="41bc4c78-71b2-4ca1-b593-410715cb877b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.4:6443: connect: connection refused"
Jan 21 11:00:29 crc kubenswrapper[4881]: E0121 11:00:29.500941 4881 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.4:6443: connect: connection refused" interval="400ms"
Jan 21 11:00:29 crc kubenswrapper[4881]: I0121 11:00:29.692882 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-89m75"
Jan 21 11:00:29 crc kubenswrapper[4881]: I0121 11:00:29.692931 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-89m75"
Jan 21 11:00:29 crc kubenswrapper[4881]: I0121 11:00:29.741039 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-89m75"
Jan 21 11:00:29 crc kubenswrapper[4881]: I0121 11:00:29.742055 4881 status_manager.go:851] "Failed to get status for pod" podUID="8e002e57-13ab-477a-9e16-980e13b5e47f" pod="openshift-marketplace/certified-operators-q6dn5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-q6dn5\": dial tcp 38.129.56.4:6443: connect: connection refused"
Jan 21 11:00:29 crc kubenswrapper[4881]: I0121 11:00:29.742465 4881 status_manager.go:851] "Failed to get status for pod" podUID="41bc4c78-71b2-4ca1-b593-410715cb877b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.4:6443: connect: connection refused"
Jan 21 11:00:29 crc kubenswrapper[4881]: I0121 11:00:29.742711 4881 status_manager.go:851] "Failed to get status for pod" podUID="2c460bf5-05a1-4977-b889-1a5c3263df33" pod="openshift-marketplace/community-operators-6rmvm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-6rmvm\": dial tcp 38.129.56.4:6443: connect: connection refused"
Jan 21 11:00:29 crc kubenswrapper[4881]: I0121 11:00:29.742963 4881 status_manager.go:851] "Failed to get status for pod" podUID="075db786-6ad0-4982-b70e-bd05d4f240ec" pod="openshift-marketplace/redhat-marketplace-89m75" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-89m75\": dial tcp 38.129.56.4:6443: connect: connection refused"
Jan 21 11:00:29 crc kubenswrapper[4881]: I0121 11:00:29.743225 4881 status_manager.go:851] "Failed to get status for pod" podUID="1d66b837-f7b1-4795-895f-08cdabe48b37" pod="openshift-marketplace/redhat-marketplace-vljfh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-vljfh\": dial tcp 38.129.56.4:6443: connect: connection refused"
Jan 21 11:00:29 crc kubenswrapper[4881]: I0121 11:00:29.850929 4881 patch_prober.go:28] interesting pod/machine-config-daemon-fb4fr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 21 11:00:29 crc kubenswrapper[4881]: I0121 11:00:29.851020 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 21 11:00:29 crc kubenswrapper[4881]: I0121 11:00:29.851096 4881 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr"
Jan 21 11:00:29 crc kubenswrapper[4881]: I0121 11:00:29.852541 4881 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7172ca109a870f3c1c8d3af0117700b7b23e7a65c3e841f3a4ef3f445e85270d"} pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 21 11:00:29 crc kubenswrapper[4881]: I0121 11:00:29.852686 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" containerName="machine-config-daemon" containerID="cri-o://7172ca109a870f3c1c8d3af0117700b7b23e7a65c3e841f3a4ef3f445e85270d" gracePeriod=600
containerID="cri-o://7172ca109a870f3c1c8d3af0117700b7b23e7a65c3e841f3a4ef3f445e85270d" gracePeriod=600 Jan 21 11:00:29 crc kubenswrapper[4881]: E0121 11:00:29.902218 4881 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.4:6443: connect: connection refused" interval="800ms" Jan 21 11:00:30 crc kubenswrapper[4881]: I0121 11:00:30.077852 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-vljfh" Jan 21 11:00:30 crc kubenswrapper[4881]: I0121 11:00:30.077941 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-vljfh" Jan 21 11:00:30 crc kubenswrapper[4881]: I0121 11:00:30.129192 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-vljfh" Jan 21 11:00:30 crc kubenswrapper[4881]: I0121 11:00:30.129972 4881 status_manager.go:851] "Failed to get status for pod" podUID="1d66b837-f7b1-4795-895f-08cdabe48b37" pod="openshift-marketplace/redhat-marketplace-vljfh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-vljfh\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:30 crc kubenswrapper[4881]: I0121 11:00:30.130371 4881 status_manager.go:851] "Failed to get status for pod" podUID="8e002e57-13ab-477a-9e16-980e13b5e47f" pod="openshift-marketplace/certified-operators-q6dn5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-q6dn5\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:30 crc kubenswrapper[4881]: I0121 11:00:30.130718 4881 status_manager.go:851] "Failed to get status for pod" podUID="41bc4c78-71b2-4ca1-b593-410715cb877b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:30 crc kubenswrapper[4881]: I0121 11:00:30.131073 4881 status_manager.go:851] "Failed to get status for pod" podUID="2c460bf5-05a1-4977-b889-1a5c3263df33" pod="openshift-marketplace/community-operators-6rmvm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-6rmvm\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:30 crc kubenswrapper[4881]: I0121 11:00:30.131320 4881 status_manager.go:851] "Failed to get status for pod" podUID="075db786-6ad0-4982-b70e-bd05d4f240ec" pod="openshift-marketplace/redhat-marketplace-89m75" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-89m75\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:30 crc kubenswrapper[4881]: I0121 11:00:30.421855 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 21 11:00:30 crc kubenswrapper[4881]: I0121 11:00:30.423123 4881 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="945977e9e86c32b08a633de6b962033c71982d3a129ac3e5b2705e19b50d6534" exitCode=0 Jan 21 11:00:30 crc kubenswrapper[4881]: E0121 11:00:30.703300 4881 controller.go:145] "Failed to 
ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.4:6443: connect: connection refused" interval="1.6s" Jan 21 11:00:30 crc kubenswrapper[4881]: E0121 11:00:30.830532 4881 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.129.56.4:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 11:00:30 crc kubenswrapper[4881]: I0121 11:00:30.831575 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 11:00:31 crc kubenswrapper[4881]: I0121 11:00:31.432852 4881 generic.go:334] "Generic (PLEG): container finished" podID="3687b313-1df2-4274-80db-8c758b51bf2d" containerID="7172ca109a870f3c1c8d3af0117700b7b23e7a65c3e841f3a4ef3f445e85270d" exitCode=0 Jan 21 11:00:31 crc kubenswrapper[4881]: I0121 11:00:31.432917 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" event={"ID":"3687b313-1df2-4274-80db-8c758b51bf2d","Type":"ContainerDied","Data":"7172ca109a870f3c1c8d3af0117700b7b23e7a65c3e841f3a4ef3f445e85270d"} Jan 21 11:00:31 crc kubenswrapper[4881]: I0121 11:00:31.900483 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 21 11:00:31 crc kubenswrapper[4881]: I0121 11:00:31.901371 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 11:00:31 crc kubenswrapper[4881]: I0121 11:00:31.902059 4881 status_manager.go:851] "Failed to get status for pod" podUID="2c460bf5-05a1-4977-b889-1a5c3263df33" pod="openshift-marketplace/community-operators-6rmvm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-6rmvm\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:31 crc kubenswrapper[4881]: I0121 11:00:31.902317 4881 status_manager.go:851] "Failed to get status for pod" podUID="075db786-6ad0-4982-b70e-bd05d4f240ec" pod="openshift-marketplace/redhat-marketplace-89m75" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-89m75\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:31 crc kubenswrapper[4881]: I0121 11:00:31.902542 4881 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:31 crc kubenswrapper[4881]: I0121 11:00:31.902712 4881 status_manager.go:851] "Failed to get status for pod" podUID="1d66b837-f7b1-4795-895f-08cdabe48b37" pod="openshift-marketplace/redhat-marketplace-vljfh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-vljfh\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:31 crc kubenswrapper[4881]: I0121 11:00:31.902925 4881 status_manager.go:851] "Failed to get status for pod" podUID="8e002e57-13ab-477a-9e16-980e13b5e47f" 
pod="openshift-marketplace/certified-operators-q6dn5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-q6dn5\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:31 crc kubenswrapper[4881]: I0121 11:00:31.903155 4881 status_manager.go:851] "Failed to get status for pod" podUID="41bc4c78-71b2-4ca1-b593-410715cb877b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:31 crc kubenswrapper[4881]: I0121 11:00:31.991547 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 21 11:00:31 crc kubenswrapper[4881]: I0121 11:00:31.991620 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 21 11:00:31 crc kubenswrapper[4881]: I0121 11:00:31.991688 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 21 11:00:31 crc kubenswrapper[4881]: I0121 11:00:31.991991 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 11:00:31 crc kubenswrapper[4881]: I0121 11:00:31.992022 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 11:00:31 crc kubenswrapper[4881]: I0121 11:00:31.992037 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 11:00:32 crc kubenswrapper[4881]: I0121 11:00:32.092778 4881 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Jan 21 11:00:32 crc kubenswrapper[4881]: I0121 11:00:32.092826 4881 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 21 11:00:32 crc kubenswrapper[4881]: I0121 11:00:32.092839 4881 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 21 11:00:32 crc kubenswrapper[4881]: E0121 11:00:32.304718 4881 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.4:6443: connect: connection refused" interval="3.2s" Jan 21 11:00:32 crc kubenswrapper[4881]: I0121 11:00:32.442799 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 21 11:00:32 crc kubenswrapper[4881]: I0121 11:00:32.444947 4881 scope.go:117] "RemoveContainer" containerID="0e507b4c3c536bdc63360b1386748657584f739e09973ec33c998ac267ca2766" Jan 21 11:00:32 crc kubenswrapper[4881]: I0121 11:00:32.445041 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 11:00:32 crc kubenswrapper[4881]: I0121 11:00:32.446060 4881 status_manager.go:851] "Failed to get status for pod" podUID="8e002e57-13ab-477a-9e16-980e13b5e47f" pod="openshift-marketplace/certified-operators-q6dn5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-q6dn5\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:32 crc kubenswrapper[4881]: I0121 11:00:32.446648 4881 status_manager.go:851] "Failed to get status for pod" podUID="41bc4c78-71b2-4ca1-b593-410715cb877b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:32 crc kubenswrapper[4881]: I0121 11:00:32.446966 4881 status_manager.go:851] "Failed to get status for pod" podUID="2c460bf5-05a1-4977-b889-1a5c3263df33" pod="openshift-marketplace/community-operators-6rmvm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-6rmvm\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:32 crc kubenswrapper[4881]: I0121 11:00:32.447209 4881 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:32 crc kubenswrapper[4881]: I0121 11:00:32.447517 4881 status_manager.go:851] "Failed to get status for pod" podUID="075db786-6ad0-4982-b70e-bd05d4f240ec" pod="openshift-marketplace/redhat-marketplace-89m75" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-89m75\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:32 crc kubenswrapper[4881]: I0121 11:00:32.447985 4881 status_manager.go:851] "Failed to get status for pod" podUID="1d66b837-f7b1-4795-895f-08cdabe48b37" pod="openshift-marketplace/redhat-marketplace-vljfh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-vljfh\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:32 crc kubenswrapper[4881]: I0121 11:00:32.462806 4881 status_manager.go:851] "Failed to get status for pod" podUID="41bc4c78-71b2-4ca1-b593-410715cb877b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:32 crc kubenswrapper[4881]: I0121 11:00:32.463123 4881 status_manager.go:851] "Failed to get status for pod" podUID="2c460bf5-05a1-4977-b889-1a5c3263df33" pod="openshift-marketplace/community-operators-6rmvm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-6rmvm\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:32 crc kubenswrapper[4881]: I0121 11:00:32.463398 4881 status_manager.go:851] "Failed to get status for pod" podUID="075db786-6ad0-4982-b70e-bd05d4f240ec" pod="openshift-marketplace/redhat-marketplace-89m75" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-89m75\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:32 crc kubenswrapper[4881]: I0121 11:00:32.463760 4881 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:32 crc kubenswrapper[4881]: I0121 11:00:32.464030 4881 status_manager.go:851] "Failed to get status for pod" podUID="1d66b837-f7b1-4795-895f-08cdabe48b37" pod="openshift-marketplace/redhat-marketplace-vljfh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-vljfh\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:32 crc kubenswrapper[4881]: I0121 11:00:32.464284 4881 status_manager.go:851] "Failed to get status for pod" podUID="8e002e57-13ab-477a-9e16-980e13b5e47f" pod="openshift-marketplace/certified-operators-q6dn5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-q6dn5\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:32 crc kubenswrapper[4881]: I0121 11:00:32.898198 4881 scope.go:117] "RemoveContainer" containerID="b00c4c20b4212a307993f164d3f2d23b9b8bd823111bef7a385ae8811e147f3f" Jan 21 11:00:32 crc kubenswrapper[4881]: I0121 11:00:32.921672 4881 scope.go:117] "RemoveContainer" containerID="7f32dd767af23c59f021d583a23309b2180db86ae71b4ca4a5f436ac1c77d6e2" Jan 21 11:00:32 crc kubenswrapper[4881]: I0121 11:00:32.972078 4881 scope.go:117] "RemoveContainer" containerID="7c47dfa90d18e3e30053b0250c7b986bc877ab6bc3a553060f65468416c8105d" Jan 21 11:00:33 crc kubenswrapper[4881]: W0121 11:00:33.010602 4881 manager.go:1169] Failed to process watch 
Jan 21 11:00:33 crc kubenswrapper[4881]: I0121 11:00:33.083224 4881 scope.go:117] "RemoveContainer" containerID="945977e9e86c32b08a633de6b962033c71982d3a129ac3e5b2705e19b50d6534"
Jan 21 11:00:33 crc kubenswrapper[4881]: I0121 11:00:33.110050 4881 scope.go:117] "RemoveContainer" containerID="164282bb15c33a383e2fc11a6617ca4008eec1b59e1c5577f7b34b0fcbddc99f"
Jan 21 11:00:33 crc kubenswrapper[4881]: I0121 11:00:33.314445 4881 status_manager.go:851] "Failed to get status for pod" podUID="8e002e57-13ab-477a-9e16-980e13b5e47f" pod="openshift-marketplace/certified-operators-q6dn5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-q6dn5\": dial tcp 38.129.56.4:6443: connect: connection refused"
Jan 21 11:00:33 crc kubenswrapper[4881]: I0121 11:00:33.315465 4881 status_manager.go:851] "Failed to get status for pod" podUID="41bc4c78-71b2-4ca1-b593-410715cb877b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.4:6443: connect: connection refused"
Jan 21 11:00:33 crc kubenswrapper[4881]: I0121 11:00:33.316387 4881 status_manager.go:851] "Failed to get status for pod" podUID="2c460bf5-05a1-4977-b889-1a5c3263df33" pod="openshift-marketplace/community-operators-6rmvm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-6rmvm\": dial tcp 38.129.56.4:6443: connect: connection refused"
Jan 21 11:00:33 crc kubenswrapper[4881]: I0121 11:00:33.316972 4881 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.4:6443: connect: connection refused"
Jan 21 11:00:33 crc kubenswrapper[4881]: I0121 11:00:33.317375 4881 status_manager.go:851] "Failed to get status for pod" podUID="075db786-6ad0-4982-b70e-bd05d4f240ec" pod="openshift-marketplace/redhat-marketplace-89m75" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-89m75\": dial tcp 38.129.56.4:6443: connect: connection refused"
Jan 21 11:00:33 crc kubenswrapper[4881]: I0121 11:00:33.317845 4881 status_manager.go:851] "Failed to get status for pod" podUID="1d66b837-f7b1-4795-895f-08cdabe48b37" pod="openshift-marketplace/redhat-marketplace-vljfh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-vljfh\": dial tcp 38.129.56.4:6443: connect: connection refused"
Jan 21 11:00:33 crc kubenswrapper[4881]: I0121 11:00:33.326935 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes"
Jan 21 11:00:33 crc kubenswrapper[4881]: I0121 11:00:33.453062 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"ffc7cfcc896e97bc89bcafadd903d32675c37638ae26cc272102f0c6d6bc59d1"}
Jan 21 11:00:33 crc kubenswrapper[4881]: I0121 11:00:33.453118 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"3c3fd17463002dc60f1b6915dc610512a2be8006f920a2d721e7c6794a61be97"}
Jan 21 11:00:33 crc kubenswrapper[4881]: I0121 11:00:33.454254 4881 status_manager.go:851] "Failed to get status for pod" podUID="1d66b837-f7b1-4795-895f-08cdabe48b37" pod="openshift-marketplace/redhat-marketplace-vljfh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-vljfh\": dial tcp 38.129.56.4:6443: connect: connection refused"
Jan 21 11:00:33 crc kubenswrapper[4881]: E0121 11:00:33.454293 4881 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.129.56.4:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 21 11:00:33 crc kubenswrapper[4881]: I0121 11:00:33.454488 4881 status_manager.go:851] "Failed to get status for pod" podUID="8e002e57-13ab-477a-9e16-980e13b5e47f" pod="openshift-marketplace/certified-operators-q6dn5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-q6dn5\": dial tcp 38.129.56.4:6443: connect: connection refused"
Jan 21 11:00:33 crc kubenswrapper[4881]: I0121 11:00:33.454768 4881 status_manager.go:851] "Failed to get status for pod" podUID="41bc4c78-71b2-4ca1-b593-410715cb877b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.4:6443: connect: connection refused"
Jan 21 11:00:33 crc kubenswrapper[4881]: I0121 11:00:33.455184 4881 status_manager.go:851] "Failed to get status for pod" podUID="2c460bf5-05a1-4977-b889-1a5c3263df33" pod="openshift-marketplace/community-operators-6rmvm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-6rmvm\": dial tcp 38.129.56.4:6443: connect: connection refused"
Jan 21 11:00:33 crc kubenswrapper[4881]: I0121 11:00:33.455612 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t4zlb" event={"ID":"b83e71f8-970c-4afc-ac31-264c7ca6625a","Type":"ContainerStarted","Data":"7e551acaa20677090959425a7116a2212e0375845f7e600b54464bccf79b4461"}
Jan 21 11:00:33 crc kubenswrapper[4881]: I0121 11:00:33.455651 4881 status_manager.go:851] "Failed to get status for pod" podUID="075db786-6ad0-4982-b70e-bd05d4f240ec" pod="openshift-marketplace/redhat-marketplace-89m75" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-89m75\": dial tcp 38.129.56.4:6443: connect: connection refused"
Jan 21 11:00:33 crc kubenswrapper[4881]: I0121 11:00:33.456370 4881 status_manager.go:851] "Failed to get status for pod" podUID="1d66b837-f7b1-4795-895f-08cdabe48b37" pod="openshift-marketplace/redhat-marketplace-vljfh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-vljfh\": dial tcp 38.129.56.4:6443: connect: connection refused"
Jan 21 11:00:33 crc kubenswrapper[4881]: I0121 11:00:33.456633 4881 status_manager.go:851] "Failed to get status for pod" podUID="8e002e57-13ab-477a-9e16-980e13b5e47f" pod="openshift-marketplace/certified-operators-q6dn5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-q6dn5\": dial tcp 38.129.56.4:6443: connect: connection refused"
Jan 21 11:00:33 crc kubenswrapper[4881]: I0121 11:00:33.456918 4881 status_manager.go:851] "Failed to get status for pod" podUID="b83e71f8-970c-4afc-ac31-264c7ca6625a" pod="openshift-marketplace/redhat-operators-t4zlb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-t4zlb\": dial tcp 38.129.56.4:6443: connect: connection refused"
Jan 21 11:00:33 crc kubenswrapper[4881]: I0121 11:00:33.457127 4881 status_manager.go:851] "Failed to get status for pod" podUID="41bc4c78-71b2-4ca1-b593-410715cb877b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.4:6443: connect: connection refused"
Jan 21 11:00:33 crc kubenswrapper[4881]: I0121 11:00:33.457549 4881 status_manager.go:851] "Failed to get status for pod" podUID="2c460bf5-05a1-4977-b889-1a5c3263df33" pod="openshift-marketplace/community-operators-6rmvm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-6rmvm\": dial tcp 38.129.56.4:6443: connect: connection refused"
Jan 21 11:00:33 crc kubenswrapper[4881]: I0121 11:00:33.457972 4881 status_manager.go:851] "Failed to get status for pod" podUID="075db786-6ad0-4982-b70e-bd05d4f240ec" pod="openshift-marketplace/redhat-marketplace-89m75" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-89m75\": dial tcp 38.129.56.4:6443: connect: connection refused"
Jan 21 11:00:33 crc kubenswrapper[4881]: I0121 11:00:33.460035 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" event={"ID":"3687b313-1df2-4274-80db-8c758b51bf2d","Type":"ContainerStarted","Data":"f08eae3fb5bfbc3b6dfa6839a34471cb41febf3495ae4845e42b68ed33af40f1"}
Jan 21 11:00:33 crc kubenswrapper[4881]: I0121 11:00:33.461301 4881 status_manager.go:851] "Failed to get status for pod" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-daemon-fb4fr\": dial tcp 38.129.56.4:6443: connect: connection refused"
Jan 21 11:00:33 crc kubenswrapper[4881]: I0121 11:00:33.461470 4881 status_manager.go:851] "Failed to get status for pod" podUID="8e002e57-13ab-477a-9e16-980e13b5e47f" pod="openshift-marketplace/certified-operators-q6dn5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-q6dn5\": dial tcp 38.129.56.4:6443: connect: connection refused"
Jan 21 11:00:33 crc kubenswrapper[4881]: I0121 11:00:33.461638 4881 status_manager.go:851] "Failed to get status for pod" podUID="b83e71f8-970c-4afc-ac31-264c7ca6625a" pod="openshift-marketplace/redhat-operators-t4zlb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-t4zlb\": dial tcp 38.129.56.4:6443: connect: connection refused"
Jan 21 11:00:33 crc kubenswrapper[4881]: I0121 11:00:33.461838 4881 status_manager.go:851] "Failed to get status for pod" podUID="41bc4c78-71b2-4ca1-b593-410715cb877b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.4:6443: connect: connection refused"
Jan 21 11:00:33 crc kubenswrapper[4881]: I0121 11:00:33.462097 4881 status_manager.go:851] "Failed to get status for pod" podUID="2c460bf5-05a1-4977-b889-1a5c3263df33" pod="openshift-marketplace/community-operators-6rmvm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-6rmvm\": dial tcp 38.129.56.4:6443: connect: connection refused"
Jan 21 11:00:33 crc kubenswrapper[4881]: I0121 11:00:33.462385 4881 status_manager.go:851] "Failed to get status for pod" podUID="075db786-6ad0-4982-b70e-bd05d4f240ec" pod="openshift-marketplace/redhat-marketplace-89m75" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-89m75\": dial tcp 38.129.56.4:6443: connect: connection refused"
Jan 21 11:00:33 crc kubenswrapper[4881]: I0121 11:00:33.462631 4881 status_manager.go:851] "Failed to get status for pod" podUID="1d66b837-f7b1-4795-895f-08cdabe48b37" pod="openshift-marketplace/redhat-marketplace-vljfh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-vljfh\": dial tcp 38.129.56.4:6443: connect: connection refused"
Jan 21 11:00:33 crc kubenswrapper[4881]: I0121 11:00:33.464187 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kfmhs" event={"ID":"d318e830-067f-4722-9d74-a45fcefc939d","Type":"ContainerStarted","Data":"ea62c10cfd248c0ef9c6d0347f5a3b0a2b7e8d1e35c546c01d7fdadf484cb508"}
Jan 21 11:00:33 crc kubenswrapper[4881]: I0121 11:00:33.465160 4881 status_manager.go:851] "Failed to get status for pod" podUID="2c460bf5-05a1-4977-b889-1a5c3263df33" pod="openshift-marketplace/community-operators-6rmvm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-6rmvm\": dial tcp 38.129.56.4:6443: connect: connection refused"
Jan 21 11:00:33 crc kubenswrapper[4881]: I0121 11:00:33.465550 4881 status_manager.go:851] "Failed to get status for pod" podUID="075db786-6ad0-4982-b70e-bd05d4f240ec" pod="openshift-marketplace/redhat-marketplace-89m75" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-89m75\": dial tcp 38.129.56.4:6443: connect: connection refused"
Jan 21 11:00:33 crc kubenswrapper[4881]: I0121 11:00:33.465818 4881 status_manager.go:851] "Failed to get status for pod" podUID="1d66b837-f7b1-4795-895f-08cdabe48b37" pod="openshift-marketplace/redhat-marketplace-vljfh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-vljfh\": dial tcp 38.129.56.4:6443: connect: connection refused"
Jan 21 11:00:33 crc kubenswrapper[4881]: I0121 11:00:33.466024 4881 status_manager.go:851] "Failed to get status for pod" podUID="d318e830-067f-4722-9d74-a45fcefc939d" pod="openshift-marketplace/redhat-operators-kfmhs" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-kfmhs\": dial tcp 38.129.56.4:6443: connect: connection refused"
Jan 21 11:00:33 crc kubenswrapper[4881]: I0121 11:00:33.466304 4881 status_manager.go:851] "Failed to get status for pod" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-daemon-fb4fr\": dial tcp 38.129.56.4:6443: connect: connection refused"
Jan 21 11:00:33 crc kubenswrapper[4881]: I0121 11:00:33.466655 4881 status_manager.go:851] "Failed to get status for pod" podUID="8e002e57-13ab-477a-9e16-980e13b5e47f" pod="openshift-marketplace/certified-operators-q6dn5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-q6dn5\": dial tcp 38.129.56.4:6443: connect: connection refused"
Jan 21 11:00:33 crc kubenswrapper[4881]: I0121 11:00:33.467049 4881 status_manager.go:851] "Failed to get status for pod" podUID="b83e71f8-970c-4afc-ac31-264c7ca6625a" pod="openshift-marketplace/redhat-operators-t4zlb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-t4zlb\": dial tcp 38.129.56.4:6443: connect: connection refused"
Jan 21 11:00:33 crc kubenswrapper[4881]: I0121 11:00:33.467315 4881 status_manager.go:851] "Failed to get status for pod" podUID="41bc4c78-71b2-4ca1-b593-410715cb877b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.4:6443: connect: connection refused"
Jan 21 11:00:33 crc kubenswrapper[4881]: I0121 11:00:33.468209 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2sqlm" event={"ID":"5b12596d-1f5f-4d81-b664-d0ddee72552c","Type":"ContainerStarted","Data":"c77f2373cbe2c6efce94e010b4a6e7c282b2ba984b2b3fef90734b6c51cc06d7"}
Jan 21 11:00:33 crc kubenswrapper[4881]: I0121 11:00:33.468884 4881 status_manager.go:851] "Failed to get status for pod" podUID="1d66b837-f7b1-4795-895f-08cdabe48b37" pod="openshift-marketplace/redhat-marketplace-vljfh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-vljfh\": dial tcp 38.129.56.4:6443: connect: connection refused"
Jan 21 11:00:33 crc kubenswrapper[4881]: I0121 11:00:33.469184 4881 status_manager.go:851] "Failed to get status for pod" podUID="d318e830-067f-4722-9d74-a45fcefc939d" pod="openshift-marketplace/redhat-operators-kfmhs" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-kfmhs\": dial tcp 38.129.56.4:6443: connect: connection refused"
Jan 21 11:00:33 crc kubenswrapper[4881]: I0121 11:00:33.469453 4881 status_manager.go:851] "Failed to get status for pod" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-daemon-fb4fr\": dial tcp 38.129.56.4:6443: connect: connection refused"
Jan 21 11:00:33 crc kubenswrapper[4881]: I0121 11:00:33.469695 4881 status_manager.go:851] "Failed to get status for pod" podUID="8e002e57-13ab-477a-9e16-980e13b5e47f" pod="openshift-marketplace/certified-operators-q6dn5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-q6dn5\": dial tcp 38.129.56.4:6443: connect: connection refused"
Jan 21 11:00:33 crc kubenswrapper[4881]: I0121 11:00:33.469952 4881 status_manager.go:851] "Failed to get status for pod" podUID="5b12596d-1f5f-4d81-b664-d0ddee72552c" pod="openshift-marketplace/certified-operators-2sqlm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-2sqlm\": dial tcp 38.129.56.4:6443: connect: connection refused"
Jan 21 11:00:33 crc kubenswrapper[4881]: I0121 11:00:33.470331 4881 status_manager.go:851] "Failed to get status for pod" podUID="b83e71f8-970c-4afc-ac31-264c7ca6625a" pod="openshift-marketplace/redhat-operators-t4zlb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-t4zlb\": dial tcp 38.129.56.4:6443: connect: connection refused"
Jan 21 11:00:33 crc kubenswrapper[4881]: I0121 11:00:33.470538 4881 status_manager.go:851] "Failed to get status for pod" podUID="41bc4c78-71b2-4ca1-b593-410715cb877b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.4:6443: connect: connection refused"
Jan 21 11:00:33 crc kubenswrapper[4881]: I0121 11:00:33.470691 4881 status_manager.go:851] "Failed to get status for pod" podUID="2c460bf5-05a1-4977-b889-1a5c3263df33" pod="openshift-marketplace/community-operators-6rmvm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-6rmvm\": dial tcp 38.129.56.4:6443: connect: connection refused"
Jan 21 11:00:33 crc kubenswrapper[4881]: I0121 11:00:33.470870 4881 status_manager.go:851] "Failed to get status for pod" podUID="075db786-6ad0-4982-b70e-bd05d4f240ec" pod="openshift-marketplace/redhat-marketplace-89m75" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-89m75\": dial tcp 38.129.56.4:6443: connect: connection refused"
Jan 21 11:00:33 crc kubenswrapper[4881]: E0121 11:00:33.848040 4881 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/events\": dial tcp 38.129.56.4:6443: connect: connection refused" event="&Event{ObjectMeta:{redhat-marketplace-89m75.188cb9f7888c87eb openshift-marketplace 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-marketplace,Name:redhat-marketplace-89m75,UID:075db786-6ad0-4982-b70e-bd05d4f240ec,APIVersion:v1,ResourceVersion:28589,FieldPath:spec.containers{registry-server},},Reason:Created,Message:Created container registry-server,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 11:00:26.448734187 +0000 UTC m=+213.708690656,LastTimestamp:2026-01-21 11:00:26.448734187 +0000 UTC m=+213.708690656,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 21 11:00:35 crc kubenswrapper[4881]: E0121 11:00:35.507877 4881 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.4:6443: connect: connection refused" interval="6.4s"
Jan 21 11:00:37 crc kubenswrapper[4881]: I0121 11:00:37.196275 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-q6dn5"
Jan 21 11:00:37 crc kubenswrapper[4881]: I0121 11:00:37.196339 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-q6dn5"
pod="openshift-marketplace/certified-operators-q6dn5" Jan 21 11:00:37 crc kubenswrapper[4881]: I0121 11:00:37.276091 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-q6dn5" Jan 21 11:00:37 crc kubenswrapper[4881]: I0121 11:00:37.276949 4881 status_manager.go:851] "Failed to get status for pod" podUID="075db786-6ad0-4982-b70e-bd05d4f240ec" pod="openshift-marketplace/redhat-marketplace-89m75" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-89m75\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:37 crc kubenswrapper[4881]: I0121 11:00:37.277541 4881 status_manager.go:851] "Failed to get status for pod" podUID="1d66b837-f7b1-4795-895f-08cdabe48b37" pod="openshift-marketplace/redhat-marketplace-vljfh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-vljfh\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:37 crc kubenswrapper[4881]: I0121 11:00:37.278012 4881 status_manager.go:851] "Failed to get status for pod" podUID="d318e830-067f-4722-9d74-a45fcefc939d" pod="openshift-marketplace/redhat-operators-kfmhs" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-kfmhs\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:37 crc kubenswrapper[4881]: I0121 11:00:37.278382 4881 status_manager.go:851] "Failed to get status for pod" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-daemon-fb4fr\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:37 crc kubenswrapper[4881]: I0121 11:00:37.278837 4881 status_manager.go:851] "Failed to get status for pod" podUID="8e002e57-13ab-477a-9e16-980e13b5e47f" pod="openshift-marketplace/certified-operators-q6dn5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-q6dn5\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:37 crc kubenswrapper[4881]: I0121 11:00:37.279237 4881 status_manager.go:851] "Failed to get status for pod" podUID="5b12596d-1f5f-4d81-b664-d0ddee72552c" pod="openshift-marketplace/certified-operators-2sqlm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-2sqlm\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:37 crc kubenswrapper[4881]: I0121 11:00:37.279849 4881 status_manager.go:851] "Failed to get status for pod" podUID="41bc4c78-71b2-4ca1-b593-410715cb877b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:37 crc kubenswrapper[4881]: I0121 11:00:37.280337 4881 status_manager.go:851] "Failed to get status for pod" podUID="b83e71f8-970c-4afc-ac31-264c7ca6625a" pod="openshift-marketplace/redhat-operators-t4zlb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-t4zlb\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:37 crc kubenswrapper[4881]: I0121 11:00:37.280838 4881 status_manager.go:851] "Failed to get status for pod" 
podUID="2c460bf5-05a1-4977-b889-1a5c3263df33" pod="openshift-marketplace/community-operators-6rmvm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-6rmvm\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:37 crc kubenswrapper[4881]: I0121 11:00:37.571332 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-q6dn5" Jan 21 11:00:37 crc kubenswrapper[4881]: I0121 11:00:37.571988 4881 status_manager.go:851] "Failed to get status for pod" podUID="1d66b837-f7b1-4795-895f-08cdabe48b37" pod="openshift-marketplace/redhat-marketplace-vljfh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-vljfh\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:37 crc kubenswrapper[4881]: I0121 11:00:37.572635 4881 status_manager.go:851] "Failed to get status for pod" podUID="d318e830-067f-4722-9d74-a45fcefc939d" pod="openshift-marketplace/redhat-operators-kfmhs" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-kfmhs\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:37 crc kubenswrapper[4881]: I0121 11:00:37.573389 4881 status_manager.go:851] "Failed to get status for pod" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-daemon-fb4fr\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:37 crc kubenswrapper[4881]: I0121 11:00:37.573711 4881 status_manager.go:851] "Failed to get status for pod" podUID="8e002e57-13ab-477a-9e16-980e13b5e47f" pod="openshift-marketplace/certified-operators-q6dn5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-q6dn5\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:37 crc kubenswrapper[4881]: I0121 11:00:37.574079 4881 status_manager.go:851] "Failed to get status for pod" podUID="5b12596d-1f5f-4d81-b664-d0ddee72552c" pod="openshift-marketplace/certified-operators-2sqlm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-2sqlm\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:37 crc kubenswrapper[4881]: I0121 11:00:37.574403 4881 status_manager.go:851] "Failed to get status for pod" podUID="b83e71f8-970c-4afc-ac31-264c7ca6625a" pod="openshift-marketplace/redhat-operators-t4zlb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-t4zlb\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:37 crc kubenswrapper[4881]: I0121 11:00:37.574715 4881 status_manager.go:851] "Failed to get status for pod" podUID="41bc4c78-71b2-4ca1-b593-410715cb877b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:37 crc kubenswrapper[4881]: I0121 11:00:37.575086 4881 status_manager.go:851] "Failed to get status for pod" podUID="2c460bf5-05a1-4977-b889-1a5c3263df33" pod="openshift-marketplace/community-operators-6rmvm" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-6rmvm\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:37 crc kubenswrapper[4881]: I0121 11:00:37.575427 4881 status_manager.go:851] "Failed to get status for pod" podUID="075db786-6ad0-4982-b70e-bd05d4f240ec" pod="openshift-marketplace/redhat-marketplace-89m75" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-89m75\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:37 crc kubenswrapper[4881]: I0121 11:00:37.728388 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-2sqlm" Jan 21 11:00:37 crc kubenswrapper[4881]: I0121 11:00:37.728466 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-2sqlm" Jan 21 11:00:37 crc kubenswrapper[4881]: I0121 11:00:37.779838 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-2sqlm" Jan 21 11:00:37 crc kubenswrapper[4881]: I0121 11:00:37.780508 4881 status_manager.go:851] "Failed to get status for pod" podUID="5b12596d-1f5f-4d81-b664-d0ddee72552c" pod="openshift-marketplace/certified-operators-2sqlm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-2sqlm\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:37 crc kubenswrapper[4881]: I0121 11:00:37.780998 4881 status_manager.go:851] "Failed to get status for pod" podUID="b83e71f8-970c-4afc-ac31-264c7ca6625a" pod="openshift-marketplace/redhat-operators-t4zlb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-t4zlb\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:37 crc kubenswrapper[4881]: I0121 11:00:37.781339 4881 status_manager.go:851] "Failed to get status for pod" podUID="41bc4c78-71b2-4ca1-b593-410715cb877b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:37 crc kubenswrapper[4881]: I0121 11:00:37.781648 4881 status_manager.go:851] "Failed to get status for pod" podUID="2c460bf5-05a1-4977-b889-1a5c3263df33" pod="openshift-marketplace/community-operators-6rmvm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-6rmvm\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:37 crc kubenswrapper[4881]: I0121 11:00:37.781958 4881 status_manager.go:851] "Failed to get status for pod" podUID="075db786-6ad0-4982-b70e-bd05d4f240ec" pod="openshift-marketplace/redhat-marketplace-89m75" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-89m75\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:37 crc kubenswrapper[4881]: I0121 11:00:37.782326 4881 status_manager.go:851] "Failed to get status for pod" podUID="1d66b837-f7b1-4795-895f-08cdabe48b37" pod="openshift-marketplace/redhat-marketplace-vljfh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-vljfh\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:37 crc kubenswrapper[4881]: I0121 11:00:37.782741 4881 
status_manager.go:851] "Failed to get status for pod" podUID="d318e830-067f-4722-9d74-a45fcefc939d" pod="openshift-marketplace/redhat-operators-kfmhs" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-kfmhs\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:37 crc kubenswrapper[4881]: I0121 11:00:37.783085 4881 status_manager.go:851] "Failed to get status for pod" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-daemon-fb4fr\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:37 crc kubenswrapper[4881]: I0121 11:00:37.783339 4881 status_manager.go:851] "Failed to get status for pod" podUID="8e002e57-13ab-477a-9e16-980e13b5e47f" pod="openshift-marketplace/certified-operators-q6dn5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-q6dn5\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:37 crc kubenswrapper[4881]: I0121 11:00:37.856908 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-6rmvm" Jan 21 11:00:37 crc kubenswrapper[4881]: I0121 11:00:37.857400 4881 status_manager.go:851] "Failed to get status for pod" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-daemon-fb4fr\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:37 crc kubenswrapper[4881]: I0121 11:00:37.857678 4881 status_manager.go:851] "Failed to get status for pod" podUID="8e002e57-13ab-477a-9e16-980e13b5e47f" pod="openshift-marketplace/certified-operators-q6dn5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-q6dn5\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:37 crc kubenswrapper[4881]: I0121 11:00:37.858037 4881 status_manager.go:851] "Failed to get status for pod" podUID="5b12596d-1f5f-4d81-b664-d0ddee72552c" pod="openshift-marketplace/certified-operators-2sqlm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-2sqlm\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:37 crc kubenswrapper[4881]: I0121 11:00:37.858325 4881 status_manager.go:851] "Failed to get status for pod" podUID="b83e71f8-970c-4afc-ac31-264c7ca6625a" pod="openshift-marketplace/redhat-operators-t4zlb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-t4zlb\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:37 crc kubenswrapper[4881]: I0121 11:00:37.858616 4881 status_manager.go:851] "Failed to get status for pod" podUID="41bc4c78-71b2-4ca1-b593-410715cb877b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:37 crc kubenswrapper[4881]: I0121 11:00:37.858882 4881 status_manager.go:851] "Failed to get status for pod" podUID="2c460bf5-05a1-4977-b889-1a5c3263df33" pod="openshift-marketplace/community-operators-6rmvm" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-6rmvm\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:37 crc kubenswrapper[4881]: I0121 11:00:37.859132 4881 status_manager.go:851] "Failed to get status for pod" podUID="075db786-6ad0-4982-b70e-bd05d4f240ec" pod="openshift-marketplace/redhat-marketplace-89m75" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-89m75\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:37 crc kubenswrapper[4881]: I0121 11:00:37.859399 4881 status_manager.go:851] "Failed to get status for pod" podUID="1d66b837-f7b1-4795-895f-08cdabe48b37" pod="openshift-marketplace/redhat-marketplace-vljfh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-vljfh\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:37 crc kubenswrapper[4881]: I0121 11:00:37.859640 4881 status_manager.go:851] "Failed to get status for pod" podUID="d318e830-067f-4722-9d74-a45fcefc939d" pod="openshift-marketplace/redhat-operators-kfmhs" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-kfmhs\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:38 crc kubenswrapper[4881]: I0121 11:00:38.536974 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 21 11:00:38 crc kubenswrapper[4881]: I0121 11:00:38.537036 4881 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="d93b0b0bb02293efe7c01a9dbc64ff302a7b9fab07a2fe9bef5d0c2b5e8ac30e" exitCode=1 Jan 21 11:00:38 crc kubenswrapper[4881]: I0121 11:00:38.537141 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"d93b0b0bb02293efe7c01a9dbc64ff302a7b9fab07a2fe9bef5d0c2b5e8ac30e"} Jan 21 11:00:38 crc kubenswrapper[4881]: I0121 11:00:38.537907 4881 scope.go:117] "RemoveContainer" containerID="d93b0b0bb02293efe7c01a9dbc64ff302a7b9fab07a2fe9bef5d0c2b5e8ac30e" Jan 21 11:00:38 crc kubenswrapper[4881]: I0121 11:00:38.538635 4881 status_manager.go:851] "Failed to get status for pod" podUID="2c460bf5-05a1-4977-b889-1a5c3263df33" pod="openshift-marketplace/community-operators-6rmvm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-6rmvm\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:38 crc kubenswrapper[4881]: I0121 11:00:38.540182 4881 status_manager.go:851] "Failed to get status for pod" podUID="075db786-6ad0-4982-b70e-bd05d4f240ec" pod="openshift-marketplace/redhat-marketplace-89m75" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-89m75\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:38 crc kubenswrapper[4881]: I0121 11:00:38.540737 4881 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.129.56.4:6443: connect: 
connection refused" Jan 21 11:00:38 crc kubenswrapper[4881]: I0121 11:00:38.541163 4881 status_manager.go:851] "Failed to get status for pod" podUID="1d66b837-f7b1-4795-895f-08cdabe48b37" pod="openshift-marketplace/redhat-marketplace-vljfh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-vljfh\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:38 crc kubenswrapper[4881]: I0121 11:00:38.542045 4881 status_manager.go:851] "Failed to get status for pod" podUID="d318e830-067f-4722-9d74-a45fcefc939d" pod="openshift-marketplace/redhat-operators-kfmhs" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-kfmhs\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:38 crc kubenswrapper[4881]: I0121 11:00:38.542228 4881 status_manager.go:851] "Failed to get status for pod" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-daemon-fb4fr\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:38 crc kubenswrapper[4881]: I0121 11:00:38.542395 4881 status_manager.go:851] "Failed to get status for pod" podUID="8e002e57-13ab-477a-9e16-980e13b5e47f" pod="openshift-marketplace/certified-operators-q6dn5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-q6dn5\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:38 crc kubenswrapper[4881]: I0121 11:00:38.542559 4881 status_manager.go:851] "Failed to get status for pod" podUID="5b12596d-1f5f-4d81-b664-d0ddee72552c" pod="openshift-marketplace/certified-operators-2sqlm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-2sqlm\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:38 crc kubenswrapper[4881]: I0121 11:00:38.546009 4881 status_manager.go:851] "Failed to get status for pod" podUID="41bc4c78-71b2-4ca1-b593-410715cb877b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:38 crc kubenswrapper[4881]: I0121 11:00:38.546630 4881 status_manager.go:851] "Failed to get status for pod" podUID="b83e71f8-970c-4afc-ac31-264c7ca6625a" pod="openshift-marketplace/redhat-operators-t4zlb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-t4zlb\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:38 crc kubenswrapper[4881]: I0121 11:00:38.604580 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-2sqlm" Jan 21 11:00:38 crc kubenswrapper[4881]: I0121 11:00:38.605559 4881 status_manager.go:851] "Failed to get status for pod" podUID="b83e71f8-970c-4afc-ac31-264c7ca6625a" pod="openshift-marketplace/redhat-operators-t4zlb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-t4zlb\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:38 crc kubenswrapper[4881]: I0121 11:00:38.605981 4881 status_manager.go:851] "Failed to get status for pod" podUID="41bc4c78-71b2-4ca1-b593-410715cb877b" 
pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:38 crc kubenswrapper[4881]: I0121 11:00:38.606150 4881 status_manager.go:851] "Failed to get status for pod" podUID="2c460bf5-05a1-4977-b889-1a5c3263df33" pod="openshift-marketplace/community-operators-6rmvm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-6rmvm\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:38 crc kubenswrapper[4881]: I0121 11:00:38.606305 4881 status_manager.go:851] "Failed to get status for pod" podUID="075db786-6ad0-4982-b70e-bd05d4f240ec" pod="openshift-marketplace/redhat-marketplace-89m75" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-89m75\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:38 crc kubenswrapper[4881]: I0121 11:00:38.606464 4881 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:38 crc kubenswrapper[4881]: I0121 11:00:38.606735 4881 status_manager.go:851] "Failed to get status for pod" podUID="1d66b837-f7b1-4795-895f-08cdabe48b37" pod="openshift-marketplace/redhat-marketplace-vljfh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-vljfh\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:38 crc kubenswrapper[4881]: I0121 11:00:38.606916 4881 status_manager.go:851] "Failed to get status for pod" podUID="d318e830-067f-4722-9d74-a45fcefc939d" pod="openshift-marketplace/redhat-operators-kfmhs" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-kfmhs\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:38 crc kubenswrapper[4881]: I0121 11:00:38.607083 4881 status_manager.go:851] "Failed to get status for pod" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-daemon-fb4fr\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:38 crc kubenswrapper[4881]: I0121 11:00:38.607248 4881 status_manager.go:851] "Failed to get status for pod" podUID="8e002e57-13ab-477a-9e16-980e13b5e47f" pod="openshift-marketplace/certified-operators-q6dn5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-q6dn5\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:38 crc kubenswrapper[4881]: I0121 11:00:38.607411 4881 status_manager.go:851] "Failed to get status for pod" podUID="5b12596d-1f5f-4d81-b664-d0ddee72552c" pod="openshift-marketplace/certified-operators-2sqlm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-2sqlm\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:38 crc kubenswrapper[4881]: I0121 11:00:38.854207 4881 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" 
status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 11:00:39 crc kubenswrapper[4881]: I0121 11:00:39.547961 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 21 11:00:39 crc kubenswrapper[4881]: I0121 11:00:39.548900 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"ff33174746d19460aab25278d732a07a6255013c7f12e5755802d92014fc940a"} Jan 21 11:00:39 crc kubenswrapper[4881]: I0121 11:00:39.550404 4881 status_manager.go:851] "Failed to get status for pod" podUID="2c460bf5-05a1-4977-b889-1a5c3263df33" pod="openshift-marketplace/community-operators-6rmvm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-6rmvm\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:39 crc kubenswrapper[4881]: I0121 11:00:39.550935 4881 status_manager.go:851] "Failed to get status for pod" podUID="075db786-6ad0-4982-b70e-bd05d4f240ec" pod="openshift-marketplace/redhat-marketplace-89m75" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-89m75\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:39 crc kubenswrapper[4881]: I0121 11:00:39.551455 4881 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:39 crc kubenswrapper[4881]: I0121 11:00:39.551984 4881 status_manager.go:851] "Failed to get status for pod" podUID="1d66b837-f7b1-4795-895f-08cdabe48b37" pod="openshift-marketplace/redhat-marketplace-vljfh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-vljfh\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:39 crc kubenswrapper[4881]: I0121 11:00:39.552215 4881 status_manager.go:851] "Failed to get status for pod" podUID="d318e830-067f-4722-9d74-a45fcefc939d" pod="openshift-marketplace/redhat-operators-kfmhs" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-kfmhs\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:39 crc kubenswrapper[4881]: I0121 11:00:39.552486 4881 status_manager.go:851] "Failed to get status for pod" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-daemon-fb4fr\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:39 crc kubenswrapper[4881]: I0121 11:00:39.552825 4881 status_manager.go:851] "Failed to get status for pod" podUID="8e002e57-13ab-477a-9e16-980e13b5e47f" pod="openshift-marketplace/certified-operators-q6dn5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-q6dn5\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:39 crc kubenswrapper[4881]: I0121 11:00:39.553227 
4881 status_manager.go:851] "Failed to get status for pod" podUID="5b12596d-1f5f-4d81-b664-d0ddee72552c" pod="openshift-marketplace/certified-operators-2sqlm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-2sqlm\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:39 crc kubenswrapper[4881]: I0121 11:00:39.553644 4881 status_manager.go:851] "Failed to get status for pod" podUID="b83e71f8-970c-4afc-ac31-264c7ca6625a" pod="openshift-marketplace/redhat-operators-t4zlb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-t4zlb\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:39 crc kubenswrapper[4881]: I0121 11:00:39.553979 4881 status_manager.go:851] "Failed to get status for pod" podUID="41bc4c78-71b2-4ca1-b593-410715cb877b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:39 crc kubenswrapper[4881]: I0121 11:00:39.741037 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-89m75" Jan 21 11:00:39 crc kubenswrapper[4881]: I0121 11:00:39.742289 4881 status_manager.go:851] "Failed to get status for pod" podUID="5b12596d-1f5f-4d81-b664-d0ddee72552c" pod="openshift-marketplace/certified-operators-2sqlm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-2sqlm\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:39 crc kubenswrapper[4881]: I0121 11:00:39.742875 4881 status_manager.go:851] "Failed to get status for pod" podUID="b83e71f8-970c-4afc-ac31-264c7ca6625a" pod="openshift-marketplace/redhat-operators-t4zlb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-t4zlb\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:39 crc kubenswrapper[4881]: I0121 11:00:39.743415 4881 status_manager.go:851] "Failed to get status for pod" podUID="41bc4c78-71b2-4ca1-b593-410715cb877b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:39 crc kubenswrapper[4881]: I0121 11:00:39.744087 4881 status_manager.go:851] "Failed to get status for pod" podUID="2c460bf5-05a1-4977-b889-1a5c3263df33" pod="openshift-marketplace/community-operators-6rmvm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-6rmvm\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:39 crc kubenswrapper[4881]: I0121 11:00:39.744623 4881 status_manager.go:851] "Failed to get status for pod" podUID="075db786-6ad0-4982-b70e-bd05d4f240ec" pod="openshift-marketplace/redhat-marketplace-89m75" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-89m75\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:39 crc kubenswrapper[4881]: I0121 11:00:39.744994 4881 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:39 crc kubenswrapper[4881]: I0121 11:00:39.745436 4881 status_manager.go:851] "Failed to get status for pod" podUID="1d66b837-f7b1-4795-895f-08cdabe48b37" pod="openshift-marketplace/redhat-marketplace-vljfh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-vljfh\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:39 crc kubenswrapper[4881]: I0121 11:00:39.745753 4881 status_manager.go:851] "Failed to get status for pod" podUID="d318e830-067f-4722-9d74-a45fcefc939d" pod="openshift-marketplace/redhat-operators-kfmhs" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-kfmhs\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:39 crc kubenswrapper[4881]: I0121 11:00:39.746123 4881 status_manager.go:851] "Failed to get status for pod" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-daemon-fb4fr\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:39 crc kubenswrapper[4881]: I0121 11:00:39.746558 4881 status_manager.go:851] "Failed to get status for pod" podUID="8e002e57-13ab-477a-9e16-980e13b5e47f" pod="openshift-marketplace/certified-operators-q6dn5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-q6dn5\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:40 crc kubenswrapper[4881]: I0121 11:00:40.139259 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-vljfh" Jan 21 11:00:40 crc kubenswrapper[4881]: I0121 11:00:40.140165 4881 status_manager.go:851] "Failed to get status for pod" podUID="5b12596d-1f5f-4d81-b664-d0ddee72552c" pod="openshift-marketplace/certified-operators-2sqlm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-2sqlm\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:40 crc kubenswrapper[4881]: I0121 11:00:40.140976 4881 status_manager.go:851] "Failed to get status for pod" podUID="b83e71f8-970c-4afc-ac31-264c7ca6625a" pod="openshift-marketplace/redhat-operators-t4zlb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-t4zlb\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:40 crc kubenswrapper[4881]: I0121 11:00:40.141356 4881 status_manager.go:851] "Failed to get status for pod" podUID="41bc4c78-71b2-4ca1-b593-410715cb877b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:40 crc kubenswrapper[4881]: I0121 11:00:40.141879 4881 status_manager.go:851] "Failed to get status for pod" podUID="2c460bf5-05a1-4977-b889-1a5c3263df33" pod="openshift-marketplace/community-operators-6rmvm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-6rmvm\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:40 
crc kubenswrapper[4881]: I0121 11:00:40.142265 4881 status_manager.go:851] "Failed to get status for pod" podUID="075db786-6ad0-4982-b70e-bd05d4f240ec" pod="openshift-marketplace/redhat-marketplace-89m75" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-89m75\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:40 crc kubenswrapper[4881]: I0121 11:00:40.142646 4881 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:40 crc kubenswrapper[4881]: I0121 11:00:40.143151 4881 status_manager.go:851] "Failed to get status for pod" podUID="1d66b837-f7b1-4795-895f-08cdabe48b37" pod="openshift-marketplace/redhat-marketplace-vljfh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-vljfh\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:40 crc kubenswrapper[4881]: I0121 11:00:40.143518 4881 status_manager.go:851] "Failed to get status for pod" podUID="d318e830-067f-4722-9d74-a45fcefc939d" pod="openshift-marketplace/redhat-operators-kfmhs" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-kfmhs\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:40 crc kubenswrapper[4881]: I0121 11:00:40.143883 4881 status_manager.go:851] "Failed to get status for pod" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-daemon-fb4fr\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:40 crc kubenswrapper[4881]: I0121 11:00:40.144426 4881 status_manager.go:851] "Failed to get status for pod" podUID="8e002e57-13ab-477a-9e16-980e13b5e47f" pod="openshift-marketplace/certified-operators-q6dn5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-q6dn5\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:40 crc kubenswrapper[4881]: I0121 11:00:40.337096 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 11:00:40 crc kubenswrapper[4881]: I0121 11:00:40.338347 4881 status_manager.go:851] "Failed to get status for pod" podUID="075db786-6ad0-4982-b70e-bd05d4f240ec" pod="openshift-marketplace/redhat-marketplace-89m75" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-89m75\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:40 crc kubenswrapper[4881]: I0121 11:00:40.339082 4881 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:40 crc kubenswrapper[4881]: I0121 11:00:40.339619 4881 status_manager.go:851] "Failed to get status for pod" podUID="1d66b837-f7b1-4795-895f-08cdabe48b37" pod="openshift-marketplace/redhat-marketplace-vljfh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-vljfh\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:40 crc kubenswrapper[4881]: I0121 11:00:40.339955 4881 status_manager.go:851] "Failed to get status for pod" podUID="d318e830-067f-4722-9d74-a45fcefc939d" pod="openshift-marketplace/redhat-operators-kfmhs" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-kfmhs\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:40 crc kubenswrapper[4881]: I0121 11:00:40.340456 4881 status_manager.go:851] "Failed to get status for pod" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-daemon-fb4fr\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:40 crc kubenswrapper[4881]: I0121 11:00:40.341079 4881 status_manager.go:851] "Failed to get status for pod" podUID="8e002e57-13ab-477a-9e16-980e13b5e47f" pod="openshift-marketplace/certified-operators-q6dn5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-q6dn5\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:40 crc kubenswrapper[4881]: I0121 11:00:40.341490 4881 status_manager.go:851] "Failed to get status for pod" podUID="5b12596d-1f5f-4d81-b664-d0ddee72552c" pod="openshift-marketplace/certified-operators-2sqlm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-2sqlm\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:40 crc kubenswrapper[4881]: I0121 11:00:40.341914 4881 status_manager.go:851] "Failed to get status for pod" podUID="b83e71f8-970c-4afc-ac31-264c7ca6625a" pod="openshift-marketplace/redhat-operators-t4zlb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-t4zlb\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:40 crc kubenswrapper[4881]: I0121 11:00:40.342347 4881 status_manager.go:851] "Failed to get status for pod" podUID="41bc4c78-71b2-4ca1-b593-410715cb877b" pod="openshift-kube-apiserver/installer-9-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:40 crc kubenswrapper[4881]: I0121 11:00:40.342679 4881 status_manager.go:851] "Failed to get status for pod" podUID="2c460bf5-05a1-4977-b889-1a5c3263df33" pod="openshift-marketplace/community-operators-6rmvm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-6rmvm\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:40 crc kubenswrapper[4881]: I0121 11:00:40.355226 4881 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5da31bf1-60a6-4d73-a425-97fe36cd40ee" Jan 21 11:00:40 crc kubenswrapper[4881]: I0121 11:00:40.355289 4881 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5da31bf1-60a6-4d73-a425-97fe36cd40ee" Jan 21 11:00:40 crc kubenswrapper[4881]: E0121 11:00:40.355930 4881 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.4:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 11:00:40 crc kubenswrapper[4881]: I0121 11:00:40.356669 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 11:00:40 crc kubenswrapper[4881]: I0121 11:00:40.361766 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-kfmhs" Jan 21 11:00:40 crc kubenswrapper[4881]: I0121 11:00:40.362205 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-kfmhs" Jan 21 11:00:40 crc kubenswrapper[4881]: W0121 11:00:40.388658 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71bb4a3aecc4ba5b26c4b7318770ce13.slice/crio-2d2396e8c25911397601513a07678c8c6371d5854b6a02b5782353dc2e1e3ef8 WatchSource:0}: Error finding container 2d2396e8c25911397601513a07678c8c6371d5854b6a02b5782353dc2e1e3ef8: Status 404 returned error can't find the container with id 2d2396e8c25911397601513a07678c8c6371d5854b6a02b5782353dc2e1e3ef8 Jan 21 11:00:40 crc kubenswrapper[4881]: I0121 11:00:40.416897 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-kfmhs" Jan 21 11:00:40 crc kubenswrapper[4881]: I0121 11:00:40.417831 4881 status_manager.go:851] "Failed to get status for pod" podUID="075db786-6ad0-4982-b70e-bd05d4f240ec" pod="openshift-marketplace/redhat-marketplace-89m75" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-89m75\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:40 crc kubenswrapper[4881]: I0121 11:00:40.418611 4881 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:40 crc kubenswrapper[4881]: I0121 11:00:40.419298 4881 status_manager.go:851] "Failed to get status for pod" 
podUID="1d66b837-f7b1-4795-895f-08cdabe48b37" pod="openshift-marketplace/redhat-marketplace-vljfh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-vljfh\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:40 crc kubenswrapper[4881]: I0121 11:00:40.419687 4881 status_manager.go:851] "Failed to get status for pod" podUID="d318e830-067f-4722-9d74-a45fcefc939d" pod="openshift-marketplace/redhat-operators-kfmhs" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-kfmhs\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:40 crc kubenswrapper[4881]: I0121 11:00:40.420035 4881 status_manager.go:851] "Failed to get status for pod" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-daemon-fb4fr\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:40 crc kubenswrapper[4881]: I0121 11:00:40.420353 4881 status_manager.go:851] "Failed to get status for pod" podUID="8e002e57-13ab-477a-9e16-980e13b5e47f" pod="openshift-marketplace/certified-operators-q6dn5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-q6dn5\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:40 crc kubenswrapper[4881]: I0121 11:00:40.420635 4881 status_manager.go:851] "Failed to get status for pod" podUID="5b12596d-1f5f-4d81-b664-d0ddee72552c" pod="openshift-marketplace/certified-operators-2sqlm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-2sqlm\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:40 crc kubenswrapper[4881]: I0121 11:00:40.420963 4881 status_manager.go:851] "Failed to get status for pod" podUID="b83e71f8-970c-4afc-ac31-264c7ca6625a" pod="openshift-marketplace/redhat-operators-t4zlb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-t4zlb\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:40 crc kubenswrapper[4881]: I0121 11:00:40.421212 4881 status_manager.go:851] "Failed to get status for pod" podUID="41bc4c78-71b2-4ca1-b593-410715cb877b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:40 crc kubenswrapper[4881]: I0121 11:00:40.421559 4881 status_manager.go:851] "Failed to get status for pod" podUID="2c460bf5-05a1-4977-b889-1a5c3263df33" pod="openshift-marketplace/community-operators-6rmvm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-6rmvm\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:40 crc kubenswrapper[4881]: I0121 11:00:40.556612 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"2d2396e8c25911397601513a07678c8c6371d5854b6a02b5782353dc2e1e3ef8"} Jan 21 11:00:40 crc kubenswrapper[4881]: I0121 11:00:40.614534 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-kfmhs" Jan 
21 11:00:40 crc kubenswrapper[4881]: I0121 11:00:40.616096 4881 status_manager.go:851] "Failed to get status for pod" podUID="075db786-6ad0-4982-b70e-bd05d4f240ec" pod="openshift-marketplace/redhat-marketplace-89m75" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-89m75\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:40 crc kubenswrapper[4881]: I0121 11:00:40.616833 4881 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:40 crc kubenswrapper[4881]: I0121 11:00:40.617448 4881 status_manager.go:851] "Failed to get status for pod" podUID="1d66b837-f7b1-4795-895f-08cdabe48b37" pod="openshift-marketplace/redhat-marketplace-vljfh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-vljfh\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:40 crc kubenswrapper[4881]: I0121 11:00:40.617839 4881 status_manager.go:851] "Failed to get status for pod" podUID="d318e830-067f-4722-9d74-a45fcefc939d" pod="openshift-marketplace/redhat-operators-kfmhs" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-kfmhs\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:40 crc kubenswrapper[4881]: I0121 11:00:40.618182 4881 status_manager.go:851] "Failed to get status for pod" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-daemon-fb4fr\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:40 crc kubenswrapper[4881]: I0121 11:00:40.618535 4881 status_manager.go:851] "Failed to get status for pod" podUID="8e002e57-13ab-477a-9e16-980e13b5e47f" pod="openshift-marketplace/certified-operators-q6dn5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-q6dn5\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:40 crc kubenswrapper[4881]: I0121 11:00:40.618943 4881 status_manager.go:851] "Failed to get status for pod" podUID="5b12596d-1f5f-4d81-b664-d0ddee72552c" pod="openshift-marketplace/certified-operators-2sqlm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-2sqlm\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:40 crc kubenswrapper[4881]: I0121 11:00:40.619271 4881 status_manager.go:851] "Failed to get status for pod" podUID="41bc4c78-71b2-4ca1-b593-410715cb877b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:40 crc kubenswrapper[4881]: I0121 11:00:40.619511 4881 status_manager.go:851] "Failed to get status for pod" podUID="b83e71f8-970c-4afc-ac31-264c7ca6625a" pod="openshift-marketplace/redhat-operators-t4zlb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-t4zlb\": dial tcp 38.129.56.4:6443: 
connect: connection refused" Jan 21 11:00:40 crc kubenswrapper[4881]: I0121 11:00:40.619836 4881 status_manager.go:851] "Failed to get status for pod" podUID="2c460bf5-05a1-4977-b889-1a5c3263df33" pod="openshift-marketplace/community-operators-6rmvm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-6rmvm\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:40 crc kubenswrapper[4881]: I0121 11:00:40.773285 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-t4zlb" Jan 21 11:00:40 crc kubenswrapper[4881]: I0121 11:00:40.773342 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-t4zlb" Jan 21 11:00:40 crc kubenswrapper[4881]: I0121 11:00:40.815260 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-t4zlb" Jan 21 11:00:40 crc kubenswrapper[4881]: I0121 11:00:40.815814 4881 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:40 crc kubenswrapper[4881]: I0121 11:00:40.816075 4881 status_manager.go:851] "Failed to get status for pod" podUID="1d66b837-f7b1-4795-895f-08cdabe48b37" pod="openshift-marketplace/redhat-marketplace-vljfh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-vljfh\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:40 crc kubenswrapper[4881]: I0121 11:00:40.816329 4881 status_manager.go:851] "Failed to get status for pod" podUID="d318e830-067f-4722-9d74-a45fcefc939d" pod="openshift-marketplace/redhat-operators-kfmhs" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-kfmhs\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:40 crc kubenswrapper[4881]: I0121 11:00:40.816586 4881 status_manager.go:851] "Failed to get status for pod" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-daemon-fb4fr\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:40 crc kubenswrapper[4881]: I0121 11:00:40.816825 4881 status_manager.go:851] "Failed to get status for pod" podUID="8e002e57-13ab-477a-9e16-980e13b5e47f" pod="openshift-marketplace/certified-operators-q6dn5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-q6dn5\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:40 crc kubenswrapper[4881]: I0121 11:00:40.817054 4881 status_manager.go:851] "Failed to get status for pod" podUID="5b12596d-1f5f-4d81-b664-d0ddee72552c" pod="openshift-marketplace/certified-operators-2sqlm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-2sqlm\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:40 crc kubenswrapper[4881]: I0121 11:00:40.817236 4881 status_manager.go:851] "Failed to get status for pod" 
podUID="b83e71f8-970c-4afc-ac31-264c7ca6625a" pod="openshift-marketplace/redhat-operators-t4zlb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-t4zlb\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:40 crc kubenswrapper[4881]: I0121 11:00:40.817390 4881 status_manager.go:851] "Failed to get status for pod" podUID="41bc4c78-71b2-4ca1-b593-410715cb877b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:40 crc kubenswrapper[4881]: I0121 11:00:40.817541 4881 status_manager.go:851] "Failed to get status for pod" podUID="2c460bf5-05a1-4977-b889-1a5c3263df33" pod="openshift-marketplace/community-operators-6rmvm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-6rmvm\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:40 crc kubenswrapper[4881]: I0121 11:00:40.817734 4881 status_manager.go:851] "Failed to get status for pod" podUID="075db786-6ad0-4982-b70e-bd05d4f240ec" pod="openshift-marketplace/redhat-marketplace-89m75" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-89m75\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:41 crc kubenswrapper[4881]: I0121 11:00:41.234357 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 11:00:41 crc kubenswrapper[4881]: I0121 11:00:41.238842 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 11:00:41 crc kubenswrapper[4881]: I0121 11:00:41.239412 4881 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:41 crc kubenswrapper[4881]: I0121 11:00:41.239865 4881 status_manager.go:851] "Failed to get status for pod" podUID="1d66b837-f7b1-4795-895f-08cdabe48b37" pod="openshift-marketplace/redhat-marketplace-vljfh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-vljfh\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:41 crc kubenswrapper[4881]: I0121 11:00:41.240554 4881 status_manager.go:851] "Failed to get status for pod" podUID="d318e830-067f-4722-9d74-a45fcefc939d" pod="openshift-marketplace/redhat-operators-kfmhs" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-kfmhs\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:41 crc kubenswrapper[4881]: I0121 11:00:41.241105 4881 status_manager.go:851] "Failed to get status for pod" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-daemon-fb4fr\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:41 crc kubenswrapper[4881]: I0121 11:00:41.241426 4881 
status_manager.go:851] "Failed to get status for pod" podUID="8e002e57-13ab-477a-9e16-980e13b5e47f" pod="openshift-marketplace/certified-operators-q6dn5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-q6dn5\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:41 crc kubenswrapper[4881]: I0121 11:00:41.241860 4881 status_manager.go:851] "Failed to get status for pod" podUID="5b12596d-1f5f-4d81-b664-d0ddee72552c" pod="openshift-marketplace/certified-operators-2sqlm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-2sqlm\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:41 crc kubenswrapper[4881]: I0121 11:00:41.242373 4881 status_manager.go:851] "Failed to get status for pod" podUID="41bc4c78-71b2-4ca1-b593-410715cb877b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:41 crc kubenswrapper[4881]: I0121 11:00:41.242904 4881 status_manager.go:851] "Failed to get status for pod" podUID="b83e71f8-970c-4afc-ac31-264c7ca6625a" pod="openshift-marketplace/redhat-operators-t4zlb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-t4zlb\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:41 crc kubenswrapper[4881]: I0121 11:00:41.243369 4881 status_manager.go:851] "Failed to get status for pod" podUID="2c460bf5-05a1-4977-b889-1a5c3263df33" pod="openshift-marketplace/community-operators-6rmvm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-6rmvm\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:41 crc kubenswrapper[4881]: I0121 11:00:41.243621 4881 status_manager.go:851] "Failed to get status for pod" podUID="075db786-6ad0-4982-b70e-bd05d4f240ec" pod="openshift-marketplace/redhat-marketplace-89m75" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-89m75\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:41 crc kubenswrapper[4881]: I0121 11:00:41.566423 4881 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="4cc224efcd44cd97aee734ee43bb83e308c8aa758eb86919b437e9cb332377ca" exitCode=0 Jan 21 11:00:41 crc kubenswrapper[4881]: I0121 11:00:41.566562 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"4cc224efcd44cd97aee734ee43bb83e308c8aa758eb86919b437e9cb332377ca"} Jan 21 11:00:41 crc kubenswrapper[4881]: I0121 11:00:41.566824 4881 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5da31bf1-60a6-4d73-a425-97fe36cd40ee" Jan 21 11:00:41 crc kubenswrapper[4881]: I0121 11:00:41.567165 4881 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5da31bf1-60a6-4d73-a425-97fe36cd40ee" Jan 21 11:00:41 crc kubenswrapper[4881]: I0121 11:00:41.567514 4881 status_manager.go:851] "Failed to get status for pod" podUID="5b12596d-1f5f-4d81-b664-d0ddee72552c" pod="openshift-marketplace/certified-operators-2sqlm" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-2sqlm\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:41 crc kubenswrapper[4881]: I0121 11:00:41.567801 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 11:00:41 crc kubenswrapper[4881]: I0121 11:00:41.567935 4881 status_manager.go:851] "Failed to get status for pod" podUID="b83e71f8-970c-4afc-ac31-264c7ca6625a" pod="openshift-marketplace/redhat-operators-t4zlb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-t4zlb\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:41 crc kubenswrapper[4881]: E0121 11:00:41.567957 4881 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.4:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 11:00:41 crc kubenswrapper[4881]: I0121 11:00:41.568238 4881 status_manager.go:851] "Failed to get status for pod" podUID="41bc4c78-71b2-4ca1-b593-410715cb877b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:41 crc kubenswrapper[4881]: I0121 11:00:41.568688 4881 status_manager.go:851] "Failed to get status for pod" podUID="2c460bf5-05a1-4977-b889-1a5c3263df33" pod="openshift-marketplace/community-operators-6rmvm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-6rmvm\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:41 crc kubenswrapper[4881]: I0121 11:00:41.570388 4881 status_manager.go:851] "Failed to get status for pod" podUID="075db786-6ad0-4982-b70e-bd05d4f240ec" pod="openshift-marketplace/redhat-marketplace-89m75" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-89m75\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:41 crc kubenswrapper[4881]: I0121 11:00:41.570811 4881 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:41 crc kubenswrapper[4881]: I0121 11:00:41.571147 4881 status_manager.go:851] "Failed to get status for pod" podUID="1d66b837-f7b1-4795-895f-08cdabe48b37" pod="openshift-marketplace/redhat-marketplace-vljfh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-vljfh\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:41 crc kubenswrapper[4881]: I0121 11:00:41.571483 4881 status_manager.go:851] "Failed to get status for pod" podUID="d318e830-067f-4722-9d74-a45fcefc939d" pod="openshift-marketplace/redhat-operators-kfmhs" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-kfmhs\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:41 crc kubenswrapper[4881]: I0121 11:00:41.573267 4881 
status_manager.go:851] "Failed to get status for pod" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-daemon-fb4fr\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:41 crc kubenswrapper[4881]: I0121 11:00:41.573949 4881 status_manager.go:851] "Failed to get status for pod" podUID="8e002e57-13ab-477a-9e16-980e13b5e47f" pod="openshift-marketplace/certified-operators-q6dn5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-q6dn5\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:41 crc kubenswrapper[4881]: I0121 11:00:41.621946 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-t4zlb" Jan 21 11:00:41 crc kubenswrapper[4881]: I0121 11:00:41.622892 4881 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:41 crc kubenswrapper[4881]: I0121 11:00:41.623188 4881 status_manager.go:851] "Failed to get status for pod" podUID="1d66b837-f7b1-4795-895f-08cdabe48b37" pod="openshift-marketplace/redhat-marketplace-vljfh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-vljfh\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:41 crc kubenswrapper[4881]: I0121 11:00:41.623420 4881 status_manager.go:851] "Failed to get status for pod" podUID="d318e830-067f-4722-9d74-a45fcefc939d" pod="openshift-marketplace/redhat-operators-kfmhs" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-kfmhs\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:41 crc kubenswrapper[4881]: I0121 11:00:41.623935 4881 status_manager.go:851] "Failed to get status for pod" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-daemon-fb4fr\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:41 crc kubenswrapper[4881]: I0121 11:00:41.624396 4881 status_manager.go:851] "Failed to get status for pod" podUID="8e002e57-13ab-477a-9e16-980e13b5e47f" pod="openshift-marketplace/certified-operators-q6dn5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-q6dn5\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:41 crc kubenswrapper[4881]: I0121 11:00:41.624585 4881 status_manager.go:851] "Failed to get status for pod" podUID="5b12596d-1f5f-4d81-b664-d0ddee72552c" pod="openshift-marketplace/certified-operators-2sqlm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-2sqlm\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:41 crc kubenswrapper[4881]: I0121 11:00:41.625088 4881 status_manager.go:851] "Failed to get status for pod" podUID="b83e71f8-970c-4afc-ac31-264c7ca6625a" 
pod="openshift-marketplace/redhat-operators-t4zlb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-t4zlb\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:41 crc kubenswrapper[4881]: I0121 11:00:41.625778 4881 status_manager.go:851] "Failed to get status for pod" podUID="41bc4c78-71b2-4ca1-b593-410715cb877b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:41 crc kubenswrapper[4881]: I0121 11:00:41.626112 4881 status_manager.go:851] "Failed to get status for pod" podUID="2c460bf5-05a1-4977-b889-1a5c3263df33" pod="openshift-marketplace/community-operators-6rmvm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-6rmvm\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:41 crc kubenswrapper[4881]: I0121 11:00:41.626362 4881 status_manager.go:851] "Failed to get status for pod" podUID="075db786-6ad0-4982-b70e-bd05d4f240ec" pod="openshift-marketplace/redhat-marketplace-89m75" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-89m75\": dial tcp 38.129.56.4:6443: connect: connection refused" Jan 21 11:00:41 crc kubenswrapper[4881]: E0121 11:00:41.909569 4881 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.4:6443: connect: connection refused" interval="7s" Jan 21 11:00:42 crc kubenswrapper[4881]: I0121 11:00:42.637312 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"3d6e82a5b7cad5bf1a2142628cbfd847c7527dc87d02df8c818b477e8186e80c"} Jan 21 11:00:42 crc kubenswrapper[4881]: I0121 11:00:42.637853 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"fa8c66424805081402cbf09b76ebe7eb1b727c9472e19926a74b709f32df256c"} Jan 21 11:00:42 crc kubenswrapper[4881]: I0121 11:00:42.637869 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"f88eb7e3a7828df105488abc11b051b98ec4a3a8ce36cafb8ea569b3d9737c7c"} Jan 21 11:00:43 crc kubenswrapper[4881]: I0121 11:00:43.647766 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"10b322b02ba88fb1e74f4c96ac00898962f9b10f9ead20dac706f7e28969eb29"} Jan 21 11:00:43 crc kubenswrapper[4881]: I0121 11:00:43.648323 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 11:00:43 crc kubenswrapper[4881]: I0121 11:00:43.648340 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"5adc74d07f5fb13d0f83706dc0ab5eff934c025860980485fb0100f977921a27"} Jan 21 11:00:43 crc kubenswrapper[4881]: I0121 11:00:43.648164 4881 
kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5da31bf1-60a6-4d73-a425-97fe36cd40ee" Jan 21 11:00:43 crc kubenswrapper[4881]: I0121 11:00:43.648369 4881 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5da31bf1-60a6-4d73-a425-97fe36cd40ee" Jan 21 11:00:44 crc kubenswrapper[4881]: I0121 11:00:44.463937 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-whh46" podUID="2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad" containerName="oauth-openshift" containerID="cri-o://35ce5ecabc873c14d35cf37aa4dd5c20723f513985dbc4caa43cffafe43e41fe" gracePeriod=15 Jan 21 11:00:45 crc kubenswrapper[4881]: I0121 11:00:45.357913 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 11:00:45 crc kubenswrapper[4881]: I0121 11:00:45.358928 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 11:00:45 crc kubenswrapper[4881]: I0121 11:00:45.363997 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 11:00:45 crc kubenswrapper[4881]: I0121 11:00:45.368387 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-whh46" Jan 21 11:00:45 crc kubenswrapper[4881]: I0121 11:00:45.513237 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad-v4-0-config-system-session\") pod \"2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad\" (UID: \"2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad\") " Jan 21 11:00:45 crc kubenswrapper[4881]: I0121 11:00:45.513337 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad-v4-0-config-system-router-certs\") pod \"2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad\" (UID: \"2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad\") " Jan 21 11:00:45 crc kubenswrapper[4881]: I0121 11:00:45.513395 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad-v4-0-config-system-service-ca\") pod \"2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad\" (UID: \"2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad\") " Jan 21 11:00:45 crc kubenswrapper[4881]: I0121 11:00:45.513479 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad-v4-0-config-user-template-error\") pod \"2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad\" (UID: \"2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad\") " Jan 21 11:00:45 crc kubenswrapper[4881]: I0121 11:00:45.513522 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad-audit-policies\") pod \"2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad\" (UID: \"2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad\") " Jan 21 11:00:45 crc kubenswrapper[4881]: I0121 11:00:45.513560 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad-v4-0-config-user-template-provider-selection\") pod \"2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad\" (UID: \"2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad\") " Jan 21 11:00:45 crc kubenswrapper[4881]: I0121 11:00:45.513598 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lbqhc\" (UniqueName: \"kubernetes.io/projected/2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad-kube-api-access-lbqhc\") pod \"2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad\" (UID: \"2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad\") " Jan 21 11:00:45 crc kubenswrapper[4881]: I0121 11:00:45.513655 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad-v4-0-config-user-idp-0-file-data\") pod \"2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad\" (UID: \"2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad\") " Jan 21 11:00:45 crc kubenswrapper[4881]: I0121 11:00:45.513720 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad-v4-0-config-system-serving-cert\") pod \"2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad\" (UID: \"2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad\") " Jan 21 11:00:45 crc kubenswrapper[4881]: I0121 11:00:45.513754 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad-audit-dir\") pod \"2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad\" (UID: \"2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad\") " Jan 21 11:00:45 crc kubenswrapper[4881]: I0121 11:00:45.513814 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad-v4-0-config-system-ocp-branding-template\") pod \"2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad\" (UID: \"2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad\") " Jan 21 11:00:45 crc kubenswrapper[4881]: I0121 11:00:45.513871 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad-v4-0-config-system-cliconfig\") pod \"2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad\" (UID: \"2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad\") " Jan 21 11:00:45 crc kubenswrapper[4881]: I0121 11:00:45.513933 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad-v4-0-config-user-template-login\") pod \"2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad\" (UID: \"2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad\") " Jan 21 11:00:45 crc kubenswrapper[4881]: I0121 11:00:45.513965 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad-v4-0-config-system-trusted-ca-bundle\") pod \"2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad\" (UID: \"2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad\") " Jan 21 11:00:45 crc kubenswrapper[4881]: I0121 11:00:45.514422 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad-v4-0-config-system-service-ca" 
(OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad" (UID: "2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:00:45 crc kubenswrapper[4881]: I0121 11:00:45.514510 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad" (UID: "2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 11:00:45 crc kubenswrapper[4881]: I0121 11:00:45.514975 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad" (UID: "2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:00:45 crc kubenswrapper[4881]: I0121 11:00:45.515543 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad" (UID: "2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:00:45 crc kubenswrapper[4881]: I0121 11:00:45.516136 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad" (UID: "2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:00:45 crc kubenswrapper[4881]: I0121 11:00:45.522286 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad" (UID: "2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:00:45 crc kubenswrapper[4881]: I0121 11:00:45.523963 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad" (UID: "2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:00:45 crc kubenswrapper[4881]: I0121 11:00:45.524611 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad-kube-api-access-lbqhc" (OuterVolumeSpecName: "kube-api-access-lbqhc") pod "2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad" (UID: "2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad"). InnerVolumeSpecName "kube-api-access-lbqhc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:00:45 crc kubenswrapper[4881]: I0121 11:00:45.524617 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad" (UID: "2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:00:45 crc kubenswrapper[4881]: I0121 11:00:45.525158 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad" (UID: "2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:00:45 crc kubenswrapper[4881]: I0121 11:00:45.525546 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad" (UID: "2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:00:45 crc kubenswrapper[4881]: I0121 11:00:45.525728 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad" (UID: "2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:00:45 crc kubenswrapper[4881]: I0121 11:00:45.526335 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad" (UID: "2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:00:45 crc kubenswrapper[4881]: I0121 11:00:45.527459 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad" (UID: "2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:00:45 crc kubenswrapper[4881]: I0121 11:00:45.615506 4881 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 21 11:00:45 crc kubenswrapper[4881]: I0121 11:00:45.615585 4881 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 21 11:00:45 crc kubenswrapper[4881]: I0121 11:00:45.615605 4881 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 21 11:00:45 crc kubenswrapper[4881]: I0121 11:00:45.615626 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lbqhc\" (UniqueName: \"kubernetes.io/projected/2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad-kube-api-access-lbqhc\") on node \"crc\" DevicePath \"\"" Jan 21 11:00:45 crc kubenswrapper[4881]: I0121 11:00:45.615641 4881 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 21 11:00:45 crc kubenswrapper[4881]: I0121 11:00:45.615662 4881 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 11:00:45 crc kubenswrapper[4881]: I0121 11:00:45.615675 4881 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 21 11:00:45 crc kubenswrapper[4881]: I0121 11:00:45.615692 4881 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 21 11:00:45 crc kubenswrapper[4881]: I0121 11:00:45.615715 4881 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 21 11:00:45 crc kubenswrapper[4881]: I0121 11:00:45.615736 4881 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 21 11:00:45 crc kubenswrapper[4881]: I0121 11:00:45.615754 4881 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 11:00:45 crc kubenswrapper[4881]: I0121 11:00:45.615772 4881 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: 
\"kubernetes.io/secret/2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 21 11:00:45 crc kubenswrapper[4881]: I0121 11:00:45.615807 4881 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 21 11:00:45 crc kubenswrapper[4881]: I0121 11:00:45.615822 4881 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 21 11:00:45 crc kubenswrapper[4881]: I0121 11:00:45.664484 4881 generic.go:334] "Generic (PLEG): container finished" podID="2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad" containerID="35ce5ecabc873c14d35cf37aa4dd5c20723f513985dbc4caa43cffafe43e41fe" exitCode=0 Jan 21 11:00:45 crc kubenswrapper[4881]: I0121 11:00:45.670863 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-whh46" Jan 21 11:00:45 crc kubenswrapper[4881]: I0121 11:00:45.671141 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-whh46" event={"ID":"2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad","Type":"ContainerDied","Data":"35ce5ecabc873c14d35cf37aa4dd5c20723f513985dbc4caa43cffafe43e41fe"} Jan 21 11:00:45 crc kubenswrapper[4881]: I0121 11:00:45.674582 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-whh46" event={"ID":"2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad","Type":"ContainerDied","Data":"216606908c8b27d34a9f3f57e132945839e5bd3eae4f856f2671c9e8308d7423"} Jan 21 11:00:45 crc kubenswrapper[4881]: I0121 11:00:45.674670 4881 scope.go:117] "RemoveContainer" containerID="35ce5ecabc873c14d35cf37aa4dd5c20723f513985dbc4caa43cffafe43e41fe" Jan 21 11:00:45 crc kubenswrapper[4881]: I0121 11:00:45.704637 4881 scope.go:117] "RemoveContainer" containerID="35ce5ecabc873c14d35cf37aa4dd5c20723f513985dbc4caa43cffafe43e41fe" Jan 21 11:00:45 crc kubenswrapper[4881]: E0121 11:00:45.707748 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"35ce5ecabc873c14d35cf37aa4dd5c20723f513985dbc4caa43cffafe43e41fe\": container with ID starting with 35ce5ecabc873c14d35cf37aa4dd5c20723f513985dbc4caa43cffafe43e41fe not found: ID does not exist" containerID="35ce5ecabc873c14d35cf37aa4dd5c20723f513985dbc4caa43cffafe43e41fe" Jan 21 11:00:45 crc kubenswrapper[4881]: I0121 11:00:45.707848 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"35ce5ecabc873c14d35cf37aa4dd5c20723f513985dbc4caa43cffafe43e41fe"} err="failed to get container status \"35ce5ecabc873c14d35cf37aa4dd5c20723f513985dbc4caa43cffafe43e41fe\": rpc error: code = NotFound desc = could not find container \"35ce5ecabc873c14d35cf37aa4dd5c20723f513985dbc4caa43cffafe43e41fe\": container with ID starting with 35ce5ecabc873c14d35cf37aa4dd5c20723f513985dbc4caa43cffafe43e41fe not found: ID does not exist" Jan 21 11:00:48 crc kubenswrapper[4881]: I0121 11:00:48.672452 4881 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 11:00:48 crc kubenswrapper[4881]: I0121 11:00:48.763632 4881 status_manager.go:861] "Pod was deleted and 
then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="5a430d8b-9d5e-41d8-a702-5042d4c683ad" Jan 21 11:00:49 crc kubenswrapper[4881]: E0121 11:00:49.185969 4881 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication\"/\"v4-0-config-system-serving-cert\": Failed to watch *v1.Secret: unknown (get secrets)" logger="UnhandledError" Jan 21 11:00:49 crc kubenswrapper[4881]: I0121 11:00:49.701965 4881 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5da31bf1-60a6-4d73-a425-97fe36cd40ee" Jan 21 11:00:49 crc kubenswrapper[4881]: I0121 11:00:49.702049 4881 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5da31bf1-60a6-4d73-a425-97fe36cd40ee" Jan 21 11:00:49 crc kubenswrapper[4881]: I0121 11:00:49.706691 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 11:00:49 crc kubenswrapper[4881]: I0121 11:00:49.707272 4881 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="5a430d8b-9d5e-41d8-a702-5042d4c683ad" Jan 21 11:00:49 crc kubenswrapper[4881]: E0121 11:00:49.818441 4881 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication\"/\"v4-0-config-system-ocp-branding-template\": Failed to watch *v1.Secret: unknown (get secrets)" logger="UnhandledError" Jan 21 11:00:50 crc kubenswrapper[4881]: I0121 11:00:50.710563 4881 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5da31bf1-60a6-4d73-a425-97fe36cd40ee" Jan 21 11:00:50 crc kubenswrapper[4881]: I0121 11:00:50.710612 4881 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5da31bf1-60a6-4d73-a425-97fe36cd40ee" Jan 21 11:00:50 crc kubenswrapper[4881]: I0121 11:00:50.714191 4881 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="5a430d8b-9d5e-41d8-a702-5042d4c683ad" Jan 21 11:00:56 crc kubenswrapper[4881]: I0121 11:00:56.496352 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 11:00:58 crc kubenswrapper[4881]: I0121 11:00:58.376454 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 21 11:00:58 crc kubenswrapper[4881]: I0121 11:00:58.606859 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 21 11:00:58 crc kubenswrapper[4881]: I0121 11:00:58.655061 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 21 11:00:58 crc kubenswrapper[4881]: I0121 11:00:58.822743 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 21 11:00:59 crc kubenswrapper[4881]: I0121 11:00:59.366971 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 21 11:00:59 crc kubenswrapper[4881]: I0121 11:00:59.630573 4881 reflector.go:368] Caches populated 
for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 21 11:00:59 crc kubenswrapper[4881]: I0121 11:00:59.748969 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 21 11:01:00 crc kubenswrapper[4881]: I0121 11:01:00.361505 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 21 11:01:00 crc kubenswrapper[4881]: I0121 11:01:00.464229 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 21 11:01:00 crc kubenswrapper[4881]: I0121 11:01:00.571330 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 21 11:01:00 crc kubenswrapper[4881]: I0121 11:01:00.706217 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 21 11:01:00 crc kubenswrapper[4881]: I0121 11:01:00.882004 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 21 11:01:01 crc kubenswrapper[4881]: I0121 11:01:01.025830 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 21 11:01:01 crc kubenswrapper[4881]: I0121 11:01:01.137881 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 21 11:01:01 crc kubenswrapper[4881]: I0121 11:01:01.155196 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 21 11:01:01 crc kubenswrapper[4881]: I0121 11:01:01.279252 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 21 11:01:01 crc kubenswrapper[4881]: I0121 11:01:01.292128 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 21 11:01:01 crc kubenswrapper[4881]: I0121 11:01:01.296176 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 21 11:01:01 crc kubenswrapper[4881]: I0121 11:01:01.345988 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 21 11:01:01 crc kubenswrapper[4881]: I0121 11:01:01.475355 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 21 11:01:01 crc kubenswrapper[4881]: I0121 11:01:01.475457 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 21 11:01:01 crc kubenswrapper[4881]: I0121 11:01:01.481359 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 21 11:01:01 crc kubenswrapper[4881]: I0121 11:01:01.629684 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 21 11:01:01 crc kubenswrapper[4881]: I0121 11:01:01.862393 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 21 11:01:01 crc kubenswrapper[4881]: I0121 
11:01:01.875669 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 21 11:01:01 crc kubenswrapper[4881]: I0121 11:01:01.963145 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 21 11:01:01 crc kubenswrapper[4881]: I0121 11:01:01.977859 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 21 11:01:02 crc kubenswrapper[4881]: I0121 11:01:02.009559 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 21 11:01:02 crc kubenswrapper[4881]: I0121 11:01:02.064859 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 21 11:01:02 crc kubenswrapper[4881]: I0121 11:01:02.175236 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 21 11:01:02 crc kubenswrapper[4881]: I0121 11:01:02.302502 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 21 11:01:02 crc kubenswrapper[4881]: I0121 11:01:02.403824 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 21 11:01:02 crc kubenswrapper[4881]: I0121 11:01:02.486660 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 21 11:01:02 crc kubenswrapper[4881]: I0121 11:01:02.547124 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 21 11:01:02 crc kubenswrapper[4881]: I0121 11:01:02.551911 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 21 11:01:02 crc kubenswrapper[4881]: I0121 11:01:02.584738 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 21 11:01:02 crc kubenswrapper[4881]: I0121 11:01:02.589340 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 21 11:01:02 crc kubenswrapper[4881]: I0121 11:01:02.605121 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 21 11:01:02 crc kubenswrapper[4881]: I0121 11:01:02.617502 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 21 11:01:02 crc kubenswrapper[4881]: I0121 11:01:02.671378 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 21 11:01:02 crc kubenswrapper[4881]: I0121 11:01:02.868344 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 21 11:01:02 crc kubenswrapper[4881]: I0121 11:01:02.894553 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 21 11:01:02 crc kubenswrapper[4881]: I0121 11:01:02.982839 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 21 11:01:03 crc kubenswrapper[4881]: I0121 11:01:03.007506 4881 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 21 11:01:03 crc kubenswrapper[4881]: I0121 11:01:03.017908 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 21 11:01:03 crc kubenswrapper[4881]: I0121 11:01:03.018426 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 21 11:01:03 crc kubenswrapper[4881]: I0121 11:01:03.022211 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 21 11:01:03 crc kubenswrapper[4881]: I0121 11:01:03.025265 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 21 11:01:03 crc kubenswrapper[4881]: I0121 11:01:03.096639 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 21 11:01:03 crc kubenswrapper[4881]: I0121 11:01:03.104580 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 21 11:01:03 crc kubenswrapper[4881]: I0121 11:01:03.128332 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 21 11:01:03 crc kubenswrapper[4881]: I0121 11:01:03.174121 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 21 11:01:03 crc kubenswrapper[4881]: I0121 11:01:03.297669 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 21 11:01:03 crc kubenswrapper[4881]: I0121 11:01:03.306138 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 21 11:01:03 crc kubenswrapper[4881]: I0121 11:01:03.348512 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 21 11:01:03 crc kubenswrapper[4881]: I0121 11:01:03.351947 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 21 11:01:03 crc kubenswrapper[4881]: I0121 11:01:03.396298 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 21 11:01:03 crc kubenswrapper[4881]: I0121 11:01:03.493011 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 21 11:01:03 crc kubenswrapper[4881]: I0121 11:01:03.585354 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 21 11:01:03 crc kubenswrapper[4881]: I0121 11:01:03.594190 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 21 11:01:03 crc kubenswrapper[4881]: I0121 11:01:03.745181 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 21 11:01:03 crc kubenswrapper[4881]: I0121 11:01:03.794889 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 21 11:01:03 crc kubenswrapper[4881]: I0121 11:01:03.841750 4881 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 21 11:01:03 crc kubenswrapper[4881]: I0121 11:01:03.848830 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 21 11:01:03 crc kubenswrapper[4881]: I0121 11:01:03.853639 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 21 11:01:03 crc kubenswrapper[4881]: I0121 11:01:03.898320 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 21 11:01:03 crc kubenswrapper[4881]: I0121 11:01:03.911224 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 21 11:01:03 crc kubenswrapper[4881]: I0121 11:01:03.972748 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 21 11:01:03 crc kubenswrapper[4881]: I0121 11:01:03.989561 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 21 11:01:04 crc kubenswrapper[4881]: I0121 11:01:04.000462 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 21 11:01:04 crc kubenswrapper[4881]: I0121 11:01:04.086565 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 21 11:01:04 crc kubenswrapper[4881]: I0121 11:01:04.109369 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 21 11:01:04 crc kubenswrapper[4881]: I0121 11:01:04.137785 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 21 11:01:04 crc kubenswrapper[4881]: I0121 11:01:04.266015 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 21 11:01:04 crc kubenswrapper[4881]: I0121 11:01:04.285025 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 21 11:01:04 crc kubenswrapper[4881]: I0121 11:01:04.316290 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 21 11:01:04 crc kubenswrapper[4881]: I0121 11:01:04.451909 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 21 11:01:04 crc kubenswrapper[4881]: I0121 11:01:04.577697 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 21 11:01:04 crc kubenswrapper[4881]: I0121 11:01:04.826202 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 21 11:01:04 crc kubenswrapper[4881]: I0121 11:01:04.839442 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 21 11:01:05 crc kubenswrapper[4881]: I0121 11:01:05.041579 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 21 11:01:05 crc 
kubenswrapper[4881]: I0121 11:01:05.232341 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 21 11:01:05 crc kubenswrapper[4881]: I0121 11:01:05.245602 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 21 11:01:05 crc kubenswrapper[4881]: I0121 11:01:05.275814 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 21 11:01:05 crc kubenswrapper[4881]: I0121 11:01:05.287574 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 21 11:01:05 crc kubenswrapper[4881]: I0121 11:01:05.354039 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 21 11:01:05 crc kubenswrapper[4881]: I0121 11:01:05.394674 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 21 11:01:05 crc kubenswrapper[4881]: I0121 11:01:05.529220 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 21 11:01:05 crc kubenswrapper[4881]: I0121 11:01:05.578063 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 21 11:01:05 crc kubenswrapper[4881]: I0121 11:01:05.580897 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 21 11:01:05 crc kubenswrapper[4881]: I0121 11:01:05.623566 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 21 11:01:05 crc kubenswrapper[4881]: I0121 11:01:05.656215 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 21 11:01:05 crc kubenswrapper[4881]: I0121 11:01:05.702568 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 21 11:01:05 crc kubenswrapper[4881]: I0121 11:01:05.747388 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 21 11:01:05 crc kubenswrapper[4881]: I0121 11:01:05.925558 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 21 11:01:06 crc kubenswrapper[4881]: I0121 11:01:06.120801 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 21 11:01:06 crc kubenswrapper[4881]: I0121 11:01:06.162739 4881 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 21 11:01:06 crc kubenswrapper[4881]: I0121 11:01:06.341875 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 21 11:01:06 crc kubenswrapper[4881]: I0121 11:01:06.348644 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 21 11:01:06 crc kubenswrapper[4881]: I0121 11:01:06.614154 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 21 11:01:06 crc kubenswrapper[4881]: I0121 
11:01:06.750446 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 21 11:01:06 crc kubenswrapper[4881]: I0121 11:01:06.837432 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 21 11:01:06 crc kubenswrapper[4881]: I0121 11:01:06.850753 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 21 11:01:06 crc kubenswrapper[4881]: I0121 11:01:06.917689 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 21 11:01:06 crc kubenswrapper[4881]: I0121 11:01:06.952377 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 21 11:01:07 crc kubenswrapper[4881]: I0121 11:01:07.018906 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 21 11:01:07 crc kubenswrapper[4881]: I0121 11:01:07.070870 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 21 11:01:07 crc kubenswrapper[4881]: I0121 11:01:07.092999 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 21 11:01:07 crc kubenswrapper[4881]: I0121 11:01:07.373527 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 21 11:01:07 crc kubenswrapper[4881]: I0121 11:01:07.431497 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 21 11:01:07 crc kubenswrapper[4881]: I0121 11:01:07.435656 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 21 11:01:07 crc kubenswrapper[4881]: I0121 11:01:07.476487 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 21 11:01:07 crc kubenswrapper[4881]: I0121 11:01:07.510807 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 21 11:01:07 crc kubenswrapper[4881]: I0121 11:01:07.526967 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 21 11:01:07 crc kubenswrapper[4881]: I0121 11:01:07.539998 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 21 11:01:07 crc kubenswrapper[4881]: I0121 11:01:07.779632 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 21 11:01:07 crc kubenswrapper[4881]: I0121 11:01:07.814869 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 21 11:01:07 crc kubenswrapper[4881]: I0121 11:01:07.842977 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 21 11:01:07 crc kubenswrapper[4881]: I0121 11:01:07.902865 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 21 11:01:07 crc 
kubenswrapper[4881]: I0121 11:01:07.920331 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 21 11:01:07 crc kubenswrapper[4881]: I0121 11:01:07.961992 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 21 11:01:08 crc kubenswrapper[4881]: I0121 11:01:08.018469 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 21 11:01:08 crc kubenswrapper[4881]: I0121 11:01:08.018859 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 21 11:01:08 crc kubenswrapper[4881]: I0121 11:01:08.082652 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 21 11:01:08 crc kubenswrapper[4881]: I0121 11:01:08.111707 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 21 11:01:08 crc kubenswrapper[4881]: I0121 11:01:08.132269 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 21 11:01:08 crc kubenswrapper[4881]: I0121 11:01:08.139415 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 21 11:01:08 crc kubenswrapper[4881]: I0121 11:01:08.186391 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 21 11:01:08 crc kubenswrapper[4881]: I0121 11:01:08.204083 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 21 11:01:08 crc kubenswrapper[4881]: I0121 11:01:08.235604 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 21 11:01:08 crc kubenswrapper[4881]: I0121 11:01:08.235611 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 21 11:01:08 crc kubenswrapper[4881]: I0121 11:01:08.291692 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 21 11:01:08 crc kubenswrapper[4881]: I0121 11:01:08.370087 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 21 11:01:08 crc kubenswrapper[4881]: I0121 11:01:08.399662 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 21 11:01:08 crc kubenswrapper[4881]: I0121 11:01:08.422738 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 21 11:01:08 crc kubenswrapper[4881]: I0121 11:01:08.472672 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 21 11:01:08 crc kubenswrapper[4881]: I0121 11:01:08.557236 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 21 11:01:08 crc kubenswrapper[4881]: I0121 11:01:08.625635 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 21 11:01:08 crc kubenswrapper[4881]: I0121 
11:01:08.641036 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 21 11:01:08 crc kubenswrapper[4881]: I0121 11:01:08.652754 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 21 11:01:08 crc kubenswrapper[4881]: I0121 11:01:08.766631 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 21 11:01:08 crc kubenswrapper[4881]: I0121 11:01:08.819275 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 21 11:01:08 crc kubenswrapper[4881]: I0121 11:01:08.821558 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 21 11:01:08 crc kubenswrapper[4881]: I0121 11:01:08.912049 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 21 11:01:08 crc kubenswrapper[4881]: I0121 11:01:08.922548 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 21 11:01:08 crc kubenswrapper[4881]: I0121 11:01:08.925082 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 21 11:01:08 crc kubenswrapper[4881]: I0121 11:01:08.961603 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 21 11:01:08 crc kubenswrapper[4881]: I0121 11:01:08.969902 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 21 11:01:09 crc kubenswrapper[4881]: I0121 11:01:09.060121 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 21 11:01:09 crc kubenswrapper[4881]: I0121 11:01:09.137268 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 21 11:01:09 crc kubenswrapper[4881]: I0121 11:01:09.168534 4881 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 21 11:01:09 crc kubenswrapper[4881]: I0121 11:01:09.195767 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 21 11:01:09 crc kubenswrapper[4881]: I0121 11:01:09.358774 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 21 11:01:09 crc kubenswrapper[4881]: I0121 11:01:09.366018 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 21 11:01:09 crc kubenswrapper[4881]: I0121 11:01:09.566122 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 21 11:01:09 crc kubenswrapper[4881]: I0121 11:01:09.575562 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 21 11:01:09 crc kubenswrapper[4881]: I0121 11:01:09.583330 4881 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 
Jan 21 11:01:09 crc kubenswrapper[4881]: I0121 11:01:09.602636 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 21 11:01:09 crc kubenswrapper[4881]: I0121 11:01:09.643033 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 21 11:01:09 crc kubenswrapper[4881]: I0121 11:01:09.661623 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 21 11:01:09 crc kubenswrapper[4881]: I0121 11:01:09.690484 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 21 11:01:09 crc kubenswrapper[4881]: I0121 11:01:09.746172 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 21 11:01:09 crc kubenswrapper[4881]: I0121 11:01:09.858878 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 21 11:01:09 crc kubenswrapper[4881]: I0121 11:01:09.874628 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 21 11:01:10 crc kubenswrapper[4881]: I0121 11:01:10.003063 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 21 11:01:10 crc kubenswrapper[4881]: I0121 11:01:10.011642 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 21 11:01:10 crc kubenswrapper[4881]: I0121 11:01:10.070827 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 21 11:01:10 crc kubenswrapper[4881]: I0121 11:01:10.213276 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 21 11:01:10 crc kubenswrapper[4881]: I0121 11:01:10.229574 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 21 11:01:10 crc kubenswrapper[4881]: I0121 11:01:10.269981 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 21 11:01:10 crc kubenswrapper[4881]: I0121 11:01:10.363227 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 21 11:01:10 crc kubenswrapper[4881]: I0121 11:01:10.371861 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 21 11:01:10 crc kubenswrapper[4881]: I0121 11:01:10.372052 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 21 11:01:10 crc kubenswrapper[4881]: I0121 11:01:10.471883 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 21 11:01:10 crc kubenswrapper[4881]: I0121 11:01:10.479208 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 21 11:01:10 crc kubenswrapper[4881]: I0121 11:01:10.528254 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 21 11:01:10 crc kubenswrapper[4881]: I0121 
11:01:10.571044 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 21 11:01:10 crc kubenswrapper[4881]: I0121 11:01:10.590883 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 21 11:01:10 crc kubenswrapper[4881]: I0121 11:01:10.678810 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 21 11:01:10 crc kubenswrapper[4881]: I0121 11:01:10.759434 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 21 11:01:10 crc kubenswrapper[4881]: I0121 11:01:10.759954 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 21 11:01:10 crc kubenswrapper[4881]: I0121 11:01:10.969368 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 21 11:01:10 crc kubenswrapper[4881]: I0121 11:01:10.990956 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 21 11:01:10 crc kubenswrapper[4881]: I0121 11:01:10.993995 4881 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 21 11:01:11 crc kubenswrapper[4881]: I0121 11:01:11.035635 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 21 11:01:11 crc kubenswrapper[4881]: I0121 11:01:11.112563 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 21 11:01:11 crc kubenswrapper[4881]: I0121 11:01:11.290575 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 21 11:01:11 crc kubenswrapper[4881]: I0121 11:01:11.292828 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 21 11:01:11 crc kubenswrapper[4881]: I0121 11:01:11.298879 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 21 11:01:11 crc kubenswrapper[4881]: I0121 11:01:11.316188 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 21 11:01:11 crc kubenswrapper[4881]: I0121 11:01:11.367468 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 21 11:01:11 crc kubenswrapper[4881]: I0121 11:01:11.422315 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 21 11:01:11 crc kubenswrapper[4881]: I0121 11:01:11.589922 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 21 11:01:11 crc kubenswrapper[4881]: I0121 11:01:11.598055 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 21 11:01:11 crc kubenswrapper[4881]: I0121 11:01:11.601640 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 21 11:01:11 crc 
kubenswrapper[4881]: I0121 11:01:11.606397 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 21 11:01:11 crc kubenswrapper[4881]: I0121 11:01:11.624185 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 21 11:01:11 crc kubenswrapper[4881]: I0121 11:01:11.668410 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 21 11:01:11 crc kubenswrapper[4881]: I0121 11:01:11.819238 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 21 11:01:12 crc kubenswrapper[4881]: I0121 11:01:12.012143 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 21 11:01:12 crc kubenswrapper[4881]: I0121 11:01:12.061226 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 21 11:01:12 crc kubenswrapper[4881]: I0121 11:01:12.129886 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 21 11:01:12 crc kubenswrapper[4881]: I0121 11:01:12.131761 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 21 11:01:12 crc kubenswrapper[4881]: I0121 11:01:12.135972 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 21 11:01:12 crc kubenswrapper[4881]: I0121 11:01:12.174895 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 21 11:01:12 crc kubenswrapper[4881]: I0121 11:01:12.542659 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 21 11:01:12 crc kubenswrapper[4881]: I0121 11:01:12.621234 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 21 11:01:12 crc kubenswrapper[4881]: I0121 11:01:12.762617 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 21 11:01:12 crc kubenswrapper[4881]: I0121 11:01:12.776595 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 21 11:01:12 crc kubenswrapper[4881]: I0121 11:01:12.869447 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 21 11:01:12 crc kubenswrapper[4881]: I0121 11:01:12.891733 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 21 11:01:12 crc kubenswrapper[4881]: I0121 11:01:12.960021 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 21 11:01:13 crc kubenswrapper[4881]: I0121 11:01:13.017479 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 21 11:01:13 crc kubenswrapper[4881]: I0121 11:01:13.021128 4881 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 21 11:01:13 crc kubenswrapper[4881]: I0121 11:01:13.023123 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 21 11:01:13 crc kubenswrapper[4881]: I0121 11:01:13.108171 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 21 11:01:13 crc kubenswrapper[4881]: I0121 11:01:13.108350 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 21 11:01:13 crc kubenswrapper[4881]: I0121 11:01:13.231471 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 21 11:01:13 crc kubenswrapper[4881]: I0121 11:01:13.248134 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 21 11:01:13 crc kubenswrapper[4881]: I0121 11:01:13.432140 4881 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 21 11:01:13 crc kubenswrapper[4881]: I0121 11:01:13.561594 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 21 11:01:13 crc kubenswrapper[4881]: I0121 11:01:13.647029 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 21 11:01:13 crc kubenswrapper[4881]: I0121 11:01:13.822527 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 21 11:01:13 crc kubenswrapper[4881]: I0121 11:01:13.829124 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 21 11:01:13 crc kubenswrapper[4881]: I0121 11:01:13.967964 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 21 11:01:13 crc kubenswrapper[4881]: I0121 11:01:13.990552 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 21 11:01:14 crc kubenswrapper[4881]: I0121 11:01:14.048132 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 21 11:01:14 crc kubenswrapper[4881]: I0121 11:01:14.073550 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 21 11:01:14 crc kubenswrapper[4881]: I0121 11:01:14.233009 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 21 11:01:14 crc kubenswrapper[4881]: I0121 11:01:14.315468 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 21 11:01:14 crc kubenswrapper[4881]: I0121 11:01:14.489461 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 21 11:01:14 crc kubenswrapper[4881]: I0121 11:01:14.696441 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 21 11:01:31 crc kubenswrapper[4881]: I0121 11:01:31.653476 4881 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-image-registry"/"image-registry-tls" Jan 21 11:01:43 crc kubenswrapper[4881]: I0121 11:01:43.771568 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 21 11:01:47 crc kubenswrapper[4881]: I0121 11:01:47.701576 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 21 11:01:47 crc kubenswrapper[4881]: I0121 11:01:47.760340 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 21 11:01:52 crc kubenswrapper[4881]: I0121 11:01:52.865473 4881 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Jan 21 11:01:52 crc kubenswrapper[4881]: I0121 11:01:52.866881 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-kfmhs" podStartSLOduration=87.291891007 podStartE2EDuration="3m2.866859437s" podCreationTimestamp="2026-01-21 10:58:50 +0000 UTC" firstStartedPulling="2026-01-21 10:58:52.687157552 +0000 UTC m=+119.947114021" lastFinishedPulling="2026-01-21 11:00:28.262125982 +0000 UTC m=+215.522082451" observedRunningTime="2026-01-21 11:00:48.613318464 +0000 UTC m=+235.873274933" watchObservedRunningTime="2026-01-21 11:01:52.866859437 +0000 UTC m=+300.126815896" Jan 21 11:01:52 crc kubenswrapper[4881]: I0121 11:01:52.867686 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-q6dn5" podStartSLOduration=92.223133184 podStartE2EDuration="3m6.867677489s" podCreationTimestamp="2026-01-21 10:58:46 +0000 UTC" firstStartedPulling="2026-01-21 10:58:49.413397935 +0000 UTC m=+116.673354404" lastFinishedPulling="2026-01-21 11:00:24.05794224 +0000 UTC m=+211.317898709" observedRunningTime="2026-01-21 11:00:48.676916006 +0000 UTC m=+235.936872475" watchObservedRunningTime="2026-01-21 11:01:52.867677489 +0000 UTC m=+300.127633958" Jan 21 11:01:52 crc kubenswrapper[4881]: I0121 11:01:52.868474 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-vljfh" podStartSLOduration=97.578040704 podStartE2EDuration="3m3.868466749s" podCreationTimestamp="2026-01-21 10:58:49 +0000 UTC" firstStartedPulling="2026-01-21 10:58:51.485309684 +0000 UTC m=+118.745266153" lastFinishedPulling="2026-01-21 11:00:17.775735729 +0000 UTC m=+205.035692198" observedRunningTime="2026-01-21 11:00:48.587840335 +0000 UTC m=+235.847796814" watchObservedRunningTime="2026-01-21 11:01:52.868466749 +0000 UTC m=+300.128423218" Jan 21 11:01:52 crc kubenswrapper[4881]: I0121 11:01:52.868586 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-t4zlb" podStartSLOduration=82.643953818 podStartE2EDuration="3m2.868580632s" podCreationTimestamp="2026-01-21 10:58:50 +0000 UTC" firstStartedPulling="2026-01-21 10:58:52.673688132 +0000 UTC m=+119.933644601" lastFinishedPulling="2026-01-21 11:00:32.898314946 +0000 UTC m=+220.158271415" observedRunningTime="2026-01-21 11:00:48.725181764 +0000 UTC m=+235.985138243" watchObservedRunningTime="2026-01-21 11:01:52.868580632 +0000 UTC m=+300.128537101" Jan 21 11:01:52 crc kubenswrapper[4881]: I0121 11:01:52.868931 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-89m75" podStartSLOduration=93.325736883 podStartE2EDuration="3m4.8689248s" 
podCreationTimestamp="2026-01-21 10:58:48 +0000 UTC" firstStartedPulling="2026-01-21 10:58:51.645049337 +0000 UTC m=+118.905005806" lastFinishedPulling="2026-01-21 11:00:23.188237254 +0000 UTC m=+210.448193723" observedRunningTime="2026-01-21 11:00:48.545141203 +0000 UTC m=+235.805097682" watchObservedRunningTime="2026-01-21 11:01:52.8689248 +0000 UTC m=+300.128881279" Jan 21 11:01:52 crc kubenswrapper[4881]: I0121 11:01:52.870842 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-2sqlm" podStartSLOduration=82.398900067 podStartE2EDuration="3m5.870836471s" podCreationTimestamp="2026-01-21 10:58:47 +0000 UTC" firstStartedPulling="2026-01-21 10:58:49.409825088 +0000 UTC m=+116.669781557" lastFinishedPulling="2026-01-21 11:00:32.881761452 +0000 UTC m=+220.141717961" observedRunningTime="2026-01-21 11:00:48.70143362 +0000 UTC m=+235.961390099" watchObservedRunningTime="2026-01-21 11:01:52.870836471 +0000 UTC m=+300.130792940" Jan 21 11:01:52 crc kubenswrapper[4881]: I0121 11:01:52.871280 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-6rmvm" podStartSLOduration=92.069366057 podStartE2EDuration="3m5.871276662s" podCreationTimestamp="2026-01-21 10:58:47 +0000 UTC" firstStartedPulling="2026-01-21 10:58:49.387616253 +0000 UTC m=+116.647572722" lastFinishedPulling="2026-01-21 11:00:23.189526858 +0000 UTC m=+210.449483327" observedRunningTime="2026-01-21 11:00:48.759701452 +0000 UTC m=+236.019657921" watchObservedRunningTime="2026-01-21 11:01:52.871276662 +0000 UTC m=+300.131233131" Jan 21 11:01:52 crc kubenswrapper[4881]: I0121 11:01:52.872138 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-whh46","openshift-kube-apiserver/kube-apiserver-crc"] Jan 21 11:01:52 crc kubenswrapper[4881]: I0121 11:01:52.872203 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-apiserver/kube-apiserver-startup-monitor-crc","openshift-authentication/oauth-openshift-7fdb5b7d8f-rrqf8"] Jan 21 11:01:52 crc kubenswrapper[4881]: E0121 11:01:52.872556 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41bc4c78-71b2-4ca1-b593-410715cb877b" containerName="installer" Jan 21 11:01:52 crc kubenswrapper[4881]: I0121 11:01:52.872576 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="41bc4c78-71b2-4ca1-b593-410715cb877b" containerName="installer" Jan 21 11:01:52 crc kubenswrapper[4881]: E0121 11:01:52.872594 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad" containerName="oauth-openshift" Jan 21 11:01:52 crc kubenswrapper[4881]: I0121 11:01:52.872601 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad" containerName="oauth-openshift" Jan 21 11:01:52 crc kubenswrapper[4881]: I0121 11:01:52.872746 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="41bc4c78-71b2-4ca1-b593-410715cb877b" containerName="installer" Jan 21 11:01:52 crc kubenswrapper[4881]: I0121 11:01:52.872759 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad" containerName="oauth-openshift" Jan 21 11:01:52 crc kubenswrapper[4881]: I0121 11:01:52.873193 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-5xwk8","openshift-controller-manager/controller-manager-879f6c89f-wjlxh","openshift-marketplace/certified-operators-2sqlm","openshift-marketplace/community-operators-6rmvm"] Jan 21 11:01:52 crc kubenswrapper[4881]: I0121 11:01:52.873438 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-6rmvm" podUID="2c460bf5-05a1-4977-b889-1a5c3263df33" containerName="registry-server" containerID="cri-o://7e5f304bc82a020e253bc1850121534b947e1ce59d3cde3e998cffd1481389a2" gracePeriod=2 Jan 21 11:01:52 crc kubenswrapper[4881]: I0121 11:01:52.873996 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-2sqlm" podUID="5b12596d-1f5f-4d81-b664-d0ddee72552c" containerName="registry-server" containerID="cri-o://c77f2373cbe2c6efce94e010b4a6e7c282b2ba984b2b3fef90734b6c51cc06d7" gracePeriod=2 Jan 21 11:01:52 crc kubenswrapper[4881]: I0121 11:01:52.874106 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-7fdb5b7d8f-rrqf8" Jan 21 11:01:52 crc kubenswrapper[4881]: I0121 11:01:52.875799 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5xwk8" podUID="706c6a3b-823b-4ea3-b7a8-e20d571d3ace" containerName="route-controller-manager" containerID="cri-o://9c8c8d93509d2a29c183d63351f0748ec6e60414dbb285df980924884b598111" gracePeriod=30 Jan 21 11:01:52 crc kubenswrapper[4881]: I0121 11:01:52.876101 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-wjlxh" podUID="002a39eb-e2e0-4d3e-8f61-89a539a653a9" containerName="controller-manager" containerID="cri-o://6b8fc2aac0518f9de92cee69b4b59a05f08ed2161c480a5655d85171be0e5a8b" gracePeriod=30 Jan 21 11:01:52 crc kubenswrapper[4881]: I0121 11:01:52.877544 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 21 11:01:52 crc kubenswrapper[4881]: I0121 11:01:52.885636 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 21 11:01:52 crc kubenswrapper[4881]: I0121 11:01:52.885969 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 21 11:01:52 crc kubenswrapper[4881]: I0121 11:01:52.885996 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 21 11:01:52 crc kubenswrapper[4881]: I0121 11:01:52.886612 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 11:01:52 crc kubenswrapper[4881]: I0121 11:01:52.888524 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 21 11:01:52 crc kubenswrapper[4881]: I0121 11:01:52.890122 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 21 11:01:52 crc kubenswrapper[4881]: I0121 11:01:52.890334 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 21 11:01:52 crc kubenswrapper[4881]: I0121 
11:01:52.890611 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 21 11:01:52 crc kubenswrapper[4881]: I0121 11:01:52.890996 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 21 11:01:52 crc kubenswrapper[4881]: I0121 11:01:52.891513 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 21 11:01:52 crc kubenswrapper[4881]: I0121 11:01:52.898280 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 21 11:01:52 crc kubenswrapper[4881]: I0121 11:01:52.898693 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 21 11:01:52 crc kubenswrapper[4881]: I0121 11:01:52.899104 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 21 11:01:52 crc kubenswrapper[4881]: I0121 11:01:52.901716 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 21 11:01:52 crc kubenswrapper[4881]: I0121 11:01:52.928895 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 21 11:01:52 crc kubenswrapper[4881]: I0121 11:01:52.933632 4881 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials Jan 21 11:01:52 crc kubenswrapper[4881]: I0121 11:01:52.934529 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kwwx7\" (UniqueName: \"kubernetes.io/projected/beca3a20-cc8d-4051-80e4-abefdc51ade5-kube-api-access-kwwx7\") pod \"oauth-openshift-7fdb5b7d8f-rrqf8\" (UID: \"beca3a20-cc8d-4051-80e4-abefdc51ade5\") " pod="openshift-authentication/oauth-openshift-7fdb5b7d8f-rrqf8" Jan 21 11:01:52 crc kubenswrapper[4881]: I0121 11:01:52.934601 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/beca3a20-cc8d-4051-80e4-abefdc51ade5-v4-0-config-user-template-error\") pod \"oauth-openshift-7fdb5b7d8f-rrqf8\" (UID: \"beca3a20-cc8d-4051-80e4-abefdc51ade5\") " pod="openshift-authentication/oauth-openshift-7fdb5b7d8f-rrqf8" Jan 21 11:01:52 crc kubenswrapper[4881]: I0121 11:01:52.934638 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/beca3a20-cc8d-4051-80e4-abefdc51ade5-v4-0-config-system-session\") pod \"oauth-openshift-7fdb5b7d8f-rrqf8\" (UID: \"beca3a20-cc8d-4051-80e4-abefdc51ade5\") " pod="openshift-authentication/oauth-openshift-7fdb5b7d8f-rrqf8" Jan 21 11:01:52 crc kubenswrapper[4881]: I0121 11:01:52.934676 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/beca3a20-cc8d-4051-80e4-abefdc51ade5-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-7fdb5b7d8f-rrqf8\" (UID: \"beca3a20-cc8d-4051-80e4-abefdc51ade5\") " pod="openshift-authentication/oauth-openshift-7fdb5b7d8f-rrqf8" Jan 21 11:01:52 crc kubenswrapper[4881]: I0121 
11:01:52.934712 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/beca3a20-cc8d-4051-80e4-abefdc51ade5-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-7fdb5b7d8f-rrqf8\" (UID: \"beca3a20-cc8d-4051-80e4-abefdc51ade5\") " pod="openshift-authentication/oauth-openshift-7fdb5b7d8f-rrqf8" Jan 21 11:01:52 crc kubenswrapper[4881]: I0121 11:01:52.934736 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/beca3a20-cc8d-4051-80e4-abefdc51ade5-v4-0-config-system-serving-cert\") pod \"oauth-openshift-7fdb5b7d8f-rrqf8\" (UID: \"beca3a20-cc8d-4051-80e4-abefdc51ade5\") " pod="openshift-authentication/oauth-openshift-7fdb5b7d8f-rrqf8" Jan 21 11:01:52 crc kubenswrapper[4881]: I0121 11:01:52.934774 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/beca3a20-cc8d-4051-80e4-abefdc51ade5-audit-dir\") pod \"oauth-openshift-7fdb5b7d8f-rrqf8\" (UID: \"beca3a20-cc8d-4051-80e4-abefdc51ade5\") " pod="openshift-authentication/oauth-openshift-7fdb5b7d8f-rrqf8" Jan 21 11:01:52 crc kubenswrapper[4881]: I0121 11:01:52.934821 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/beca3a20-cc8d-4051-80e4-abefdc51ade5-v4-0-config-user-template-login\") pod \"oauth-openshift-7fdb5b7d8f-rrqf8\" (UID: \"beca3a20-cc8d-4051-80e4-abefdc51ade5\") " pod="openshift-authentication/oauth-openshift-7fdb5b7d8f-rrqf8" Jan 21 11:01:52 crc kubenswrapper[4881]: I0121 11:01:52.934862 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/beca3a20-cc8d-4051-80e4-abefdc51ade5-v4-0-config-system-router-certs\") pod \"oauth-openshift-7fdb5b7d8f-rrqf8\" (UID: \"beca3a20-cc8d-4051-80e4-abefdc51ade5\") " pod="openshift-authentication/oauth-openshift-7fdb5b7d8f-rrqf8" Jan 21 11:01:52 crc kubenswrapper[4881]: I0121 11:01:52.934896 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/beca3a20-cc8d-4051-80e4-abefdc51ade5-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-7fdb5b7d8f-rrqf8\" (UID: \"beca3a20-cc8d-4051-80e4-abefdc51ade5\") " pod="openshift-authentication/oauth-openshift-7fdb5b7d8f-rrqf8" Jan 21 11:01:52 crc kubenswrapper[4881]: I0121 11:01:52.934927 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/beca3a20-cc8d-4051-80e4-abefdc51ade5-audit-policies\") pod \"oauth-openshift-7fdb5b7d8f-rrqf8\" (UID: \"beca3a20-cc8d-4051-80e4-abefdc51ade5\") " pod="openshift-authentication/oauth-openshift-7fdb5b7d8f-rrqf8" Jan 21 11:01:52 crc kubenswrapper[4881]: I0121 11:01:52.934964 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/beca3a20-cc8d-4051-80e4-abefdc51ade5-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-7fdb5b7d8f-rrqf8\" (UID: 
\"beca3a20-cc8d-4051-80e4-abefdc51ade5\") " pod="openshift-authentication/oauth-openshift-7fdb5b7d8f-rrqf8" Jan 21 11:01:52 crc kubenswrapper[4881]: I0121 11:01:52.934993 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/beca3a20-cc8d-4051-80e4-abefdc51ade5-v4-0-config-system-service-ca\") pod \"oauth-openshift-7fdb5b7d8f-rrqf8\" (UID: \"beca3a20-cc8d-4051-80e4-abefdc51ade5\") " pod="openshift-authentication/oauth-openshift-7fdb5b7d8f-rrqf8" Jan 21 11:01:52 crc kubenswrapper[4881]: I0121 11:01:52.935118 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/beca3a20-cc8d-4051-80e4-abefdc51ade5-v4-0-config-system-cliconfig\") pod \"oauth-openshift-7fdb5b7d8f-rrqf8\" (UID: \"beca3a20-cc8d-4051-80e4-abefdc51ade5\") " pod="openshift-authentication/oauth-openshift-7fdb5b7d8f-rrqf8" Jan 21 11:01:52 crc kubenswrapper[4881]: I0121 11:01:52.959013 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=64.958993777 podStartE2EDuration="1m4.958993777s" podCreationTimestamp="2026-01-21 11:00:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 11:01:52.956173264 +0000 UTC m=+300.216129743" watchObservedRunningTime="2026-01-21 11:01:52.958993777 +0000 UTC m=+300.218950246" Jan 21 11:01:52 crc kubenswrapper[4881]: I0121 11:01:52.983500 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podStartSLOduration=6.983475351 podStartE2EDuration="6.983475351s" podCreationTimestamp="2026-01-21 11:01:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 11:01:52.981336576 +0000 UTC m=+300.241293045" watchObservedRunningTime="2026-01-21 11:01:52.983475351 +0000 UTC m=+300.243431820" Jan 21 11:01:53 crc kubenswrapper[4881]: I0121 11:01:53.038581 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/beca3a20-cc8d-4051-80e4-abefdc51ade5-v4-0-config-system-router-certs\") pod \"oauth-openshift-7fdb5b7d8f-rrqf8\" (UID: \"beca3a20-cc8d-4051-80e4-abefdc51ade5\") " pod="openshift-authentication/oauth-openshift-7fdb5b7d8f-rrqf8" Jan 21 11:01:53 crc kubenswrapper[4881]: I0121 11:01:53.038653 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/beca3a20-cc8d-4051-80e4-abefdc51ade5-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-7fdb5b7d8f-rrqf8\" (UID: \"beca3a20-cc8d-4051-80e4-abefdc51ade5\") " pod="openshift-authentication/oauth-openshift-7fdb5b7d8f-rrqf8" Jan 21 11:01:53 crc kubenswrapper[4881]: I0121 11:01:53.038687 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/beca3a20-cc8d-4051-80e4-abefdc51ade5-audit-policies\") pod \"oauth-openshift-7fdb5b7d8f-rrqf8\" (UID: \"beca3a20-cc8d-4051-80e4-abefdc51ade5\") " pod="openshift-authentication/oauth-openshift-7fdb5b7d8f-rrqf8" Jan 21 11:01:53 crc 
kubenswrapper[4881]: I0121 11:01:53.038734 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/beca3a20-cc8d-4051-80e4-abefdc51ade5-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-7fdb5b7d8f-rrqf8\" (UID: \"beca3a20-cc8d-4051-80e4-abefdc51ade5\") " pod="openshift-authentication/oauth-openshift-7fdb5b7d8f-rrqf8" Jan 21 11:01:53 crc kubenswrapper[4881]: I0121 11:01:53.038764 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/beca3a20-cc8d-4051-80e4-abefdc51ade5-v4-0-config-system-service-ca\") pod \"oauth-openshift-7fdb5b7d8f-rrqf8\" (UID: \"beca3a20-cc8d-4051-80e4-abefdc51ade5\") " pod="openshift-authentication/oauth-openshift-7fdb5b7d8f-rrqf8" Jan 21 11:01:53 crc kubenswrapper[4881]: I0121 11:01:53.040959 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/beca3a20-cc8d-4051-80e4-abefdc51ade5-audit-policies\") pod \"oauth-openshift-7fdb5b7d8f-rrqf8\" (UID: \"beca3a20-cc8d-4051-80e4-abefdc51ade5\") " pod="openshift-authentication/oauth-openshift-7fdb5b7d8f-rrqf8" Jan 21 11:01:53 crc kubenswrapper[4881]: I0121 11:01:53.041991 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/beca3a20-cc8d-4051-80e4-abefdc51ade5-v4-0-config-system-cliconfig\") pod \"oauth-openshift-7fdb5b7d8f-rrqf8\" (UID: \"beca3a20-cc8d-4051-80e4-abefdc51ade5\") " pod="openshift-authentication/oauth-openshift-7fdb5b7d8f-rrqf8" Jan 21 11:01:53 crc kubenswrapper[4881]: I0121 11:01:53.042092 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kwwx7\" (UniqueName: \"kubernetes.io/projected/beca3a20-cc8d-4051-80e4-abefdc51ade5-kube-api-access-kwwx7\") pod \"oauth-openshift-7fdb5b7d8f-rrqf8\" (UID: \"beca3a20-cc8d-4051-80e4-abefdc51ade5\") " pod="openshift-authentication/oauth-openshift-7fdb5b7d8f-rrqf8" Jan 21 11:01:53 crc kubenswrapper[4881]: I0121 11:01:53.042147 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/beca3a20-cc8d-4051-80e4-abefdc51ade5-v4-0-config-user-template-error\") pod \"oauth-openshift-7fdb5b7d8f-rrqf8\" (UID: \"beca3a20-cc8d-4051-80e4-abefdc51ade5\") " pod="openshift-authentication/oauth-openshift-7fdb5b7d8f-rrqf8" Jan 21 11:01:53 crc kubenswrapper[4881]: I0121 11:01:53.042191 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/beca3a20-cc8d-4051-80e4-abefdc51ade5-v4-0-config-system-session\") pod \"oauth-openshift-7fdb5b7d8f-rrqf8\" (UID: \"beca3a20-cc8d-4051-80e4-abefdc51ade5\") " pod="openshift-authentication/oauth-openshift-7fdb5b7d8f-rrqf8" Jan 21 11:01:53 crc kubenswrapper[4881]: I0121 11:01:53.042284 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/beca3a20-cc8d-4051-80e4-abefdc51ade5-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-7fdb5b7d8f-rrqf8\" (UID: \"beca3a20-cc8d-4051-80e4-abefdc51ade5\") " pod="openshift-authentication/oauth-openshift-7fdb5b7d8f-rrqf8" Jan 21 11:01:53 crc kubenswrapper[4881]: I0121 11:01:53.042351 4881 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/beca3a20-cc8d-4051-80e4-abefdc51ade5-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-7fdb5b7d8f-rrqf8\" (UID: \"beca3a20-cc8d-4051-80e4-abefdc51ade5\") " pod="openshift-authentication/oauth-openshift-7fdb5b7d8f-rrqf8" Jan 21 11:01:53 crc kubenswrapper[4881]: I0121 11:01:53.042382 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/beca3a20-cc8d-4051-80e4-abefdc51ade5-v4-0-config-system-serving-cert\") pod \"oauth-openshift-7fdb5b7d8f-rrqf8\" (UID: \"beca3a20-cc8d-4051-80e4-abefdc51ade5\") " pod="openshift-authentication/oauth-openshift-7fdb5b7d8f-rrqf8" Jan 21 11:01:53 crc kubenswrapper[4881]: I0121 11:01:53.042452 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/beca3a20-cc8d-4051-80e4-abefdc51ade5-v4-0-config-user-template-login\") pod \"oauth-openshift-7fdb5b7d8f-rrqf8\" (UID: \"beca3a20-cc8d-4051-80e4-abefdc51ade5\") " pod="openshift-authentication/oauth-openshift-7fdb5b7d8f-rrqf8" Jan 21 11:01:53 crc kubenswrapper[4881]: I0121 11:01:53.042483 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/beca3a20-cc8d-4051-80e4-abefdc51ade5-audit-dir\") pod \"oauth-openshift-7fdb5b7d8f-rrqf8\" (UID: \"beca3a20-cc8d-4051-80e4-abefdc51ade5\") " pod="openshift-authentication/oauth-openshift-7fdb5b7d8f-rrqf8" Jan 21 11:01:53 crc kubenswrapper[4881]: I0121 11:01:53.042644 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/beca3a20-cc8d-4051-80e4-abefdc51ade5-audit-dir\") pod \"oauth-openshift-7fdb5b7d8f-rrqf8\" (UID: \"beca3a20-cc8d-4051-80e4-abefdc51ade5\") " pod="openshift-authentication/oauth-openshift-7fdb5b7d8f-rrqf8" Jan 21 11:01:53 crc kubenswrapper[4881]: I0121 11:01:53.042883 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/beca3a20-cc8d-4051-80e4-abefdc51ade5-v4-0-config-system-cliconfig\") pod \"oauth-openshift-7fdb5b7d8f-rrqf8\" (UID: \"beca3a20-cc8d-4051-80e4-abefdc51ade5\") " pod="openshift-authentication/oauth-openshift-7fdb5b7d8f-rrqf8" Jan 21 11:01:53 crc kubenswrapper[4881]: I0121 11:01:53.050934 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/beca3a20-cc8d-4051-80e4-abefdc51ade5-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-7fdb5b7d8f-rrqf8\" (UID: \"beca3a20-cc8d-4051-80e4-abefdc51ade5\") " pod="openshift-authentication/oauth-openshift-7fdb5b7d8f-rrqf8" Jan 21 11:01:53 crc kubenswrapper[4881]: I0121 11:01:53.052343 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/beca3a20-cc8d-4051-80e4-abefdc51ade5-v4-0-config-system-service-ca\") pod \"oauth-openshift-7fdb5b7d8f-rrqf8\" (UID: \"beca3a20-cc8d-4051-80e4-abefdc51ade5\") " pod="openshift-authentication/oauth-openshift-7fdb5b7d8f-rrqf8" Jan 21 11:01:53 crc kubenswrapper[4881]: I0121 11:01:53.059835 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/beca3a20-cc8d-4051-80e4-abefdc51ade5-v4-0-config-user-template-error\") pod \"oauth-openshift-7fdb5b7d8f-rrqf8\" (UID: \"beca3a20-cc8d-4051-80e4-abefdc51ade5\") " pod="openshift-authentication/oauth-openshift-7fdb5b7d8f-rrqf8" Jan 21 11:01:53 crc kubenswrapper[4881]: I0121 11:01:53.061820 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/beca3a20-cc8d-4051-80e4-abefdc51ade5-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-7fdb5b7d8f-rrqf8\" (UID: \"beca3a20-cc8d-4051-80e4-abefdc51ade5\") " pod="openshift-authentication/oauth-openshift-7fdb5b7d8f-rrqf8" Jan 21 11:01:53 crc kubenswrapper[4881]: I0121 11:01:53.065018 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/beca3a20-cc8d-4051-80e4-abefdc51ade5-v4-0-config-system-session\") pod \"oauth-openshift-7fdb5b7d8f-rrqf8\" (UID: \"beca3a20-cc8d-4051-80e4-abefdc51ade5\") " pod="openshift-authentication/oauth-openshift-7fdb5b7d8f-rrqf8" Jan 21 11:01:53 crc kubenswrapper[4881]: I0121 11:01:53.067372 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/beca3a20-cc8d-4051-80e4-abefdc51ade5-v4-0-config-system-serving-cert\") pod \"oauth-openshift-7fdb5b7d8f-rrqf8\" (UID: \"beca3a20-cc8d-4051-80e4-abefdc51ade5\") " pod="openshift-authentication/oauth-openshift-7fdb5b7d8f-rrqf8" Jan 21 11:01:53 crc kubenswrapper[4881]: I0121 11:01:53.069596 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kwwx7\" (UniqueName: \"kubernetes.io/projected/beca3a20-cc8d-4051-80e4-abefdc51ade5-kube-api-access-kwwx7\") pod \"oauth-openshift-7fdb5b7d8f-rrqf8\" (UID: \"beca3a20-cc8d-4051-80e4-abefdc51ade5\") " pod="openshift-authentication/oauth-openshift-7fdb5b7d8f-rrqf8" Jan 21 11:01:53 crc kubenswrapper[4881]: I0121 11:01:53.075225 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/beca3a20-cc8d-4051-80e4-abefdc51ade5-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-7fdb5b7d8f-rrqf8\" (UID: \"beca3a20-cc8d-4051-80e4-abefdc51ade5\") " pod="openshift-authentication/oauth-openshift-7fdb5b7d8f-rrqf8" Jan 21 11:01:53 crc kubenswrapper[4881]: I0121 11:01:53.078012 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/beca3a20-cc8d-4051-80e4-abefdc51ade5-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-7fdb5b7d8f-rrqf8\" (UID: \"beca3a20-cc8d-4051-80e4-abefdc51ade5\") " pod="openshift-authentication/oauth-openshift-7fdb5b7d8f-rrqf8" Jan 21 11:01:53 crc kubenswrapper[4881]: I0121 11:01:53.082751 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/beca3a20-cc8d-4051-80e4-abefdc51ade5-v4-0-config-system-router-certs\") pod \"oauth-openshift-7fdb5b7d8f-rrqf8\" (UID: \"beca3a20-cc8d-4051-80e4-abefdc51ade5\") " pod="openshift-authentication/oauth-openshift-7fdb5b7d8f-rrqf8" Jan 21 11:01:53 crc kubenswrapper[4881]: I0121 11:01:53.084826 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: 
\"kubernetes.io/secret/beca3a20-cc8d-4051-80e4-abefdc51ade5-v4-0-config-user-template-login\") pod \"oauth-openshift-7fdb5b7d8f-rrqf8\" (UID: \"beca3a20-cc8d-4051-80e4-abefdc51ade5\") " pod="openshift-authentication/oauth-openshift-7fdb5b7d8f-rrqf8" Jan 21 11:01:53 crc kubenswrapper[4881]: I0121 11:01:53.143977 4881 generic.go:334] "Generic (PLEG): container finished" podID="2c460bf5-05a1-4977-b889-1a5c3263df33" containerID="7e5f304bc82a020e253bc1850121534b947e1ce59d3cde3e998cffd1481389a2" exitCode=0 Jan 21 11:01:53 crc kubenswrapper[4881]: I0121 11:01:53.144121 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6rmvm" event={"ID":"2c460bf5-05a1-4977-b889-1a5c3263df33","Type":"ContainerDied","Data":"7e5f304bc82a020e253bc1850121534b947e1ce59d3cde3e998cffd1481389a2"} Jan 21 11:01:53 crc kubenswrapper[4881]: I0121 11:01:53.154756 4881 generic.go:334] "Generic (PLEG): container finished" podID="5b12596d-1f5f-4d81-b664-d0ddee72552c" containerID="c77f2373cbe2c6efce94e010b4a6e7c282b2ba984b2b3fef90734b6c51cc06d7" exitCode=0 Jan 21 11:01:53 crc kubenswrapper[4881]: I0121 11:01:53.154901 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2sqlm" event={"ID":"5b12596d-1f5f-4d81-b664-d0ddee72552c","Type":"ContainerDied","Data":"c77f2373cbe2c6efce94e010b4a6e7c282b2ba984b2b3fef90734b6c51cc06d7"} Jan 21 11:01:53 crc kubenswrapper[4881]: I0121 11:01:53.156904 4881 generic.go:334] "Generic (PLEG): container finished" podID="706c6a3b-823b-4ea3-b7a8-e20d571d3ace" containerID="9c8c8d93509d2a29c183d63351f0748ec6e60414dbb285df980924884b598111" exitCode=0 Jan 21 11:01:53 crc kubenswrapper[4881]: I0121 11:01:53.156963 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5xwk8" event={"ID":"706c6a3b-823b-4ea3-b7a8-e20d571d3ace","Type":"ContainerDied","Data":"9c8c8d93509d2a29c183d63351f0748ec6e60414dbb285df980924884b598111"} Jan 21 11:01:53 crc kubenswrapper[4881]: I0121 11:01:53.161623 4881 generic.go:334] "Generic (PLEG): container finished" podID="002a39eb-e2e0-4d3e-8f61-89a539a653a9" containerID="6b8fc2aac0518f9de92cee69b4b59a05f08ed2161c480a5655d85171be0e5a8b" exitCode=0 Jan 21 11:01:53 crc kubenswrapper[4881]: I0121 11:01:53.161737 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-wjlxh" event={"ID":"002a39eb-e2e0-4d3e-8f61-89a539a653a9","Type":"ContainerDied","Data":"6b8fc2aac0518f9de92cee69b4b59a05f08ed2161c480a5655d85171be0e5a8b"} Jan 21 11:01:53 crc kubenswrapper[4881]: I0121 11:01:53.233723 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 21 11:01:53 crc kubenswrapper[4881]: I0121 11:01:53.242260 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-7fdb5b7d8f-rrqf8" Jan 21 11:01:53 crc kubenswrapper[4881]: I0121 11:01:53.323252 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad" path="/var/lib/kubelet/pods/2945fc4c-0c0b-4e2d-97f9-a769b6edb8ad/volumes" Jan 21 11:01:53 crc kubenswrapper[4881]: I0121 11:01:53.326738 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-wjlxh" Jan 21 11:01:53 crc kubenswrapper[4881]: I0121 11:01:53.396190 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-cfcdf47c7-fppdw"] Jan 21 11:01:53 crc kubenswrapper[4881]: E0121 11:01:53.397301 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="002a39eb-e2e0-4d3e-8f61-89a539a653a9" containerName="controller-manager" Jan 21 11:01:53 crc kubenswrapper[4881]: I0121 11:01:53.397330 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="002a39eb-e2e0-4d3e-8f61-89a539a653a9" containerName="controller-manager" Jan 21 11:01:53 crc kubenswrapper[4881]: I0121 11:01:53.397702 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="002a39eb-e2e0-4d3e-8f61-89a539a653a9" containerName="controller-manager" Jan 21 11:01:53 crc kubenswrapper[4881]: I0121 11:01:53.398921 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-cfcdf47c7-fppdw" Jan 21 11:01:53 crc kubenswrapper[4881]: I0121 11:01:53.401070 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-cfcdf47c7-fppdw"] Jan 21 11:01:53 crc kubenswrapper[4881]: I0121 11:01:53.437956 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6rmvm" Jan 21 11:01:53 crc kubenswrapper[4881]: I0121 11:01:53.462818 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vn8zf\" (UniqueName: \"kubernetes.io/projected/002a39eb-e2e0-4d3e-8f61-89a539a653a9-kube-api-access-vn8zf\") pod \"002a39eb-e2e0-4d3e-8f61-89a539a653a9\" (UID: \"002a39eb-e2e0-4d3e-8f61-89a539a653a9\") " Jan 21 11:01:53 crc kubenswrapper[4881]: I0121 11:01:53.462993 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/002a39eb-e2e0-4d3e-8f61-89a539a653a9-proxy-ca-bundles\") pod \"002a39eb-e2e0-4d3e-8f61-89a539a653a9\" (UID: \"002a39eb-e2e0-4d3e-8f61-89a539a653a9\") " Jan 21 11:01:53 crc kubenswrapper[4881]: I0121 11:01:53.463054 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/002a39eb-e2e0-4d3e-8f61-89a539a653a9-serving-cert\") pod \"002a39eb-e2e0-4d3e-8f61-89a539a653a9\" (UID: \"002a39eb-e2e0-4d3e-8f61-89a539a653a9\") " Jan 21 11:01:53 crc kubenswrapper[4881]: I0121 11:01:53.463116 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/002a39eb-e2e0-4d3e-8f61-89a539a653a9-client-ca\") pod \"002a39eb-e2e0-4d3e-8f61-89a539a653a9\" (UID: \"002a39eb-e2e0-4d3e-8f61-89a539a653a9\") " Jan 21 11:01:53 crc kubenswrapper[4881]: I0121 11:01:53.463163 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/002a39eb-e2e0-4d3e-8f61-89a539a653a9-config\") pod \"002a39eb-e2e0-4d3e-8f61-89a539a653a9\" (UID: \"002a39eb-e2e0-4d3e-8f61-89a539a653a9\") " Jan 21 11:01:53 crc kubenswrapper[4881]: I0121 11:01:53.463405 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/89559857-e73d-4f35-838d-c0b0946939d4-config\") pod \"controller-manager-cfcdf47c7-fppdw\" (UID: 
\"89559857-e73d-4f35-838d-c0b0946939d4\") " pod="openshift-controller-manager/controller-manager-cfcdf47c7-fppdw" Jan 21 11:01:53 crc kubenswrapper[4881]: I0121 11:01:53.463450 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/89559857-e73d-4f35-838d-c0b0946939d4-proxy-ca-bundles\") pod \"controller-manager-cfcdf47c7-fppdw\" (UID: \"89559857-e73d-4f35-838d-c0b0946939d4\") " pod="openshift-controller-manager/controller-manager-cfcdf47c7-fppdw" Jan 21 11:01:53 crc kubenswrapper[4881]: I0121 11:01:53.463508 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/89559857-e73d-4f35-838d-c0b0946939d4-serving-cert\") pod \"controller-manager-cfcdf47c7-fppdw\" (UID: \"89559857-e73d-4f35-838d-c0b0946939d4\") " pod="openshift-controller-manager/controller-manager-cfcdf47c7-fppdw" Jan 21 11:01:53 crc kubenswrapper[4881]: I0121 11:01:53.463578 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/89559857-e73d-4f35-838d-c0b0946939d4-client-ca\") pod \"controller-manager-cfcdf47c7-fppdw\" (UID: \"89559857-e73d-4f35-838d-c0b0946939d4\") " pod="openshift-controller-manager/controller-manager-cfcdf47c7-fppdw" Jan 21 11:01:53 crc kubenswrapper[4881]: I0121 11:01:53.463602 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v4kr9\" (UniqueName: \"kubernetes.io/projected/89559857-e73d-4f35-838d-c0b0946939d4-kube-api-access-v4kr9\") pod \"controller-manager-cfcdf47c7-fppdw\" (UID: \"89559857-e73d-4f35-838d-c0b0946939d4\") " pod="openshift-controller-manager/controller-manager-cfcdf47c7-fppdw" Jan 21 11:01:53 crc kubenswrapper[4881]: I0121 11:01:53.469379 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/002a39eb-e2e0-4d3e-8f61-89a539a653a9-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "002a39eb-e2e0-4d3e-8f61-89a539a653a9" (UID: "002a39eb-e2e0-4d3e-8f61-89a539a653a9"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:01:53 crc kubenswrapper[4881]: I0121 11:01:53.470434 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/002a39eb-e2e0-4d3e-8f61-89a539a653a9-config" (OuterVolumeSpecName: "config") pod "002a39eb-e2e0-4d3e-8f61-89a539a653a9" (UID: "002a39eb-e2e0-4d3e-8f61-89a539a653a9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:01:53 crc kubenswrapper[4881]: I0121 11:01:53.473582 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/002a39eb-e2e0-4d3e-8f61-89a539a653a9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "002a39eb-e2e0-4d3e-8f61-89a539a653a9" (UID: "002a39eb-e2e0-4d3e-8f61-89a539a653a9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:01:53 crc kubenswrapper[4881]: I0121 11:01:53.476655 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/002a39eb-e2e0-4d3e-8f61-89a539a653a9-client-ca" (OuterVolumeSpecName: "client-ca") pod "002a39eb-e2e0-4d3e-8f61-89a539a653a9" (UID: "002a39eb-e2e0-4d3e-8f61-89a539a653a9"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:01:53 crc kubenswrapper[4881]: I0121 11:01:53.477425 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/002a39eb-e2e0-4d3e-8f61-89a539a653a9-kube-api-access-vn8zf" (OuterVolumeSpecName: "kube-api-access-vn8zf") pod "002a39eb-e2e0-4d3e-8f61-89a539a653a9" (UID: "002a39eb-e2e0-4d3e-8f61-89a539a653a9"). InnerVolumeSpecName "kube-api-access-vn8zf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:01:53 crc kubenswrapper[4881]: I0121 11:01:53.514612 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-2sqlm" Jan 21 11:01:53 crc kubenswrapper[4881]: I0121 11:01:53.523109 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5xwk8" Jan 21 11:01:53 crc kubenswrapper[4881]: I0121 11:01:53.565558 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/706c6a3b-823b-4ea3-b7a8-e20d571d3ace-config\") pod \"706c6a3b-823b-4ea3-b7a8-e20d571d3ace\" (UID: \"706c6a3b-823b-4ea3-b7a8-e20d571d3ace\") " Jan 21 11:01:53 crc kubenswrapper[4881]: I0121 11:01:53.565648 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lrsm4\" (UniqueName: \"kubernetes.io/projected/5b12596d-1f5f-4d81-b664-d0ddee72552c-kube-api-access-lrsm4\") pod \"5b12596d-1f5f-4d81-b664-d0ddee72552c\" (UID: \"5b12596d-1f5f-4d81-b664-d0ddee72552c\") " Jan 21 11:01:53 crc kubenswrapper[4881]: I0121 11:01:53.565724 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b12596d-1f5f-4d81-b664-d0ddee72552c-utilities\") pod \"5b12596d-1f5f-4d81-b664-d0ddee72552c\" (UID: \"5b12596d-1f5f-4d81-b664-d0ddee72552c\") " Jan 21 11:01:53 crc kubenswrapper[4881]: I0121 11:01:53.565877 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/706c6a3b-823b-4ea3-b7a8-e20d571d3ace-client-ca\") pod \"706c6a3b-823b-4ea3-b7a8-e20d571d3ace\" (UID: \"706c6a3b-823b-4ea3-b7a8-e20d571d3ace\") " Jan 21 11:01:53 crc kubenswrapper[4881]: I0121 11:01:53.565908 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p2dkc\" (UniqueName: \"kubernetes.io/projected/2c460bf5-05a1-4977-b889-1a5c3263df33-kube-api-access-p2dkc\") pod \"2c460bf5-05a1-4977-b889-1a5c3263df33\" (UID: \"2c460bf5-05a1-4977-b889-1a5c3263df33\") " Jan 21 11:01:53 crc kubenswrapper[4881]: I0121 11:01:53.565976 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/706c6a3b-823b-4ea3-b7a8-e20d571d3ace-serving-cert\") pod \"706c6a3b-823b-4ea3-b7a8-e20d571d3ace\" (UID: \"706c6a3b-823b-4ea3-b7a8-e20d571d3ace\") " Jan 21 11:01:53 crc kubenswrapper[4881]: I0121 11:01:53.566021 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b12596d-1f5f-4d81-b664-d0ddee72552c-catalog-content\") pod \"5b12596d-1f5f-4d81-b664-d0ddee72552c\" (UID: \"5b12596d-1f5f-4d81-b664-d0ddee72552c\") " Jan 21 11:01:53 crc kubenswrapper[4881]: I0121 11:01:53.566058 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2c460bf5-05a1-4977-b889-1a5c3263df33-utilities\") pod \"2c460bf5-05a1-4977-b889-1a5c3263df33\" (UID: \"2c460bf5-05a1-4977-b889-1a5c3263df33\") " Jan 21 11:01:53 crc kubenswrapper[4881]: I0121 11:01:53.566157 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2c460bf5-05a1-4977-b889-1a5c3263df33-catalog-content\") pod \"2c460bf5-05a1-4977-b889-1a5c3263df33\" (UID: \"2c460bf5-05a1-4977-b889-1a5c3263df33\") " Jan 21 11:01:53 crc kubenswrapper[4881]: I0121 11:01:53.566194 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9kgjc\" (UniqueName: \"kubernetes.io/projected/706c6a3b-823b-4ea3-b7a8-e20d571d3ace-kube-api-access-9kgjc\") pod \"706c6a3b-823b-4ea3-b7a8-e20d571d3ace\" (UID: \"706c6a3b-823b-4ea3-b7a8-e20d571d3ace\") " Jan 21 11:01:53 crc kubenswrapper[4881]: I0121 11:01:53.566469 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/89559857-e73d-4f35-838d-c0b0946939d4-serving-cert\") pod \"controller-manager-cfcdf47c7-fppdw\" (UID: \"89559857-e73d-4f35-838d-c0b0946939d4\") " pod="openshift-controller-manager/controller-manager-cfcdf47c7-fppdw" Jan 21 11:01:53 crc kubenswrapper[4881]: I0121 11:01:53.566564 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/89559857-e73d-4f35-838d-c0b0946939d4-client-ca\") pod \"controller-manager-cfcdf47c7-fppdw\" (UID: \"89559857-e73d-4f35-838d-c0b0946939d4\") " pod="openshift-controller-manager/controller-manager-cfcdf47c7-fppdw" Jan 21 11:01:53 crc kubenswrapper[4881]: I0121 11:01:53.566590 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v4kr9\" (UniqueName: \"kubernetes.io/projected/89559857-e73d-4f35-838d-c0b0946939d4-kube-api-access-v4kr9\") pod \"controller-manager-cfcdf47c7-fppdw\" (UID: \"89559857-e73d-4f35-838d-c0b0946939d4\") " pod="openshift-controller-manager/controller-manager-cfcdf47c7-fppdw" Jan 21 11:01:53 crc kubenswrapper[4881]: I0121 11:01:53.566619 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/89559857-e73d-4f35-838d-c0b0946939d4-config\") pod \"controller-manager-cfcdf47c7-fppdw\" (UID: \"89559857-e73d-4f35-838d-c0b0946939d4\") " pod="openshift-controller-manager/controller-manager-cfcdf47c7-fppdw" Jan 21 11:01:53 crc kubenswrapper[4881]: I0121 11:01:53.566645 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/89559857-e73d-4f35-838d-c0b0946939d4-proxy-ca-bundles\") pod \"controller-manager-cfcdf47c7-fppdw\" (UID: \"89559857-e73d-4f35-838d-c0b0946939d4\") " pod="openshift-controller-manager/controller-manager-cfcdf47c7-fppdw" Jan 21 11:01:53 crc kubenswrapper[4881]: I0121 11:01:53.566692 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vn8zf\" (UniqueName: \"kubernetes.io/projected/002a39eb-e2e0-4d3e-8f61-89a539a653a9-kube-api-access-vn8zf\") on node \"crc\" DevicePath \"\"" Jan 21 11:01:53 crc kubenswrapper[4881]: I0121 11:01:53.566704 4881 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/002a39eb-e2e0-4d3e-8f61-89a539a653a9-proxy-ca-bundles\") on node \"crc\" 
DevicePath \"\"" Jan 21 11:01:53 crc kubenswrapper[4881]: I0121 11:01:53.566713 4881 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/002a39eb-e2e0-4d3e-8f61-89a539a653a9-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 11:01:53 crc kubenswrapper[4881]: I0121 11:01:53.566723 4881 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/002a39eb-e2e0-4d3e-8f61-89a539a653a9-client-ca\") on node \"crc\" DevicePath \"\"" Jan 21 11:01:53 crc kubenswrapper[4881]: I0121 11:01:53.566733 4881 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/002a39eb-e2e0-4d3e-8f61-89a539a653a9-config\") on node \"crc\" DevicePath \"\"" Jan 21 11:01:53 crc kubenswrapper[4881]: I0121 11:01:53.568747 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/89559857-e73d-4f35-838d-c0b0946939d4-proxy-ca-bundles\") pod \"controller-manager-cfcdf47c7-fppdw\" (UID: \"89559857-e73d-4f35-838d-c0b0946939d4\") " pod="openshift-controller-manager/controller-manager-cfcdf47c7-fppdw" Jan 21 11:01:53 crc kubenswrapper[4881]: I0121 11:01:53.569876 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2c460bf5-05a1-4977-b889-1a5c3263df33-utilities" (OuterVolumeSpecName: "utilities") pod "2c460bf5-05a1-4977-b889-1a5c3263df33" (UID: "2c460bf5-05a1-4977-b889-1a5c3263df33"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 11:01:53 crc kubenswrapper[4881]: I0121 11:01:53.571018 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5b12596d-1f5f-4d81-b664-d0ddee72552c-utilities" (OuterVolumeSpecName: "utilities") pod "5b12596d-1f5f-4d81-b664-d0ddee72552c" (UID: "5b12596d-1f5f-4d81-b664-d0ddee72552c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 11:01:53 crc kubenswrapper[4881]: I0121 11:01:53.572729 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/706c6a3b-823b-4ea3-b7a8-e20d571d3ace-config" (OuterVolumeSpecName: "config") pod "706c6a3b-823b-4ea3-b7a8-e20d571d3ace" (UID: "706c6a3b-823b-4ea3-b7a8-e20d571d3ace"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:01:53 crc kubenswrapper[4881]: I0121 11:01:53.574404 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/89559857-e73d-4f35-838d-c0b0946939d4-client-ca\") pod \"controller-manager-cfcdf47c7-fppdw\" (UID: \"89559857-e73d-4f35-838d-c0b0946939d4\") " pod="openshift-controller-manager/controller-manager-cfcdf47c7-fppdw" Jan 21 11:01:53 crc kubenswrapper[4881]: I0121 11:01:53.574690 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/706c6a3b-823b-4ea3-b7a8-e20d571d3ace-client-ca" (OuterVolumeSpecName: "client-ca") pod "706c6a3b-823b-4ea3-b7a8-e20d571d3ace" (UID: "706c6a3b-823b-4ea3-b7a8-e20d571d3ace"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:01:53 crc kubenswrapper[4881]: I0121 11:01:53.578508 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/89559857-e73d-4f35-838d-c0b0946939d4-config\") pod \"controller-manager-cfcdf47c7-fppdw\" (UID: \"89559857-e73d-4f35-838d-c0b0946939d4\") " pod="openshift-controller-manager/controller-manager-cfcdf47c7-fppdw" Jan 21 11:01:53 crc kubenswrapper[4881]: I0121 11:01:53.579122 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/89559857-e73d-4f35-838d-c0b0946939d4-serving-cert\") pod \"controller-manager-cfcdf47c7-fppdw\" (UID: \"89559857-e73d-4f35-838d-c0b0946939d4\") " pod="openshift-controller-manager/controller-manager-cfcdf47c7-fppdw" Jan 21 11:01:53 crc kubenswrapper[4881]: I0121 11:01:53.579273 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b12596d-1f5f-4d81-b664-d0ddee72552c-kube-api-access-lrsm4" (OuterVolumeSpecName: "kube-api-access-lrsm4") pod "5b12596d-1f5f-4d81-b664-d0ddee72552c" (UID: "5b12596d-1f5f-4d81-b664-d0ddee72552c"). InnerVolumeSpecName "kube-api-access-lrsm4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:01:53 crc kubenswrapper[4881]: I0121 11:01:53.581644 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2c460bf5-05a1-4977-b889-1a5c3263df33-kube-api-access-p2dkc" (OuterVolumeSpecName: "kube-api-access-p2dkc") pod "2c460bf5-05a1-4977-b889-1a5c3263df33" (UID: "2c460bf5-05a1-4977-b889-1a5c3263df33"). InnerVolumeSpecName "kube-api-access-p2dkc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:01:53 crc kubenswrapper[4881]: I0121 11:01:53.582881 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/706c6a3b-823b-4ea3-b7a8-e20d571d3ace-kube-api-access-9kgjc" (OuterVolumeSpecName: "kube-api-access-9kgjc") pod "706c6a3b-823b-4ea3-b7a8-e20d571d3ace" (UID: "706c6a3b-823b-4ea3-b7a8-e20d571d3ace"). InnerVolumeSpecName "kube-api-access-9kgjc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:01:53 crc kubenswrapper[4881]: I0121 11:01:53.584949 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/706c6a3b-823b-4ea3-b7a8-e20d571d3ace-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "706c6a3b-823b-4ea3-b7a8-e20d571d3ace" (UID: "706c6a3b-823b-4ea3-b7a8-e20d571d3ace"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:01:53 crc kubenswrapper[4881]: I0121 11:01:53.595732 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v4kr9\" (UniqueName: \"kubernetes.io/projected/89559857-e73d-4f35-838d-c0b0946939d4-kube-api-access-v4kr9\") pod \"controller-manager-cfcdf47c7-fppdw\" (UID: \"89559857-e73d-4f35-838d-c0b0946939d4\") " pod="openshift-controller-manager/controller-manager-cfcdf47c7-fppdw" Jan 21 11:01:53 crc kubenswrapper[4881]: I0121 11:01:53.625656 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5b12596d-1f5f-4d81-b664-d0ddee72552c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5b12596d-1f5f-4d81-b664-d0ddee72552c" (UID: "5b12596d-1f5f-4d81-b664-d0ddee72552c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 11:01:53 crc kubenswrapper[4881]: I0121 11:01:53.642629 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2c460bf5-05a1-4977-b889-1a5c3263df33-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2c460bf5-05a1-4977-b889-1a5c3263df33" (UID: "2c460bf5-05a1-4977-b889-1a5c3263df33"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 11:01:53 crc kubenswrapper[4881]: I0121 11:01:53.668640 4881 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2c460bf5-05a1-4977-b889-1a5c3263df33-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 11:01:53 crc kubenswrapper[4881]: I0121 11:01:53.668677 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9kgjc\" (UniqueName: \"kubernetes.io/projected/706c6a3b-823b-4ea3-b7a8-e20d571d3ace-kube-api-access-9kgjc\") on node \"crc\" DevicePath \"\"" Jan 21 11:01:53 crc kubenswrapper[4881]: I0121 11:01:53.668692 4881 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/706c6a3b-823b-4ea3-b7a8-e20d571d3ace-config\") on node \"crc\" DevicePath \"\"" Jan 21 11:01:53 crc kubenswrapper[4881]: I0121 11:01:53.668705 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lrsm4\" (UniqueName: \"kubernetes.io/projected/5b12596d-1f5f-4d81-b664-d0ddee72552c-kube-api-access-lrsm4\") on node \"crc\" DevicePath \"\"" Jan 21 11:01:53 crc kubenswrapper[4881]: I0121 11:01:53.668715 4881 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b12596d-1f5f-4d81-b664-d0ddee72552c-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 11:01:53 crc kubenswrapper[4881]: I0121 11:01:53.668724 4881 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/706c6a3b-823b-4ea3-b7a8-e20d571d3ace-client-ca\") on node \"crc\" DevicePath \"\"" Jan 21 11:01:53 crc kubenswrapper[4881]: I0121 11:01:53.668734 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p2dkc\" (UniqueName: \"kubernetes.io/projected/2c460bf5-05a1-4977-b889-1a5c3263df33-kube-api-access-p2dkc\") on node \"crc\" DevicePath \"\"" Jan 21 11:01:53 crc kubenswrapper[4881]: I0121 11:01:53.668744 4881 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/706c6a3b-823b-4ea3-b7a8-e20d571d3ace-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 11:01:53 crc kubenswrapper[4881]: I0121 11:01:53.668752 4881 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b12596d-1f5f-4d81-b664-d0ddee72552c-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 11:01:53 crc kubenswrapper[4881]: I0121 11:01:53.668761 4881 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2c460bf5-05a1-4977-b889-1a5c3263df33-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 11:01:53 crc kubenswrapper[4881]: I0121 11:01:53.727757 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-cfcdf47c7-fppdw" Jan 21 11:01:53 crc kubenswrapper[4881]: I0121 11:01:53.827699 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-7fdb5b7d8f-rrqf8"] Jan 21 11:01:53 crc kubenswrapper[4881]: W0121 11:01:53.835845 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbeca3a20_cc8d_4051_80e4_abefdc51ade5.slice/crio-a056711b9c51d593aca8331517f6165e9d28333e5d223c19de2b24f717912a83 WatchSource:0}: Error finding container a056711b9c51d593aca8331517f6165e9d28333e5d223c19de2b24f717912a83: Status 404 returned error can't find the container with id a056711b9c51d593aca8331517f6165e9d28333e5d223c19de2b24f717912a83 Jan 21 11:01:53 crc kubenswrapper[4881]: I0121 11:01:53.881367 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vljfh"] Jan 21 11:01:53 crc kubenswrapper[4881]: I0121 11:01:53.881834 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-vljfh" podUID="1d66b837-f7b1-4795-895f-08cdabe48b37" containerName="registry-server" containerID="cri-o://0e3e6281eef028f6cd4f512b5ed4a48f81805bf0232c271e4efbf06a7853a75b" gracePeriod=2 Jan 21 11:01:54 crc kubenswrapper[4881]: I0121 11:01:54.071403 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-t4zlb"] Jan 21 11:01:54 crc kubenswrapper[4881]: I0121 11:01:54.072827 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-t4zlb" podUID="b83e71f8-970c-4afc-ac31-264c7ca6625a" containerName="registry-server" containerID="cri-o://7e551acaa20677090959425a7116a2212e0375845f7e600b54464bccf79b4461" gracePeriod=2 Jan 21 11:01:54 crc kubenswrapper[4881]: I0121 11:01:54.076613 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-cfcdf47c7-fppdw"] Jan 21 11:01:54 crc kubenswrapper[4881]: I0121 11:01:54.186808 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5xwk8" Jan 21 11:01:54 crc kubenswrapper[4881]: I0121 11:01:54.186829 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5xwk8" event={"ID":"706c6a3b-823b-4ea3-b7a8-e20d571d3ace","Type":"ContainerDied","Data":"22d022e22752b1a845c64ff7297933c2f9f91e223d3640540e2ab737fe1ace78"} Jan 21 11:01:54 crc kubenswrapper[4881]: I0121 11:01:54.186907 4881 scope.go:117] "RemoveContainer" containerID="9c8c8d93509d2a29c183d63351f0748ec6e60414dbb285df980924884b598111" Jan 21 11:01:54 crc kubenswrapper[4881]: I0121 11:01:54.195036 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-wjlxh" Jan 21 11:01:54 crc kubenswrapper[4881]: I0121 11:01:54.195977 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-wjlxh" event={"ID":"002a39eb-e2e0-4d3e-8f61-89a539a653a9","Type":"ContainerDied","Data":"fec206b72c4648e66af3adcacd7cb5106e2766bcb34d529fae1cd757bd777535"} Jan 21 11:01:54 crc kubenswrapper[4881]: I0121 11:01:54.206076 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-7fdb5b7d8f-rrqf8" event={"ID":"beca3a20-cc8d-4051-80e4-abefdc51ade5","Type":"ContainerStarted","Data":"a056711b9c51d593aca8331517f6165e9d28333e5d223c19de2b24f717912a83"} Jan 21 11:01:54 crc kubenswrapper[4881]: I0121 11:01:54.214462 4881 generic.go:334] "Generic (PLEG): container finished" podID="1d66b837-f7b1-4795-895f-08cdabe48b37" containerID="0e3e6281eef028f6cd4f512b5ed4a48f81805bf0232c271e4efbf06a7853a75b" exitCode=0 Jan 21 11:01:54 crc kubenswrapper[4881]: I0121 11:01:54.214570 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vljfh" event={"ID":"1d66b837-f7b1-4795-895f-08cdabe48b37","Type":"ContainerDied","Data":"0e3e6281eef028f6cd4f512b5ed4a48f81805bf0232c271e4efbf06a7853a75b"} Jan 21 11:01:54 crc kubenswrapper[4881]: I0121 11:01:54.226167 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6rmvm" event={"ID":"2c460bf5-05a1-4977-b889-1a5c3263df33","Type":"ContainerDied","Data":"c3a0b0298aa8ab878f3e521eb0f166ff0e56c334391018119468d1c2b03f0be9"} Jan 21 11:01:54 crc kubenswrapper[4881]: I0121 11:01:54.226322 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6rmvm" Jan 21 11:01:54 crc kubenswrapper[4881]: I0121 11:01:54.228472 4881 scope.go:117] "RemoveContainer" containerID="6b8fc2aac0518f9de92cee69b4b59a05f08ed2161c480a5655d85171be0e5a8b" Jan 21 11:01:54 crc kubenswrapper[4881]: I0121 11:01:54.234629 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2sqlm" event={"ID":"5b12596d-1f5f-4d81-b664-d0ddee72552c","Type":"ContainerDied","Data":"06bab0b00f0f71fd0a092b84dfd550234e778896541edbd10dbb4f1a0cb5d5b8"} Jan 21 11:01:54 crc kubenswrapper[4881]: I0121 11:01:54.234767 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-2sqlm" Jan 21 11:01:54 crc kubenswrapper[4881]: I0121 11:01:54.245544 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-cfcdf47c7-fppdw" event={"ID":"89559857-e73d-4f35-838d-c0b0946939d4","Type":"ContainerStarted","Data":"ebe56607ace74705e145d654d7bc2814291ec5e33259c85e6447339814042d78"} Jan 21 11:01:54 crc kubenswrapper[4881]: I0121 11:01:54.262353 4881 scope.go:117] "RemoveContainer" containerID="7e5f304bc82a020e253bc1850121534b947e1ce59d3cde3e998cffd1481389a2" Jan 21 11:01:54 crc kubenswrapper[4881]: I0121 11:01:54.268480 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-wjlxh"] Jan 21 11:01:54 crc kubenswrapper[4881]: I0121 11:01:54.274417 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-wjlxh"] Jan 21 11:01:54 crc kubenswrapper[4881]: I0121 11:01:54.323624 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vljfh" Jan 21 11:01:54 crc kubenswrapper[4881]: I0121 11:01:54.330602 4881 scope.go:117] "RemoveContainer" containerID="db0493653bc30919d4352c24df01a207c2de62ad8f1fa10ff346fcc988a5549e" Jan 21 11:01:54 crc kubenswrapper[4881]: I0121 11:01:54.340015 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-5xwk8"] Jan 21 11:01:54 crc kubenswrapper[4881]: I0121 11:01:54.348470 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-5xwk8"] Jan 21 11:01:54 crc kubenswrapper[4881]: I0121 11:01:54.357014 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-6rmvm"] Jan 21 11:01:54 crc kubenswrapper[4881]: I0121 11:01:54.368027 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-6rmvm"] Jan 21 11:01:54 crc kubenswrapper[4881]: I0121 11:01:54.372235 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-2sqlm"] Jan 21 11:01:54 crc kubenswrapper[4881]: I0121 11:01:54.377395 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-2sqlm"] Jan 21 11:01:54 crc kubenswrapper[4881]: I0121 11:01:54.381687 4881 scope.go:117] "RemoveContainer" containerID="21ab48233ffe1978a9c9e6217e5905832c0304da6f07fa2e19daa5ca75ac0da7" Jan 21 11:01:54 crc kubenswrapper[4881]: I0121 11:01:54.385845 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d66b837-f7b1-4795-895f-08cdabe48b37-catalog-content\") pod \"1d66b837-f7b1-4795-895f-08cdabe48b37\" (UID: \"1d66b837-f7b1-4795-895f-08cdabe48b37\") " Jan 21 11:01:54 crc kubenswrapper[4881]: I0121 11:01:54.386008 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d66b837-f7b1-4795-895f-08cdabe48b37-utilities\") pod \"1d66b837-f7b1-4795-895f-08cdabe48b37\" (UID: \"1d66b837-f7b1-4795-895f-08cdabe48b37\") " Jan 21 11:01:54 crc kubenswrapper[4881]: I0121 11:01:54.386074 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b56ld\" (UniqueName: 
\"kubernetes.io/projected/1d66b837-f7b1-4795-895f-08cdabe48b37-kube-api-access-b56ld\") pod \"1d66b837-f7b1-4795-895f-08cdabe48b37\" (UID: \"1d66b837-f7b1-4795-895f-08cdabe48b37\") " Jan 21 11:01:54 crc kubenswrapper[4881]: I0121 11:01:54.390847 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d66b837-f7b1-4795-895f-08cdabe48b37-utilities" (OuterVolumeSpecName: "utilities") pod "1d66b837-f7b1-4795-895f-08cdabe48b37" (UID: "1d66b837-f7b1-4795-895f-08cdabe48b37"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 11:01:54 crc kubenswrapper[4881]: I0121 11:01:54.402677 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d66b837-f7b1-4795-895f-08cdabe48b37-kube-api-access-b56ld" (OuterVolumeSpecName: "kube-api-access-b56ld") pod "1d66b837-f7b1-4795-895f-08cdabe48b37" (UID: "1d66b837-f7b1-4795-895f-08cdabe48b37"). InnerVolumeSpecName "kube-api-access-b56ld". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:01:54 crc kubenswrapper[4881]: I0121 11:01:54.412394 4881 scope.go:117] "RemoveContainer" containerID="c77f2373cbe2c6efce94e010b4a6e7c282b2ba984b2b3fef90734b6c51cc06d7" Jan 21 11:01:54 crc kubenswrapper[4881]: I0121 11:01:54.422269 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d66b837-f7b1-4795-895f-08cdabe48b37-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d66b837-f7b1-4795-895f-08cdabe48b37" (UID: "1d66b837-f7b1-4795-895f-08cdabe48b37"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 11:01:54 crc kubenswrapper[4881]: I0121 11:01:54.452597 4881 scope.go:117] "RemoveContainer" containerID="8c58e8e6d9f4309fce56e3b043abdb46d3d4af579c4a6d9ae43870620be9634e" Jan 21 11:01:54 crc kubenswrapper[4881]: I0121 11:01:54.488585 4881 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d66b837-f7b1-4795-895f-08cdabe48b37-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 11:01:54 crc kubenswrapper[4881]: I0121 11:01:54.488635 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b56ld\" (UniqueName: \"kubernetes.io/projected/1d66b837-f7b1-4795-895f-08cdabe48b37-kube-api-access-b56ld\") on node \"crc\" DevicePath \"\"" Jan 21 11:01:54 crc kubenswrapper[4881]: I0121 11:01:54.488651 4881 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d66b837-f7b1-4795-895f-08cdabe48b37-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 11:01:54 crc kubenswrapper[4881]: I0121 11:01:54.501080 4881 scope.go:117] "RemoveContainer" containerID="5aed93291404e255299931c1a9f3a011b1cb4d3b3ce796db1f1b3e7ec12c142e" Jan 21 11:01:54 crc kubenswrapper[4881]: I0121 11:01:54.515278 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-t4zlb" Jan 21 11:01:54 crc kubenswrapper[4881]: I0121 11:01:54.593179 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sn5jn\" (UniqueName: \"kubernetes.io/projected/b83e71f8-970c-4afc-ac31-264c7ca6625a-kube-api-access-sn5jn\") pod \"b83e71f8-970c-4afc-ac31-264c7ca6625a\" (UID: \"b83e71f8-970c-4afc-ac31-264c7ca6625a\") " Jan 21 11:01:54 crc kubenswrapper[4881]: I0121 11:01:54.593336 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b83e71f8-970c-4afc-ac31-264c7ca6625a-utilities\") pod \"b83e71f8-970c-4afc-ac31-264c7ca6625a\" (UID: \"b83e71f8-970c-4afc-ac31-264c7ca6625a\") " Jan 21 11:01:54 crc kubenswrapper[4881]: I0121 11:01:54.593475 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b83e71f8-970c-4afc-ac31-264c7ca6625a-catalog-content\") pod \"b83e71f8-970c-4afc-ac31-264c7ca6625a\" (UID: \"b83e71f8-970c-4afc-ac31-264c7ca6625a\") " Jan 21 11:01:54 crc kubenswrapper[4881]: I0121 11:01:54.594354 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b83e71f8-970c-4afc-ac31-264c7ca6625a-utilities" (OuterVolumeSpecName: "utilities") pod "b83e71f8-970c-4afc-ac31-264c7ca6625a" (UID: "b83e71f8-970c-4afc-ac31-264c7ca6625a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 11:01:54 crc kubenswrapper[4881]: I0121 11:01:54.598753 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b83e71f8-970c-4afc-ac31-264c7ca6625a-kube-api-access-sn5jn" (OuterVolumeSpecName: "kube-api-access-sn5jn") pod "b83e71f8-970c-4afc-ac31-264c7ca6625a" (UID: "b83e71f8-970c-4afc-ac31-264c7ca6625a"). InnerVolumeSpecName "kube-api-access-sn5jn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:01:54 crc kubenswrapper[4881]: I0121 11:01:54.695074 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sn5jn\" (UniqueName: \"kubernetes.io/projected/b83e71f8-970c-4afc-ac31-264c7ca6625a-kube-api-access-sn5jn\") on node \"crc\" DevicePath \"\"" Jan 21 11:01:54 crc kubenswrapper[4881]: I0121 11:01:54.695103 4881 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b83e71f8-970c-4afc-ac31-264c7ca6625a-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 11:01:54 crc kubenswrapper[4881]: I0121 11:01:54.717187 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b83e71f8-970c-4afc-ac31-264c7ca6625a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b83e71f8-970c-4afc-ac31-264c7ca6625a" (UID: "b83e71f8-970c-4afc-ac31-264c7ca6625a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 11:01:54 crc kubenswrapper[4881]: I0121 11:01:54.796696 4881 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b83e71f8-970c-4afc-ac31-264c7ca6625a-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 11:01:55 crc kubenswrapper[4881]: I0121 11:01:55.256675 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-7fdb5b7d8f-rrqf8" event={"ID":"beca3a20-cc8d-4051-80e4-abefdc51ade5","Type":"ContainerStarted","Data":"9d40e077357163b3f00df547a9ac5607b2669655ed19bf3a13296c1d2659a959"} Jan 21 11:01:55 crc kubenswrapper[4881]: I0121 11:01:55.257843 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-7fdb5b7d8f-rrqf8" Jan 21 11:01:55 crc kubenswrapper[4881]: I0121 11:01:55.262625 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vljfh" event={"ID":"1d66b837-f7b1-4795-895f-08cdabe48b37","Type":"ContainerDied","Data":"eb22a93b2892f0c51c953eb6eb827724775592dd8224db01464d1014b0260e0e"} Jan 21 11:01:55 crc kubenswrapper[4881]: I0121 11:01:55.262688 4881 scope.go:117] "RemoveContainer" containerID="0e3e6281eef028f6cd4f512b5ed4a48f81805bf0232c271e4efbf06a7853a75b" Jan 21 11:01:55 crc kubenswrapper[4881]: I0121 11:01:55.262712 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vljfh" Jan 21 11:01:55 crc kubenswrapper[4881]: I0121 11:01:55.266148 4881 generic.go:334] "Generic (PLEG): container finished" podID="b83e71f8-970c-4afc-ac31-264c7ca6625a" containerID="7e551acaa20677090959425a7116a2212e0375845f7e600b54464bccf79b4461" exitCode=0 Jan 21 11:01:55 crc kubenswrapper[4881]: I0121 11:01:55.266205 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t4zlb" event={"ID":"b83e71f8-970c-4afc-ac31-264c7ca6625a","Type":"ContainerDied","Data":"7e551acaa20677090959425a7116a2212e0375845f7e600b54464bccf79b4461"} Jan 21 11:01:55 crc kubenswrapper[4881]: I0121 11:01:55.266229 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t4zlb" event={"ID":"b83e71f8-970c-4afc-ac31-264c7ca6625a","Type":"ContainerDied","Data":"16d7bf5b9f969471865c2f6c0d0043006c1b79484bd1c97e826d3a03374ea542"} Jan 21 11:01:55 crc kubenswrapper[4881]: I0121 11:01:55.266327 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-t4zlb" Jan 21 11:01:55 crc kubenswrapper[4881]: I0121 11:01:55.275581 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-7fdb5b7d8f-rrqf8" Jan 21 11:01:55 crc kubenswrapper[4881]: I0121 11:01:55.282351 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-cfcdf47c7-fppdw" event={"ID":"89559857-e73d-4f35-838d-c0b0946939d4","Type":"ContainerStarted","Data":"e4842f0920f40f8afe25168938541ec7282f9d06248cee97e875afa522eda397"} Jan 21 11:01:55 crc kubenswrapper[4881]: I0121 11:01:55.282894 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-cfcdf47c7-fppdw" Jan 21 11:01:55 crc kubenswrapper[4881]: I0121 11:01:55.288105 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-7fdb5b7d8f-rrqf8" podStartSLOduration=96.288089021 podStartE2EDuration="1m36.288089021s" podCreationTimestamp="2026-01-21 11:00:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 11:01:55.284373626 +0000 UTC m=+302.544330105" watchObservedRunningTime="2026-01-21 11:01:55.288089021 +0000 UTC m=+302.548045490" Jan 21 11:01:55 crc kubenswrapper[4881]: I0121 11:01:55.296566 4881 scope.go:117] "RemoveContainer" containerID="87b3da4f38a8247ed7dbb2b11f2ec14c16c71eee1d17657bf85f241bc0e931f6" Jan 21 11:01:55 crc kubenswrapper[4881]: I0121 11:01:55.297906 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-cfcdf47c7-fppdw" Jan 21 11:01:55 crc kubenswrapper[4881]: I0121 11:01:55.311318 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-cfcdf47c7-fppdw" podStartSLOduration=19.311302444 podStartE2EDuration="19.311302444s" podCreationTimestamp="2026-01-21 11:01:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 11:01:55.308980733 +0000 UTC m=+302.568937212" watchObservedRunningTime="2026-01-21 11:01:55.311302444 +0000 UTC m=+302.571258913" Jan 21 11:01:55 crc kubenswrapper[4881]: I0121 11:01:55.316649 4881 scope.go:117] "RemoveContainer" containerID="ec4a8cdf9092080c2fbbc3ac32eca21f15705f2f8424796b41499693e29b4095" Jan 21 11:01:55 crc kubenswrapper[4881]: I0121 11:01:55.325181 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="002a39eb-e2e0-4d3e-8f61-89a539a653a9" path="/var/lib/kubelet/pods/002a39eb-e2e0-4d3e-8f61-89a539a653a9/volumes" Jan 21 11:01:55 crc kubenswrapper[4881]: I0121 11:01:55.326188 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2c460bf5-05a1-4977-b889-1a5c3263df33" path="/var/lib/kubelet/pods/2c460bf5-05a1-4977-b889-1a5c3263df33/volumes" Jan 21 11:01:55 crc kubenswrapper[4881]: I0121 11:01:55.327399 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b12596d-1f5f-4d81-b664-d0ddee72552c" path="/var/lib/kubelet/pods/5b12596d-1f5f-4d81-b664-d0ddee72552c/volumes" Jan 21 11:01:55 crc kubenswrapper[4881]: I0121 11:01:55.328068 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="706c6a3b-823b-4ea3-b7a8-e20d571d3ace" 
path="/var/lib/kubelet/pods/706c6a3b-823b-4ea3-b7a8-e20d571d3ace/volumes" Jan 21 11:01:55 crc kubenswrapper[4881]: I0121 11:01:55.347090 4881 scope.go:117] "RemoveContainer" containerID="7e551acaa20677090959425a7116a2212e0375845f7e600b54464bccf79b4461" Jan 21 11:01:55 crc kubenswrapper[4881]: I0121 11:01:55.363261 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vljfh"] Jan 21 11:01:55 crc kubenswrapper[4881]: I0121 11:01:55.366223 4881 scope.go:117] "RemoveContainer" containerID="d97aa85fa9dba9a5f261efedffb0ffe8efb44a7c0ff638756658eab20e0bacac" Jan 21 11:01:55 crc kubenswrapper[4881]: I0121 11:01:55.371353 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-vljfh"] Jan 21 11:01:55 crc kubenswrapper[4881]: I0121 11:01:55.408792 4881 scope.go:117] "RemoveContainer" containerID="ae4974769900e5c543fbbb2d217e3f9cdfc7b9998621c36ae6d12bcf65b9b593" Jan 21 11:01:55 crc kubenswrapper[4881]: I0121 11:01:55.412986 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-t4zlb"] Jan 21 11:01:55 crc kubenswrapper[4881]: I0121 11:01:55.416848 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-t4zlb"] Jan 21 11:01:55 crc kubenswrapper[4881]: I0121 11:01:55.446295 4881 scope.go:117] "RemoveContainer" containerID="7e551acaa20677090959425a7116a2212e0375845f7e600b54464bccf79b4461" Jan 21 11:01:55 crc kubenswrapper[4881]: E0121 11:01:55.446847 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7e551acaa20677090959425a7116a2212e0375845f7e600b54464bccf79b4461\": container with ID starting with 7e551acaa20677090959425a7116a2212e0375845f7e600b54464bccf79b4461 not found: ID does not exist" containerID="7e551acaa20677090959425a7116a2212e0375845f7e600b54464bccf79b4461" Jan 21 11:01:55 crc kubenswrapper[4881]: I0121 11:01:55.446889 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7e551acaa20677090959425a7116a2212e0375845f7e600b54464bccf79b4461"} err="failed to get container status \"7e551acaa20677090959425a7116a2212e0375845f7e600b54464bccf79b4461\": rpc error: code = NotFound desc = could not find container \"7e551acaa20677090959425a7116a2212e0375845f7e600b54464bccf79b4461\": container with ID starting with 7e551acaa20677090959425a7116a2212e0375845f7e600b54464bccf79b4461 not found: ID does not exist" Jan 21 11:01:55 crc kubenswrapper[4881]: I0121 11:01:55.446918 4881 scope.go:117] "RemoveContainer" containerID="d97aa85fa9dba9a5f261efedffb0ffe8efb44a7c0ff638756658eab20e0bacac" Jan 21 11:01:55 crc kubenswrapper[4881]: E0121 11:01:55.447313 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d97aa85fa9dba9a5f261efedffb0ffe8efb44a7c0ff638756658eab20e0bacac\": container with ID starting with d97aa85fa9dba9a5f261efedffb0ffe8efb44a7c0ff638756658eab20e0bacac not found: ID does not exist" containerID="d97aa85fa9dba9a5f261efedffb0ffe8efb44a7c0ff638756658eab20e0bacac" Jan 21 11:01:55 crc kubenswrapper[4881]: I0121 11:01:55.447465 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d97aa85fa9dba9a5f261efedffb0ffe8efb44a7c0ff638756658eab20e0bacac"} err="failed to get container status \"d97aa85fa9dba9a5f261efedffb0ffe8efb44a7c0ff638756658eab20e0bacac\": rpc error: code = NotFound desc = could 
not find container \"d97aa85fa9dba9a5f261efedffb0ffe8efb44a7c0ff638756658eab20e0bacac\": container with ID starting with d97aa85fa9dba9a5f261efedffb0ffe8efb44a7c0ff638756658eab20e0bacac not found: ID does not exist" Jan 21 11:01:55 crc kubenswrapper[4881]: I0121 11:01:55.447609 4881 scope.go:117] "RemoveContainer" containerID="ae4974769900e5c543fbbb2d217e3f9cdfc7b9998621c36ae6d12bcf65b9b593" Jan 21 11:01:55 crc kubenswrapper[4881]: E0121 11:01:55.448064 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ae4974769900e5c543fbbb2d217e3f9cdfc7b9998621c36ae6d12bcf65b9b593\": container with ID starting with ae4974769900e5c543fbbb2d217e3f9cdfc7b9998621c36ae6d12bcf65b9b593 not found: ID does not exist" containerID="ae4974769900e5c543fbbb2d217e3f9cdfc7b9998621c36ae6d12bcf65b9b593" Jan 21 11:01:55 crc kubenswrapper[4881]: I0121 11:01:55.448094 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ae4974769900e5c543fbbb2d217e3f9cdfc7b9998621c36ae6d12bcf65b9b593"} err="failed to get container status \"ae4974769900e5c543fbbb2d217e3f9cdfc7b9998621c36ae6d12bcf65b9b593\": rpc error: code = NotFound desc = could not find container \"ae4974769900e5c543fbbb2d217e3f9cdfc7b9998621c36ae6d12bcf65b9b593\": container with ID starting with ae4974769900e5c543fbbb2d217e3f9cdfc7b9998621c36ae6d12bcf65b9b593 not found: ID does not exist" Jan 21 11:01:55 crc kubenswrapper[4881]: I0121 11:01:55.895517 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-54bb857fc-6xg7b"] Jan 21 11:01:55 crc kubenswrapper[4881]: E0121 11:01:55.895868 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b83e71f8-970c-4afc-ac31-264c7ca6625a" containerName="extract-content" Jan 21 11:01:55 crc kubenswrapper[4881]: I0121 11:01:55.895889 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="b83e71f8-970c-4afc-ac31-264c7ca6625a" containerName="extract-content" Jan 21 11:01:55 crc kubenswrapper[4881]: E0121 11:01:55.895904 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b12596d-1f5f-4d81-b664-d0ddee72552c" containerName="registry-server" Jan 21 11:01:55 crc kubenswrapper[4881]: I0121 11:01:55.895913 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b12596d-1f5f-4d81-b664-d0ddee72552c" containerName="registry-server" Jan 21 11:01:55 crc kubenswrapper[4881]: E0121 11:01:55.895927 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b12596d-1f5f-4d81-b664-d0ddee72552c" containerName="extract-utilities" Jan 21 11:01:55 crc kubenswrapper[4881]: I0121 11:01:55.895936 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b12596d-1f5f-4d81-b664-d0ddee72552c" containerName="extract-utilities" Jan 21 11:01:55 crc kubenswrapper[4881]: E0121 11:01:55.895949 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d66b837-f7b1-4795-895f-08cdabe48b37" containerName="extract-content" Jan 21 11:01:55 crc kubenswrapper[4881]: I0121 11:01:55.895957 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d66b837-f7b1-4795-895f-08cdabe48b37" containerName="extract-content" Jan 21 11:01:55 crc kubenswrapper[4881]: E0121 11:01:55.895970 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c460bf5-05a1-4977-b889-1a5c3263df33" containerName="extract-utilities" Jan 21 11:01:55 crc kubenswrapper[4881]: I0121 11:01:55.895978 4881 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="2c460bf5-05a1-4977-b889-1a5c3263df33" containerName="extract-utilities" Jan 21 11:01:55 crc kubenswrapper[4881]: E0121 11:01:55.895991 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b12596d-1f5f-4d81-b664-d0ddee72552c" containerName="extract-content" Jan 21 11:01:55 crc kubenswrapper[4881]: I0121 11:01:55.896001 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b12596d-1f5f-4d81-b664-d0ddee72552c" containerName="extract-content" Jan 21 11:01:55 crc kubenswrapper[4881]: E0121 11:01:55.896021 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="706c6a3b-823b-4ea3-b7a8-e20d571d3ace" containerName="route-controller-manager" Jan 21 11:01:55 crc kubenswrapper[4881]: I0121 11:01:55.896030 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="706c6a3b-823b-4ea3-b7a8-e20d571d3ace" containerName="route-controller-manager" Jan 21 11:01:55 crc kubenswrapper[4881]: E0121 11:01:55.896046 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c460bf5-05a1-4977-b889-1a5c3263df33" containerName="registry-server" Jan 21 11:01:55 crc kubenswrapper[4881]: I0121 11:01:55.896055 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c460bf5-05a1-4977-b889-1a5c3263df33" containerName="registry-server" Jan 21 11:01:55 crc kubenswrapper[4881]: E0121 11:01:55.896070 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b83e71f8-970c-4afc-ac31-264c7ca6625a" containerName="registry-server" Jan 21 11:01:55 crc kubenswrapper[4881]: I0121 11:01:55.896078 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="b83e71f8-970c-4afc-ac31-264c7ca6625a" containerName="registry-server" Jan 21 11:01:55 crc kubenswrapper[4881]: E0121 11:01:55.896090 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d66b837-f7b1-4795-895f-08cdabe48b37" containerName="extract-utilities" Jan 21 11:01:55 crc kubenswrapper[4881]: I0121 11:01:55.896100 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d66b837-f7b1-4795-895f-08cdabe48b37" containerName="extract-utilities" Jan 21 11:01:55 crc kubenswrapper[4881]: E0121 11:01:55.896111 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b83e71f8-970c-4afc-ac31-264c7ca6625a" containerName="extract-utilities" Jan 21 11:01:55 crc kubenswrapper[4881]: I0121 11:01:55.896119 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="b83e71f8-970c-4afc-ac31-264c7ca6625a" containerName="extract-utilities" Jan 21 11:01:55 crc kubenswrapper[4881]: E0121 11:01:55.896131 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c460bf5-05a1-4977-b889-1a5c3263df33" containerName="extract-content" Jan 21 11:01:55 crc kubenswrapper[4881]: I0121 11:01:55.896139 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c460bf5-05a1-4977-b889-1a5c3263df33" containerName="extract-content" Jan 21 11:01:55 crc kubenswrapper[4881]: E0121 11:01:55.896150 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d66b837-f7b1-4795-895f-08cdabe48b37" containerName="registry-server" Jan 21 11:01:55 crc kubenswrapper[4881]: I0121 11:01:55.896158 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d66b837-f7b1-4795-895f-08cdabe48b37" containerName="registry-server" Jan 21 11:01:55 crc kubenswrapper[4881]: I0121 11:01:55.896289 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="5b12596d-1f5f-4d81-b664-d0ddee72552c" containerName="registry-server" Jan 21 11:01:55 crc kubenswrapper[4881]: I0121 
11:01:55.896303 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d66b837-f7b1-4795-895f-08cdabe48b37" containerName="registry-server" Jan 21 11:01:55 crc kubenswrapper[4881]: I0121 11:01:55.896315 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="b83e71f8-970c-4afc-ac31-264c7ca6625a" containerName="registry-server" Jan 21 11:01:55 crc kubenswrapper[4881]: I0121 11:01:55.896329 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c460bf5-05a1-4977-b889-1a5c3263df33" containerName="registry-server" Jan 21 11:01:55 crc kubenswrapper[4881]: I0121 11:01:55.896339 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="706c6a3b-823b-4ea3-b7a8-e20d571d3ace" containerName="route-controller-manager" Jan 21 11:01:55 crc kubenswrapper[4881]: I0121 11:01:55.897006 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-54bb857fc-6xg7b" Jan 21 11:01:55 crc kubenswrapper[4881]: I0121 11:01:55.900035 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 21 11:01:55 crc kubenswrapper[4881]: I0121 11:01:55.900856 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 21 11:01:55 crc kubenswrapper[4881]: I0121 11:01:55.901172 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 21 11:01:55 crc kubenswrapper[4881]: I0121 11:01:55.902057 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 21 11:01:55 crc kubenswrapper[4881]: I0121 11:01:55.902252 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 21 11:01:55 crc kubenswrapper[4881]: I0121 11:01:55.905852 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 21 11:01:55 crc kubenswrapper[4881]: I0121 11:01:55.919421 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-54bb857fc-6xg7b"] Jan 21 11:01:56 crc kubenswrapper[4881]: I0121 11:01:56.021296 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b1ebf4ad-7b0d-4711-93bd-206ec36e7202-client-ca\") pod \"route-controller-manager-54bb857fc-6xg7b\" (UID: \"b1ebf4ad-7b0d-4711-93bd-206ec36e7202\") " pod="openshift-route-controller-manager/route-controller-manager-54bb857fc-6xg7b" Jan 21 11:01:56 crc kubenswrapper[4881]: I0121 11:01:56.021662 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b1ebf4ad-7b0d-4711-93bd-206ec36e7202-config\") pod \"route-controller-manager-54bb857fc-6xg7b\" (UID: \"b1ebf4ad-7b0d-4711-93bd-206ec36e7202\") " pod="openshift-route-controller-manager/route-controller-manager-54bb857fc-6xg7b" Jan 21 11:01:56 crc kubenswrapper[4881]: I0121 11:01:56.021860 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b1ebf4ad-7b0d-4711-93bd-206ec36e7202-serving-cert\") pod \"route-controller-manager-54bb857fc-6xg7b\" (UID: 
\"b1ebf4ad-7b0d-4711-93bd-206ec36e7202\") " pod="openshift-route-controller-manager/route-controller-manager-54bb857fc-6xg7b" Jan 21 11:01:56 crc kubenswrapper[4881]: I0121 11:01:56.021994 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6wx6k\" (UniqueName: \"kubernetes.io/projected/b1ebf4ad-7b0d-4711-93bd-206ec36e7202-kube-api-access-6wx6k\") pod \"route-controller-manager-54bb857fc-6xg7b\" (UID: \"b1ebf4ad-7b0d-4711-93bd-206ec36e7202\") " pod="openshift-route-controller-manager/route-controller-manager-54bb857fc-6xg7b" Jan 21 11:01:56 crc kubenswrapper[4881]: I0121 11:01:56.123600 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b1ebf4ad-7b0d-4711-93bd-206ec36e7202-config\") pod \"route-controller-manager-54bb857fc-6xg7b\" (UID: \"b1ebf4ad-7b0d-4711-93bd-206ec36e7202\") " pod="openshift-route-controller-manager/route-controller-manager-54bb857fc-6xg7b" Jan 21 11:01:56 crc kubenswrapper[4881]: I0121 11:01:56.124295 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b1ebf4ad-7b0d-4711-93bd-206ec36e7202-serving-cert\") pod \"route-controller-manager-54bb857fc-6xg7b\" (UID: \"b1ebf4ad-7b0d-4711-93bd-206ec36e7202\") " pod="openshift-route-controller-manager/route-controller-manager-54bb857fc-6xg7b" Jan 21 11:01:56 crc kubenswrapper[4881]: I0121 11:01:56.124420 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6wx6k\" (UniqueName: \"kubernetes.io/projected/b1ebf4ad-7b0d-4711-93bd-206ec36e7202-kube-api-access-6wx6k\") pod \"route-controller-manager-54bb857fc-6xg7b\" (UID: \"b1ebf4ad-7b0d-4711-93bd-206ec36e7202\") " pod="openshift-route-controller-manager/route-controller-manager-54bb857fc-6xg7b" Jan 21 11:01:56 crc kubenswrapper[4881]: I0121 11:01:56.124752 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b1ebf4ad-7b0d-4711-93bd-206ec36e7202-client-ca\") pod \"route-controller-manager-54bb857fc-6xg7b\" (UID: \"b1ebf4ad-7b0d-4711-93bd-206ec36e7202\") " pod="openshift-route-controller-manager/route-controller-manager-54bb857fc-6xg7b" Jan 21 11:01:56 crc kubenswrapper[4881]: I0121 11:01:56.125149 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b1ebf4ad-7b0d-4711-93bd-206ec36e7202-config\") pod \"route-controller-manager-54bb857fc-6xg7b\" (UID: \"b1ebf4ad-7b0d-4711-93bd-206ec36e7202\") " pod="openshift-route-controller-manager/route-controller-manager-54bb857fc-6xg7b" Jan 21 11:01:56 crc kubenswrapper[4881]: I0121 11:01:56.125805 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b1ebf4ad-7b0d-4711-93bd-206ec36e7202-client-ca\") pod \"route-controller-manager-54bb857fc-6xg7b\" (UID: \"b1ebf4ad-7b0d-4711-93bd-206ec36e7202\") " pod="openshift-route-controller-manager/route-controller-manager-54bb857fc-6xg7b" Jan 21 11:01:56 crc kubenswrapper[4881]: I0121 11:01:56.132027 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b1ebf4ad-7b0d-4711-93bd-206ec36e7202-serving-cert\") pod \"route-controller-manager-54bb857fc-6xg7b\" (UID: \"b1ebf4ad-7b0d-4711-93bd-206ec36e7202\") " 
pod="openshift-route-controller-manager/route-controller-manager-54bb857fc-6xg7b" Jan 21 11:01:56 crc kubenswrapper[4881]: I0121 11:01:56.145227 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6wx6k\" (UniqueName: \"kubernetes.io/projected/b1ebf4ad-7b0d-4711-93bd-206ec36e7202-kube-api-access-6wx6k\") pod \"route-controller-manager-54bb857fc-6xg7b\" (UID: \"b1ebf4ad-7b0d-4711-93bd-206ec36e7202\") " pod="openshift-route-controller-manager/route-controller-manager-54bb857fc-6xg7b" Jan 21 11:01:56 crc kubenswrapper[4881]: I0121 11:01:56.225711 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-54bb857fc-6xg7b" Jan 21 11:01:56 crc kubenswrapper[4881]: I0121 11:01:56.486407 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-54bb857fc-6xg7b"] Jan 21 11:01:56 crc kubenswrapper[4881]: I0121 11:01:56.614270 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-cfcdf47c7-fppdw"] Jan 21 11:01:56 crc kubenswrapper[4881]: I0121 11:01:56.698760 4881 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 21 11:01:56 crc kubenswrapper[4881]: I0121 11:01:56.699040 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://ffc7cfcc896e97bc89bcafadd903d32675c37638ae26cc272102f0c6d6bc59d1" gracePeriod=5 Jan 21 11:01:57 crc kubenswrapper[4881]: I0121 11:01:57.327555 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d66b837-f7b1-4795-895f-08cdabe48b37" path="/var/lib/kubelet/pods/1d66b837-f7b1-4795-895f-08cdabe48b37/volumes" Jan 21 11:01:57 crc kubenswrapper[4881]: I0121 11:01:57.328528 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b83e71f8-970c-4afc-ac31-264c7ca6625a" path="/var/lib/kubelet/pods/b83e71f8-970c-4afc-ac31-264c7ca6625a/volumes" Jan 21 11:01:57 crc kubenswrapper[4881]: I0121 11:01:57.329217 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-54bb857fc-6xg7b" event={"ID":"b1ebf4ad-7b0d-4711-93bd-206ec36e7202","Type":"ContainerStarted","Data":"03285c7f75ca0c5ea5fc4bbbace73cfbfd25315c2b430af309cd5af6d0d8503a"} Jan 21 11:01:57 crc kubenswrapper[4881]: I0121 11:01:57.329271 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-54bb857fc-6xg7b" event={"ID":"b1ebf4ad-7b0d-4711-93bd-206ec36e7202","Type":"ContainerStarted","Data":"cf1ccaca8e9193a4546c7cd1215ccba45fb7b47029b1d20906ee6e97c1d22afe"} Jan 21 11:01:57 crc kubenswrapper[4881]: I0121 11:01:57.344568 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-54bb857fc-6xg7b" podStartSLOduration=21.344539966 podStartE2EDuration="21.344539966s" podCreationTimestamp="2026-01-21 11:01:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 11:01:57.336053146 +0000 UTC m=+304.596009615" watchObservedRunningTime="2026-01-21 11:01:57.344539966 +0000 UTC m=+304.604496435" Jan 21 11:01:58 crc kubenswrapper[4881]: 
E0121 11:01:58.276882 4881 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod706c6a3b_823b_4ea3_b7a8_e20d571d3ace.slice/crio-conmon-9c8c8d93509d2a29c183d63351f0748ec6e60414dbb285df980924884b598111.scope\": RecentStats: unable to find data in memory cache]" Jan 21 11:01:58 crc kubenswrapper[4881]: I0121 11:01:58.326867 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-cfcdf47c7-fppdw" podUID="89559857-e73d-4f35-838d-c0b0946939d4" containerName="controller-manager" containerID="cri-o://e4842f0920f40f8afe25168938541ec7282f9d06248cee97e875afa522eda397" gracePeriod=30 Jan 21 11:01:58 crc kubenswrapper[4881]: I0121 11:01:58.327187 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-54bb857fc-6xg7b" Jan 21 11:01:58 crc kubenswrapper[4881]: I0121 11:01:58.347281 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-54bb857fc-6xg7b" Jan 21 11:01:58 crc kubenswrapper[4881]: I0121 11:01:58.800587 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-cfcdf47c7-fppdw" Jan 21 11:01:58 crc kubenswrapper[4881]: I0121 11:01:58.837703 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-86b9bf4878-kbmxb"] Jan 21 11:01:58 crc kubenswrapper[4881]: E0121 11:01:58.838011 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89559857-e73d-4f35-838d-c0b0946939d4" containerName="controller-manager" Jan 21 11:01:58 crc kubenswrapper[4881]: I0121 11:01:58.838029 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="89559857-e73d-4f35-838d-c0b0946939d4" containerName="controller-manager" Jan 21 11:01:58 crc kubenswrapper[4881]: E0121 11:01:58.838046 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 21 11:01:58 crc kubenswrapper[4881]: I0121 11:01:58.838055 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 21 11:01:58 crc kubenswrapper[4881]: I0121 11:01:58.838157 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="89559857-e73d-4f35-838d-c0b0946939d4" containerName="controller-manager" Jan 21 11:01:58 crc kubenswrapper[4881]: I0121 11:01:58.838172 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 21 11:01:58 crc kubenswrapper[4881]: I0121 11:01:58.838553 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-86b9bf4878-kbmxb" Jan 21 11:01:58 crc kubenswrapper[4881]: I0121 11:01:58.851283 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-86b9bf4878-kbmxb"] Jan 21 11:01:58 crc kubenswrapper[4881]: I0121 11:01:58.864730 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/89559857-e73d-4f35-838d-c0b0946939d4-serving-cert\") pod \"89559857-e73d-4f35-838d-c0b0946939d4\" (UID: \"89559857-e73d-4f35-838d-c0b0946939d4\") " Jan 21 11:01:58 crc kubenswrapper[4881]: I0121 11:01:58.864772 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v4kr9\" (UniqueName: \"kubernetes.io/projected/89559857-e73d-4f35-838d-c0b0946939d4-kube-api-access-v4kr9\") pod \"89559857-e73d-4f35-838d-c0b0946939d4\" (UID: \"89559857-e73d-4f35-838d-c0b0946939d4\") " Jan 21 11:01:58 crc kubenswrapper[4881]: I0121 11:01:58.864848 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/89559857-e73d-4f35-838d-c0b0946939d4-proxy-ca-bundles\") pod \"89559857-e73d-4f35-838d-c0b0946939d4\" (UID: \"89559857-e73d-4f35-838d-c0b0946939d4\") " Jan 21 11:01:58 crc kubenswrapper[4881]: I0121 11:01:58.864897 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/89559857-e73d-4f35-838d-c0b0946939d4-client-ca\") pod \"89559857-e73d-4f35-838d-c0b0946939d4\" (UID: \"89559857-e73d-4f35-838d-c0b0946939d4\") " Jan 21 11:01:58 crc kubenswrapper[4881]: I0121 11:01:58.864925 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/89559857-e73d-4f35-838d-c0b0946939d4-config\") pod \"89559857-e73d-4f35-838d-c0b0946939d4\" (UID: \"89559857-e73d-4f35-838d-c0b0946939d4\") " Jan 21 11:01:58 crc kubenswrapper[4881]: I0121 11:01:58.866137 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/89559857-e73d-4f35-838d-c0b0946939d4-config" (OuterVolumeSpecName: "config") pod "89559857-e73d-4f35-838d-c0b0946939d4" (UID: "89559857-e73d-4f35-838d-c0b0946939d4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:01:58 crc kubenswrapper[4881]: I0121 11:01:58.867558 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/89559857-e73d-4f35-838d-c0b0946939d4-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "89559857-e73d-4f35-838d-c0b0946939d4" (UID: "89559857-e73d-4f35-838d-c0b0946939d4"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:01:58 crc kubenswrapper[4881]: I0121 11:01:58.870892 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/89559857-e73d-4f35-838d-c0b0946939d4-client-ca" (OuterVolumeSpecName: "client-ca") pod "89559857-e73d-4f35-838d-c0b0946939d4" (UID: "89559857-e73d-4f35-838d-c0b0946939d4"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:01:58 crc kubenswrapper[4881]: I0121 11:01:58.873967 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/89559857-e73d-4f35-838d-c0b0946939d4-kube-api-access-v4kr9" (OuterVolumeSpecName: "kube-api-access-v4kr9") pod "89559857-e73d-4f35-838d-c0b0946939d4" (UID: "89559857-e73d-4f35-838d-c0b0946939d4"). InnerVolumeSpecName "kube-api-access-v4kr9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:01:58 crc kubenswrapper[4881]: I0121 11:01:58.874197 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89559857-e73d-4f35-838d-c0b0946939d4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "89559857-e73d-4f35-838d-c0b0946939d4" (UID: "89559857-e73d-4f35-838d-c0b0946939d4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:01:58 crc kubenswrapper[4881]: I0121 11:01:58.967019 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wpspj\" (UniqueName: \"kubernetes.io/projected/6711fdf8-3aa8-4c33-8ab3-9ae3bf27362f-kube-api-access-wpspj\") pod \"controller-manager-86b9bf4878-kbmxb\" (UID: \"6711fdf8-3aa8-4c33-8ab3-9ae3bf27362f\") " pod="openshift-controller-manager/controller-manager-86b9bf4878-kbmxb" Jan 21 11:01:58 crc kubenswrapper[4881]: I0121 11:01:58.967084 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6711fdf8-3aa8-4c33-8ab3-9ae3bf27362f-config\") pod \"controller-manager-86b9bf4878-kbmxb\" (UID: \"6711fdf8-3aa8-4c33-8ab3-9ae3bf27362f\") " pod="openshift-controller-manager/controller-manager-86b9bf4878-kbmxb" Jan 21 11:01:58 crc kubenswrapper[4881]: I0121 11:01:58.967144 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6711fdf8-3aa8-4c33-8ab3-9ae3bf27362f-proxy-ca-bundles\") pod \"controller-manager-86b9bf4878-kbmxb\" (UID: \"6711fdf8-3aa8-4c33-8ab3-9ae3bf27362f\") " pod="openshift-controller-manager/controller-manager-86b9bf4878-kbmxb" Jan 21 11:01:58 crc kubenswrapper[4881]: I0121 11:01:58.967184 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6711fdf8-3aa8-4c33-8ab3-9ae3bf27362f-client-ca\") pod \"controller-manager-86b9bf4878-kbmxb\" (UID: \"6711fdf8-3aa8-4c33-8ab3-9ae3bf27362f\") " pod="openshift-controller-manager/controller-manager-86b9bf4878-kbmxb" Jan 21 11:01:58 crc kubenswrapper[4881]: I0121 11:01:58.967258 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6711fdf8-3aa8-4c33-8ab3-9ae3bf27362f-serving-cert\") pod \"controller-manager-86b9bf4878-kbmxb\" (UID: \"6711fdf8-3aa8-4c33-8ab3-9ae3bf27362f\") " pod="openshift-controller-manager/controller-manager-86b9bf4878-kbmxb" Jan 21 11:01:58 crc kubenswrapper[4881]: I0121 11:01:58.967398 4881 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/89559857-e73d-4f35-838d-c0b0946939d4-client-ca\") on node \"crc\" DevicePath \"\"" Jan 21 11:01:58 crc kubenswrapper[4881]: I0121 11:01:58.967418 4881 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/89559857-e73d-4f35-838d-c0b0946939d4-config\") on node \"crc\" DevicePath \"\"" Jan 21 11:01:58 crc kubenswrapper[4881]: I0121 11:01:58.967449 4881 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/89559857-e73d-4f35-838d-c0b0946939d4-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 11:01:58 crc kubenswrapper[4881]: I0121 11:01:58.967462 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v4kr9\" (UniqueName: \"kubernetes.io/projected/89559857-e73d-4f35-838d-c0b0946939d4-kube-api-access-v4kr9\") on node \"crc\" DevicePath \"\"" Jan 21 11:01:58 crc kubenswrapper[4881]: I0121 11:01:58.967471 4881 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/89559857-e73d-4f35-838d-c0b0946939d4-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 21 11:01:59 crc kubenswrapper[4881]: I0121 11:01:59.072908 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6711fdf8-3aa8-4c33-8ab3-9ae3bf27362f-serving-cert\") pod \"controller-manager-86b9bf4878-kbmxb\" (UID: \"6711fdf8-3aa8-4c33-8ab3-9ae3bf27362f\") " pod="openshift-controller-manager/controller-manager-86b9bf4878-kbmxb" Jan 21 11:01:59 crc kubenswrapper[4881]: I0121 11:01:59.072993 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wpspj\" (UniqueName: \"kubernetes.io/projected/6711fdf8-3aa8-4c33-8ab3-9ae3bf27362f-kube-api-access-wpspj\") pod \"controller-manager-86b9bf4878-kbmxb\" (UID: \"6711fdf8-3aa8-4c33-8ab3-9ae3bf27362f\") " pod="openshift-controller-manager/controller-manager-86b9bf4878-kbmxb" Jan 21 11:01:59 crc kubenswrapper[4881]: I0121 11:01:59.073018 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6711fdf8-3aa8-4c33-8ab3-9ae3bf27362f-config\") pod \"controller-manager-86b9bf4878-kbmxb\" (UID: \"6711fdf8-3aa8-4c33-8ab3-9ae3bf27362f\") " pod="openshift-controller-manager/controller-manager-86b9bf4878-kbmxb" Jan 21 11:01:59 crc kubenswrapper[4881]: I0121 11:01:59.073047 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6711fdf8-3aa8-4c33-8ab3-9ae3bf27362f-proxy-ca-bundles\") pod \"controller-manager-86b9bf4878-kbmxb\" (UID: \"6711fdf8-3aa8-4c33-8ab3-9ae3bf27362f\") " pod="openshift-controller-manager/controller-manager-86b9bf4878-kbmxb" Jan 21 11:01:59 crc kubenswrapper[4881]: I0121 11:01:59.073077 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6711fdf8-3aa8-4c33-8ab3-9ae3bf27362f-client-ca\") pod \"controller-manager-86b9bf4878-kbmxb\" (UID: \"6711fdf8-3aa8-4c33-8ab3-9ae3bf27362f\") " pod="openshift-controller-manager/controller-manager-86b9bf4878-kbmxb" Jan 21 11:01:59 crc kubenswrapper[4881]: I0121 11:01:59.075413 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6711fdf8-3aa8-4c33-8ab3-9ae3bf27362f-client-ca\") pod \"controller-manager-86b9bf4878-kbmxb\" (UID: \"6711fdf8-3aa8-4c33-8ab3-9ae3bf27362f\") " pod="openshift-controller-manager/controller-manager-86b9bf4878-kbmxb" Jan 21 11:01:59 crc kubenswrapper[4881]: I0121 11:01:59.075706 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"config\" (UniqueName: \"kubernetes.io/configmap/6711fdf8-3aa8-4c33-8ab3-9ae3bf27362f-config\") pod \"controller-manager-86b9bf4878-kbmxb\" (UID: \"6711fdf8-3aa8-4c33-8ab3-9ae3bf27362f\") " pod="openshift-controller-manager/controller-manager-86b9bf4878-kbmxb" Jan 21 11:01:59 crc kubenswrapper[4881]: I0121 11:01:59.075970 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6711fdf8-3aa8-4c33-8ab3-9ae3bf27362f-proxy-ca-bundles\") pod \"controller-manager-86b9bf4878-kbmxb\" (UID: \"6711fdf8-3aa8-4c33-8ab3-9ae3bf27362f\") " pod="openshift-controller-manager/controller-manager-86b9bf4878-kbmxb" Jan 21 11:01:59 crc kubenswrapper[4881]: I0121 11:01:59.081359 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6711fdf8-3aa8-4c33-8ab3-9ae3bf27362f-serving-cert\") pod \"controller-manager-86b9bf4878-kbmxb\" (UID: \"6711fdf8-3aa8-4c33-8ab3-9ae3bf27362f\") " pod="openshift-controller-manager/controller-manager-86b9bf4878-kbmxb" Jan 21 11:01:59 crc kubenswrapper[4881]: I0121 11:01:59.090888 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wpspj\" (UniqueName: \"kubernetes.io/projected/6711fdf8-3aa8-4c33-8ab3-9ae3bf27362f-kube-api-access-wpspj\") pod \"controller-manager-86b9bf4878-kbmxb\" (UID: \"6711fdf8-3aa8-4c33-8ab3-9ae3bf27362f\") " pod="openshift-controller-manager/controller-manager-86b9bf4878-kbmxb" Jan 21 11:01:59 crc kubenswrapper[4881]: I0121 11:01:59.183738 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-86b9bf4878-kbmxb" Jan 21 11:01:59 crc kubenswrapper[4881]: I0121 11:01:59.345392 4881 generic.go:334] "Generic (PLEG): container finished" podID="89559857-e73d-4f35-838d-c0b0946939d4" containerID="e4842f0920f40f8afe25168938541ec7282f9d06248cee97e875afa522eda397" exitCode=0 Jan 21 11:01:59 crc kubenswrapper[4881]: I0121 11:01:59.346258 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-cfcdf47c7-fppdw" Jan 21 11:01:59 crc kubenswrapper[4881]: I0121 11:01:59.350517 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-cfcdf47c7-fppdw" event={"ID":"89559857-e73d-4f35-838d-c0b0946939d4","Type":"ContainerDied","Data":"e4842f0920f40f8afe25168938541ec7282f9d06248cee97e875afa522eda397"} Jan 21 11:01:59 crc kubenswrapper[4881]: I0121 11:01:59.350571 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-cfcdf47c7-fppdw" event={"ID":"89559857-e73d-4f35-838d-c0b0946939d4","Type":"ContainerDied","Data":"ebe56607ace74705e145d654d7bc2814291ec5e33259c85e6447339814042d78"} Jan 21 11:01:59 crc kubenswrapper[4881]: I0121 11:01:59.350590 4881 scope.go:117] "RemoveContainer" containerID="e4842f0920f40f8afe25168938541ec7282f9d06248cee97e875afa522eda397" Jan 21 11:01:59 crc kubenswrapper[4881]: I0121 11:01:59.397954 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-cfcdf47c7-fppdw"] Jan 21 11:01:59 crc kubenswrapper[4881]: I0121 11:01:59.401559 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-cfcdf47c7-fppdw"] Jan 21 11:01:59 crc kubenswrapper[4881]: I0121 11:01:59.407336 4881 scope.go:117] "RemoveContainer" containerID="e4842f0920f40f8afe25168938541ec7282f9d06248cee97e875afa522eda397" Jan 21 11:01:59 crc kubenswrapper[4881]: E0121 11:01:59.409127 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e4842f0920f40f8afe25168938541ec7282f9d06248cee97e875afa522eda397\": container with ID starting with e4842f0920f40f8afe25168938541ec7282f9d06248cee97e875afa522eda397 not found: ID does not exist" containerID="e4842f0920f40f8afe25168938541ec7282f9d06248cee97e875afa522eda397" Jan 21 11:01:59 crc kubenswrapper[4881]: I0121 11:01:59.409172 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e4842f0920f40f8afe25168938541ec7282f9d06248cee97e875afa522eda397"} err="failed to get container status \"e4842f0920f40f8afe25168938541ec7282f9d06248cee97e875afa522eda397\": rpc error: code = NotFound desc = could not find container \"e4842f0920f40f8afe25168938541ec7282f9d06248cee97e875afa522eda397\": container with ID starting with e4842f0920f40f8afe25168938541ec7282f9d06248cee97e875afa522eda397 not found: ID does not exist" Jan 21 11:01:59 crc kubenswrapper[4881]: I0121 11:01:59.715742 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-86b9bf4878-kbmxb"] Jan 21 11:02:00 crc kubenswrapper[4881]: I0121 11:02:00.355420 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-86b9bf4878-kbmxb" event={"ID":"6711fdf8-3aa8-4c33-8ab3-9ae3bf27362f","Type":"ContainerStarted","Data":"66bf8da974464776256d6a59805c0099cfc6baf199f22bc813539a2a6a44acee"} Jan 21 11:02:00 crc kubenswrapper[4881]: I0121 11:02:00.355996 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-86b9bf4878-kbmxb" event={"ID":"6711fdf8-3aa8-4c33-8ab3-9ae3bf27362f","Type":"ContainerStarted","Data":"332bcaa29cc1493ca4d3b0a99be15366debbc1695857b25b01aa44f8caa14d80"} Jan 21 11:02:00 crc kubenswrapper[4881]: I0121 11:02:00.358527 4881 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-controller-manager/controller-manager-86b9bf4878-kbmxb" Jan 21 11:02:00 crc kubenswrapper[4881]: I0121 11:02:00.365939 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-86b9bf4878-kbmxb" Jan 21 11:02:00 crc kubenswrapper[4881]: I0121 11:02:00.383527 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-86b9bf4878-kbmxb" podStartSLOduration=4.3834904 podStartE2EDuration="4.3834904s" podCreationTimestamp="2026-01-21 11:01:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 11:02:00.37615035 +0000 UTC m=+307.636106829" watchObservedRunningTime="2026-01-21 11:02:00.3834904 +0000 UTC m=+307.643446879" Jan 21 11:02:01 crc kubenswrapper[4881]: I0121 11:02:01.323115 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="89559857-e73d-4f35-838d-c0b0946939d4" path="/var/lib/kubelet/pods/89559857-e73d-4f35-838d-c0b0946939d4/volumes" Jan 21 11:02:01 crc kubenswrapper[4881]: I0121 11:02:01.837173 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 21 11:02:01 crc kubenswrapper[4881]: I0121 11:02:01.837323 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 11:02:01 crc kubenswrapper[4881]: I0121 11:02:01.918994 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 11:02:01 crc kubenswrapper[4881]: I0121 11:02:01.918885 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 21 11:02:01 crc kubenswrapper[4881]: I0121 11:02:01.919663 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 21 11:02:01 crc kubenswrapper[4881]: I0121 11:02:01.921049 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 21 11:02:01 crc kubenswrapper[4881]: I0121 11:02:01.921137 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 21 11:02:01 crc kubenswrapper[4881]: I0121 11:02:01.921206 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 21 11:02:01 crc kubenswrapper[4881]: I0121 11:02:01.921564 4881 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Jan 21 11:02:01 crc kubenswrapper[4881]: I0121 11:02:01.921614 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 11:02:01 crc kubenswrapper[4881]: I0121 11:02:01.921658 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 11:02:01 crc kubenswrapper[4881]: I0121 11:02:01.921691 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 11:02:01 crc kubenswrapper[4881]: I0121 11:02:01.930358 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 11:02:02 crc kubenswrapper[4881]: I0121 11:02:02.022804 4881 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Jan 21 11:02:02 crc kubenswrapper[4881]: I0121 11:02:02.022842 4881 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 21 11:02:02 crc kubenswrapper[4881]: I0121 11:02:02.022854 4881 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Jan 21 11:02:02 crc kubenswrapper[4881]: I0121 11:02:02.022863 4881 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 21 11:02:02 crc kubenswrapper[4881]: I0121 11:02:02.384224 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 21 11:02:02 crc kubenswrapper[4881]: I0121 11:02:02.384715 4881 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="ffc7cfcc896e97bc89bcafadd903d32675c37638ae26cc272102f0c6d6bc59d1" exitCode=137 Jan 21 11:02:02 crc kubenswrapper[4881]: I0121 11:02:02.384917 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 11:02:02 crc kubenswrapper[4881]: I0121 11:02:02.384976 4881 scope.go:117] "RemoveContainer" containerID="ffc7cfcc896e97bc89bcafadd903d32675c37638ae26cc272102f0c6d6bc59d1" Jan 21 11:02:02 crc kubenswrapper[4881]: I0121 11:02:02.417310 4881 scope.go:117] "RemoveContainer" containerID="ffc7cfcc896e97bc89bcafadd903d32675c37638ae26cc272102f0c6d6bc59d1" Jan 21 11:02:02 crc kubenswrapper[4881]: E0121 11:02:02.418077 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ffc7cfcc896e97bc89bcafadd903d32675c37638ae26cc272102f0c6d6bc59d1\": container with ID starting with ffc7cfcc896e97bc89bcafadd903d32675c37638ae26cc272102f0c6d6bc59d1 not found: ID does not exist" containerID="ffc7cfcc896e97bc89bcafadd903d32675c37638ae26cc272102f0c6d6bc59d1" Jan 21 11:02:02 crc kubenswrapper[4881]: I0121 11:02:02.418138 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ffc7cfcc896e97bc89bcafadd903d32675c37638ae26cc272102f0c6d6bc59d1"} err="failed to get container status \"ffc7cfcc896e97bc89bcafadd903d32675c37638ae26cc272102f0c6d6bc59d1\": rpc error: code = NotFound desc = could not find container \"ffc7cfcc896e97bc89bcafadd903d32675c37638ae26cc272102f0c6d6bc59d1\": container with ID starting with ffc7cfcc896e97bc89bcafadd903d32675c37638ae26cc272102f0c6d6bc59d1 not found: ID does not exist" Jan 21 11:02:02 crc kubenswrapper[4881]: I0121 11:02:02.959968 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-q6dn5"] Jan 21 11:02:02 crc kubenswrapper[4881]: I0121 11:02:02.960325 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-q6dn5" podUID="8e002e57-13ab-477a-9e16-980e13b5e47f" containerName="registry-server" containerID="cri-o://e42581773a8d4ea1772dd60eaf9071bf2de0cdd39b8e134e5ac5a682d95b642f" gracePeriod=30 Jan 21 11:02:02 crc kubenswrapper[4881]: I0121 11:02:02.974186 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-v5n2s"] Jan 21 11:02:02 crc kubenswrapper[4881]: I0121 11:02:02.974604 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-v5n2s" podUID="e034cbba-e6a2-4b62-94e1-7bd2d3c5ae8a" containerName="registry-server" containerID="cri-o://091b8c7421a6daba2d38abc6600200f92a99a9d9fffb2a18673337cc1cab5a28" gracePeriod=30 Jan 21 11:02:02 crc kubenswrapper[4881]: I0121 11:02:02.998121 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-xmq82"] Jan 21 11:02:02 crc kubenswrapper[4881]: I0121 11:02:02.998418 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-xmq82" podUID="e94f1e92-21b2-44c9-b499-b879850c288d" containerName="marketplace-operator" containerID="cri-o://814fc7d7b657d30002e0169875973f3d65029d02d56ac8702f4d08fa12940079" gracePeriod=30 Jan 21 11:02:03 crc kubenswrapper[4881]: I0121 11:02:03.005621 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-89m75"] Jan 21 11:02:03 crc kubenswrapper[4881]: I0121 11:02:03.006027 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-89m75" 
podUID="075db786-6ad0-4982-b70e-bd05d4f240ec" containerName="registry-server" containerID="cri-o://d4c87b729f18eaf9f12531e5147374286d6a7a44e910d96df5b3275a242bc490" gracePeriod=30 Jan 21 11:02:03 crc kubenswrapper[4881]: I0121 11:02:03.016497 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-kfmhs"] Jan 21 11:02:03 crc kubenswrapper[4881]: I0121 11:02:03.016913 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-kfmhs" podUID="d318e830-067f-4722-9d74-a45fcefc939d" containerName="registry-server" containerID="cri-o://ea62c10cfd248c0ef9c6d0347f5a3b0a2b7e8d1e35c546c01d7fdadf484cb508" gracePeriod=30 Jan 21 11:02:03 crc kubenswrapper[4881]: I0121 11:02:03.059100 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-vrcvz"] Jan 21 11:02:03 crc kubenswrapper[4881]: I0121 11:02:03.060661 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-vrcvz" Jan 21 11:02:03 crc kubenswrapper[4881]: I0121 11:02:03.062634 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-vrcvz"] Jan 21 11:02:03 crc kubenswrapper[4881]: I0121 11:02:03.140131 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/98f0e6fe-f27f-4d75-9149-6238b2220849-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-vrcvz\" (UID: \"98f0e6fe-f27f-4d75-9149-6238b2220849\") " pod="openshift-marketplace/marketplace-operator-79b997595-vrcvz" Jan 21 11:02:03 crc kubenswrapper[4881]: I0121 11:02:03.140631 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mqb8m\" (UniqueName: \"kubernetes.io/projected/98f0e6fe-f27f-4d75-9149-6238b2220849-kube-api-access-mqb8m\") pod \"marketplace-operator-79b997595-vrcvz\" (UID: \"98f0e6fe-f27f-4d75-9149-6238b2220849\") " pod="openshift-marketplace/marketplace-operator-79b997595-vrcvz" Jan 21 11:02:03 crc kubenswrapper[4881]: I0121 11:02:03.140698 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/98f0e6fe-f27f-4d75-9149-6238b2220849-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-vrcvz\" (UID: \"98f0e6fe-f27f-4d75-9149-6238b2220849\") " pod="openshift-marketplace/marketplace-operator-79b997595-vrcvz" Jan 21 11:02:03 crc kubenswrapper[4881]: I0121 11:02:03.242439 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/98f0e6fe-f27f-4d75-9149-6238b2220849-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-vrcvz\" (UID: \"98f0e6fe-f27f-4d75-9149-6238b2220849\") " pod="openshift-marketplace/marketplace-operator-79b997595-vrcvz" Jan 21 11:02:03 crc kubenswrapper[4881]: I0121 11:02:03.242511 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mqb8m\" (UniqueName: \"kubernetes.io/projected/98f0e6fe-f27f-4d75-9149-6238b2220849-kube-api-access-mqb8m\") pod \"marketplace-operator-79b997595-vrcvz\" (UID: \"98f0e6fe-f27f-4d75-9149-6238b2220849\") " pod="openshift-marketplace/marketplace-operator-79b997595-vrcvz" Jan 21 11:02:03 crc 
kubenswrapper[4881]: I0121 11:02:03.242561 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/98f0e6fe-f27f-4d75-9149-6238b2220849-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-vrcvz\" (UID: \"98f0e6fe-f27f-4d75-9149-6238b2220849\") " pod="openshift-marketplace/marketplace-operator-79b997595-vrcvz" Jan 21 11:02:03 crc kubenswrapper[4881]: I0121 11:02:03.252373 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/98f0e6fe-f27f-4d75-9149-6238b2220849-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-vrcvz\" (UID: \"98f0e6fe-f27f-4d75-9149-6238b2220849\") " pod="openshift-marketplace/marketplace-operator-79b997595-vrcvz" Jan 21 11:02:03 crc kubenswrapper[4881]: I0121 11:02:03.271123 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/98f0e6fe-f27f-4d75-9149-6238b2220849-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-vrcvz\" (UID: \"98f0e6fe-f27f-4d75-9149-6238b2220849\") " pod="openshift-marketplace/marketplace-operator-79b997595-vrcvz" Jan 21 11:02:03 crc kubenswrapper[4881]: I0121 11:02:03.277577 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mqb8m\" (UniqueName: \"kubernetes.io/projected/98f0e6fe-f27f-4d75-9149-6238b2220849-kube-api-access-mqb8m\") pod \"marketplace-operator-79b997595-vrcvz\" (UID: \"98f0e6fe-f27f-4d75-9149-6238b2220849\") " pod="openshift-marketplace/marketplace-operator-79b997595-vrcvz" Jan 21 11:02:03 crc kubenswrapper[4881]: I0121 11:02:03.319698 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Jan 21 11:02:03 crc kubenswrapper[4881]: I0121 11:02:03.320248 4881 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="" Jan 21 11:02:03 crc kubenswrapper[4881]: I0121 11:02:03.336801 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 21 11:02:03 crc kubenswrapper[4881]: I0121 11:02:03.336875 4881 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="1c8815a8-fd68-4185-92ad-520c398cd927" Jan 21 11:02:03 crc kubenswrapper[4881]: I0121 11:02:03.345111 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 21 11:02:03 crc kubenswrapper[4881]: I0121 11:02:03.345162 4881 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="1c8815a8-fd68-4185-92ad-520c398cd927" Jan 21 11:02:03 crc kubenswrapper[4881]: I0121 11:02:03.403414 4881 generic.go:334] "Generic (PLEG): container finished" podID="8e002e57-13ab-477a-9e16-980e13b5e47f" containerID="e42581773a8d4ea1772dd60eaf9071bf2de0cdd39b8e134e5ac5a682d95b642f" exitCode=0 Jan 21 11:02:03 crc kubenswrapper[4881]: I0121 11:02:03.403461 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-q6dn5" 
event={"ID":"8e002e57-13ab-477a-9e16-980e13b5e47f","Type":"ContainerDied","Data":"e42581773a8d4ea1772dd60eaf9071bf2de0cdd39b8e134e5ac5a682d95b642f"} Jan 21 11:02:03 crc kubenswrapper[4881]: I0121 11:02:03.408752 4881 generic.go:334] "Generic (PLEG): container finished" podID="075db786-6ad0-4982-b70e-bd05d4f240ec" containerID="d4c87b729f18eaf9f12531e5147374286d6a7a44e910d96df5b3275a242bc490" exitCode=0 Jan 21 11:02:03 crc kubenswrapper[4881]: I0121 11:02:03.408846 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-89m75" event={"ID":"075db786-6ad0-4982-b70e-bd05d4f240ec","Type":"ContainerDied","Data":"d4c87b729f18eaf9f12531e5147374286d6a7a44e910d96df5b3275a242bc490"} Jan 21 11:02:03 crc kubenswrapper[4881]: I0121 11:02:03.410434 4881 generic.go:334] "Generic (PLEG): container finished" podID="e94f1e92-21b2-44c9-b499-b879850c288d" containerID="814fc7d7b657d30002e0169875973f3d65029d02d56ac8702f4d08fa12940079" exitCode=0 Jan 21 11:02:03 crc kubenswrapper[4881]: I0121 11:02:03.410492 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-xmq82" event={"ID":"e94f1e92-21b2-44c9-b499-b879850c288d","Type":"ContainerDied","Data":"814fc7d7b657d30002e0169875973f3d65029d02d56ac8702f4d08fa12940079"} Jan 21 11:02:03 crc kubenswrapper[4881]: I0121 11:02:03.423245 4881 generic.go:334] "Generic (PLEG): container finished" podID="e034cbba-e6a2-4b62-94e1-7bd2d3c5ae8a" containerID="091b8c7421a6daba2d38abc6600200f92a99a9d9fffb2a18673337cc1cab5a28" exitCode=0 Jan 21 11:02:03 crc kubenswrapper[4881]: I0121 11:02:03.423350 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v5n2s" event={"ID":"e034cbba-e6a2-4b62-94e1-7bd2d3c5ae8a","Type":"ContainerDied","Data":"091b8c7421a6daba2d38abc6600200f92a99a9d9fffb2a18673337cc1cab5a28"} Jan 21 11:02:03 crc kubenswrapper[4881]: I0121 11:02:03.423422 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v5n2s" event={"ID":"e034cbba-e6a2-4b62-94e1-7bd2d3c5ae8a","Type":"ContainerDied","Data":"79b5df43169324987a329525742a5078ed6a8e75640eab433d3baf2cf413407f"} Jan 21 11:02:03 crc kubenswrapper[4881]: I0121 11:02:03.423436 4881 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="79b5df43169324987a329525742a5078ed6a8e75640eab433d3baf2cf413407f" Jan 21 11:02:03 crc kubenswrapper[4881]: I0121 11:02:03.426746 4881 generic.go:334] "Generic (PLEG): container finished" podID="d318e830-067f-4722-9d74-a45fcefc939d" containerID="ea62c10cfd248c0ef9c6d0347f5a3b0a2b7e8d1e35c546c01d7fdadf484cb508" exitCode=0 Jan 21 11:02:03 crc kubenswrapper[4881]: I0121 11:02:03.426808 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kfmhs" event={"ID":"d318e830-067f-4722-9d74-a45fcefc939d","Type":"ContainerDied","Data":"ea62c10cfd248c0ef9c6d0347f5a3b0a2b7e8d1e35c546c01d7fdadf484cb508"} Jan 21 11:02:03 crc kubenswrapper[4881]: I0121 11:02:03.437067 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-vrcvz" Jan 21 11:02:03 crc kubenswrapper[4881]: I0121 11:02:03.441329 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-v5n2s" Jan 21 11:02:03 crc kubenswrapper[4881]: I0121 11:02:03.497690 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-q6dn5" Jan 21 11:02:03 crc kubenswrapper[4881]: I0121 11:02:03.547834 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e034cbba-e6a2-4b62-94e1-7bd2d3c5ae8a-utilities\") pod \"e034cbba-e6a2-4b62-94e1-7bd2d3c5ae8a\" (UID: \"e034cbba-e6a2-4b62-94e1-7bd2d3c5ae8a\") " Jan 21 11:02:03 crc kubenswrapper[4881]: I0121 11:02:03.547935 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e002e57-13ab-477a-9e16-980e13b5e47f-utilities\") pod \"8e002e57-13ab-477a-9e16-980e13b5e47f\" (UID: \"8e002e57-13ab-477a-9e16-980e13b5e47f\") " Jan 21 11:02:03 crc kubenswrapper[4881]: I0121 11:02:03.547969 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g42w8\" (UniqueName: \"kubernetes.io/projected/8e002e57-13ab-477a-9e16-980e13b5e47f-kube-api-access-g42w8\") pod \"8e002e57-13ab-477a-9e16-980e13b5e47f\" (UID: \"8e002e57-13ab-477a-9e16-980e13b5e47f\") " Jan 21 11:02:03 crc kubenswrapper[4881]: I0121 11:02:03.547998 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e002e57-13ab-477a-9e16-980e13b5e47f-catalog-content\") pod \"8e002e57-13ab-477a-9e16-980e13b5e47f\" (UID: \"8e002e57-13ab-477a-9e16-980e13b5e47f\") " Jan 21 11:02:03 crc kubenswrapper[4881]: I0121 11:02:03.548029 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mf89m\" (UniqueName: \"kubernetes.io/projected/e034cbba-e6a2-4b62-94e1-7bd2d3c5ae8a-kube-api-access-mf89m\") pod \"e034cbba-e6a2-4b62-94e1-7bd2d3c5ae8a\" (UID: \"e034cbba-e6a2-4b62-94e1-7bd2d3c5ae8a\") " Jan 21 11:02:03 crc kubenswrapper[4881]: I0121 11:02:03.548073 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e034cbba-e6a2-4b62-94e1-7bd2d3c5ae8a-catalog-content\") pod \"e034cbba-e6a2-4b62-94e1-7bd2d3c5ae8a\" (UID: \"e034cbba-e6a2-4b62-94e1-7bd2d3c5ae8a\") " Jan 21 11:02:03 crc kubenswrapper[4881]: I0121 11:02:03.548784 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8e002e57-13ab-477a-9e16-980e13b5e47f-utilities" (OuterVolumeSpecName: "utilities") pod "8e002e57-13ab-477a-9e16-980e13b5e47f" (UID: "8e002e57-13ab-477a-9e16-980e13b5e47f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 11:02:03 crc kubenswrapper[4881]: I0121 11:02:03.566184 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e034cbba-e6a2-4b62-94e1-7bd2d3c5ae8a-kube-api-access-mf89m" (OuterVolumeSpecName: "kube-api-access-mf89m") pod "e034cbba-e6a2-4b62-94e1-7bd2d3c5ae8a" (UID: "e034cbba-e6a2-4b62-94e1-7bd2d3c5ae8a"). InnerVolumeSpecName "kube-api-access-mf89m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:02:03 crc kubenswrapper[4881]: I0121 11:02:03.566355 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e002e57-13ab-477a-9e16-980e13b5e47f-kube-api-access-g42w8" (OuterVolumeSpecName: "kube-api-access-g42w8") pod "8e002e57-13ab-477a-9e16-980e13b5e47f" (UID: "8e002e57-13ab-477a-9e16-980e13b5e47f"). InnerVolumeSpecName "kube-api-access-g42w8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:02:03 crc kubenswrapper[4881]: I0121 11:02:03.570695 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e034cbba-e6a2-4b62-94e1-7bd2d3c5ae8a-utilities" (OuterVolumeSpecName: "utilities") pod "e034cbba-e6a2-4b62-94e1-7bd2d3c5ae8a" (UID: "e034cbba-e6a2-4b62-94e1-7bd2d3c5ae8a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 11:02:03 crc kubenswrapper[4881]: I0121 11:02:03.611687 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8e002e57-13ab-477a-9e16-980e13b5e47f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8e002e57-13ab-477a-9e16-980e13b5e47f" (UID: "8e002e57-13ab-477a-9e16-980e13b5e47f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 11:02:03 crc kubenswrapper[4881]: I0121 11:02:03.617884 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e034cbba-e6a2-4b62-94e1-7bd2d3c5ae8a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e034cbba-e6a2-4b62-94e1-7bd2d3c5ae8a" (UID: "e034cbba-e6a2-4b62-94e1-7bd2d3c5ae8a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 11:02:03 crc kubenswrapper[4881]: I0121 11:02:03.650608 4881 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e034cbba-e6a2-4b62-94e1-7bd2d3c5ae8a-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 11:02:03 crc kubenswrapper[4881]: I0121 11:02:03.650661 4881 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e002e57-13ab-477a-9e16-980e13b5e47f-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 11:02:03 crc kubenswrapper[4881]: I0121 11:02:03.650671 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g42w8\" (UniqueName: \"kubernetes.io/projected/8e002e57-13ab-477a-9e16-980e13b5e47f-kube-api-access-g42w8\") on node \"crc\" DevicePath \"\"" Jan 21 11:02:03 crc kubenswrapper[4881]: I0121 11:02:03.650684 4881 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e002e57-13ab-477a-9e16-980e13b5e47f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 11:02:03 crc kubenswrapper[4881]: I0121 11:02:03.650694 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mf89m\" (UniqueName: \"kubernetes.io/projected/e034cbba-e6a2-4b62-94e1-7bd2d3c5ae8a-kube-api-access-mf89m\") on node \"crc\" DevicePath \"\"" Jan 21 11:02:03 crc kubenswrapper[4881]: I0121 11:02:03.650701 4881 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e034cbba-e6a2-4b62-94e1-7bd2d3c5ae8a-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 11:02:03 crc kubenswrapper[4881]: I0121 11:02:03.725916 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-89m75" Jan 21 11:02:03 crc kubenswrapper[4881]: I0121 11:02:03.734364 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-xmq82" Jan 21 11:02:03 crc kubenswrapper[4881]: I0121 11:02:03.752166 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q2qtc\" (UniqueName: \"kubernetes.io/projected/075db786-6ad0-4982-b70e-bd05d4f240ec-kube-api-access-q2qtc\") pod \"075db786-6ad0-4982-b70e-bd05d4f240ec\" (UID: \"075db786-6ad0-4982-b70e-bd05d4f240ec\") " Jan 21 11:02:03 crc kubenswrapper[4881]: I0121 11:02:03.752330 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/075db786-6ad0-4982-b70e-bd05d4f240ec-utilities\") pod \"075db786-6ad0-4982-b70e-bd05d4f240ec\" (UID: \"075db786-6ad0-4982-b70e-bd05d4f240ec\") " Jan 21 11:02:03 crc kubenswrapper[4881]: I0121 11:02:03.752440 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/075db786-6ad0-4982-b70e-bd05d4f240ec-catalog-content\") pod \"075db786-6ad0-4982-b70e-bd05d4f240ec\" (UID: \"075db786-6ad0-4982-b70e-bd05d4f240ec\") " Jan 21 11:02:03 crc kubenswrapper[4881]: I0121 11:02:03.754039 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/075db786-6ad0-4982-b70e-bd05d4f240ec-utilities" (OuterVolumeSpecName: "utilities") pod "075db786-6ad0-4982-b70e-bd05d4f240ec" (UID: "075db786-6ad0-4982-b70e-bd05d4f240ec"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 11:02:03 crc kubenswrapper[4881]: I0121 11:02:03.757666 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/075db786-6ad0-4982-b70e-bd05d4f240ec-kube-api-access-q2qtc" (OuterVolumeSpecName: "kube-api-access-q2qtc") pod "075db786-6ad0-4982-b70e-bd05d4f240ec" (UID: "075db786-6ad0-4982-b70e-bd05d4f240ec"). InnerVolumeSpecName "kube-api-access-q2qtc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:02:03 crc kubenswrapper[4881]: I0121 11:02:03.762652 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-kfmhs" Jan 21 11:02:03 crc kubenswrapper[4881]: I0121 11:02:03.800820 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/075db786-6ad0-4982-b70e-bd05d4f240ec-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "075db786-6ad0-4982-b70e-bd05d4f240ec" (UID: "075db786-6ad0-4982-b70e-bd05d4f240ec"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 11:02:03 crc kubenswrapper[4881]: I0121 11:02:03.856106 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fc6f2\" (UniqueName: \"kubernetes.io/projected/d318e830-067f-4722-9d74-a45fcefc939d-kube-api-access-fc6f2\") pod \"d318e830-067f-4722-9d74-a45fcefc939d\" (UID: \"d318e830-067f-4722-9d74-a45fcefc939d\") " Jan 21 11:02:03 crc kubenswrapper[4881]: I0121 11:02:03.856207 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/e94f1e92-21b2-44c9-b499-b879850c288d-marketplace-operator-metrics\") pod \"e94f1e92-21b2-44c9-b499-b879850c288d\" (UID: \"e94f1e92-21b2-44c9-b499-b879850c288d\") " Jan 21 11:02:03 crc kubenswrapper[4881]: I0121 11:02:03.856259 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e94f1e92-21b2-44c9-b499-b879850c288d-marketplace-trusted-ca\") pod \"e94f1e92-21b2-44c9-b499-b879850c288d\" (UID: \"e94f1e92-21b2-44c9-b499-b879850c288d\") " Jan 21 11:02:03 crc kubenswrapper[4881]: I0121 11:02:03.856295 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d318e830-067f-4722-9d74-a45fcefc939d-catalog-content\") pod \"d318e830-067f-4722-9d74-a45fcefc939d\" (UID: \"d318e830-067f-4722-9d74-a45fcefc939d\") " Jan 21 11:02:03 crc kubenswrapper[4881]: I0121 11:02:03.856348 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d318e830-067f-4722-9d74-a45fcefc939d-utilities\") pod \"d318e830-067f-4722-9d74-a45fcefc939d\" (UID: \"d318e830-067f-4722-9d74-a45fcefc939d\") " Jan 21 11:02:03 crc kubenswrapper[4881]: I0121 11:02:03.856465 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-995tp\" (UniqueName: \"kubernetes.io/projected/e94f1e92-21b2-44c9-b499-b879850c288d-kube-api-access-995tp\") pod \"e94f1e92-21b2-44c9-b499-b879850c288d\" (UID: \"e94f1e92-21b2-44c9-b499-b879850c288d\") " Jan 21 11:02:03 crc kubenswrapper[4881]: I0121 11:02:03.857085 4881 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/075db786-6ad0-4982-b70e-bd05d4f240ec-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 11:02:03 crc kubenswrapper[4881]: I0121 11:02:03.857119 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q2qtc\" (UniqueName: \"kubernetes.io/projected/075db786-6ad0-4982-b70e-bd05d4f240ec-kube-api-access-q2qtc\") on node \"crc\" DevicePath \"\"" Jan 21 11:02:03 crc kubenswrapper[4881]: I0121 11:02:03.857137 4881 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/075db786-6ad0-4982-b70e-bd05d4f240ec-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 11:02:03 crc kubenswrapper[4881]: I0121 11:02:03.857457 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e94f1e92-21b2-44c9-b499-b879850c288d-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "e94f1e92-21b2-44c9-b499-b879850c288d" (UID: "e94f1e92-21b2-44c9-b499-b879850c288d"). InnerVolumeSpecName "marketplace-trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:02:03 crc kubenswrapper[4881]: I0121 11:02:03.858364 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d318e830-067f-4722-9d74-a45fcefc939d-utilities" (OuterVolumeSpecName: "utilities") pod "d318e830-067f-4722-9d74-a45fcefc939d" (UID: "d318e830-067f-4722-9d74-a45fcefc939d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 11:02:03 crc kubenswrapper[4881]: I0121 11:02:03.863550 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e94f1e92-21b2-44c9-b499-b879850c288d-kube-api-access-995tp" (OuterVolumeSpecName: "kube-api-access-995tp") pod "e94f1e92-21b2-44c9-b499-b879850c288d" (UID: "e94f1e92-21b2-44c9-b499-b879850c288d"). InnerVolumeSpecName "kube-api-access-995tp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:02:03 crc kubenswrapper[4881]: I0121 11:02:03.864263 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d318e830-067f-4722-9d74-a45fcefc939d-kube-api-access-fc6f2" (OuterVolumeSpecName: "kube-api-access-fc6f2") pod "d318e830-067f-4722-9d74-a45fcefc939d" (UID: "d318e830-067f-4722-9d74-a45fcefc939d"). InnerVolumeSpecName "kube-api-access-fc6f2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:02:03 crc kubenswrapper[4881]: I0121 11:02:03.868334 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e94f1e92-21b2-44c9-b499-b879850c288d-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "e94f1e92-21b2-44c9-b499-b879850c288d" (UID: "e94f1e92-21b2-44c9-b499-b879850c288d"). InnerVolumeSpecName "marketplace-operator-metrics". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:02:03 crc kubenswrapper[4881]: I0121 11:02:03.959452 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fc6f2\" (UniqueName: \"kubernetes.io/projected/d318e830-067f-4722-9d74-a45fcefc939d-kube-api-access-fc6f2\") on node \"crc\" DevicePath \"\"" Jan 21 11:02:03 crc kubenswrapper[4881]: I0121 11:02:03.959520 4881 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/e94f1e92-21b2-44c9-b499-b879850c288d-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 21 11:02:03 crc kubenswrapper[4881]: I0121 11:02:03.959542 4881 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e94f1e92-21b2-44c9-b499-b879850c288d-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 21 11:02:03 crc kubenswrapper[4881]: I0121 11:02:03.959560 4881 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d318e830-067f-4722-9d74-a45fcefc939d-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 11:02:03 crc kubenswrapper[4881]: I0121 11:02:03.959574 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-995tp\" (UniqueName: \"kubernetes.io/projected/e94f1e92-21b2-44c9-b499-b879850c288d-kube-api-access-995tp\") on node \"crc\" DevicePath \"\"" Jan 21 11:02:03 crc kubenswrapper[4881]: I0121 11:02:03.983859 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-vrcvz"] Jan 21 11:02:03 crc kubenswrapper[4881]: W0121 11:02:03.990099 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod98f0e6fe_f27f_4d75_9149_6238b2220849.slice/crio-ea34048b1edcabeb6567b730e6cb5d995f3b84ecb21eb2f187130d4fa8f74bc3 WatchSource:0}: Error finding container ea34048b1edcabeb6567b730e6cb5d995f3b84ecb21eb2f187130d4fa8f74bc3: Status 404 returned error can't find the container with id ea34048b1edcabeb6567b730e6cb5d995f3b84ecb21eb2f187130d4fa8f74bc3 Jan 21 11:02:04 crc kubenswrapper[4881]: I0121 11:02:04.030547 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d318e830-067f-4722-9d74-a45fcefc939d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d318e830-067f-4722-9d74-a45fcefc939d" (UID: "d318e830-067f-4722-9d74-a45fcefc939d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 11:02:04 crc kubenswrapper[4881]: I0121 11:02:04.062827 4881 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d318e830-067f-4722-9d74-a45fcefc939d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 11:02:04 crc kubenswrapper[4881]: I0121 11:02:04.437710 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-xmq82" Jan 21 11:02:04 crc kubenswrapper[4881]: I0121 11:02:04.437719 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-xmq82" event={"ID":"e94f1e92-21b2-44c9-b499-b879850c288d","Type":"ContainerDied","Data":"123c57f996d77041997b15262c61902d2eed5d15c9314dac5b070f52214a0ad3"} Jan 21 11:02:04 crc kubenswrapper[4881]: I0121 11:02:04.437833 4881 scope.go:117] "RemoveContainer" containerID="814fc7d7b657d30002e0169875973f3d65029d02d56ac8702f4d08fa12940079" Jan 21 11:02:04 crc kubenswrapper[4881]: I0121 11:02:04.441946 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-vrcvz" event={"ID":"98f0e6fe-f27f-4d75-9149-6238b2220849","Type":"ContainerStarted","Data":"ea34048b1edcabeb6567b730e6cb5d995f3b84ecb21eb2f187130d4fa8f74bc3"} Jan 21 11:02:04 crc kubenswrapper[4881]: I0121 11:02:04.448389 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kfmhs" event={"ID":"d318e830-067f-4722-9d74-a45fcefc939d","Type":"ContainerDied","Data":"b87ddedd309d60e82b2425e90c86377b7db5b6d93701316fb318e5a216d01095"} Jan 21 11:02:04 crc kubenswrapper[4881]: I0121 11:02:04.448439 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-kfmhs" Jan 21 11:02:04 crc kubenswrapper[4881]: I0121 11:02:04.453614 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-q6dn5" event={"ID":"8e002e57-13ab-477a-9e16-980e13b5e47f","Type":"ContainerDied","Data":"a5c87f9c9c2e9ea53443d498b2b01400a8b6111456d79eeb2d2d4b28aa714ca1"} Jan 21 11:02:04 crc kubenswrapper[4881]: I0121 11:02:04.453722 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-q6dn5" Jan 21 11:02:04 crc kubenswrapper[4881]: I0121 11:02:04.458384 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-v5n2s" Jan 21 11:02:04 crc kubenswrapper[4881]: I0121 11:02:04.458699 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-89m75" event={"ID":"075db786-6ad0-4982-b70e-bd05d4f240ec","Type":"ContainerDied","Data":"97ca6fad994e892affd0e053e6d3515afda4b44ce01474758415dca871d6c00b"} Jan 21 11:02:04 crc kubenswrapper[4881]: I0121 11:02:04.459062 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-89m75"
Jan 21 11:02:04 crc kubenswrapper[4881]: I0121 11:02:04.462295 4881 scope.go:117] "RemoveContainer" containerID="ea62c10cfd248c0ef9c6d0347f5a3b0a2b7e8d1e35c546c01d7fdadf484cb508"
Jan 21 11:02:04 crc kubenswrapper[4881]: I0121 11:02:04.490030 4881 scope.go:117] "RemoveContainer" containerID="456438ece135082aa65a1f9d3e1df54da4ad18d3ac41d1e2ac75d98b61443cef"
Jan 21 11:02:04 crc kubenswrapper[4881]: I0121 11:02:04.496258 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-xmq82"]
Jan 21 11:02:04 crc kubenswrapper[4881]: I0121 11:02:04.506402 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-xmq82"]
Jan 21 11:02:04 crc kubenswrapper[4881]: I0121 11:02:04.513260 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-kfmhs"]
Jan 21 11:02:04 crc kubenswrapper[4881]: I0121 11:02:04.518993 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-kfmhs"]
Jan 21 11:02:04 crc kubenswrapper[4881]: I0121 11:02:04.523259 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-q6dn5"]
Jan 21 11:02:04 crc kubenswrapper[4881]: I0121 11:02:04.533394 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-q6dn5"]
Jan 21 11:02:04 crc kubenswrapper[4881]: I0121 11:02:04.537693 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-v5n2s"]
Jan 21 11:02:04 crc kubenswrapper[4881]: I0121 11:02:04.543226 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-v5n2s"]
Jan 21 11:02:04 crc kubenswrapper[4881]: I0121 11:02:04.546248 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-89m75"]
Jan 21 11:02:04 crc kubenswrapper[4881]: I0121 11:02:04.548936 4881 scope.go:117] "RemoveContainer" containerID="b9a009384ba81492213bce1a87a61e1b83f262354a9aea725ad849bc0749a5f7"
Jan 21 11:02:04 crc kubenswrapper[4881]: I0121 11:02:04.549050 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-89m75"]
Jan 21 11:02:04 crc kubenswrapper[4881]: I0121 11:02:04.562307 4881 scope.go:117] "RemoveContainer" containerID="e42581773a8d4ea1772dd60eaf9071bf2de0cdd39b8e134e5ac5a682d95b642f"
Jan 21 11:02:04 crc kubenswrapper[4881]: I0121 11:02:04.584992 4881 scope.go:117] "RemoveContainer" containerID="cad9f8570b6b7c8359172ebecd350bcad67cfe5e05e5aeca3f0a038ec3357bb5"
Jan 21 11:02:04 crc kubenswrapper[4881]: I0121 11:02:04.607719 4881 scope.go:117] "RemoveContainer" containerID="1ccb96495e693b437b8f3969fa58a55b9e7011c267f14a44820d1cfd34daabf3"
Jan 21 11:02:04 crc kubenswrapper[4881]: I0121 11:02:04.636419 4881 scope.go:117] "RemoveContainer" containerID="d4c87b729f18eaf9f12531e5147374286d6a7a44e910d96df5b3275a242bc490"
Jan 21 11:02:04 crc kubenswrapper[4881]: I0121 11:02:04.656009 4881 scope.go:117] "RemoveContainer" containerID="a06c8d6c70785e0e51b0e238072a99f6a50caf04a590fb7ba69cc08788ffee9a"
Jan 21 11:02:04 crc kubenswrapper[4881]: I0121 11:02:04.693053 4881 scope.go:117] "RemoveContainer" containerID="aa990b30489b423fbac7484510b784c9211e2f63bd3366b894aa031bc0754115"
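
The teardown of each marketplace pod above follows a fixed beat: "SyncLoop DELETE" from the API, "Killing container with a grace period" (gracePeriod=30), a PLEG "container finished" event with exitCode=0, volume unmount and "Volume detached", then a final "SyncLoop REMOVE" and a batch of scope.go "RemoveContainer" calls that also sweep out the older exited extract-utilities/extract-content containers. A rough sketch that pairs each kill with its PLEG event to confirm the shutdown beat the grace period; the year and the input path are assumptions, and one entry per line is assumed as before:

    import re
    from datetime import datetime

    YEAR = 2026  # assumption: klog headers omit the year
    KILL = re.compile(r'[IWEF](\d{4}) (\d{2}:\d{2}:\d{2}\.\d+).*"Killing container '
                      r'with a grace period".*containerID="cri-o://([0-9a-f]+)" '
                      r'gracePeriod=(\d+)')
    DIED = re.compile(r'[IWEF](\d{4}) (\d{2}:\d{2}:\d{2}\.\d+).*"Generic \(PLEG\): '
                      r'container finished".*containerID="([0-9a-f]+)" exitCode=(-?\d+)')

    def ts(mmdd, hms):
        return datetime.strptime(f'{YEAR}{mmdd} {hms}', '%Y%m%d %H:%M:%S.%f')

    kills, deaths = {}, {}
    with open('kubelet.log') as f:          # path is an assumption
        for line in f:
            if (m := KILL.search(line)):
                kills[m.group(3)] = (ts(m.group(1), m.group(2)), int(m.group(4)))
            elif (m := DIED.search(line)):
                deaths[m.group(3)] = (ts(m.group(1), m.group(2)), int(m.group(4)))

    for cid, (t0, grace) in sorted(kills.items()):
        if cid in deaths:
            t1, code = deaths[cid]
            print(f'{cid[:12]} exit={code} after {(t1 - t0).total_seconds():.3f}s '
                  f'(grace {grace}s)')

In this excerpt every registry-server exits 0 within roughly half a second of the kill; the one exitCode=137 (128+SIGKILL) earlier in the excerpt belongs to the kube-apiserver startup-monitor container.

Jan 21 11:02:05 crc kubenswrapper[4881]: I0121 11:02:05.079584 4881 kubelet.go:2421] "SyncLoop ADD" source="api"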
pods=["openshift-marketplace/certified-operators-7wxr8"] Jan 21 11:02:05 crc kubenswrapper[4881]: E0121 11:02:05.080426 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e002e57-13ab-477a-9e16-980e13b5e47f" containerName="extract-utilities" Jan 21 11:02:05 crc kubenswrapper[4881]: I0121 11:02:05.080443 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e002e57-13ab-477a-9e16-980e13b5e47f" containerName="extract-utilities" Jan 21 11:02:05 crc kubenswrapper[4881]: E0121 11:02:05.080457 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e94f1e92-21b2-44c9-b499-b879850c288d" containerName="marketplace-operator" Jan 21 11:02:05 crc kubenswrapper[4881]: I0121 11:02:05.080463 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="e94f1e92-21b2-44c9-b499-b879850c288d" containerName="marketplace-operator" Jan 21 11:02:05 crc kubenswrapper[4881]: E0121 11:02:05.080473 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e002e57-13ab-477a-9e16-980e13b5e47f" containerName="extract-content" Jan 21 11:02:05 crc kubenswrapper[4881]: I0121 11:02:05.080479 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e002e57-13ab-477a-9e16-980e13b5e47f" containerName="extract-content" Jan 21 11:02:05 crc kubenswrapper[4881]: E0121 11:02:05.080487 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e034cbba-e6a2-4b62-94e1-7bd2d3c5ae8a" containerName="extract-content" Jan 21 11:02:05 crc kubenswrapper[4881]: I0121 11:02:05.080493 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="e034cbba-e6a2-4b62-94e1-7bd2d3c5ae8a" containerName="extract-content" Jan 21 11:02:05 crc kubenswrapper[4881]: E0121 11:02:05.080500 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="075db786-6ad0-4982-b70e-bd05d4f240ec" containerName="extract-content" Jan 21 11:02:05 crc kubenswrapper[4881]: I0121 11:02:05.080505 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="075db786-6ad0-4982-b70e-bd05d4f240ec" containerName="extract-content" Jan 21 11:02:05 crc kubenswrapper[4881]: E0121 11:02:05.080514 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e002e57-13ab-477a-9e16-980e13b5e47f" containerName="registry-server" Jan 21 11:02:05 crc kubenswrapper[4881]: I0121 11:02:05.080522 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e002e57-13ab-477a-9e16-980e13b5e47f" containerName="registry-server" Jan 21 11:02:05 crc kubenswrapper[4881]: E0121 11:02:05.080533 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="075db786-6ad0-4982-b70e-bd05d4f240ec" containerName="registry-server" Jan 21 11:02:05 crc kubenswrapper[4881]: I0121 11:02:05.080539 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="075db786-6ad0-4982-b70e-bd05d4f240ec" containerName="registry-server" Jan 21 11:02:05 crc kubenswrapper[4881]: E0121 11:02:05.080572 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e034cbba-e6a2-4b62-94e1-7bd2d3c5ae8a" containerName="registry-server" Jan 21 11:02:05 crc kubenswrapper[4881]: I0121 11:02:05.080580 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="e034cbba-e6a2-4b62-94e1-7bd2d3c5ae8a" containerName="registry-server" Jan 21 11:02:05 crc kubenswrapper[4881]: E0121 11:02:05.080588 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e034cbba-e6a2-4b62-94e1-7bd2d3c5ae8a" containerName="extract-utilities" Jan 21 11:02:05 crc kubenswrapper[4881]: I0121 11:02:05.080594 4881 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="e034cbba-e6a2-4b62-94e1-7bd2d3c5ae8a" containerName="extract-utilities" Jan 21 11:02:05 crc kubenswrapper[4881]: E0121 11:02:05.080602 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="075db786-6ad0-4982-b70e-bd05d4f240ec" containerName="extract-utilities" Jan 21 11:02:05 crc kubenswrapper[4881]: I0121 11:02:05.080608 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="075db786-6ad0-4982-b70e-bd05d4f240ec" containerName="extract-utilities" Jan 21 11:02:05 crc kubenswrapper[4881]: E0121 11:02:05.080619 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d318e830-067f-4722-9d74-a45fcefc939d" containerName="extract-utilities" Jan 21 11:02:05 crc kubenswrapper[4881]: I0121 11:02:05.080625 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="d318e830-067f-4722-9d74-a45fcefc939d" containerName="extract-utilities" Jan 21 11:02:05 crc kubenswrapper[4881]: E0121 11:02:05.080633 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d318e830-067f-4722-9d74-a45fcefc939d" containerName="extract-content" Jan 21 11:02:05 crc kubenswrapper[4881]: I0121 11:02:05.080639 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="d318e830-067f-4722-9d74-a45fcefc939d" containerName="extract-content" Jan 21 11:02:05 crc kubenswrapper[4881]: E0121 11:02:05.080648 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d318e830-067f-4722-9d74-a45fcefc939d" containerName="registry-server" Jan 21 11:02:05 crc kubenswrapper[4881]: I0121 11:02:05.080653 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="d318e830-067f-4722-9d74-a45fcefc939d" containerName="registry-server" Jan 21 11:02:05 crc kubenswrapper[4881]: I0121 11:02:05.080753 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="d318e830-067f-4722-9d74-a45fcefc939d" containerName="registry-server" Jan 21 11:02:05 crc kubenswrapper[4881]: I0121 11:02:05.080768 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e002e57-13ab-477a-9e16-980e13b5e47f" containerName="registry-server" Jan 21 11:02:05 crc kubenswrapper[4881]: I0121 11:02:05.080778 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="e94f1e92-21b2-44c9-b499-b879850c288d" containerName="marketplace-operator" Jan 21 11:02:05 crc kubenswrapper[4881]: I0121 11:02:05.080808 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="e034cbba-e6a2-4b62-94e1-7bd2d3c5ae8a" containerName="registry-server" Jan 21 11:02:05 crc kubenswrapper[4881]: I0121 11:02:05.080819 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="075db786-6ad0-4982-b70e-bd05d4f240ec" containerName="registry-server" Jan 21 11:02:05 crc kubenswrapper[4881]: I0121 11:02:05.081998 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7wxr8" Jan 21 11:02:05 crc kubenswrapper[4881]: W0121 11:02:05.085609 4881 reflector.go:561] object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g": failed to list *v1.Secret: secrets "certified-operators-dockercfg-4rs5g" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-marketplace": no relationship found between node 'crc' and this object Jan 21 11:02:05 crc kubenswrapper[4881]: E0121 11:02:05.085657 4881 reflector.go:158] "Unhandled Error" err="object-\"openshift-marketplace\"/\"certified-operators-dockercfg-4rs5g\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"certified-operators-dockercfg-4rs5g\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-marketplace\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 21 11:02:05 crc kubenswrapper[4881]: I0121 11:02:05.094923 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7wxr8"] Jan 21 11:02:05 crc kubenswrapper[4881]: I0121 11:02:05.179153 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6e9defc7-ad37-4742-b149-cb71d7ea177a-catalog-content\") pod \"certified-operators-7wxr8\" (UID: \"6e9defc7-ad37-4742-b149-cb71d7ea177a\") " pod="openshift-marketplace/certified-operators-7wxr8" Jan 21 11:02:05 crc kubenswrapper[4881]: I0121 11:02:05.179250 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6e9defc7-ad37-4742-b149-cb71d7ea177a-utilities\") pod \"certified-operators-7wxr8\" (UID: \"6e9defc7-ad37-4742-b149-cb71d7ea177a\") " pod="openshift-marketplace/certified-operators-7wxr8" Jan 21 11:02:05 crc kubenswrapper[4881]: I0121 11:02:05.179296 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wxc6x\" (UniqueName: \"kubernetes.io/projected/6e9defc7-ad37-4742-b149-cb71d7ea177a-kube-api-access-wxc6x\") pod \"certified-operators-7wxr8\" (UID: \"6e9defc7-ad37-4742-b149-cb71d7ea177a\") " pod="openshift-marketplace/certified-operators-7wxr8" Jan 21 11:02:05 crc kubenswrapper[4881]: I0121 11:02:05.280325 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6e9defc7-ad37-4742-b149-cb71d7ea177a-catalog-content\") pod \"certified-operators-7wxr8\" (UID: \"6e9defc7-ad37-4742-b149-cb71d7ea177a\") " pod="openshift-marketplace/certified-operators-7wxr8" Jan 21 11:02:05 crc kubenswrapper[4881]: I0121 11:02:05.280397 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6e9defc7-ad37-4742-b149-cb71d7ea177a-utilities\") pod \"certified-operators-7wxr8\" (UID: \"6e9defc7-ad37-4742-b149-cb71d7ea177a\") " pod="openshift-marketplace/certified-operators-7wxr8" Jan 21 11:02:05 crc kubenswrapper[4881]: I0121 11:02:05.280444 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wxc6x\" (UniqueName: \"kubernetes.io/projected/6e9defc7-ad37-4742-b149-cb71d7ea177a-kube-api-access-wxc6x\") pod \"certified-operators-7wxr8\" (UID: \"6e9defc7-ad37-4742-b149-cb71d7ea177a\") " 
pod="openshift-marketplace/certified-operators-7wxr8" Jan 21 11:02:05 crc kubenswrapper[4881]: I0121 11:02:05.281363 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6e9defc7-ad37-4742-b149-cb71d7ea177a-catalog-content\") pod \"certified-operators-7wxr8\" (UID: \"6e9defc7-ad37-4742-b149-cb71d7ea177a\") " pod="openshift-marketplace/certified-operators-7wxr8" Jan 21 11:02:05 crc kubenswrapper[4881]: I0121 11:02:05.281390 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6e9defc7-ad37-4742-b149-cb71d7ea177a-utilities\") pod \"certified-operators-7wxr8\" (UID: \"6e9defc7-ad37-4742-b149-cb71d7ea177a\") " pod="openshift-marketplace/certified-operators-7wxr8" Jan 21 11:02:05 crc kubenswrapper[4881]: I0121 11:02:05.314022 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wxc6x\" (UniqueName: \"kubernetes.io/projected/6e9defc7-ad37-4742-b149-cb71d7ea177a-kube-api-access-wxc6x\") pod \"certified-operators-7wxr8\" (UID: \"6e9defc7-ad37-4742-b149-cb71d7ea177a\") " pod="openshift-marketplace/certified-operators-7wxr8" Jan 21 11:02:05 crc kubenswrapper[4881]: I0121 11:02:05.319761 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="075db786-6ad0-4982-b70e-bd05d4f240ec" path="/var/lib/kubelet/pods/075db786-6ad0-4982-b70e-bd05d4f240ec/volumes" Jan 21 11:02:05 crc kubenswrapper[4881]: I0121 11:02:05.320720 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8e002e57-13ab-477a-9e16-980e13b5e47f" path="/var/lib/kubelet/pods/8e002e57-13ab-477a-9e16-980e13b5e47f/volumes" Jan 21 11:02:05 crc kubenswrapper[4881]: I0121 11:02:05.321448 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d318e830-067f-4722-9d74-a45fcefc939d" path="/var/lib/kubelet/pods/d318e830-067f-4722-9d74-a45fcefc939d/volumes" Jan 21 11:02:05 crc kubenswrapper[4881]: I0121 11:02:05.322614 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e034cbba-e6a2-4b62-94e1-7bd2d3c5ae8a" path="/var/lib/kubelet/pods/e034cbba-e6a2-4b62-94e1-7bd2d3c5ae8a/volumes" Jan 21 11:02:05 crc kubenswrapper[4881]: I0121 11:02:05.323301 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e94f1e92-21b2-44c9-b499-b879850c288d" path="/var/lib/kubelet/pods/e94f1e92-21b2-44c9-b499-b879850c288d/volumes" Jan 21 11:02:05 crc kubenswrapper[4881]: I0121 11:02:05.465493 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-vrcvz" event={"ID":"98f0e6fe-f27f-4d75-9149-6238b2220849","Type":"ContainerStarted","Data":"3d438dff4284b7b3533355ae936f073ed95243d784cbf4ae5e7206dc38abc68d"} Jan 21 11:02:05 crc kubenswrapper[4881]: I0121 11:02:05.465808 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-vrcvz" Jan 21 11:02:05 crc kubenswrapper[4881]: I0121 11:02:05.471657 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-vrcvz" Jan 21 11:02:05 crc kubenswrapper[4881]: I0121 11:02:05.486219 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-vrcvz" podStartSLOduration=3.486196576 podStartE2EDuration="3.486196576s" podCreationTimestamp="2026-01-21 11:02:02 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 11:02:05.484261406 +0000 UTC m=+312.744217875" watchObservedRunningTime="2026-01-21 11:02:05.486196576 +0000 UTC m=+312.746153045"
Jan 21 11:02:06 crc kubenswrapper[4881]: I0121 11:02:06.318674 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g"
Jan 21 11:02:06 crc kubenswrapper[4881]: I0121 11:02:06.326172 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7wxr8"
Jan 21 11:02:06 crc kubenswrapper[4881]: I0121 11:02:06.816608 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7wxr8"]
Jan 21 11:02:06 crc kubenswrapper[4881]: W0121 11:02:06.824071 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6e9defc7_ad37_4742_b149_cb71d7ea177a.slice/crio-fa83766f89d1616cf56747b49c2fcf160a37e27aa6ba9e86f2b0cf1ec797c327 WatchSource:0}: Error finding container fa83766f89d1616cf56747b49c2fcf160a37e27aa6ba9e86f2b0cf1ec797c327: Status 404 returned error can't find the container with id fa83766f89d1616cf56747b49c2fcf160a37e27aa6ba9e86f2b0cf1ec797c327
Jan 21 11:02:06 crc kubenswrapper[4881]: I0121 11:02:06.879014 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-rs9gj"]
Jan 21 11:02:06 crc kubenswrapper[4881]: I0121 11:02:06.880308 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rs9gj"
Jan 21 11:02:06 crc kubenswrapper[4881]: I0121 11:02:06.884518 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb"
Jan 21 11:02:06 crc kubenswrapper[4881]: I0121 11:02:06.898916 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rs9gj"]
Jan 21 11:02:06 crc kubenswrapper[4881]: I0121 11:02:06.904072 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c6d87675-513f-412d-a34c-d789cce5b4e8-catalog-content\") pod \"redhat-marketplace-rs9gj\" (UID: \"c6d87675-513f-412d-a34c-d789cce5b4e8\") " pod="openshift-marketplace/redhat-marketplace-rs9gj"
Jan 21 11:02:06 crc kubenswrapper[4881]: I0121 11:02:06.904137 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pqspx\" (UniqueName: \"kubernetes.io/projected/c6d87675-513f-412d-a34c-d789cce5b4e8-kube-api-access-pqspx\") pod \"redhat-marketplace-rs9gj\" (UID: \"c6d87675-513f-412d-a34c-d789cce5b4e8\") " pod="openshift-marketplace/redhat-marketplace-rs9gj"
Jan 21 11:02:06 crc kubenswrapper[4881]: I0121 11:02:06.904362 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c6d87675-513f-412d-a34c-d789cce5b4e8-utilities\") pod \"redhat-marketplace-rs9gj\" (UID: \"c6d87675-513f-412d-a34c-d789cce5b4e8\") " pod="openshift-marketplace/redhat-marketplace-rs9gj"
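
Two things resolve in the entries above. First, the pod_startup_latency_tracker.go:104 "Observed pod startup duration" entry is the kubelet's startup-latency SLI for the replacement operator pod: podStartSLOduration and podStartE2EDuration are both about 3.49s, and firstStartedPulling/lastFinishedPulling are the zero time (0001-01-01), consistent with the image already being present so that no pull was needed. Second, the "Caches populated for *v1.Secret" lines clear the earlier "is forbidden ... no relationship found between node 'crc' and this object" reflector error: the node authorizer only grants the kubelet access to a secret once a pod referencing it is bound to the node, so the watch succeeds on retry after the new catalog pod lands. A sketch for pulling the SLI numbers out (path assumed, one entry per line assumed):

    import re

    SLO = re.compile(r'"Observed pod startup duration" pod="([^"]+)" '
                     r'podStartSLOduration=([\d.]+) podStartE2EDuration="([^"]+)"')

    with open('kubelet.log') as f:          # path is an assumption
        for line in f:
            if (m := SLO.search(line)):
                print(f'{m.group(1)}: SLO={float(m.group(2)):.3f}s e2e={m.group(3)}')
    # openshift-marketplace/marketplace-operator-79b997595-vrcvz: SLO=3.486s e2e=3.486196576s

Jan 21 11:02:07 crc kubenswrapper[4881]: I0121 11:02:07.006100 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: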
\"kubernetes.io/empty-dir/c6d87675-513f-412d-a34c-d789cce5b4e8-catalog-content\") pod \"redhat-marketplace-rs9gj\" (UID: \"c6d87675-513f-412d-a34c-d789cce5b4e8\") " pod="openshift-marketplace/redhat-marketplace-rs9gj" Jan 21 11:02:07 crc kubenswrapper[4881]: I0121 11:02:07.006158 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pqspx\" (UniqueName: \"kubernetes.io/projected/c6d87675-513f-412d-a34c-d789cce5b4e8-kube-api-access-pqspx\") pod \"redhat-marketplace-rs9gj\" (UID: \"c6d87675-513f-412d-a34c-d789cce5b4e8\") " pod="openshift-marketplace/redhat-marketplace-rs9gj" Jan 21 11:02:07 crc kubenswrapper[4881]: I0121 11:02:07.006206 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c6d87675-513f-412d-a34c-d789cce5b4e8-utilities\") pod \"redhat-marketplace-rs9gj\" (UID: \"c6d87675-513f-412d-a34c-d789cce5b4e8\") " pod="openshift-marketplace/redhat-marketplace-rs9gj" Jan 21 11:02:07 crc kubenswrapper[4881]: I0121 11:02:07.006702 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c6d87675-513f-412d-a34c-d789cce5b4e8-utilities\") pod \"redhat-marketplace-rs9gj\" (UID: \"c6d87675-513f-412d-a34c-d789cce5b4e8\") " pod="openshift-marketplace/redhat-marketplace-rs9gj" Jan 21 11:02:07 crc kubenswrapper[4881]: I0121 11:02:07.006955 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c6d87675-513f-412d-a34c-d789cce5b4e8-catalog-content\") pod \"redhat-marketplace-rs9gj\" (UID: \"c6d87675-513f-412d-a34c-d789cce5b4e8\") " pod="openshift-marketplace/redhat-marketplace-rs9gj" Jan 21 11:02:07 crc kubenswrapper[4881]: I0121 11:02:07.040383 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pqspx\" (UniqueName: \"kubernetes.io/projected/c6d87675-513f-412d-a34c-d789cce5b4e8-kube-api-access-pqspx\") pod \"redhat-marketplace-rs9gj\" (UID: \"c6d87675-513f-412d-a34c-d789cce5b4e8\") " pod="openshift-marketplace/redhat-marketplace-rs9gj" Jan 21 11:02:07 crc kubenswrapper[4881]: I0121 11:02:07.223358 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rs9gj" Jan 21 11:02:07 crc kubenswrapper[4881]: I0121 11:02:07.481051 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-kfzl8"] Jan 21 11:02:07 crc kubenswrapper[4881]: I0121 11:02:07.482923 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-kfzl8" Jan 21 11:02:07 crc kubenswrapper[4881]: I0121 11:02:07.492979 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-kfzl8"] Jan 21 11:02:07 crc kubenswrapper[4881]: I0121 11:02:07.493297 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 21 11:02:07 crc kubenswrapper[4881]: I0121 11:02:07.494578 4881 generic.go:334] "Generic (PLEG): container finished" podID="6e9defc7-ad37-4742-b149-cb71d7ea177a" containerID="33e03055f6685a2d8d66bf472cdde01237efd3237849c8e149705b78539ac11b" exitCode=0 Jan 21 11:02:07 crc kubenswrapper[4881]: I0121 11:02:07.495426 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7wxr8" event={"ID":"6e9defc7-ad37-4742-b149-cb71d7ea177a","Type":"ContainerDied","Data":"33e03055f6685a2d8d66bf472cdde01237efd3237849c8e149705b78539ac11b"} Jan 21 11:02:07 crc kubenswrapper[4881]: I0121 11:02:07.495459 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7wxr8" event={"ID":"6e9defc7-ad37-4742-b149-cb71d7ea177a","Type":"ContainerStarted","Data":"fa83766f89d1616cf56747b49c2fcf160a37e27aa6ba9e86f2b0cf1ec797c327"} Jan 21 11:02:07 crc kubenswrapper[4881]: I0121 11:02:07.512868 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9ds4w\" (UniqueName: \"kubernetes.io/projected/8ab3938c-6614-4877-a94c-75b90f339523-kube-api-access-9ds4w\") pod \"redhat-operators-kfzl8\" (UID: \"8ab3938c-6614-4877-a94c-75b90f339523\") " pod="openshift-marketplace/redhat-operators-kfzl8" Jan 21 11:02:07 crc kubenswrapper[4881]: I0121 11:02:07.512934 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8ab3938c-6614-4877-a94c-75b90f339523-utilities\") pod \"redhat-operators-kfzl8\" (UID: \"8ab3938c-6614-4877-a94c-75b90f339523\") " pod="openshift-marketplace/redhat-operators-kfzl8" Jan 21 11:02:07 crc kubenswrapper[4881]: I0121 11:02:07.512982 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8ab3938c-6614-4877-a94c-75b90f339523-catalog-content\") pod \"redhat-operators-kfzl8\" (UID: \"8ab3938c-6614-4877-a94c-75b90f339523\") " pod="openshift-marketplace/redhat-operators-kfzl8" Jan 21 11:02:07 crc kubenswrapper[4881]: I0121 11:02:07.614412 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9ds4w\" (UniqueName: \"kubernetes.io/projected/8ab3938c-6614-4877-a94c-75b90f339523-kube-api-access-9ds4w\") pod \"redhat-operators-kfzl8\" (UID: \"8ab3938c-6614-4877-a94c-75b90f339523\") " pod="openshift-marketplace/redhat-operators-kfzl8" Jan 21 11:02:07 crc kubenswrapper[4881]: I0121 11:02:07.614479 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8ab3938c-6614-4877-a94c-75b90f339523-utilities\") pod \"redhat-operators-kfzl8\" (UID: \"8ab3938c-6614-4877-a94c-75b90f339523\") " pod="openshift-marketplace/redhat-operators-kfzl8" Jan 21 11:02:07 crc kubenswrapper[4881]: I0121 11:02:07.614545 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/8ab3938c-6614-4877-a94c-75b90f339523-catalog-content\") pod \"redhat-operators-kfzl8\" (UID: \"8ab3938c-6614-4877-a94c-75b90f339523\") " pod="openshift-marketplace/redhat-operators-kfzl8" Jan 21 11:02:07 crc kubenswrapper[4881]: I0121 11:02:07.615100 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8ab3938c-6614-4877-a94c-75b90f339523-utilities\") pod \"redhat-operators-kfzl8\" (UID: \"8ab3938c-6614-4877-a94c-75b90f339523\") " pod="openshift-marketplace/redhat-operators-kfzl8" Jan 21 11:02:07 crc kubenswrapper[4881]: I0121 11:02:07.615160 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8ab3938c-6614-4877-a94c-75b90f339523-catalog-content\") pod \"redhat-operators-kfzl8\" (UID: \"8ab3938c-6614-4877-a94c-75b90f339523\") " pod="openshift-marketplace/redhat-operators-kfzl8" Jan 21 11:02:07 crc kubenswrapper[4881]: I0121 11:02:07.774843 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rs9gj"] Jan 21 11:02:07 crc kubenswrapper[4881]: I0121 11:02:07.777689 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9ds4w\" (UniqueName: \"kubernetes.io/projected/8ab3938c-6614-4877-a94c-75b90f339523-kube-api-access-9ds4w\") pod \"redhat-operators-kfzl8\" (UID: \"8ab3938c-6614-4877-a94c-75b90f339523\") " pod="openshift-marketplace/redhat-operators-kfzl8" Jan 21 11:02:07 crc kubenswrapper[4881]: I0121 11:02:07.860151 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-kfzl8" Jan 21 11:02:08 crc kubenswrapper[4881]: I0121 11:02:08.289229 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-kfzl8"] Jan 21 11:02:08 crc kubenswrapper[4881]: W0121 11:02:08.294227 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8ab3938c_6614_4877_a94c_75b90f339523.slice/crio-88693e4459975d71f2437f1140fa85449acac7a24f76403599ddaf3666aae16f WatchSource:0}: Error finding container 88693e4459975d71f2437f1140fa85449acac7a24f76403599ddaf3666aae16f: Status 404 returned error can't find the container with id 88693e4459975d71f2437f1140fa85449acac7a24f76403599ddaf3666aae16f Jan 21 11:02:08 crc kubenswrapper[4881]: E0121 11:02:08.445514 4881 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod706c6a3b_823b_4ea3_b7a8_e20d571d3ace.slice/crio-conmon-9c8c8d93509d2a29c183d63351f0748ec6e60414dbb285df980924884b598111.scope\": RecentStats: unable to find data in memory cache]" Jan 21 11:02:08 crc kubenswrapper[4881]: I0121 11:02:08.503554 4881 generic.go:334] "Generic (PLEG): container finished" podID="8ab3938c-6614-4877-a94c-75b90f339523" containerID="80ed99dabfcdf4861f6392eac676390bb9f707460dae3cb2412782ac0dea7ce7" exitCode=0 Jan 21 11:02:08 crc kubenswrapper[4881]: I0121 11:02:08.503675 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kfzl8" event={"ID":"8ab3938c-6614-4877-a94c-75b90f339523","Type":"ContainerDied","Data":"80ed99dabfcdf4861f6392eac676390bb9f707460dae3cb2412782ac0dea7ce7"} Jan 21 11:02:08 crc kubenswrapper[4881]: I0121 11:02:08.503975 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-kfzl8" event={"ID":"8ab3938c-6614-4877-a94c-75b90f339523","Type":"ContainerStarted","Data":"88693e4459975d71f2437f1140fa85449acac7a24f76403599ddaf3666aae16f"} Jan 21 11:02:08 crc kubenswrapper[4881]: I0121 11:02:08.505940 4881 generic.go:334] "Generic (PLEG): container finished" podID="c6d87675-513f-412d-a34c-d789cce5b4e8" containerID="f21d4cc6fd187e6ec66292e99a2bb2ca06f019c39a2d6d6b3adc53079835eb38" exitCode=0 Jan 21 11:02:08 crc kubenswrapper[4881]: I0121 11:02:08.505997 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rs9gj" event={"ID":"c6d87675-513f-412d-a34c-d789cce5b4e8","Type":"ContainerDied","Data":"f21d4cc6fd187e6ec66292e99a2bb2ca06f019c39a2d6d6b3adc53079835eb38"} Jan 21 11:02:08 crc kubenswrapper[4881]: I0121 11:02:08.506042 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rs9gj" event={"ID":"c6d87675-513f-412d-a34c-d789cce5b4e8","Type":"ContainerStarted","Data":"2424d1e04d00485140739da64c8bc221515f617d68355bbb5c646d9660b39e0f"} Jan 21 11:02:09 crc kubenswrapper[4881]: I0121 11:02:09.277703 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-bn24k"] Jan 21 11:02:09 crc kubenswrapper[4881]: I0121 11:02:09.281238 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bn24k" Jan 21 11:02:09 crc kubenswrapper[4881]: I0121 11:02:09.284000 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 21 11:02:09 crc kubenswrapper[4881]: I0121 11:02:09.294200 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bn24k"] Jan 21 11:02:09 crc kubenswrapper[4881]: I0121 11:02:09.340915 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cb2faf64-08ef-4413-84f0-10e88dcb7a8f-catalog-content\") pod \"community-operators-bn24k\" (UID: \"cb2faf64-08ef-4413-84f0-10e88dcb7a8f\") " pod="openshift-marketplace/community-operators-bn24k" Jan 21 11:02:09 crc kubenswrapper[4881]: I0121 11:02:09.341479 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7n76l\" (UniqueName: \"kubernetes.io/projected/cb2faf64-08ef-4413-84f0-10e88dcb7a8f-kube-api-access-7n76l\") pod \"community-operators-bn24k\" (UID: \"cb2faf64-08ef-4413-84f0-10e88dcb7a8f\") " pod="openshift-marketplace/community-operators-bn24k" Jan 21 11:02:09 crc kubenswrapper[4881]: I0121 11:02:09.341530 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cb2faf64-08ef-4413-84f0-10e88dcb7a8f-utilities\") pod \"community-operators-bn24k\" (UID: \"cb2faf64-08ef-4413-84f0-10e88dcb7a8f\") " pod="openshift-marketplace/community-operators-bn24k" Jan 21 11:02:09 crc kubenswrapper[4881]: I0121 11:02:09.443158 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cb2faf64-08ef-4413-84f0-10e88dcb7a8f-catalog-content\") pod \"community-operators-bn24k\" (UID: \"cb2faf64-08ef-4413-84f0-10e88dcb7a8f\") " pod="openshift-marketplace/community-operators-bn24k" Jan 21 11:02:09 crc kubenswrapper[4881]: I0121 11:02:09.443244 4881 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7n76l\" (UniqueName: \"kubernetes.io/projected/cb2faf64-08ef-4413-84f0-10e88dcb7a8f-kube-api-access-7n76l\") pod \"community-operators-bn24k\" (UID: \"cb2faf64-08ef-4413-84f0-10e88dcb7a8f\") " pod="openshift-marketplace/community-operators-bn24k" Jan 21 11:02:09 crc kubenswrapper[4881]: I0121 11:02:09.443299 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cb2faf64-08ef-4413-84f0-10e88dcb7a8f-utilities\") pod \"community-operators-bn24k\" (UID: \"cb2faf64-08ef-4413-84f0-10e88dcb7a8f\") " pod="openshift-marketplace/community-operators-bn24k" Jan 21 11:02:09 crc kubenswrapper[4881]: I0121 11:02:09.443875 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cb2faf64-08ef-4413-84f0-10e88dcb7a8f-catalog-content\") pod \"community-operators-bn24k\" (UID: \"cb2faf64-08ef-4413-84f0-10e88dcb7a8f\") " pod="openshift-marketplace/community-operators-bn24k" Jan 21 11:02:09 crc kubenswrapper[4881]: I0121 11:02:09.444052 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cb2faf64-08ef-4413-84f0-10e88dcb7a8f-utilities\") pod \"community-operators-bn24k\" (UID: \"cb2faf64-08ef-4413-84f0-10e88dcb7a8f\") " pod="openshift-marketplace/community-operators-bn24k" Jan 21 11:02:09 crc kubenswrapper[4881]: I0121 11:02:09.470004 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7n76l\" (UniqueName: \"kubernetes.io/projected/cb2faf64-08ef-4413-84f0-10e88dcb7a8f-kube-api-access-7n76l\") pod \"community-operators-bn24k\" (UID: \"cb2faf64-08ef-4413-84f0-10e88dcb7a8f\") " pod="openshift-marketplace/community-operators-bn24k" Jan 21 11:02:09 crc kubenswrapper[4881]: I0121 11:02:09.525246 4881 generic.go:334] "Generic (PLEG): container finished" podID="6e9defc7-ad37-4742-b149-cb71d7ea177a" containerID="4db026c7a3931d2831df7d16599a8c6dcf49b2a19182776365bc55b2b2f46493" exitCode=0 Jan 21 11:02:09 crc kubenswrapper[4881]: I0121 11:02:09.525320 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7wxr8" event={"ID":"6e9defc7-ad37-4742-b149-cb71d7ea177a","Type":"ContainerDied","Data":"4db026c7a3931d2831df7d16599a8c6dcf49b2a19182776365bc55b2b2f46493"} Jan 21 11:02:09 crc kubenswrapper[4881]: I0121 11:02:09.601422 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-bn24k" Jan 21 11:02:10 crc kubenswrapper[4881]: I0121 11:02:10.019602 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bn24k"] Jan 21 11:02:10 crc kubenswrapper[4881]: I0121 11:02:10.539565 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bn24k" event={"ID":"cb2faf64-08ef-4413-84f0-10e88dcb7a8f","Type":"ContainerDied","Data":"2bc3a6833c19a70d3aefa8d3c7bda35cb891f30c489f62da688f653e6d7c4048"} Jan 21 11:02:10 crc kubenswrapper[4881]: I0121 11:02:10.539388 4881 generic.go:334] "Generic (PLEG): container finished" podID="cb2faf64-08ef-4413-84f0-10e88dcb7a8f" containerID="2bc3a6833c19a70d3aefa8d3c7bda35cb891f30c489f62da688f653e6d7c4048" exitCode=0 Jan 21 11:02:10 crc kubenswrapper[4881]: I0121 11:02:10.541225 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bn24k" event={"ID":"cb2faf64-08ef-4413-84f0-10e88dcb7a8f","Type":"ContainerStarted","Data":"8d46bedb9408c2dc616eea8f07cc08e082f36cfe66f9e1afcb0ddd050f15dd6e"} Jan 21 11:02:10 crc kubenswrapper[4881]: I0121 11:02:10.544567 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kfzl8" event={"ID":"8ab3938c-6614-4877-a94c-75b90f339523","Type":"ContainerStarted","Data":"142270a0f15473b6b15a9291d78a9ba2f0025e0134ceb84d54d49e6513c177a4"} Jan 21 11:02:10 crc kubenswrapper[4881]: I0121 11:02:10.546807 4881 generic.go:334] "Generic (PLEG): container finished" podID="c6d87675-513f-412d-a34c-d789cce5b4e8" containerID="b5293e61a579622e926dcba79f271c961ed1e83eaf9a6ba92c4789455fe018fa" exitCode=0 Jan 21 11:02:10 crc kubenswrapper[4881]: I0121 11:02:10.546882 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rs9gj" event={"ID":"c6d87675-513f-412d-a34c-d789cce5b4e8","Type":"ContainerDied","Data":"b5293e61a579622e926dcba79f271c961ed1e83eaf9a6ba92c4789455fe018fa"} Jan 21 11:02:10 crc kubenswrapper[4881]: I0121 11:02:10.555027 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7wxr8" event={"ID":"6e9defc7-ad37-4742-b149-cb71d7ea177a","Type":"ContainerStarted","Data":"0e6453a359c5a4e747e31e98eddd534a0b0eb94099fbb500453c3b01a577db1a"} Jan 21 11:02:10 crc kubenswrapper[4881]: I0121 11:02:10.646762 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-7wxr8" podStartSLOduration=3.086584767 podStartE2EDuration="5.646742865s" podCreationTimestamp="2026-01-21 11:02:05 +0000 UTC" firstStartedPulling="2026-01-21 11:02:07.498424984 +0000 UTC m=+314.758381453" lastFinishedPulling="2026-01-21 11:02:10.058583072 +0000 UTC m=+317.318539551" observedRunningTime="2026-01-21 11:02:10.619742225 +0000 UTC m=+317.879698704" watchObservedRunningTime="2026-01-21 11:02:10.646742865 +0000 UTC m=+317.906699354" Jan 21 11:02:11 crc kubenswrapper[4881]: I0121 11:02:11.564761 4881 generic.go:334] "Generic (PLEG): container finished" podID="8ab3938c-6614-4877-a94c-75b90f339523" containerID="142270a0f15473b6b15a9291d78a9ba2f0025e0134ceb84d54d49e6513c177a4" exitCode=0 Jan 21 11:02:11 crc kubenswrapper[4881]: I0121 11:02:11.564842 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kfzl8" 
event={"ID":"8ab3938c-6614-4877-a94c-75b90f339523","Type":"ContainerDied","Data":"142270a0f15473b6b15a9291d78a9ba2f0025e0134ceb84d54d49e6513c177a4"} Jan 21 11:02:11 crc kubenswrapper[4881]: I0121 11:02:11.571695 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rs9gj" event={"ID":"c6d87675-513f-412d-a34c-d789cce5b4e8","Type":"ContainerStarted","Data":"9eb18af2f3ac618610e0f5f123310ad2b3628cc38f624ea02bf868b24d18591d"} Jan 21 11:02:11 crc kubenswrapper[4881]: I0121 11:02:11.619511 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-rs9gj" podStartSLOduration=3.144473123 podStartE2EDuration="5.619491123s" podCreationTimestamp="2026-01-21 11:02:06 +0000 UTC" firstStartedPulling="2026-01-21 11:02:08.508622882 +0000 UTC m=+315.768579361" lastFinishedPulling="2026-01-21 11:02:10.983640892 +0000 UTC m=+318.243597361" observedRunningTime="2026-01-21 11:02:11.618466847 +0000 UTC m=+318.878423326" watchObservedRunningTime="2026-01-21 11:02:11.619491123 +0000 UTC m=+318.879447592" Jan 21 11:02:14 crc kubenswrapper[4881]: I0121 11:02:14.595355 4881 generic.go:334] "Generic (PLEG): container finished" podID="cb2faf64-08ef-4413-84f0-10e88dcb7a8f" containerID="b3e533d4d70488faedff073733cc253d326f53f9694186d0d0cf9f09a4fc6782" exitCode=0 Jan 21 11:02:14 crc kubenswrapper[4881]: I0121 11:02:14.595473 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bn24k" event={"ID":"cb2faf64-08ef-4413-84f0-10e88dcb7a8f","Type":"ContainerDied","Data":"b3e533d4d70488faedff073733cc253d326f53f9694186d0d0cf9f09a4fc6782"} Jan 21 11:02:14 crc kubenswrapper[4881]: I0121 11:02:14.599855 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kfzl8" event={"ID":"8ab3938c-6614-4877-a94c-75b90f339523","Type":"ContainerStarted","Data":"6c985b0a85d51bc19103867cc9f550fc4307bd820ffe6880eab65e8191d76ff5"} Jan 21 11:02:14 crc kubenswrapper[4881]: I0121 11:02:14.646454 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-kfzl8" podStartSLOduration=2.337105189 podStartE2EDuration="7.646425116s" podCreationTimestamp="2026-01-21 11:02:07 +0000 UTC" firstStartedPulling="2026-01-21 11:02:08.508932271 +0000 UTC m=+315.768888780" lastFinishedPulling="2026-01-21 11:02:13.818252238 +0000 UTC m=+321.078208707" observedRunningTime="2026-01-21 11:02:14.643916231 +0000 UTC m=+321.903872720" watchObservedRunningTime="2026-01-21 11:02:14.646425116 +0000 UTC m=+321.906381585" Jan 21 11:02:16 crc kubenswrapper[4881]: I0121 11:02:16.327425 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-7wxr8" Jan 21 11:02:16 crc kubenswrapper[4881]: I0121 11:02:16.328675 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-7wxr8" Jan 21 11:02:16 crc kubenswrapper[4881]: I0121 11:02:16.406437 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-7wxr8" Jan 21 11:02:16 crc kubenswrapper[4881]: I0121 11:02:16.614232 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bn24k" event={"ID":"cb2faf64-08ef-4413-84f0-10e88dcb7a8f","Type":"ContainerStarted","Data":"72d93ab1b3e1b04224e69f553bae54791b77965d7fbd59e56d289adec26cd444"} Jan 21 11:02:16 crc 
kubenswrapper[4881]: I0121 11:02:16.644529 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-bn24k" podStartSLOduration=2.506498622 podStartE2EDuration="7.644509046s" podCreationTimestamp="2026-01-21 11:02:09 +0000 UTC" firstStartedPulling="2026-01-21 11:02:10.542281716 +0000 UTC m=+317.802238185" lastFinishedPulling="2026-01-21 11:02:15.68029214 +0000 UTC m=+322.940248609" observedRunningTime="2026-01-21 11:02:16.641341354 +0000 UTC m=+323.901297833" watchObservedRunningTime="2026-01-21 11:02:16.644509046 +0000 UTC m=+323.904465515" Jan 21 11:02:16 crc kubenswrapper[4881]: I0121 11:02:16.765522 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-54bb857fc-6xg7b"] Jan 21 11:02:16 crc kubenswrapper[4881]: I0121 11:02:16.765815 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-54bb857fc-6xg7b" podUID="b1ebf4ad-7b0d-4711-93bd-206ec36e7202" containerName="route-controller-manager" containerID="cri-o://03285c7f75ca0c5ea5fc4bbbace73cfbfd25315c2b430af309cd5af6d0d8503a" gracePeriod=30 Jan 21 11:02:16 crc kubenswrapper[4881]: I0121 11:02:16.800434 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-7wxr8" Jan 21 11:02:17 crc kubenswrapper[4881]: I0121 11:02:17.224457 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-rs9gj" Jan 21 11:02:17 crc kubenswrapper[4881]: I0121 11:02:17.224552 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-rs9gj" Jan 21 11:02:17 crc kubenswrapper[4881]: I0121 11:02:17.276466 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-rs9gj" Jan 21 11:02:17 crc kubenswrapper[4881]: I0121 11:02:17.625411 4881 generic.go:334] "Generic (PLEG): container finished" podID="b1ebf4ad-7b0d-4711-93bd-206ec36e7202" containerID="03285c7f75ca0c5ea5fc4bbbace73cfbfd25315c2b430af309cd5af6d0d8503a" exitCode=0 Jan 21 11:02:17 crc kubenswrapper[4881]: I0121 11:02:17.626715 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-54bb857fc-6xg7b" event={"ID":"b1ebf4ad-7b0d-4711-93bd-206ec36e7202","Type":"ContainerDied","Data":"03285c7f75ca0c5ea5fc4bbbace73cfbfd25315c2b430af309cd5af6d0d8503a"} Jan 21 11:02:17 crc kubenswrapper[4881]: I0121 11:02:17.688919 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-rs9gj" Jan 21 11:02:17 crc kubenswrapper[4881]: I0121 11:02:17.822397 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-54bb857fc-6xg7b" Jan 21 11:02:17 crc kubenswrapper[4881]: I0121 11:02:17.856000 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5c7f4fc56b-p8gtw"] Jan 21 11:02:17 crc kubenswrapper[4881]: E0121 11:02:17.856306 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1ebf4ad-7b0d-4711-93bd-206ec36e7202" containerName="route-controller-manager" Jan 21 11:02:17 crc kubenswrapper[4881]: I0121 11:02:17.856331 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1ebf4ad-7b0d-4711-93bd-206ec36e7202" containerName="route-controller-manager" Jan 21 11:02:17 crc kubenswrapper[4881]: I0121 11:02:17.856507 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="b1ebf4ad-7b0d-4711-93bd-206ec36e7202" containerName="route-controller-manager" Jan 21 11:02:17 crc kubenswrapper[4881]: I0121 11:02:17.857105 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c7f4fc56b-p8gtw" Jan 21 11:02:17 crc kubenswrapper[4881]: I0121 11:02:17.862946 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-kfzl8" Jan 21 11:02:17 crc kubenswrapper[4881]: I0121 11:02:17.863273 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-kfzl8" Jan 21 11:02:17 crc kubenswrapper[4881]: I0121 11:02:17.871777 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5c7f4fc56b-p8gtw"] Jan 21 11:02:17 crc kubenswrapper[4881]: I0121 11:02:17.935124 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b1ebf4ad-7b0d-4711-93bd-206ec36e7202-serving-cert\") pod \"b1ebf4ad-7b0d-4711-93bd-206ec36e7202\" (UID: \"b1ebf4ad-7b0d-4711-93bd-206ec36e7202\") " Jan 21 11:02:17 crc kubenswrapper[4881]: I0121 11:02:17.935239 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b1ebf4ad-7b0d-4711-93bd-206ec36e7202-config\") pod \"b1ebf4ad-7b0d-4711-93bd-206ec36e7202\" (UID: \"b1ebf4ad-7b0d-4711-93bd-206ec36e7202\") " Jan 21 11:02:17 crc kubenswrapper[4881]: I0121 11:02:17.935304 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b1ebf4ad-7b0d-4711-93bd-206ec36e7202-client-ca\") pod \"b1ebf4ad-7b0d-4711-93bd-206ec36e7202\" (UID: \"b1ebf4ad-7b0d-4711-93bd-206ec36e7202\") " Jan 21 11:02:17 crc kubenswrapper[4881]: I0121 11:02:17.935464 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6wx6k\" (UniqueName: \"kubernetes.io/projected/b1ebf4ad-7b0d-4711-93bd-206ec36e7202-kube-api-access-6wx6k\") pod \"b1ebf4ad-7b0d-4711-93bd-206ec36e7202\" (UID: \"b1ebf4ad-7b0d-4711-93bd-206ec36e7202\") " Jan 21 11:02:17 crc kubenswrapper[4881]: I0121 11:02:17.935803 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a91582ca-0d6d-4ed9-91bd-fdad383a8758-serving-cert\") pod \"route-controller-manager-5c7f4fc56b-p8gtw\" (UID: \"a91582ca-0d6d-4ed9-91bd-fdad383a8758\") " 
pod="openshift-route-controller-manager/route-controller-manager-5c7f4fc56b-p8gtw" Jan 21 11:02:17 crc kubenswrapper[4881]: I0121 11:02:17.935843 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kt8n9\" (UniqueName: \"kubernetes.io/projected/a91582ca-0d6d-4ed9-91bd-fdad383a8758-kube-api-access-kt8n9\") pod \"route-controller-manager-5c7f4fc56b-p8gtw\" (UID: \"a91582ca-0d6d-4ed9-91bd-fdad383a8758\") " pod="openshift-route-controller-manager/route-controller-manager-5c7f4fc56b-p8gtw" Jan 21 11:02:17 crc kubenswrapper[4881]: I0121 11:02:17.935918 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a91582ca-0d6d-4ed9-91bd-fdad383a8758-config\") pod \"route-controller-manager-5c7f4fc56b-p8gtw\" (UID: \"a91582ca-0d6d-4ed9-91bd-fdad383a8758\") " pod="openshift-route-controller-manager/route-controller-manager-5c7f4fc56b-p8gtw" Jan 21 11:02:17 crc kubenswrapper[4881]: I0121 11:02:17.936001 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a91582ca-0d6d-4ed9-91bd-fdad383a8758-client-ca\") pod \"route-controller-manager-5c7f4fc56b-p8gtw\" (UID: \"a91582ca-0d6d-4ed9-91bd-fdad383a8758\") " pod="openshift-route-controller-manager/route-controller-manager-5c7f4fc56b-p8gtw" Jan 21 11:02:17 crc kubenswrapper[4881]: I0121 11:02:17.936660 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b1ebf4ad-7b0d-4711-93bd-206ec36e7202-config" (OuterVolumeSpecName: "config") pod "b1ebf4ad-7b0d-4711-93bd-206ec36e7202" (UID: "b1ebf4ad-7b0d-4711-93bd-206ec36e7202"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:02:17 crc kubenswrapper[4881]: I0121 11:02:17.936610 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b1ebf4ad-7b0d-4711-93bd-206ec36e7202-client-ca" (OuterVolumeSpecName: "client-ca") pod "b1ebf4ad-7b0d-4711-93bd-206ec36e7202" (UID: "b1ebf4ad-7b0d-4711-93bd-206ec36e7202"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:02:17 crc kubenswrapper[4881]: I0121 11:02:17.946406 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b1ebf4ad-7b0d-4711-93bd-206ec36e7202-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "b1ebf4ad-7b0d-4711-93bd-206ec36e7202" (UID: "b1ebf4ad-7b0d-4711-93bd-206ec36e7202"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:02:17 crc kubenswrapper[4881]: I0121 11:02:17.947212 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b1ebf4ad-7b0d-4711-93bd-206ec36e7202-kube-api-access-6wx6k" (OuterVolumeSpecName: "kube-api-access-6wx6k") pod "b1ebf4ad-7b0d-4711-93bd-206ec36e7202" (UID: "b1ebf4ad-7b0d-4711-93bd-206ec36e7202"). InnerVolumeSpecName "kube-api-access-6wx6k". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:02:18 crc kubenswrapper[4881]: I0121 11:02:18.037600 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a91582ca-0d6d-4ed9-91bd-fdad383a8758-config\") pod \"route-controller-manager-5c7f4fc56b-p8gtw\" (UID: \"a91582ca-0d6d-4ed9-91bd-fdad383a8758\") " pod="openshift-route-controller-manager/route-controller-manager-5c7f4fc56b-p8gtw" Jan 21 11:02:18 crc kubenswrapper[4881]: I0121 11:02:18.037715 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a91582ca-0d6d-4ed9-91bd-fdad383a8758-client-ca\") pod \"route-controller-manager-5c7f4fc56b-p8gtw\" (UID: \"a91582ca-0d6d-4ed9-91bd-fdad383a8758\") " pod="openshift-route-controller-manager/route-controller-manager-5c7f4fc56b-p8gtw" Jan 21 11:02:18 crc kubenswrapper[4881]: I0121 11:02:18.037760 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a91582ca-0d6d-4ed9-91bd-fdad383a8758-serving-cert\") pod \"route-controller-manager-5c7f4fc56b-p8gtw\" (UID: \"a91582ca-0d6d-4ed9-91bd-fdad383a8758\") " pod="openshift-route-controller-manager/route-controller-manager-5c7f4fc56b-p8gtw" Jan 21 11:02:18 crc kubenswrapper[4881]: I0121 11:02:18.037806 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kt8n9\" (UniqueName: \"kubernetes.io/projected/a91582ca-0d6d-4ed9-91bd-fdad383a8758-kube-api-access-kt8n9\") pod \"route-controller-manager-5c7f4fc56b-p8gtw\" (UID: \"a91582ca-0d6d-4ed9-91bd-fdad383a8758\") " pod="openshift-route-controller-manager/route-controller-manager-5c7f4fc56b-p8gtw" Jan 21 11:02:18 crc kubenswrapper[4881]: I0121 11:02:18.037908 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6wx6k\" (UniqueName: \"kubernetes.io/projected/b1ebf4ad-7b0d-4711-93bd-206ec36e7202-kube-api-access-6wx6k\") on node \"crc\" DevicePath \"\"" Jan 21 11:02:18 crc kubenswrapper[4881]: I0121 11:02:18.037932 4881 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b1ebf4ad-7b0d-4711-93bd-206ec36e7202-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 11:02:18 crc kubenswrapper[4881]: I0121 11:02:18.037946 4881 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b1ebf4ad-7b0d-4711-93bd-206ec36e7202-config\") on node \"crc\" DevicePath \"\"" Jan 21 11:02:18 crc kubenswrapper[4881]: I0121 11:02:18.037959 4881 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b1ebf4ad-7b0d-4711-93bd-206ec36e7202-client-ca\") on node \"crc\" DevicePath \"\"" Jan 21 11:02:18 crc kubenswrapper[4881]: I0121 11:02:18.038944 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a91582ca-0d6d-4ed9-91bd-fdad383a8758-client-ca\") pod \"route-controller-manager-5c7f4fc56b-p8gtw\" (UID: \"a91582ca-0d6d-4ed9-91bd-fdad383a8758\") " pod="openshift-route-controller-manager/route-controller-manager-5c7f4fc56b-p8gtw" Jan 21 11:02:18 crc kubenswrapper[4881]: I0121 11:02:18.039072 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a91582ca-0d6d-4ed9-91bd-fdad383a8758-config\") pod \"route-controller-manager-5c7f4fc56b-p8gtw\" 
(UID: \"a91582ca-0d6d-4ed9-91bd-fdad383a8758\") " pod="openshift-route-controller-manager/route-controller-manager-5c7f4fc56b-p8gtw" Jan 21 11:02:18 crc kubenswrapper[4881]: I0121 11:02:18.042342 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a91582ca-0d6d-4ed9-91bd-fdad383a8758-serving-cert\") pod \"route-controller-manager-5c7f4fc56b-p8gtw\" (UID: \"a91582ca-0d6d-4ed9-91bd-fdad383a8758\") " pod="openshift-route-controller-manager/route-controller-manager-5c7f4fc56b-p8gtw" Jan 21 11:02:18 crc kubenswrapper[4881]: I0121 11:02:18.061183 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kt8n9\" (UniqueName: \"kubernetes.io/projected/a91582ca-0d6d-4ed9-91bd-fdad383a8758-kube-api-access-kt8n9\") pod \"route-controller-manager-5c7f4fc56b-p8gtw\" (UID: \"a91582ca-0d6d-4ed9-91bd-fdad383a8758\") " pod="openshift-route-controller-manager/route-controller-manager-5c7f4fc56b-p8gtw" Jan 21 11:02:18 crc kubenswrapper[4881]: I0121 11:02:18.177141 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c7f4fc56b-p8gtw" Jan 21 11:02:18 crc kubenswrapper[4881]: I0121 11:02:18.639641 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-54bb857fc-6xg7b" event={"ID":"b1ebf4ad-7b0d-4711-93bd-206ec36e7202","Type":"ContainerDied","Data":"cf1ccaca8e9193a4546c7cd1215ccba45fb7b47029b1d20906ee6e97c1d22afe"} Jan 21 11:02:18 crc kubenswrapper[4881]: I0121 11:02:18.640215 4881 scope.go:117] "RemoveContainer" containerID="03285c7f75ca0c5ea5fc4bbbace73cfbfd25315c2b430af309cd5af6d0d8503a" Jan 21 11:02:18 crc kubenswrapper[4881]: I0121 11:02:18.640148 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-54bb857fc-6xg7b" Jan 21 11:02:18 crc kubenswrapper[4881]: I0121 11:02:18.675208 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5c7f4fc56b-p8gtw"] Jan 21 11:02:18 crc kubenswrapper[4881]: E0121 11:02:18.686262 4881 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod706c6a3b_823b_4ea3_b7a8_e20d571d3ace.slice/crio-conmon-9c8c8d93509d2a29c183d63351f0748ec6e60414dbb285df980924884b598111.scope\": RecentStats: unable to find data in memory cache]" Jan 21 11:02:18 crc kubenswrapper[4881]: I0121 11:02:18.692694 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-54bb857fc-6xg7b"] Jan 21 11:02:18 crc kubenswrapper[4881]: I0121 11:02:18.701023 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-54bb857fc-6xg7b"] Jan 21 11:02:18 crc kubenswrapper[4881]: I0121 11:02:18.906177 4881 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-kfzl8" podUID="8ab3938c-6614-4877-a94c-75b90f339523" containerName="registry-server" probeResult="failure" output=< Jan 21 11:02:18 crc kubenswrapper[4881]: timeout: failed to connect service ":50051" within 1s Jan 21 11:02:18 crc kubenswrapper[4881]: > Jan 21 11:02:19 crc kubenswrapper[4881]: I0121 11:02:19.320215 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b1ebf4ad-7b0d-4711-93bd-206ec36e7202" path="/var/lib/kubelet/pods/b1ebf4ad-7b0d-4711-93bd-206ec36e7202/volumes" Jan 21 11:02:19 crc kubenswrapper[4881]: I0121 11:02:19.602460 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-bn24k" Jan 21 11:02:19 crc kubenswrapper[4881]: I0121 11:02:19.602528 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-bn24k" Jan 21 11:02:19 crc kubenswrapper[4881]: I0121 11:02:19.648347 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5c7f4fc56b-p8gtw" event={"ID":"a91582ca-0d6d-4ed9-91bd-fdad383a8758","Type":"ContainerStarted","Data":"00e00e53c8a8435e0245f2df4afdd5939a672ed22efb269424a38149036c2228"} Jan 21 11:02:19 crc kubenswrapper[4881]: I0121 11:02:19.649450 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5c7f4fc56b-p8gtw" event={"ID":"a91582ca-0d6d-4ed9-91bd-fdad383a8758","Type":"ContainerStarted","Data":"e56dcc3cfafeda7c1b921610ed5ce11b403f59be14f8f975934cc18b0f5f6f01"} Jan 21 11:02:19 crc kubenswrapper[4881]: I0121 11:02:19.649483 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-5c7f4fc56b-p8gtw" Jan 21 11:02:19 crc kubenswrapper[4881]: I0121 11:02:19.667592 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-bn24k" Jan 21 11:02:19 crc kubenswrapper[4881]: I0121 11:02:19.677333 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-5c7f4fc56b-p8gtw" podStartSLOduration=3.677283061 
podStartE2EDuration="3.677283061s" podCreationTimestamp="2026-01-21 11:02:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 11:02:19.676470731 +0000 UTC m=+326.936427190" watchObservedRunningTime="2026-01-21 11:02:19.677283061 +0000 UTC m=+326.937239530" Jan 21 11:02:19 crc kubenswrapper[4881]: I0121 11:02:19.825535 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-5c7f4fc56b-p8gtw" Jan 21 11:02:27 crc kubenswrapper[4881]: I0121 11:02:27.911161 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-kfzl8" Jan 21 11:02:27 crc kubenswrapper[4881]: I0121 11:02:27.965202 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-kfzl8" Jan 21 11:02:28 crc kubenswrapper[4881]: E0121 11:02:28.824186 4881 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod706c6a3b_823b_4ea3_b7a8_e20d571d3ace.slice/crio-conmon-9c8c8d93509d2a29c183d63351f0748ec6e60414dbb285df980924884b598111.scope\": RecentStats: unable to find data in memory cache]" Jan 21 11:02:29 crc kubenswrapper[4881]: I0121 11:02:29.645934 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-bn24k" Jan 21 11:02:38 crc kubenswrapper[4881]: E0121 11:02:38.977248 4881 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod706c6a3b_823b_4ea3_b7a8_e20d571d3ace.slice/crio-conmon-9c8c8d93509d2a29c183d63351f0748ec6e60414dbb285df980924884b598111.scope\": RecentStats: unable to find data in memory cache]" Jan 21 11:02:42 crc kubenswrapper[4881]: I0121 11:02:42.619588 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-lh85c"] Jan 21 11:02:42 crc kubenswrapper[4881]: I0121 11:02:42.620719 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-lh85c" Jan 21 11:02:42 crc kubenswrapper[4881]: I0121 11:02:42.636922 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-lh85c"] Jan 21 11:02:42 crc kubenswrapper[4881]: I0121 11:02:42.714846 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d13d5c9d-4cbc-4c0d-befa-79c9589deaaa-bound-sa-token\") pod \"image-registry-66df7c8f76-lh85c\" (UID: \"d13d5c9d-4cbc-4c0d-befa-79c9589deaaa\") " pod="openshift-image-registry/image-registry-66df7c8f76-lh85c" Jan 21 11:02:42 crc kubenswrapper[4881]: I0121 11:02:42.715512 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/d13d5c9d-4cbc-4c0d-befa-79c9589deaaa-ca-trust-extracted\") pod \"image-registry-66df7c8f76-lh85c\" (UID: \"d13d5c9d-4cbc-4c0d-befa-79c9589deaaa\") " pod="openshift-image-registry/image-registry-66df7c8f76-lh85c" Jan 21 11:02:42 crc kubenswrapper[4881]: I0121 11:02:42.715553 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-lh85c\" (UID: \"d13d5c9d-4cbc-4c0d-befa-79c9589deaaa\") " pod="openshift-image-registry/image-registry-66df7c8f76-lh85c" Jan 21 11:02:42 crc kubenswrapper[4881]: I0121 11:02:42.715578 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/d13d5c9d-4cbc-4c0d-befa-79c9589deaaa-registry-certificates\") pod \"image-registry-66df7c8f76-lh85c\" (UID: \"d13d5c9d-4cbc-4c0d-befa-79c9589deaaa\") " pod="openshift-image-registry/image-registry-66df7c8f76-lh85c" Jan 21 11:02:42 crc kubenswrapper[4881]: I0121 11:02:42.715594 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/d13d5c9d-4cbc-4c0d-befa-79c9589deaaa-installation-pull-secrets\") pod \"image-registry-66df7c8f76-lh85c\" (UID: \"d13d5c9d-4cbc-4c0d-befa-79c9589deaaa\") " pod="openshift-image-registry/image-registry-66df7c8f76-lh85c" Jan 21 11:02:42 crc kubenswrapper[4881]: I0121 11:02:42.715614 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d13d5c9d-4cbc-4c0d-befa-79c9589deaaa-trusted-ca\") pod \"image-registry-66df7c8f76-lh85c\" (UID: \"d13d5c9d-4cbc-4c0d-befa-79c9589deaaa\") " pod="openshift-image-registry/image-registry-66df7c8f76-lh85c" Jan 21 11:02:42 crc kubenswrapper[4881]: I0121 11:02:42.715630 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rd2ws\" (UniqueName: \"kubernetes.io/projected/d13d5c9d-4cbc-4c0d-befa-79c9589deaaa-kube-api-access-rd2ws\") pod \"image-registry-66df7c8f76-lh85c\" (UID: \"d13d5c9d-4cbc-4c0d-befa-79c9589deaaa\") " pod="openshift-image-registry/image-registry-66df7c8f76-lh85c" Jan 21 11:02:42 crc kubenswrapper[4881]: I0121 11:02:42.715657 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: 
\"kubernetes.io/projected/d13d5c9d-4cbc-4c0d-befa-79c9589deaaa-registry-tls\") pod \"image-registry-66df7c8f76-lh85c\" (UID: \"d13d5c9d-4cbc-4c0d-befa-79c9589deaaa\") " pod="openshift-image-registry/image-registry-66df7c8f76-lh85c" Jan 21 11:02:42 crc kubenswrapper[4881]: I0121 11:02:42.754771 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-lh85c\" (UID: \"d13d5c9d-4cbc-4c0d-befa-79c9589deaaa\") " pod="openshift-image-registry/image-registry-66df7c8f76-lh85c" Jan 21 11:02:42 crc kubenswrapper[4881]: I0121 11:02:42.817894 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/d13d5c9d-4cbc-4c0d-befa-79c9589deaaa-ca-trust-extracted\") pod \"image-registry-66df7c8f76-lh85c\" (UID: \"d13d5c9d-4cbc-4c0d-befa-79c9589deaaa\") " pod="openshift-image-registry/image-registry-66df7c8f76-lh85c" Jan 21 11:02:42 crc kubenswrapper[4881]: I0121 11:02:42.818874 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/d13d5c9d-4cbc-4c0d-befa-79c9589deaaa-registry-certificates\") pod \"image-registry-66df7c8f76-lh85c\" (UID: \"d13d5c9d-4cbc-4c0d-befa-79c9589deaaa\") " pod="openshift-image-registry/image-registry-66df7c8f76-lh85c" Jan 21 11:02:42 crc kubenswrapper[4881]: I0121 11:02:42.818961 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/d13d5c9d-4cbc-4c0d-befa-79c9589deaaa-installation-pull-secrets\") pod \"image-registry-66df7c8f76-lh85c\" (UID: \"d13d5c9d-4cbc-4c0d-befa-79c9589deaaa\") " pod="openshift-image-registry/image-registry-66df7c8f76-lh85c" Jan 21 11:02:42 crc kubenswrapper[4881]: I0121 11:02:42.819063 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d13d5c9d-4cbc-4c0d-befa-79c9589deaaa-trusted-ca\") pod \"image-registry-66df7c8f76-lh85c\" (UID: \"d13d5c9d-4cbc-4c0d-befa-79c9589deaaa\") " pod="openshift-image-registry/image-registry-66df7c8f76-lh85c" Jan 21 11:02:42 crc kubenswrapper[4881]: I0121 11:02:42.819109 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rd2ws\" (UniqueName: \"kubernetes.io/projected/d13d5c9d-4cbc-4c0d-befa-79c9589deaaa-kube-api-access-rd2ws\") pod \"image-registry-66df7c8f76-lh85c\" (UID: \"d13d5c9d-4cbc-4c0d-befa-79c9589deaaa\") " pod="openshift-image-registry/image-registry-66df7c8f76-lh85c" Jan 21 11:02:42 crc kubenswrapper[4881]: I0121 11:02:42.819111 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/d13d5c9d-4cbc-4c0d-befa-79c9589deaaa-ca-trust-extracted\") pod \"image-registry-66df7c8f76-lh85c\" (UID: \"d13d5c9d-4cbc-4c0d-befa-79c9589deaaa\") " pod="openshift-image-registry/image-registry-66df7c8f76-lh85c" Jan 21 11:02:42 crc kubenswrapper[4881]: I0121 11:02:42.819250 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/d13d5c9d-4cbc-4c0d-befa-79c9589deaaa-registry-tls\") pod \"image-registry-66df7c8f76-lh85c\" (UID: \"d13d5c9d-4cbc-4c0d-befa-79c9589deaaa\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-lh85c" Jan 21 11:02:42 crc kubenswrapper[4881]: I0121 11:02:42.819344 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d13d5c9d-4cbc-4c0d-befa-79c9589deaaa-bound-sa-token\") pod \"image-registry-66df7c8f76-lh85c\" (UID: \"d13d5c9d-4cbc-4c0d-befa-79c9589deaaa\") " pod="openshift-image-registry/image-registry-66df7c8f76-lh85c" Jan 21 11:02:42 crc kubenswrapper[4881]: I0121 11:02:42.820746 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/d13d5c9d-4cbc-4c0d-befa-79c9589deaaa-registry-certificates\") pod \"image-registry-66df7c8f76-lh85c\" (UID: \"d13d5c9d-4cbc-4c0d-befa-79c9589deaaa\") " pod="openshift-image-registry/image-registry-66df7c8f76-lh85c" Jan 21 11:02:42 crc kubenswrapper[4881]: I0121 11:02:42.821129 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d13d5c9d-4cbc-4c0d-befa-79c9589deaaa-trusted-ca\") pod \"image-registry-66df7c8f76-lh85c\" (UID: \"d13d5c9d-4cbc-4c0d-befa-79c9589deaaa\") " pod="openshift-image-registry/image-registry-66df7c8f76-lh85c" Jan 21 11:02:42 crc kubenswrapper[4881]: I0121 11:02:42.828934 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/d13d5c9d-4cbc-4c0d-befa-79c9589deaaa-registry-tls\") pod \"image-registry-66df7c8f76-lh85c\" (UID: \"d13d5c9d-4cbc-4c0d-befa-79c9589deaaa\") " pod="openshift-image-registry/image-registry-66df7c8f76-lh85c" Jan 21 11:02:42 crc kubenswrapper[4881]: I0121 11:02:42.829713 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/d13d5c9d-4cbc-4c0d-befa-79c9589deaaa-installation-pull-secrets\") pod \"image-registry-66df7c8f76-lh85c\" (UID: \"d13d5c9d-4cbc-4c0d-befa-79c9589deaaa\") " pod="openshift-image-registry/image-registry-66df7c8f76-lh85c" Jan 21 11:02:42 crc kubenswrapper[4881]: I0121 11:02:42.837242 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d13d5c9d-4cbc-4c0d-befa-79c9589deaaa-bound-sa-token\") pod \"image-registry-66df7c8f76-lh85c\" (UID: \"d13d5c9d-4cbc-4c0d-befa-79c9589deaaa\") " pod="openshift-image-registry/image-registry-66df7c8f76-lh85c" Jan 21 11:02:42 crc kubenswrapper[4881]: I0121 11:02:42.838903 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rd2ws\" (UniqueName: \"kubernetes.io/projected/d13d5c9d-4cbc-4c0d-befa-79c9589deaaa-kube-api-access-rd2ws\") pod \"image-registry-66df7c8f76-lh85c\" (UID: \"d13d5c9d-4cbc-4c0d-befa-79c9589deaaa\") " pod="openshift-image-registry/image-registry-66df7c8f76-lh85c" Jan 21 11:02:42 crc kubenswrapper[4881]: I0121 11:02:42.942116 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-lh85c" Jan 21 11:02:43 crc kubenswrapper[4881]: I0121 11:02:43.378175 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-lh85c"] Jan 21 11:02:43 crc kubenswrapper[4881]: I0121 11:02:43.820795 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-lh85c" event={"ID":"d13d5c9d-4cbc-4c0d-befa-79c9589deaaa","Type":"ContainerStarted","Data":"dc30c29d5d8e02dfaa22bbf78b9e3f9bf16a636a23423878f9927c0a8128eba4"} Jan 21 11:02:44 crc kubenswrapper[4881]: I0121 11:02:44.832025 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-lh85c" event={"ID":"d13d5c9d-4cbc-4c0d-befa-79c9589deaaa","Type":"ContainerStarted","Data":"8019f2e642a1262fa8ab8b87531ffad064f8fef236a2da2d0aabe26186baff21"} Jan 21 11:02:44 crc kubenswrapper[4881]: I0121 11:02:44.832649 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-lh85c" Jan 21 11:02:44 crc kubenswrapper[4881]: I0121 11:02:44.866660 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-lh85c" podStartSLOduration=2.866623023 podStartE2EDuration="2.866623023s" podCreationTimestamp="2026-01-21 11:02:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 11:02:44.860868233 +0000 UTC m=+352.120824712" watchObservedRunningTime="2026-01-21 11:02:44.866623023 +0000 UTC m=+352.126579492" Jan 21 11:02:49 crc kubenswrapper[4881]: E0121 11:02:49.120401 4881 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod706c6a3b_823b_4ea3_b7a8_e20d571d3ace.slice/crio-conmon-9c8c8d93509d2a29c183d63351f0748ec6e60414dbb285df980924884b598111.scope\": RecentStats: unable to find data in memory cache]" Jan 21 11:02:59 crc kubenswrapper[4881]: I0121 11:02:59.851535 4881 patch_prober.go:28] interesting pod/machine-config-daemon-fb4fr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 11:02:59 crc kubenswrapper[4881]: I0121 11:02:59.852481 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 11:03:02 crc kubenswrapper[4881]: I0121 11:03:02.948928 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-lh85c" Jan 21 11:03:03 crc kubenswrapper[4881]: I0121 11:03:03.016948 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-n98tz"] Jan 21 11:03:28 crc kubenswrapper[4881]: I0121 11:03:28.082104 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-n98tz" podUID="ec369bed-0b60-48b0-9de0-fcfd6ca7776d" containerName="registry" 
containerID="cri-o://2afb4777e26b8b9ed3649e0224c3ebc4424187c098e907f770d7f03bdea5704c" gracePeriod=30 Jan 21 11:03:28 crc kubenswrapper[4881]: I0121 11:03:28.501957 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-n98tz" Jan 21 11:03:28 crc kubenswrapper[4881]: I0121 11:03:28.508703 4881 generic.go:334] "Generic (PLEG): container finished" podID="ec369bed-0b60-48b0-9de0-fcfd6ca7776d" containerID="2afb4777e26b8b9ed3649e0224c3ebc4424187c098e907f770d7f03bdea5704c" exitCode=0 Jan 21 11:03:28 crc kubenswrapper[4881]: I0121 11:03:28.508768 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-n98tz" event={"ID":"ec369bed-0b60-48b0-9de0-fcfd6ca7776d","Type":"ContainerDied","Data":"2afb4777e26b8b9ed3649e0224c3ebc4424187c098e907f770d7f03bdea5704c"} Jan 21 11:03:28 crc kubenswrapper[4881]: I0121 11:03:28.508831 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-n98tz" event={"ID":"ec369bed-0b60-48b0-9de0-fcfd6ca7776d","Type":"ContainerDied","Data":"5474c3ee513cde1d48c15d56d09e1c7f705a56319c7e90c496d397eeca80a458"} Jan 21 11:03:28 crc kubenswrapper[4881]: I0121 11:03:28.508828 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-n98tz" Jan 21 11:03:28 crc kubenswrapper[4881]: I0121 11:03:28.508855 4881 scope.go:117] "RemoveContainer" containerID="2afb4777e26b8b9ed3649e0224c3ebc4424187c098e907f770d7f03bdea5704c" Jan 21 11:03:28 crc kubenswrapper[4881]: I0121 11:03:28.540604 4881 scope.go:117] "RemoveContainer" containerID="2afb4777e26b8b9ed3649e0224c3ebc4424187c098e907f770d7f03bdea5704c" Jan 21 11:03:28 crc kubenswrapper[4881]: E0121 11:03:28.543388 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2afb4777e26b8b9ed3649e0224c3ebc4424187c098e907f770d7f03bdea5704c\": container with ID starting with 2afb4777e26b8b9ed3649e0224c3ebc4424187c098e907f770d7f03bdea5704c not found: ID does not exist" containerID="2afb4777e26b8b9ed3649e0224c3ebc4424187c098e907f770d7f03bdea5704c" Jan 21 11:03:28 crc kubenswrapper[4881]: I0121 11:03:28.543881 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2afb4777e26b8b9ed3649e0224c3ebc4424187c098e907f770d7f03bdea5704c"} err="failed to get container status \"2afb4777e26b8b9ed3649e0224c3ebc4424187c098e907f770d7f03bdea5704c\": rpc error: code = NotFound desc = could not find container \"2afb4777e26b8b9ed3649e0224c3ebc4424187c098e907f770d7f03bdea5704c\": container with ID starting with 2afb4777e26b8b9ed3649e0224c3ebc4424187c098e907f770d7f03bdea5704c not found: ID does not exist" Jan 21 11:03:28 crc kubenswrapper[4881]: I0121 11:03:28.642258 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"ec369bed-0b60-48b0-9de0-fcfd6ca7776d\" (UID: \"ec369bed-0b60-48b0-9de0-fcfd6ca7776d\") " Jan 21 11:03:28 crc kubenswrapper[4881]: I0121 11:03:28.642530 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ec369bed-0b60-48b0-9de0-fcfd6ca7776d-bound-sa-token\") pod \"ec369bed-0b60-48b0-9de0-fcfd6ca7776d\" (UID: 
\"ec369bed-0b60-48b0-9de0-fcfd6ca7776d\") " Jan 21 11:03:28 crc kubenswrapper[4881]: I0121 11:03:28.642573 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/ec369bed-0b60-48b0-9de0-fcfd6ca7776d-registry-tls\") pod \"ec369bed-0b60-48b0-9de0-fcfd6ca7776d\" (UID: \"ec369bed-0b60-48b0-9de0-fcfd6ca7776d\") " Jan 21 11:03:28 crc kubenswrapper[4881]: I0121 11:03:28.642626 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z6ljz\" (UniqueName: \"kubernetes.io/projected/ec369bed-0b60-48b0-9de0-fcfd6ca7776d-kube-api-access-z6ljz\") pod \"ec369bed-0b60-48b0-9de0-fcfd6ca7776d\" (UID: \"ec369bed-0b60-48b0-9de0-fcfd6ca7776d\") " Jan 21 11:03:28 crc kubenswrapper[4881]: I0121 11:03:28.642691 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ec369bed-0b60-48b0-9de0-fcfd6ca7776d-trusted-ca\") pod \"ec369bed-0b60-48b0-9de0-fcfd6ca7776d\" (UID: \"ec369bed-0b60-48b0-9de0-fcfd6ca7776d\") " Jan 21 11:03:28 crc kubenswrapper[4881]: I0121 11:03:28.642727 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/ec369bed-0b60-48b0-9de0-fcfd6ca7776d-registry-certificates\") pod \"ec369bed-0b60-48b0-9de0-fcfd6ca7776d\" (UID: \"ec369bed-0b60-48b0-9de0-fcfd6ca7776d\") " Jan 21 11:03:28 crc kubenswrapper[4881]: I0121 11:03:28.642756 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/ec369bed-0b60-48b0-9de0-fcfd6ca7776d-installation-pull-secrets\") pod \"ec369bed-0b60-48b0-9de0-fcfd6ca7776d\" (UID: \"ec369bed-0b60-48b0-9de0-fcfd6ca7776d\") " Jan 21 11:03:28 crc kubenswrapper[4881]: I0121 11:03:28.642828 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/ec369bed-0b60-48b0-9de0-fcfd6ca7776d-ca-trust-extracted\") pod \"ec369bed-0b60-48b0-9de0-fcfd6ca7776d\" (UID: \"ec369bed-0b60-48b0-9de0-fcfd6ca7776d\") " Jan 21 11:03:28 crc kubenswrapper[4881]: I0121 11:03:28.644548 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ec369bed-0b60-48b0-9de0-fcfd6ca7776d-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "ec369bed-0b60-48b0-9de0-fcfd6ca7776d" (UID: "ec369bed-0b60-48b0-9de0-fcfd6ca7776d"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:03:28 crc kubenswrapper[4881]: I0121 11:03:28.652405 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ec369bed-0b60-48b0-9de0-fcfd6ca7776d-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "ec369bed-0b60-48b0-9de0-fcfd6ca7776d" (UID: "ec369bed-0b60-48b0-9de0-fcfd6ca7776d"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:03:28 crc kubenswrapper[4881]: I0121 11:03:28.654022 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ec369bed-0b60-48b0-9de0-fcfd6ca7776d-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "ec369bed-0b60-48b0-9de0-fcfd6ca7776d" (UID: "ec369bed-0b60-48b0-9de0-fcfd6ca7776d"). InnerVolumeSpecName "registry-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:03:28 crc kubenswrapper[4881]: I0121 11:03:28.654868 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec369bed-0b60-48b0-9de0-fcfd6ca7776d-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "ec369bed-0b60-48b0-9de0-fcfd6ca7776d" (UID: "ec369bed-0b60-48b0-9de0-fcfd6ca7776d"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:03:28 crc kubenswrapper[4881]: I0121 11:03:28.655700 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ec369bed-0b60-48b0-9de0-fcfd6ca7776d-kube-api-access-z6ljz" (OuterVolumeSpecName: "kube-api-access-z6ljz") pod "ec369bed-0b60-48b0-9de0-fcfd6ca7776d" (UID: "ec369bed-0b60-48b0-9de0-fcfd6ca7776d"). InnerVolumeSpecName "kube-api-access-z6ljz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:03:28 crc kubenswrapper[4881]: I0121 11:03:28.655971 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ec369bed-0b60-48b0-9de0-fcfd6ca7776d-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "ec369bed-0b60-48b0-9de0-fcfd6ca7776d" (UID: "ec369bed-0b60-48b0-9de0-fcfd6ca7776d"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:03:28 crc kubenswrapper[4881]: I0121 11:03:28.661375 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ec369bed-0b60-48b0-9de0-fcfd6ca7776d-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "ec369bed-0b60-48b0-9de0-fcfd6ca7776d" (UID: "ec369bed-0b60-48b0-9de0-fcfd6ca7776d"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 11:03:28 crc kubenswrapper[4881]: I0121 11:03:28.663614 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "ec369bed-0b60-48b0-9de0-fcfd6ca7776d" (UID: "ec369bed-0b60-48b0-9de0-fcfd6ca7776d"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 21 11:03:28 crc kubenswrapper[4881]: I0121 11:03:28.744929 4881 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ec369bed-0b60-48b0-9de0-fcfd6ca7776d-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 21 11:03:28 crc kubenswrapper[4881]: I0121 11:03:28.744982 4881 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/ec369bed-0b60-48b0-9de0-fcfd6ca7776d-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 21 11:03:28 crc kubenswrapper[4881]: I0121 11:03:28.744994 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z6ljz\" (UniqueName: \"kubernetes.io/projected/ec369bed-0b60-48b0-9de0-fcfd6ca7776d-kube-api-access-z6ljz\") on node \"crc\" DevicePath \"\"" Jan 21 11:03:28 crc kubenswrapper[4881]: I0121 11:03:28.745041 4881 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ec369bed-0b60-48b0-9de0-fcfd6ca7776d-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 21 11:03:28 crc kubenswrapper[4881]: I0121 11:03:28.745051 4881 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/ec369bed-0b60-48b0-9de0-fcfd6ca7776d-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 21 11:03:28 crc kubenswrapper[4881]: I0121 11:03:28.745062 4881 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/ec369bed-0b60-48b0-9de0-fcfd6ca7776d-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 21 11:03:28 crc kubenswrapper[4881]: I0121 11:03:28.745075 4881 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/ec369bed-0b60-48b0-9de0-fcfd6ca7776d-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 21 11:03:28 crc kubenswrapper[4881]: I0121 11:03:28.843032 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-n98tz"] Jan 21 11:03:28 crc kubenswrapper[4881]: I0121 11:03:28.859099 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-n98tz"] Jan 21 11:03:29 crc kubenswrapper[4881]: I0121 11:03:29.320011 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ec369bed-0b60-48b0-9de0-fcfd6ca7776d" path="/var/lib/kubelet/pods/ec369bed-0b60-48b0-9de0-fcfd6ca7776d/volumes" Jan 21 11:03:29 crc kubenswrapper[4881]: I0121 11:03:29.851907 4881 patch_prober.go:28] interesting pod/machine-config-daemon-fb4fr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 11:03:29 crc kubenswrapper[4881]: I0121 11:03:29.852489 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 11:03:59 crc kubenswrapper[4881]: I0121 11:03:59.851824 4881 patch_prober.go:28] interesting pod/machine-config-daemon-fb4fr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe 
status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 11:03:59 crc kubenswrapper[4881]: I0121 11:03:59.852776 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 11:03:59 crc kubenswrapper[4881]: I0121 11:03:59.852881 4881 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" Jan 21 11:03:59 crc kubenswrapper[4881]: I0121 11:03:59.853829 4881 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f08eae3fb5bfbc3b6dfa6839a34471cb41febf3495ae4845e42b68ed33af40f1"} pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 11:03:59 crc kubenswrapper[4881]: I0121 11:03:59.853900 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" containerName="machine-config-daemon" containerID="cri-o://f08eae3fb5bfbc3b6dfa6839a34471cb41febf3495ae4845e42b68ed33af40f1" gracePeriod=600 Jan 21 11:04:00 crc kubenswrapper[4881]: I0121 11:04:00.730805 4881 generic.go:334] "Generic (PLEG): container finished" podID="3687b313-1df2-4274-80db-8c758b51bf2d" containerID="f08eae3fb5bfbc3b6dfa6839a34471cb41febf3495ae4845e42b68ed33af40f1" exitCode=0 Jan 21 11:04:00 crc kubenswrapper[4881]: I0121 11:04:00.730921 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" event={"ID":"3687b313-1df2-4274-80db-8c758b51bf2d","Type":"ContainerDied","Data":"f08eae3fb5bfbc3b6dfa6839a34471cb41febf3495ae4845e42b68ed33af40f1"} Jan 21 11:04:00 crc kubenswrapper[4881]: I0121 11:04:00.731821 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" event={"ID":"3687b313-1df2-4274-80db-8c758b51bf2d","Type":"ContainerStarted","Data":"51d484e782c204b0b6011f8d0be626571952d106a910dddde0a66e728028905b"} Jan 21 11:04:00 crc kubenswrapper[4881]: I0121 11:04:00.731866 4881 scope.go:117] "RemoveContainer" containerID="7172ca109a870f3c1c8d3af0117700b7b23e7a65c3e841f3a4ef3f445e85270d" Jan 21 11:04:53 crc kubenswrapper[4881]: I0121 11:04:53.891258 4881 scope.go:117] "RemoveContainer" containerID="8f66d538b15eac6e19eeb1b6e73b0917e7cb4600d289674a11496b4ddb805259" Jan 21 11:06:29 crc kubenswrapper[4881]: I0121 11:06:29.851510 4881 patch_prober.go:28] interesting pod/machine-config-daemon-fb4fr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 11:06:29 crc kubenswrapper[4881]: I0121 11:06:29.852698 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" Jan 21 11:06:53 crc kubenswrapper[4881]: I0121 11:06:53.939955 4881 scope.go:117] "RemoveContainer" containerID="af52521bc076413d8e72a4c4cff88c04fc3be6a74567d99416c9a8f9f7a66758" Jan 21 11:06:53 crc kubenswrapper[4881]: I0121 11:06:53.980319 4881 scope.go:117] "RemoveContainer" containerID="091b8c7421a6daba2d38abc6600200f92a99a9d9fffb2a18673337cc1cab5a28" Jan 21 11:06:59 crc kubenswrapper[4881]: I0121 11:06:59.851658 4881 patch_prober.go:28] interesting pod/machine-config-daemon-fb4fr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 11:06:59 crc kubenswrapper[4881]: I0121 11:06:59.852373 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 11:07:29 crc kubenswrapper[4881]: I0121 11:07:29.851683 4881 patch_prober.go:28] interesting pod/machine-config-daemon-fb4fr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 11:07:29 crc kubenswrapper[4881]: I0121 11:07:29.852354 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 11:07:29 crc kubenswrapper[4881]: I0121 11:07:29.852415 4881 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" Jan 21 11:07:29 crc kubenswrapper[4881]: I0121 11:07:29.853159 4881 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"51d484e782c204b0b6011f8d0be626571952d106a910dddde0a66e728028905b"} pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 11:07:29 crc kubenswrapper[4881]: I0121 11:07:29.853248 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" containerName="machine-config-daemon" containerID="cri-o://51d484e782c204b0b6011f8d0be626571952d106a910dddde0a66e728028905b" gracePeriod=600 Jan 21 11:07:30 crc kubenswrapper[4881]: I0121 11:07:30.143186 4881 generic.go:334] "Generic (PLEG): container finished" podID="3687b313-1df2-4274-80db-8c758b51bf2d" containerID="51d484e782c204b0b6011f8d0be626571952d106a910dddde0a66e728028905b" exitCode=0 Jan 21 11:07:30 crc kubenswrapper[4881]: I0121 11:07:30.143294 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" event={"ID":"3687b313-1df2-4274-80db-8c758b51bf2d","Type":"ContainerDied","Data":"51d484e782c204b0b6011f8d0be626571952d106a910dddde0a66e728028905b"} Jan 21 11:07:30 crc 
kubenswrapper[4881]: I0121 11:07:30.143400 4881 scope.go:117] "RemoveContainer" containerID="f08eae3fb5bfbc3b6dfa6839a34471cb41febf3495ae4845e42b68ed33af40f1" Jan 21 11:07:31 crc kubenswrapper[4881]: I0121 11:07:31.153609 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" event={"ID":"3687b313-1df2-4274-80db-8c758b51bf2d","Type":"ContainerStarted","Data":"c61b3d568dcd0ae9a4c5e1f2de21cf5a0db2cf65652a9e217f03473254856b16"} Jan 21 11:08:49 crc kubenswrapper[4881]: I0121 11:08:49.889683 4881 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 21 11:09:04 crc kubenswrapper[4881]: I0121 11:09:04.242101 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-cdm4s"] Jan 21 11:09:04 crc kubenswrapper[4881]: E0121 11:09:04.244318 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec369bed-0b60-48b0-9de0-fcfd6ca7776d" containerName="registry" Jan 21 11:09:04 crc kubenswrapper[4881]: I0121 11:09:04.244438 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec369bed-0b60-48b0-9de0-fcfd6ca7776d" containerName="registry" Jan 21 11:09:04 crc kubenswrapper[4881]: I0121 11:09:04.244665 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec369bed-0b60-48b0-9de0-fcfd6ca7776d" containerName="registry" Jan 21 11:09:04 crc kubenswrapper[4881]: I0121 11:09:04.245288 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-cdm4s" Jan 21 11:09:04 crc kubenswrapper[4881]: I0121 11:09:04.248557 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Jan 21 11:09:04 crc kubenswrapper[4881]: I0121 11:09:04.252328 4881 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-wtp5l" Jan 21 11:09:04 crc kubenswrapper[4881]: I0121 11:09:04.252587 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Jan 21 11:09:04 crc kubenswrapper[4881]: I0121 11:09:04.262174 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-858654f9db-h2ttp"] Jan 21 11:09:04 crc kubenswrapper[4881]: I0121 11:09:04.263197 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-h2ttp" Jan 21 11:09:04 crc kubenswrapper[4881]: I0121 11:09:04.265730 4881 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-fpfvh" Jan 21 11:09:04 crc kubenswrapper[4881]: I0121 11:09:04.269614 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-cdm4s"] Jan 21 11:09:04 crc kubenswrapper[4881]: I0121 11:09:04.275440 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-h2ttp"] Jan 21 11:09:04 crc kubenswrapper[4881]: I0121 11:09:04.299411 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-csqtv"] Jan 21 11:09:04 crc kubenswrapper[4881]: I0121 11:09:04.300400 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-csqtv" Jan 21 11:09:04 crc kubenswrapper[4881]: I0121 11:09:04.306152 4881 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-nbb9f" Jan 21 11:09:04 crc kubenswrapper[4881]: I0121 11:09:04.317393 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-csqtv"] Jan 21 11:09:04 crc kubenswrapper[4881]: I0121 11:09:04.362622 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5s947\" (UniqueName: \"kubernetes.io/projected/faf7e95d-07e7-4d3d-936b-26b187fc0b0c-kube-api-access-5s947\") pod \"cert-manager-858654f9db-h2ttp\" (UID: \"faf7e95d-07e7-4d3d-936b-26b187fc0b0c\") " pod="cert-manager/cert-manager-858654f9db-h2ttp" Jan 21 11:09:04 crc kubenswrapper[4881]: I0121 11:09:04.362690 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l24bg\" (UniqueName: \"kubernetes.io/projected/1d8014cf-8827-449d-b5fa-d0c098cc377e-kube-api-access-l24bg\") pod \"cert-manager-cainjector-cf98fcc89-cdm4s\" (UID: \"1d8014cf-8827-449d-b5fa-d0c098cc377e\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-cdm4s" Jan 21 11:09:04 crc kubenswrapper[4881]: I0121 11:09:04.464762 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5s947\" (UniqueName: \"kubernetes.io/projected/faf7e95d-07e7-4d3d-936b-26b187fc0b0c-kube-api-access-5s947\") pod \"cert-manager-858654f9db-h2ttp\" (UID: \"faf7e95d-07e7-4d3d-936b-26b187fc0b0c\") " pod="cert-manager/cert-manager-858654f9db-h2ttp" Jan 21 11:09:04 crc kubenswrapper[4881]: I0121 11:09:04.464860 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l24bg\" (UniqueName: \"kubernetes.io/projected/1d8014cf-8827-449d-b5fa-d0c098cc377e-kube-api-access-l24bg\") pod \"cert-manager-cainjector-cf98fcc89-cdm4s\" (UID: \"1d8014cf-8827-449d-b5fa-d0c098cc377e\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-cdm4s" Jan 21 11:09:04 crc kubenswrapper[4881]: I0121 11:09:04.464931 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lgl4w\" (UniqueName: \"kubernetes.io/projected/2aeab03b-23ac-4cc2-8f0f-db1111ef2cc4-kube-api-access-lgl4w\") pod \"cert-manager-webhook-687f57d79b-csqtv\" (UID: \"2aeab03b-23ac-4cc2-8f0f-db1111ef2cc4\") " pod="cert-manager/cert-manager-webhook-687f57d79b-csqtv" Jan 21 11:09:04 crc kubenswrapper[4881]: I0121 11:09:04.488541 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5s947\" (UniqueName: \"kubernetes.io/projected/faf7e95d-07e7-4d3d-936b-26b187fc0b0c-kube-api-access-5s947\") pod \"cert-manager-858654f9db-h2ttp\" (UID: \"faf7e95d-07e7-4d3d-936b-26b187fc0b0c\") " pod="cert-manager/cert-manager-858654f9db-h2ttp" Jan 21 11:09:04 crc kubenswrapper[4881]: I0121 11:09:04.488618 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l24bg\" (UniqueName: \"kubernetes.io/projected/1d8014cf-8827-449d-b5fa-d0c098cc377e-kube-api-access-l24bg\") pod \"cert-manager-cainjector-cf98fcc89-cdm4s\" (UID: \"1d8014cf-8827-449d-b5fa-d0c098cc377e\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-cdm4s" Jan 21 11:09:04 crc kubenswrapper[4881]: I0121 11:09:04.565814 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-cdm4s" Jan 21 11:09:04 crc kubenswrapper[4881]: I0121 11:09:04.566227 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lgl4w\" (UniqueName: \"kubernetes.io/projected/2aeab03b-23ac-4cc2-8f0f-db1111ef2cc4-kube-api-access-lgl4w\") pod \"cert-manager-webhook-687f57d79b-csqtv\" (UID: \"2aeab03b-23ac-4cc2-8f0f-db1111ef2cc4\") " pod="cert-manager/cert-manager-webhook-687f57d79b-csqtv" Jan 21 11:09:04 crc kubenswrapper[4881]: I0121 11:09:04.583073 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-h2ttp" Jan 21 11:09:04 crc kubenswrapper[4881]: I0121 11:09:04.586436 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lgl4w\" (UniqueName: \"kubernetes.io/projected/2aeab03b-23ac-4cc2-8f0f-db1111ef2cc4-kube-api-access-lgl4w\") pod \"cert-manager-webhook-687f57d79b-csqtv\" (UID: \"2aeab03b-23ac-4cc2-8f0f-db1111ef2cc4\") " pod="cert-manager/cert-manager-webhook-687f57d79b-csqtv" Jan 21 11:09:04 crc kubenswrapper[4881]: I0121 11:09:04.619713 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-csqtv" Jan 21 11:09:04 crc kubenswrapper[4881]: I0121 11:09:04.908821 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-csqtv"] Jan 21 11:09:04 crc kubenswrapper[4881]: I0121 11:09:04.923750 4881 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 11:09:05 crc kubenswrapper[4881]: I0121 11:09:05.012041 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-cdm4s"] Jan 21 11:09:05 crc kubenswrapper[4881]: W0121 11:09:05.021932 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1d8014cf_8827_449d_b5fa_d0c098cc377e.slice/crio-d11a137a4d487b3787674cc7c05277ae88a77a2b6d288a5cc6a94e6b0be4df11 WatchSource:0}: Error finding container d11a137a4d487b3787674cc7c05277ae88a77a2b6d288a5cc6a94e6b0be4df11: Status 404 returned error can't find the container with id d11a137a4d487b3787674cc7c05277ae88a77a2b6d288a5cc6a94e6b0be4df11 Jan 21 11:09:05 crc kubenswrapper[4881]: I0121 11:09:05.062489 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-h2ttp"] Jan 21 11:09:05 crc kubenswrapper[4881]: W0121 11:09:05.066363 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfaf7e95d_07e7_4d3d_936b_26b187fc0b0c.slice/crio-1f83a47acfbc835dde164804eff14272ef2b40ace6b303463f86bdf150b16ae1 WatchSource:0}: Error finding container 1f83a47acfbc835dde164804eff14272ef2b40ace6b303463f86bdf150b16ae1: Status 404 returned error can't find the container with id 1f83a47acfbc835dde164804eff14272ef2b40ace6b303463f86bdf150b16ae1 Jan 21 11:09:05 crc kubenswrapper[4881]: I0121 11:09:05.747662 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-csqtv" event={"ID":"2aeab03b-23ac-4cc2-8f0f-db1111ef2cc4","Type":"ContainerStarted","Data":"418f7c4757445d467f2ed9218b1861b0d514cd5a2f430ae2561534473ee1f49f"} Jan 21 11:09:05 crc kubenswrapper[4881]: I0121 11:09:05.749731 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="cert-manager/cert-manager-cainjector-cf98fcc89-cdm4s" event={"ID":"1d8014cf-8827-449d-b5fa-d0c098cc377e","Type":"ContainerStarted","Data":"d11a137a4d487b3787674cc7c05277ae88a77a2b6d288a5cc6a94e6b0be4df11"} Jan 21 11:09:05 crc kubenswrapper[4881]: I0121 11:09:05.751028 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-h2ttp" event={"ID":"faf7e95d-07e7-4d3d-936b-26b187fc0b0c","Type":"ContainerStarted","Data":"1f83a47acfbc835dde164804eff14272ef2b40ace6b303463f86bdf150b16ae1"} Jan 21 11:09:08 crc kubenswrapper[4881]: I0121 11:09:08.779799 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-csqtv" event={"ID":"2aeab03b-23ac-4cc2-8f0f-db1111ef2cc4","Type":"ContainerStarted","Data":"eae0c35d82930a00fe111e3513015ecf6b34c7f998296bd2aca0cd7bab741ad9"} Jan 21 11:09:08 crc kubenswrapper[4881]: I0121 11:09:08.780692 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-687f57d79b-csqtv" Jan 21 11:09:08 crc kubenswrapper[4881]: I0121 11:09:08.806249 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-687f57d79b-csqtv" podStartSLOduration=1.732344085 podStartE2EDuration="4.806216973s" podCreationTimestamp="2026-01-21 11:09:04 +0000 UTC" firstStartedPulling="2026-01-21 11:09:04.923532437 +0000 UTC m=+732.183488906" lastFinishedPulling="2026-01-21 11:09:07.997405325 +0000 UTC m=+735.257361794" observedRunningTime="2026-01-21 11:09:08.797046896 +0000 UTC m=+736.057003365" watchObservedRunningTime="2026-01-21 11:09:08.806216973 +0000 UTC m=+736.066173442" Jan 21 11:09:09 crc kubenswrapper[4881]: I0121 11:09:09.788551 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-cdm4s" event={"ID":"1d8014cf-8827-449d-b5fa-d0c098cc377e","Type":"ContainerStarted","Data":"da7dcfda8047a2fe8f0f19443f177b4697d37e30ea4a5e9c8911abd0ed087d28"} Jan 21 11:09:09 crc kubenswrapper[4881]: I0121 11:09:09.790500 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-h2ttp" event={"ID":"faf7e95d-07e7-4d3d-936b-26b187fc0b0c","Type":"ContainerStarted","Data":"bf0b79d023e0d95935fe58142c4d76a87be786faf89630d7e53563d975f0c8e3"} Jan 21 11:09:09 crc kubenswrapper[4881]: I0121 11:09:09.807207 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-cf98fcc89-cdm4s" podStartSLOduration=1.43455704 podStartE2EDuration="5.80718568s" podCreationTimestamp="2026-01-21 11:09:04 +0000 UTC" firstStartedPulling="2026-01-21 11:09:05.024520413 +0000 UTC m=+732.284476882" lastFinishedPulling="2026-01-21 11:09:09.397149053 +0000 UTC m=+736.657105522" observedRunningTime="2026-01-21 11:09:09.802988591 +0000 UTC m=+737.062945060" watchObservedRunningTime="2026-01-21 11:09:09.80718568 +0000 UTC m=+737.067142149" Jan 21 11:09:09 crc kubenswrapper[4881]: I0121 11:09:09.827738 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-858654f9db-h2ttp" podStartSLOduration=1.43672156 podStartE2EDuration="5.827702114s" podCreationTimestamp="2026-01-21 11:09:04 +0000 UTC" firstStartedPulling="2026-01-21 11:09:05.069529166 +0000 UTC m=+732.329485635" lastFinishedPulling="2026-01-21 11:09:09.46050972 +0000 UTC m=+736.720466189" observedRunningTime="2026-01-21 11:09:09.821105529 +0000 UTC m=+737.081062028" watchObservedRunningTime="2026-01-21 
11:09:09.827702114 +0000 UTC m=+737.087658593" Jan 21 11:09:13 crc kubenswrapper[4881]: I0121 11:09:13.278512 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-bx64f"] Jan 21 11:09:13 crc kubenswrapper[4881]: I0121 11:09:13.279177 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" podUID="e8bb6d97-b3b8-4e31-b704-8e565385ab26" containerName="ovn-controller" containerID="cri-o://d29c170fdf57f1f4c678e2902443d9ef467ac3157370e997fd5596b4eecfb54e" gracePeriod=30 Jan 21 11:09:13 crc kubenswrapper[4881]: I0121 11:09:13.279600 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" podUID="e8bb6d97-b3b8-4e31-b704-8e565385ab26" containerName="sbdb" containerID="cri-o://47c176d51b5558e85897ac8c72c02bb6f152a972d8e941aeb114333f777e22ef" gracePeriod=30 Jan 21 11:09:13 crc kubenswrapper[4881]: I0121 11:09:13.279647 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" podUID="e8bb6d97-b3b8-4e31-b704-8e565385ab26" containerName="nbdb" containerID="cri-o://9454db8c9af3187ee06e017ac192668036b1f565305a37de61c3cffe4d94e7db" gracePeriod=30 Jan 21 11:09:13 crc kubenswrapper[4881]: I0121 11:09:13.279693 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" podUID="e8bb6d97-b3b8-4e31-b704-8e565385ab26" containerName="northd" containerID="cri-o://f7fa3136ee876ebb14cd5164f9176c91c60cc04fd7882a3ad0957ce3a36184d6" gracePeriod=30 Jan 21 11:09:13 crc kubenswrapper[4881]: I0121 11:09:13.279743 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" podUID="e8bb6d97-b3b8-4e31-b704-8e565385ab26" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://599bfaf6bfc412f779f4732cf9f4273382c1f0af20576581264ea4fc63b2ec38" gracePeriod=30 Jan 21 11:09:13 crc kubenswrapper[4881]: I0121 11:09:13.279810 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" podUID="e8bb6d97-b3b8-4e31-b704-8e565385ab26" containerName="kube-rbac-proxy-node" containerID="cri-o://e45d7cde8c26fc00bb1b5ad5cc88c41aff02acf0860856e86f3c3bfdc01e7045" gracePeriod=30 Jan 21 11:09:13 crc kubenswrapper[4881]: I0121 11:09:13.279870 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" podUID="e8bb6d97-b3b8-4e31-b704-8e565385ab26" containerName="ovn-acl-logging" containerID="cri-o://b2c1d1496a7f7e5fad6a3d4ade4104e8e0c40fd0ca4722ca5e1ca1fbf0ebf3cb" gracePeriod=30 Jan 21 11:09:13 crc kubenswrapper[4881]: I0121 11:09:13.335448 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" podUID="e8bb6d97-b3b8-4e31-b704-8e565385ab26" containerName="ovnkube-controller" containerID="cri-o://d5e11e8e5cd4b0f5d5b59050f20100006189356085839bd098e65e66ddf3accb" gracePeriod=30 Jan 21 11:09:13 crc kubenswrapper[4881]: I0121 11:09:13.812971 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-fs42r_09da9e14-f6d5-4346-a4a0-c17711e3b603/kube-multus/1.log" Jan 21 11:09:13 crc kubenswrapper[4881]: I0121 11:09:13.813426 4881 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-multus_multus-fs42r_09da9e14-f6d5-4346-a4a0-c17711e3b603/kube-multus/0.log" Jan 21 11:09:13 crc kubenswrapper[4881]: I0121 11:09:13.813537 4881 generic.go:334] "Generic (PLEG): container finished" podID="09da9e14-f6d5-4346-a4a0-c17711e3b603" containerID="e44307f5cc08335dc686c05c12b4ac57aeb2211a1072fff108a06b37b2e1461b" exitCode=2 Jan 21 11:09:13 crc kubenswrapper[4881]: I0121 11:09:13.813583 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-fs42r" event={"ID":"09da9e14-f6d5-4346-a4a0-c17711e3b603","Type":"ContainerDied","Data":"e44307f5cc08335dc686c05c12b4ac57aeb2211a1072fff108a06b37b2e1461b"} Jan 21 11:09:13 crc kubenswrapper[4881]: I0121 11:09:13.813764 4881 scope.go:117] "RemoveContainer" containerID="821c7c539a796f03cdff04f5f89e6adab4d088c53f5c1ad85851862e5409e7eb" Jan 21 11:09:13 crc kubenswrapper[4881]: I0121 11:09:13.814414 4881 scope.go:117] "RemoveContainer" containerID="e44307f5cc08335dc686c05c12b4ac57aeb2211a1072fff108a06b37b2e1461b" Jan 21 11:09:13 crc kubenswrapper[4881]: I0121 11:09:13.819025 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bx64f_e8bb6d97-b3b8-4e31-b704-8e565385ab26/ovnkube-controller/2.log" Jan 21 11:09:13 crc kubenswrapper[4881]: I0121 11:09:13.822564 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bx64f_e8bb6d97-b3b8-4e31-b704-8e565385ab26/ovn-acl-logging/0.log" Jan 21 11:09:13 crc kubenswrapper[4881]: I0121 11:09:13.823090 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bx64f_e8bb6d97-b3b8-4e31-b704-8e565385ab26/ovn-controller/0.log" Jan 21 11:09:13 crc kubenswrapper[4881]: I0121 11:09:13.823620 4881 generic.go:334] "Generic (PLEG): container finished" podID="e8bb6d97-b3b8-4e31-b704-8e565385ab26" containerID="d5e11e8e5cd4b0f5d5b59050f20100006189356085839bd098e65e66ddf3accb" exitCode=0 Jan 21 11:09:13 crc kubenswrapper[4881]: I0121 11:09:13.823651 4881 generic.go:334] "Generic (PLEG): container finished" podID="e8bb6d97-b3b8-4e31-b704-8e565385ab26" containerID="47c176d51b5558e85897ac8c72c02bb6f152a972d8e941aeb114333f777e22ef" exitCode=0 Jan 21 11:09:13 crc kubenswrapper[4881]: I0121 11:09:13.823661 4881 generic.go:334] "Generic (PLEG): container finished" podID="e8bb6d97-b3b8-4e31-b704-8e565385ab26" containerID="9454db8c9af3187ee06e017ac192668036b1f565305a37de61c3cffe4d94e7db" exitCode=0 Jan 21 11:09:13 crc kubenswrapper[4881]: I0121 11:09:13.823657 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" event={"ID":"e8bb6d97-b3b8-4e31-b704-8e565385ab26","Type":"ContainerDied","Data":"d5e11e8e5cd4b0f5d5b59050f20100006189356085839bd098e65e66ddf3accb"} Jan 21 11:09:13 crc kubenswrapper[4881]: I0121 11:09:13.823705 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" event={"ID":"e8bb6d97-b3b8-4e31-b704-8e565385ab26","Type":"ContainerDied","Data":"47c176d51b5558e85897ac8c72c02bb6f152a972d8e941aeb114333f777e22ef"} Jan 21 11:09:13 crc kubenswrapper[4881]: I0121 11:09:13.823719 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" event={"ID":"e8bb6d97-b3b8-4e31-b704-8e565385ab26","Type":"ContainerDied","Data":"9454db8c9af3187ee06e017ac192668036b1f565305a37de61c3cffe4d94e7db"} Jan 21 11:09:13 crc kubenswrapper[4881]: I0121 11:09:13.823728 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" event={"ID":"e8bb6d97-b3b8-4e31-b704-8e565385ab26","Type":"ContainerDied","Data":"f7fa3136ee876ebb14cd5164f9176c91c60cc04fd7882a3ad0957ce3a36184d6"} Jan 21 11:09:13 crc kubenswrapper[4881]: I0121 11:09:13.823672 4881 generic.go:334] "Generic (PLEG): container finished" podID="e8bb6d97-b3b8-4e31-b704-8e565385ab26" containerID="f7fa3136ee876ebb14cd5164f9176c91c60cc04fd7882a3ad0957ce3a36184d6" exitCode=0 Jan 21 11:09:13 crc kubenswrapper[4881]: I0121 11:09:13.823754 4881 generic.go:334] "Generic (PLEG): container finished" podID="e8bb6d97-b3b8-4e31-b704-8e565385ab26" containerID="599bfaf6bfc412f779f4732cf9f4273382c1f0af20576581264ea4fc63b2ec38" exitCode=0 Jan 21 11:09:13 crc kubenswrapper[4881]: I0121 11:09:13.823766 4881 generic.go:334] "Generic (PLEG): container finished" podID="e8bb6d97-b3b8-4e31-b704-8e565385ab26" containerID="e45d7cde8c26fc00bb1b5ad5cc88c41aff02acf0860856e86f3c3bfdc01e7045" exitCode=0 Jan 21 11:09:13 crc kubenswrapper[4881]: I0121 11:09:13.823772 4881 generic.go:334] "Generic (PLEG): container finished" podID="e8bb6d97-b3b8-4e31-b704-8e565385ab26" containerID="b2c1d1496a7f7e5fad6a3d4ade4104e8e0c40fd0ca4722ca5e1ca1fbf0ebf3cb" exitCode=143 Jan 21 11:09:13 crc kubenswrapper[4881]: I0121 11:09:13.823812 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" event={"ID":"e8bb6d97-b3b8-4e31-b704-8e565385ab26","Type":"ContainerDied","Data":"599bfaf6bfc412f779f4732cf9f4273382c1f0af20576581264ea4fc63b2ec38"} Jan 21 11:09:13 crc kubenswrapper[4881]: I0121 11:09:13.823831 4881 generic.go:334] "Generic (PLEG): container finished" podID="e8bb6d97-b3b8-4e31-b704-8e565385ab26" containerID="d29c170fdf57f1f4c678e2902443d9ef467ac3157370e997fd5596b4eecfb54e" exitCode=143 Jan 21 11:09:13 crc kubenswrapper[4881]: I0121 11:09:13.823849 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" event={"ID":"e8bb6d97-b3b8-4e31-b704-8e565385ab26","Type":"ContainerDied","Data":"e45d7cde8c26fc00bb1b5ad5cc88c41aff02acf0860856e86f3c3bfdc01e7045"} Jan 21 11:09:13 crc kubenswrapper[4881]: I0121 11:09:13.823863 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" event={"ID":"e8bb6d97-b3b8-4e31-b704-8e565385ab26","Type":"ContainerDied","Data":"b2c1d1496a7f7e5fad6a3d4ade4104e8e0c40fd0ca4722ca5e1ca1fbf0ebf3cb"} Jan 21 11:09:13 crc kubenswrapper[4881]: I0121 11:09:13.823877 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" event={"ID":"e8bb6d97-b3b8-4e31-b704-8e565385ab26","Type":"ContainerDied","Data":"d29c170fdf57f1f4c678e2902443d9ef467ac3157370e997fd5596b4eecfb54e"} Jan 21 11:09:13 crc kubenswrapper[4881]: I0121 11:09:13.848699 4881 scope.go:117] "RemoveContainer" containerID="ff735c08dae242cbd531e458695a99bcbe3a5e6c9753266141b14f67cb0799a2" Jan 21 11:09:13 crc kubenswrapper[4881]: I0121 11:09:13.973209 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bx64f_e8bb6d97-b3b8-4e31-b704-8e565385ab26/ovn-acl-logging/0.log" Jan 21 11:09:13 crc kubenswrapper[4881]: I0121 11:09:13.974103 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bx64f_e8bb6d97-b3b8-4e31-b704-8e565385ab26/ovn-controller/0.log" Jan 21 11:09:13 crc kubenswrapper[4881]: I0121 11:09:13.974649 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.040241 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-6zplb"] Jan 21 11:09:14 crc kubenswrapper[4881]: E0121 11:09:14.040492 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8bb6d97-b3b8-4e31-b704-8e565385ab26" containerName="kube-rbac-proxy-ovn-metrics" Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.040509 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8bb6d97-b3b8-4e31-b704-8e565385ab26" containerName="kube-rbac-proxy-ovn-metrics" Jan 21 11:09:14 crc kubenswrapper[4881]: E0121 11:09:14.040521 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8bb6d97-b3b8-4e31-b704-8e565385ab26" containerName="nbdb" Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.040527 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8bb6d97-b3b8-4e31-b704-8e565385ab26" containerName="nbdb" Jan 21 11:09:14 crc kubenswrapper[4881]: E0121 11:09:14.040533 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8bb6d97-b3b8-4e31-b704-8e565385ab26" containerName="ovnkube-controller" Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.040541 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8bb6d97-b3b8-4e31-b704-8e565385ab26" containerName="ovnkube-controller" Jan 21 11:09:14 crc kubenswrapper[4881]: E0121 11:09:14.040551 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8bb6d97-b3b8-4e31-b704-8e565385ab26" containerName="kube-rbac-proxy-node" Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.040558 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8bb6d97-b3b8-4e31-b704-8e565385ab26" containerName="kube-rbac-proxy-node" Jan 21 11:09:14 crc kubenswrapper[4881]: E0121 11:09:14.040566 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8bb6d97-b3b8-4e31-b704-8e565385ab26" containerName="northd" Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.040574 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8bb6d97-b3b8-4e31-b704-8e565385ab26" containerName="northd" Jan 21 11:09:14 crc kubenswrapper[4881]: E0121 11:09:14.040583 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8bb6d97-b3b8-4e31-b704-8e565385ab26" containerName="ovnkube-controller" Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.040589 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8bb6d97-b3b8-4e31-b704-8e565385ab26" containerName="ovnkube-controller" Jan 21 11:09:14 crc kubenswrapper[4881]: E0121 11:09:14.040596 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8bb6d97-b3b8-4e31-b704-8e565385ab26" containerName="sbdb" Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.040602 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8bb6d97-b3b8-4e31-b704-8e565385ab26" containerName="sbdb" Jan 21 11:09:14 crc kubenswrapper[4881]: E0121 11:09:14.040608 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8bb6d97-b3b8-4e31-b704-8e565385ab26" containerName="ovnkube-controller" Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.040614 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8bb6d97-b3b8-4e31-b704-8e565385ab26" containerName="ovnkube-controller" Jan 21 11:09:14 crc kubenswrapper[4881]: E0121 11:09:14.040620 4881 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="e8bb6d97-b3b8-4e31-b704-8e565385ab26" containerName="kubecfg-setup" Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.040626 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8bb6d97-b3b8-4e31-b704-8e565385ab26" containerName="kubecfg-setup" Jan 21 11:09:14 crc kubenswrapper[4881]: E0121 11:09:14.040639 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8bb6d97-b3b8-4e31-b704-8e565385ab26" containerName="ovn-acl-logging" Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.040644 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8bb6d97-b3b8-4e31-b704-8e565385ab26" containerName="ovn-acl-logging" Jan 21 11:09:14 crc kubenswrapper[4881]: E0121 11:09:14.040657 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8bb6d97-b3b8-4e31-b704-8e565385ab26" containerName="ovn-controller" Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.040663 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8bb6d97-b3b8-4e31-b704-8e565385ab26" containerName="ovn-controller" Jan 21 11:09:14 crc kubenswrapper[4881]: E0121 11:09:14.040670 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8bb6d97-b3b8-4e31-b704-8e565385ab26" containerName="ovnkube-controller" Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.040675 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8bb6d97-b3b8-4e31-b704-8e565385ab26" containerName="ovnkube-controller" Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.040763 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="e8bb6d97-b3b8-4e31-b704-8e565385ab26" containerName="ovn-controller" Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.040775 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="e8bb6d97-b3b8-4e31-b704-8e565385ab26" containerName="ovnkube-controller" Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.040805 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="e8bb6d97-b3b8-4e31-b704-8e565385ab26" containerName="kube-rbac-proxy-node" Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.040814 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="e8bb6d97-b3b8-4e31-b704-8e565385ab26" containerName="sbdb" Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.040824 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="e8bb6d97-b3b8-4e31-b704-8e565385ab26" containerName="ovnkube-controller" Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.040834 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="e8bb6d97-b3b8-4e31-b704-8e565385ab26" containerName="kube-rbac-proxy-ovn-metrics" Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.040844 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="e8bb6d97-b3b8-4e31-b704-8e565385ab26" containerName="ovn-acl-logging" Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.040851 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="e8bb6d97-b3b8-4e31-b704-8e565385ab26" containerName="nbdb" Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.040863 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="e8bb6d97-b3b8-4e31-b704-8e565385ab26" containerName="ovnkube-controller" Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.040870 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="e8bb6d97-b3b8-4e31-b704-8e565385ab26" containerName="northd" Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.041070 4881 
memory_manager.go:354] "RemoveStaleState removing state" podUID="e8bb6d97-b3b8-4e31-b704-8e565385ab26" containerName="ovnkube-controller" Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.044540 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-6zplb" Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.106191 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e8bb6d97-b3b8-4e31-b704-8e565385ab26-host-cni-netd\") pod \"e8bb6d97-b3b8-4e31-b704-8e565385ab26\" (UID: \"e8bb6d97-b3b8-4e31-b704-8e565385ab26\") " Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.106252 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e8bb6d97-b3b8-4e31-b704-8e565385ab26-host-run-ovn-kubernetes\") pod \"e8bb6d97-b3b8-4e31-b704-8e565385ab26\" (UID: \"e8bb6d97-b3b8-4e31-b704-8e565385ab26\") " Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.106278 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/e8bb6d97-b3b8-4e31-b704-8e565385ab26-host-cni-bin\") pod \"e8bb6d97-b3b8-4e31-b704-8e565385ab26\" (UID: \"e8bb6d97-b3b8-4e31-b704-8e565385ab26\") " Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.106309 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/e8bb6d97-b3b8-4e31-b704-8e565385ab26-ovnkube-script-lib\") pod \"e8bb6d97-b3b8-4e31-b704-8e565385ab26\" (UID: \"e8bb6d97-b3b8-4e31-b704-8e565385ab26\") " Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.106320 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e8bb6d97-b3b8-4e31-b704-8e565385ab26-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "e8bb6d97-b3b8-4e31-b704-8e565385ab26" (UID: "e8bb6d97-b3b8-4e31-b704-8e565385ab26"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.106331 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e8bb6d97-b3b8-4e31-b704-8e565385ab26-etc-openvswitch\") pod \"e8bb6d97-b3b8-4e31-b704-8e565385ab26\" (UID: \"e8bb6d97-b3b8-4e31-b704-8e565385ab26\") " Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.106357 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e8bb6d97-b3b8-4e31-b704-8e565385ab26-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "e8bb6d97-b3b8-4e31-b704-8e565385ab26" (UID: "e8bb6d97-b3b8-4e31-b704-8e565385ab26"). InnerVolumeSpecName "etc-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.106402 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e8bb6d97-b3b8-4e31-b704-8e565385ab26-var-lib-openvswitch\") pod \"e8bb6d97-b3b8-4e31-b704-8e565385ab26\" (UID: \"e8bb6d97-b3b8-4e31-b704-8e565385ab26\") " Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.106417 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e8bb6d97-b3b8-4e31-b704-8e565385ab26-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "e8bb6d97-b3b8-4e31-b704-8e565385ab26" (UID: "e8bb6d97-b3b8-4e31-b704-8e565385ab26"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.106477 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e8bb6d97-b3b8-4e31-b704-8e565385ab26-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "e8bb6d97-b3b8-4e31-b704-8e565385ab26" (UID: "e8bb6d97-b3b8-4e31-b704-8e565385ab26"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.106489 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kz6fb\" (UniqueName: \"kubernetes.io/projected/e8bb6d97-b3b8-4e31-b704-8e565385ab26-kube-api-access-kz6fb\") pod \"e8bb6d97-b3b8-4e31-b704-8e565385ab26\" (UID: \"e8bb6d97-b3b8-4e31-b704-8e565385ab26\") " Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.106516 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/e8bb6d97-b3b8-4e31-b704-8e565385ab26-host-run-netns\") pod \"e8bb6d97-b3b8-4e31-b704-8e565385ab26\" (UID: \"e8bb6d97-b3b8-4e31-b704-8e565385ab26\") " Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.106523 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e8bb6d97-b3b8-4e31-b704-8e565385ab26-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "e8bb6d97-b3b8-4e31-b704-8e565385ab26" (UID: "e8bb6d97-b3b8-4e31-b704-8e565385ab26"). InnerVolumeSpecName "var-lib-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.106544 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/e8bb6d97-b3b8-4e31-b704-8e565385ab26-run-ovn\") pod \"e8bb6d97-b3b8-4e31-b704-8e565385ab26\" (UID: \"e8bb6d97-b3b8-4e31-b704-8e565385ab26\") " Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.106568 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/e8bb6d97-b3b8-4e31-b704-8e565385ab26-host-kubelet\") pod \"e8bb6d97-b3b8-4e31-b704-8e565385ab26\" (UID: \"e8bb6d97-b3b8-4e31-b704-8e565385ab26\") " Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.106591 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e8bb6d97-b3b8-4e31-b704-8e565385ab26-run-openvswitch\") pod \"e8bb6d97-b3b8-4e31-b704-8e565385ab26\" (UID: \"e8bb6d97-b3b8-4e31-b704-8e565385ab26\") " Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.106621 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/e8bb6d97-b3b8-4e31-b704-8e565385ab26-ovnkube-config\") pod \"e8bb6d97-b3b8-4e31-b704-8e565385ab26\" (UID: \"e8bb6d97-b3b8-4e31-b704-8e565385ab26\") " Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.106639 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/e8bb6d97-b3b8-4e31-b704-8e565385ab26-host-slash\") pod \"e8bb6d97-b3b8-4e31-b704-8e565385ab26\" (UID: \"e8bb6d97-b3b8-4e31-b704-8e565385ab26\") " Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.106674 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/e8bb6d97-b3b8-4e31-b704-8e565385ab26-log-socket\") pod \"e8bb6d97-b3b8-4e31-b704-8e565385ab26\" (UID: \"e8bb6d97-b3b8-4e31-b704-8e565385ab26\") " Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.106729 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/e8bb6d97-b3b8-4e31-b704-8e565385ab26-run-systemd\") pod \"e8bb6d97-b3b8-4e31-b704-8e565385ab26\" (UID: \"e8bb6d97-b3b8-4e31-b704-8e565385ab26\") " Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.106753 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e8bb6d97-b3b8-4e31-b704-8e565385ab26-host-var-lib-cni-networks-ovn-kubernetes\") pod \"e8bb6d97-b3b8-4e31-b704-8e565385ab26\" (UID: \"e8bb6d97-b3b8-4e31-b704-8e565385ab26\") " Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.106772 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/e8bb6d97-b3b8-4e31-b704-8e565385ab26-node-log\") pod \"e8bb6d97-b3b8-4e31-b704-8e565385ab26\" (UID: \"e8bb6d97-b3b8-4e31-b704-8e565385ab26\") " Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.106826 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/e8bb6d97-b3b8-4e31-b704-8e565385ab26-ovn-node-metrics-cert\") pod 
\"e8bb6d97-b3b8-4e31-b704-8e565385ab26\" (UID: \"e8bb6d97-b3b8-4e31-b704-8e565385ab26\") " Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.106846 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/e8bb6d97-b3b8-4e31-b704-8e565385ab26-env-overrides\") pod \"e8bb6d97-b3b8-4e31-b704-8e565385ab26\" (UID: \"e8bb6d97-b3b8-4e31-b704-8e565385ab26\") " Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.106863 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/e8bb6d97-b3b8-4e31-b704-8e565385ab26-systemd-units\") pod \"e8bb6d97-b3b8-4e31-b704-8e565385ab26\" (UID: \"e8bb6d97-b3b8-4e31-b704-8e565385ab26\") " Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.106926 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e8bb6d97-b3b8-4e31-b704-8e565385ab26-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "e8bb6d97-b3b8-4e31-b704-8e565385ab26" (UID: "e8bb6d97-b3b8-4e31-b704-8e565385ab26"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.106962 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e8bb6d97-b3b8-4e31-b704-8e565385ab26-host-slash" (OuterVolumeSpecName: "host-slash") pod "e8bb6d97-b3b8-4e31-b704-8e565385ab26" (UID: "e8bb6d97-b3b8-4e31-b704-8e565385ab26"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.106985 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e8bb6d97-b3b8-4e31-b704-8e565385ab26-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "e8bb6d97-b3b8-4e31-b704-8e565385ab26" (UID: "e8bb6d97-b3b8-4e31-b704-8e565385ab26"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.107008 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e8bb6d97-b3b8-4e31-b704-8e565385ab26-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "e8bb6d97-b3b8-4e31-b704-8e565385ab26" (UID: "e8bb6d97-b3b8-4e31-b704-8e565385ab26"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.107029 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e8bb6d97-b3b8-4e31-b704-8e565385ab26-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "e8bb6d97-b3b8-4e31-b704-8e565385ab26" (UID: "e8bb6d97-b3b8-4e31-b704-8e565385ab26"). InnerVolumeSpecName "host-kubelet". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.107044 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a91a67db-c0f5-4c55-8e84-bea013d635d8-host-cni-netd\") pod \"ovnkube-node-6zplb\" (UID: \"a91a67db-c0f5-4c55-8e84-bea013d635d8\") " pod="openshift-ovn-kubernetes/ovnkube-node-6zplb" Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.107087 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/a91a67db-c0f5-4c55-8e84-bea013d635d8-node-log\") pod \"ovnkube-node-6zplb\" (UID: \"a91a67db-c0f5-4c55-8e84-bea013d635d8\") " pod="openshift-ovn-kubernetes/ovnkube-node-6zplb" Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.107117 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/a91a67db-c0f5-4c55-8e84-bea013d635d8-run-ovn\") pod \"ovnkube-node-6zplb\" (UID: \"a91a67db-c0f5-4c55-8e84-bea013d635d8\") " pod="openshift-ovn-kubernetes/ovnkube-node-6zplb" Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.107192 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/a91a67db-c0f5-4c55-8e84-bea013d635d8-ovnkube-script-lib\") pod \"ovnkube-node-6zplb\" (UID: \"a91a67db-c0f5-4c55-8e84-bea013d635d8\") " pod="openshift-ovn-kubernetes/ovnkube-node-6zplb" Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.107217 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a91a67db-c0f5-4c55-8e84-bea013d635d8-run-openvswitch\") pod \"ovnkube-node-6zplb\" (UID: \"a91a67db-c0f5-4c55-8e84-bea013d635d8\") " pod="openshift-ovn-kubernetes/ovnkube-node-6zplb" Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.107240 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/a91a67db-c0f5-4c55-8e84-bea013d635d8-ovn-node-metrics-cert\") pod \"ovnkube-node-6zplb\" (UID: \"a91a67db-c0f5-4c55-8e84-bea013d635d8\") " pod="openshift-ovn-kubernetes/ovnkube-node-6zplb" Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.107264 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/a91a67db-c0f5-4c55-8e84-bea013d635d8-env-overrides\") pod \"ovnkube-node-6zplb\" (UID: \"a91a67db-c0f5-4c55-8e84-bea013d635d8\") " pod="openshift-ovn-kubernetes/ovnkube-node-6zplb" Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.107288 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/a91a67db-c0f5-4c55-8e84-bea013d635d8-run-systemd\") pod \"ovnkube-node-6zplb\" (UID: \"a91a67db-c0f5-4c55-8e84-bea013d635d8\") " pod="openshift-ovn-kubernetes/ovnkube-node-6zplb" Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.107307 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a91a67db-c0f5-4c55-8e84-bea013d635d8-var-lib-openvswitch\") pod 
\"ovnkube-node-6zplb\" (UID: \"a91a67db-c0f5-4c55-8e84-bea013d635d8\") " pod="openshift-ovn-kubernetes/ovnkube-node-6zplb" Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.107331 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a91a67db-c0f5-4c55-8e84-bea013d635d8-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-6zplb\" (UID: \"a91a67db-c0f5-4c55-8e84-bea013d635d8\") " pod="openshift-ovn-kubernetes/ovnkube-node-6zplb" Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.107355 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wgmjg\" (UniqueName: \"kubernetes.io/projected/a91a67db-c0f5-4c55-8e84-bea013d635d8-kube-api-access-wgmjg\") pod \"ovnkube-node-6zplb\" (UID: \"a91a67db-c0f5-4c55-8e84-bea013d635d8\") " pod="openshift-ovn-kubernetes/ovnkube-node-6zplb" Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.107378 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/a91a67db-c0f5-4c55-8e84-bea013d635d8-host-slash\") pod \"ovnkube-node-6zplb\" (UID: \"a91a67db-c0f5-4c55-8e84-bea013d635d8\") " pod="openshift-ovn-kubernetes/ovnkube-node-6zplb" Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.107408 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/a91a67db-c0f5-4c55-8e84-bea013d635d8-host-kubelet\") pod \"ovnkube-node-6zplb\" (UID: \"a91a67db-c0f5-4c55-8e84-bea013d635d8\") " pod="openshift-ovn-kubernetes/ovnkube-node-6zplb" Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.107439 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/a91a67db-c0f5-4c55-8e84-bea013d635d8-systemd-units\") pod \"ovnkube-node-6zplb\" (UID: \"a91a67db-c0f5-4c55-8e84-bea013d635d8\") " pod="openshift-ovn-kubernetes/ovnkube-node-6zplb" Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.107461 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/a91a67db-c0f5-4c55-8e84-bea013d635d8-host-run-netns\") pod \"ovnkube-node-6zplb\" (UID: \"a91a67db-c0f5-4c55-8e84-bea013d635d8\") " pod="openshift-ovn-kubernetes/ovnkube-node-6zplb" Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.107508 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a91a67db-c0f5-4c55-8e84-bea013d635d8-etc-openvswitch\") pod \"ovnkube-node-6zplb\" (UID: \"a91a67db-c0f5-4c55-8e84-bea013d635d8\") " pod="openshift-ovn-kubernetes/ovnkube-node-6zplb" Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.107538 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/a91a67db-c0f5-4c55-8e84-bea013d635d8-host-cni-bin\") pod \"ovnkube-node-6zplb\" (UID: \"a91a67db-c0f5-4c55-8e84-bea013d635d8\") " pod="openshift-ovn-kubernetes/ovnkube-node-6zplb" Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.107567 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/a91a67db-c0f5-4c55-8e84-bea013d635d8-ovnkube-config\") pod \"ovnkube-node-6zplb\" (UID: \"a91a67db-c0f5-4c55-8e84-bea013d635d8\") " pod="openshift-ovn-kubernetes/ovnkube-node-6zplb" Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.107588 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/a91a67db-c0f5-4c55-8e84-bea013d635d8-log-socket\") pod \"ovnkube-node-6zplb\" (UID: \"a91a67db-c0f5-4c55-8e84-bea013d635d8\") " pod="openshift-ovn-kubernetes/ovnkube-node-6zplb" Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.107612 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a91a67db-c0f5-4c55-8e84-bea013d635d8-host-run-ovn-kubernetes\") pod \"ovnkube-node-6zplb\" (UID: \"a91a67db-c0f5-4c55-8e84-bea013d635d8\") " pod="openshift-ovn-kubernetes/ovnkube-node-6zplb" Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.107671 4881 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e8bb6d97-b3b8-4e31-b704-8e565385ab26-host-cni-netd\") on node \"crc\" DevicePath \"\"" Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.107685 4881 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/e8bb6d97-b3b8-4e31-b704-8e565385ab26-host-cni-bin\") on node \"crc\" DevicePath \"\"" Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.107699 4881 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e8bb6d97-b3b8-4e31-b704-8e565385ab26-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.107731 4881 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e8bb6d97-b3b8-4e31-b704-8e565385ab26-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.107743 4881 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/e8bb6d97-b3b8-4e31-b704-8e565385ab26-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.107754 4881 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e8bb6d97-b3b8-4e31-b704-8e565385ab26-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.107765 4881 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/e8bb6d97-b3b8-4e31-b704-8e565385ab26-host-run-netns\") on node \"crc\" DevicePath \"\"" Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.107775 4881 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/e8bb6d97-b3b8-4e31-b704-8e565385ab26-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.107801 4881 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/e8bb6d97-b3b8-4e31-b704-8e565385ab26-host-kubelet\") on node \"crc\" DevicePath \"\"" Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.107813 
4881 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/e8bb6d97-b3b8-4e31-b704-8e565385ab26-host-slash\") on node \"crc\" DevicePath \"\"" Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.107050 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e8bb6d97-b3b8-4e31-b704-8e565385ab26-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "e8bb6d97-b3b8-4e31-b704-8e565385ab26" (UID: "e8bb6d97-b3b8-4e31-b704-8e565385ab26"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.107505 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e8bb6d97-b3b8-4e31-b704-8e565385ab26-log-socket" (OuterVolumeSpecName: "log-socket") pod "e8bb6d97-b3b8-4e31-b704-8e565385ab26" (UID: "e8bb6d97-b3b8-4e31-b704-8e565385ab26"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.107520 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e8bb6d97-b3b8-4e31-b704-8e565385ab26-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "e8bb6d97-b3b8-4e31-b704-8e565385ab26" (UID: "e8bb6d97-b3b8-4e31-b704-8e565385ab26"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.107929 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e8bb6d97-b3b8-4e31-b704-8e565385ab26-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "e8bb6d97-b3b8-4e31-b704-8e565385ab26" (UID: "e8bb6d97-b3b8-4e31-b704-8e565385ab26"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.107956 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e8bb6d97-b3b8-4e31-b704-8e565385ab26-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "e8bb6d97-b3b8-4e31-b704-8e565385ab26" (UID: "e8bb6d97-b3b8-4e31-b704-8e565385ab26"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.107975 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e8bb6d97-b3b8-4e31-b704-8e565385ab26-node-log" (OuterVolumeSpecName: "node-log") pod "e8bb6d97-b3b8-4e31-b704-8e565385ab26" (UID: "e8bb6d97-b3b8-4e31-b704-8e565385ab26"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.107941 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e8bb6d97-b3b8-4e31-b704-8e565385ab26-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "e8bb6d97-b3b8-4e31-b704-8e565385ab26" (UID: "e8bb6d97-b3b8-4e31-b704-8e565385ab26"). InnerVolumeSpecName "systemd-units". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.112447 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e8bb6d97-b3b8-4e31-b704-8e565385ab26-kube-api-access-kz6fb" (OuterVolumeSpecName: "kube-api-access-kz6fb") pod "e8bb6d97-b3b8-4e31-b704-8e565385ab26" (UID: "e8bb6d97-b3b8-4e31-b704-8e565385ab26"). InnerVolumeSpecName "kube-api-access-kz6fb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.112645 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e8bb6d97-b3b8-4e31-b704-8e565385ab26-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "e8bb6d97-b3b8-4e31-b704-8e565385ab26" (UID: "e8bb6d97-b3b8-4e31-b704-8e565385ab26"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.120326 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e8bb6d97-b3b8-4e31-b704-8e565385ab26-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "e8bb6d97-b3b8-4e31-b704-8e565385ab26" (UID: "e8bb6d97-b3b8-4e31-b704-8e565385ab26"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.209404 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/a91a67db-c0f5-4c55-8e84-bea013d635d8-host-kubelet\") pod \"ovnkube-node-6zplb\" (UID: \"a91a67db-c0f5-4c55-8e84-bea013d635d8\") " pod="openshift-ovn-kubernetes/ovnkube-node-6zplb" Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.209482 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/a91a67db-c0f5-4c55-8e84-bea013d635d8-systemd-units\") pod \"ovnkube-node-6zplb\" (UID: \"a91a67db-c0f5-4c55-8e84-bea013d635d8\") " pod="openshift-ovn-kubernetes/ovnkube-node-6zplb" Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.209500 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/a91a67db-c0f5-4c55-8e84-bea013d635d8-host-run-netns\") pod \"ovnkube-node-6zplb\" (UID: \"a91a67db-c0f5-4c55-8e84-bea013d635d8\") " pod="openshift-ovn-kubernetes/ovnkube-node-6zplb" Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.209522 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a91a67db-c0f5-4c55-8e84-bea013d635d8-etc-openvswitch\") pod \"ovnkube-node-6zplb\" (UID: \"a91a67db-c0f5-4c55-8e84-bea013d635d8\") " pod="openshift-ovn-kubernetes/ovnkube-node-6zplb" Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.209538 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/a91a67db-c0f5-4c55-8e84-bea013d635d8-host-cni-bin\") pod \"ovnkube-node-6zplb\" (UID: \"a91a67db-c0f5-4c55-8e84-bea013d635d8\") " pod="openshift-ovn-kubernetes/ovnkube-node-6zplb" Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.209532 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: 
\"kubernetes.io/host-path/a91a67db-c0f5-4c55-8e84-bea013d635d8-host-kubelet\") pod \"ovnkube-node-6zplb\" (UID: \"a91a67db-c0f5-4c55-8e84-bea013d635d8\") " pod="openshift-ovn-kubernetes/ovnkube-node-6zplb" Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.209603 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/a91a67db-c0f5-4c55-8e84-bea013d635d8-host-run-netns\") pod \"ovnkube-node-6zplb\" (UID: \"a91a67db-c0f5-4c55-8e84-bea013d635d8\") " pod="openshift-ovn-kubernetes/ovnkube-node-6zplb" Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.209614 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a91a67db-c0f5-4c55-8e84-bea013d635d8-etc-openvswitch\") pod \"ovnkube-node-6zplb\" (UID: \"a91a67db-c0f5-4c55-8e84-bea013d635d8\") " pod="openshift-ovn-kubernetes/ovnkube-node-6zplb" Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.209666 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/a91a67db-c0f5-4c55-8e84-bea013d635d8-systemd-units\") pod \"ovnkube-node-6zplb\" (UID: \"a91a67db-c0f5-4c55-8e84-bea013d635d8\") " pod="openshift-ovn-kubernetes/ovnkube-node-6zplb" Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.209635 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/a91a67db-c0f5-4c55-8e84-bea013d635d8-host-cni-bin\") pod \"ovnkube-node-6zplb\" (UID: \"a91a67db-c0f5-4c55-8e84-bea013d635d8\") " pod="openshift-ovn-kubernetes/ovnkube-node-6zplb" Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.209571 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/a91a67db-c0f5-4c55-8e84-bea013d635d8-ovnkube-config\") pod \"ovnkube-node-6zplb\" (UID: \"a91a67db-c0f5-4c55-8e84-bea013d635d8\") " pod="openshift-ovn-kubernetes/ovnkube-node-6zplb" Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.209703 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/a91a67db-c0f5-4c55-8e84-bea013d635d8-log-socket\") pod \"ovnkube-node-6zplb\" (UID: \"a91a67db-c0f5-4c55-8e84-bea013d635d8\") " pod="openshift-ovn-kubernetes/ovnkube-node-6zplb" Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.209719 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a91a67db-c0f5-4c55-8e84-bea013d635d8-host-run-ovn-kubernetes\") pod \"ovnkube-node-6zplb\" (UID: \"a91a67db-c0f5-4c55-8e84-bea013d635d8\") " pod="openshift-ovn-kubernetes/ovnkube-node-6zplb" Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.209734 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a91a67db-c0f5-4c55-8e84-bea013d635d8-host-cni-netd\") pod \"ovnkube-node-6zplb\" (UID: \"a91a67db-c0f5-4c55-8e84-bea013d635d8\") " pod="openshift-ovn-kubernetes/ovnkube-node-6zplb" Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.209760 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/a91a67db-c0f5-4c55-8e84-bea013d635d8-node-log\") pod \"ovnkube-node-6zplb\" (UID: 
\"a91a67db-c0f5-4c55-8e84-bea013d635d8\") " pod="openshift-ovn-kubernetes/ovnkube-node-6zplb" Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.209798 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/a91a67db-c0f5-4c55-8e84-bea013d635d8-run-ovn\") pod \"ovnkube-node-6zplb\" (UID: \"a91a67db-c0f5-4c55-8e84-bea013d635d8\") " pod="openshift-ovn-kubernetes/ovnkube-node-6zplb" Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.209828 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/a91a67db-c0f5-4c55-8e84-bea013d635d8-ovnkube-script-lib\") pod \"ovnkube-node-6zplb\" (UID: \"a91a67db-c0f5-4c55-8e84-bea013d635d8\") " pod="openshift-ovn-kubernetes/ovnkube-node-6zplb" Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.209846 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a91a67db-c0f5-4c55-8e84-bea013d635d8-run-openvswitch\") pod \"ovnkube-node-6zplb\" (UID: \"a91a67db-c0f5-4c55-8e84-bea013d635d8\") " pod="openshift-ovn-kubernetes/ovnkube-node-6zplb" Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.209861 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/a91a67db-c0f5-4c55-8e84-bea013d635d8-ovn-node-metrics-cert\") pod \"ovnkube-node-6zplb\" (UID: \"a91a67db-c0f5-4c55-8e84-bea013d635d8\") " pod="openshift-ovn-kubernetes/ovnkube-node-6zplb" Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.209879 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/a91a67db-c0f5-4c55-8e84-bea013d635d8-env-overrides\") pod \"ovnkube-node-6zplb\" (UID: \"a91a67db-c0f5-4c55-8e84-bea013d635d8\") " pod="openshift-ovn-kubernetes/ovnkube-node-6zplb" Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.209896 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/a91a67db-c0f5-4c55-8e84-bea013d635d8-run-systemd\") pod \"ovnkube-node-6zplb\" (UID: \"a91a67db-c0f5-4c55-8e84-bea013d635d8\") " pod="openshift-ovn-kubernetes/ovnkube-node-6zplb" Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.209911 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a91a67db-c0f5-4c55-8e84-bea013d635d8-var-lib-openvswitch\") pod \"ovnkube-node-6zplb\" (UID: \"a91a67db-c0f5-4c55-8e84-bea013d635d8\") " pod="openshift-ovn-kubernetes/ovnkube-node-6zplb" Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.210004 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a91a67db-c0f5-4c55-8e84-bea013d635d8-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-6zplb\" (UID: \"a91a67db-c0f5-4c55-8e84-bea013d635d8\") " pod="openshift-ovn-kubernetes/ovnkube-node-6zplb" Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.210075 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wgmjg\" (UniqueName: \"kubernetes.io/projected/a91a67db-c0f5-4c55-8e84-bea013d635d8-kube-api-access-wgmjg\") pod \"ovnkube-node-6zplb\" (UID: 
\"a91a67db-c0f5-4c55-8e84-bea013d635d8\") " pod="openshift-ovn-kubernetes/ovnkube-node-6zplb" Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.210095 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/a91a67db-c0f5-4c55-8e84-bea013d635d8-host-slash\") pod \"ovnkube-node-6zplb\" (UID: \"a91a67db-c0f5-4c55-8e84-bea013d635d8\") " pod="openshift-ovn-kubernetes/ovnkube-node-6zplb" Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.210139 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kz6fb\" (UniqueName: \"kubernetes.io/projected/e8bb6d97-b3b8-4e31-b704-8e565385ab26-kube-api-access-kz6fb\") on node \"crc\" DevicePath \"\"" Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.210149 4881 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e8bb6d97-b3b8-4e31-b704-8e565385ab26-run-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.210158 4881 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/e8bb6d97-b3b8-4e31-b704-8e565385ab26-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.210169 4881 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/e8bb6d97-b3b8-4e31-b704-8e565385ab26-log-socket\") on node \"crc\" DevicePath \"\"" Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.210178 4881 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/e8bb6d97-b3b8-4e31-b704-8e565385ab26-run-systemd\") on node \"crc\" DevicePath \"\"" Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.210188 4881 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e8bb6d97-b3b8-4e31-b704-8e565385ab26-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.210198 4881 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/e8bb6d97-b3b8-4e31-b704-8e565385ab26-node-log\") on node \"crc\" DevicePath \"\"" Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.210206 4881 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/e8bb6d97-b3b8-4e31-b704-8e565385ab26-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.210214 4881 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/e8bb6d97-b3b8-4e31-b704-8e565385ab26-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.210222 4881 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/e8bb6d97-b3b8-4e31-b704-8e565385ab26-systemd-units\") on node \"crc\" DevicePath \"\"" Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.210248 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/a91a67db-c0f5-4c55-8e84-bea013d635d8-host-slash\") pod \"ovnkube-node-6zplb\" (UID: \"a91a67db-c0f5-4c55-8e84-bea013d635d8\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-6zplb" Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.210269 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/a91a67db-c0f5-4c55-8e84-bea013d635d8-log-socket\") pod \"ovnkube-node-6zplb\" (UID: \"a91a67db-c0f5-4c55-8e84-bea013d635d8\") " pod="openshift-ovn-kubernetes/ovnkube-node-6zplb" Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.210289 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a91a67db-c0f5-4c55-8e84-bea013d635d8-host-run-ovn-kubernetes\") pod \"ovnkube-node-6zplb\" (UID: \"a91a67db-c0f5-4c55-8e84-bea013d635d8\") " pod="openshift-ovn-kubernetes/ovnkube-node-6zplb" Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.210309 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a91a67db-c0f5-4c55-8e84-bea013d635d8-host-cni-netd\") pod \"ovnkube-node-6zplb\" (UID: \"a91a67db-c0f5-4c55-8e84-bea013d635d8\") " pod="openshift-ovn-kubernetes/ovnkube-node-6zplb" Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.210329 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/a91a67db-c0f5-4c55-8e84-bea013d635d8-node-log\") pod \"ovnkube-node-6zplb\" (UID: \"a91a67db-c0f5-4c55-8e84-bea013d635d8\") " pod="openshift-ovn-kubernetes/ovnkube-node-6zplb" Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.210328 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/a91a67db-c0f5-4c55-8e84-bea013d635d8-ovnkube-config\") pod \"ovnkube-node-6zplb\" (UID: \"a91a67db-c0f5-4c55-8e84-bea013d635d8\") " pod="openshift-ovn-kubernetes/ovnkube-node-6zplb" Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.210366 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/a91a67db-c0f5-4c55-8e84-bea013d635d8-run-systemd\") pod \"ovnkube-node-6zplb\" (UID: \"a91a67db-c0f5-4c55-8e84-bea013d635d8\") " pod="openshift-ovn-kubernetes/ovnkube-node-6zplb" Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.210386 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a91a67db-c0f5-4c55-8e84-bea013d635d8-var-lib-openvswitch\") pod \"ovnkube-node-6zplb\" (UID: \"a91a67db-c0f5-4c55-8e84-bea013d635d8\") " pod="openshift-ovn-kubernetes/ovnkube-node-6zplb" Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.210408 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a91a67db-c0f5-4c55-8e84-bea013d635d8-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-6zplb\" (UID: \"a91a67db-c0f5-4c55-8e84-bea013d635d8\") " pod="openshift-ovn-kubernetes/ovnkube-node-6zplb" Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.210705 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/a91a67db-c0f5-4c55-8e84-bea013d635d8-env-overrides\") pod \"ovnkube-node-6zplb\" (UID: \"a91a67db-c0f5-4c55-8e84-bea013d635d8\") " pod="openshift-ovn-kubernetes/ovnkube-node-6zplb" Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.210754 4881 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/a91a67db-c0f5-4c55-8e84-bea013d635d8-run-ovn\") pod \"ovnkube-node-6zplb\" (UID: \"a91a67db-c0f5-4c55-8e84-bea013d635d8\") " pod="openshift-ovn-kubernetes/ovnkube-node-6zplb" Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.211316 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/a91a67db-c0f5-4c55-8e84-bea013d635d8-ovnkube-script-lib\") pod \"ovnkube-node-6zplb\" (UID: \"a91a67db-c0f5-4c55-8e84-bea013d635d8\") " pod="openshift-ovn-kubernetes/ovnkube-node-6zplb" Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.211361 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a91a67db-c0f5-4c55-8e84-bea013d635d8-run-openvswitch\") pod \"ovnkube-node-6zplb\" (UID: \"a91a67db-c0f5-4c55-8e84-bea013d635d8\") " pod="openshift-ovn-kubernetes/ovnkube-node-6zplb" Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.215457 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/a91a67db-c0f5-4c55-8e84-bea013d635d8-ovn-node-metrics-cert\") pod \"ovnkube-node-6zplb\" (UID: \"a91a67db-c0f5-4c55-8e84-bea013d635d8\") " pod="openshift-ovn-kubernetes/ovnkube-node-6zplb" Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.227427 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wgmjg\" (UniqueName: \"kubernetes.io/projected/a91a67db-c0f5-4c55-8e84-bea013d635d8-kube-api-access-wgmjg\") pod \"ovnkube-node-6zplb\" (UID: \"a91a67db-c0f5-4c55-8e84-bea013d635d8\") " pod="openshift-ovn-kubernetes/ovnkube-node-6zplb" Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.364960 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-6zplb" Jan 21 11:09:14 crc kubenswrapper[4881]: W0121 11:09:14.386827 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda91a67db_c0f5_4c55_8e84_bea013d635d8.slice/crio-0e012d8de72d92869fde4655c9c49b3e09d459cb824deef01a7961522e4e160e WatchSource:0}: Error finding container 0e012d8de72d92869fde4655c9c49b3e09d459cb824deef01a7961522e4e160e: Status 404 returned error can't find the container with id 0e012d8de72d92869fde4655c9c49b3e09d459cb824deef01a7961522e4e160e Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.623923 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-687f57d79b-csqtv" Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.834421 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-fs42r_09da9e14-f6d5-4346-a4a0-c17711e3b603/kube-multus/1.log" Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.834501 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-fs42r" event={"ID":"09da9e14-f6d5-4346-a4a0-c17711e3b603","Type":"ContainerStarted","Data":"fb9e5e2cf8dadd445787c765b905521bee2d9a16e6fce0aac52c49f34c828713"} Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.841584 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bx64f_e8bb6d97-b3b8-4e31-b704-8e565385ab26/ovn-acl-logging/0.log" Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.842051 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bx64f_e8bb6d97-b3b8-4e31-b704-8e565385ab26/ovn-controller/0.log" Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.842528 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" event={"ID":"e8bb6d97-b3b8-4e31-b704-8e565385ab26","Type":"ContainerDied","Data":"a06b3458bc6abd92816719b2c657b7e45cd4d79bda9753bf86e22c8e99a3027c"} Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.842605 4881 scope.go:117] "RemoveContainer" containerID="d5e11e8e5cd4b0f5d5b59050f20100006189356085839bd098e65e66ddf3accb" Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.842551 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-bx64f" Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.844301 4881 generic.go:334] "Generic (PLEG): container finished" podID="a91a67db-c0f5-4c55-8e84-bea013d635d8" containerID="d3e8393a708912b620f5e14e2013c207e4959dc41b6e81113d0c0ac8a1a442a0" exitCode=0 Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.844335 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6zplb" event={"ID":"a91a67db-c0f5-4c55-8e84-bea013d635d8","Type":"ContainerDied","Data":"d3e8393a708912b620f5e14e2013c207e4959dc41b6e81113d0c0ac8a1a442a0"} Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.844359 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6zplb" event={"ID":"a91a67db-c0f5-4c55-8e84-bea013d635d8","Type":"ContainerStarted","Data":"0e012d8de72d92869fde4655c9c49b3e09d459cb824deef01a7961522e4e160e"} Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.876589 4881 scope.go:117] "RemoveContainer" containerID="47c176d51b5558e85897ac8c72c02bb6f152a972d8e941aeb114333f777e22ef" Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.905586 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-bx64f"] Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.913879 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-bx64f"] Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.917280 4881 scope.go:117] "RemoveContainer" containerID="9454db8c9af3187ee06e017ac192668036b1f565305a37de61c3cffe4d94e7db" Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.940757 4881 scope.go:117] "RemoveContainer" containerID="f7fa3136ee876ebb14cd5164f9176c91c60cc04fd7882a3ad0957ce3a36184d6" Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.970649 4881 scope.go:117] "RemoveContainer" containerID="599bfaf6bfc412f779f4732cf9f4273382c1f0af20576581264ea4fc63b2ec38" Jan 21 11:09:14 crc kubenswrapper[4881]: I0121 11:09:14.990985 4881 scope.go:117] "RemoveContainer" containerID="e45d7cde8c26fc00bb1b5ad5cc88c41aff02acf0860856e86f3c3bfdc01e7045" Jan 21 11:09:15 crc kubenswrapper[4881]: I0121 11:09:15.010811 4881 scope.go:117] "RemoveContainer" containerID="b2c1d1496a7f7e5fad6a3d4ade4104e8e0c40fd0ca4722ca5e1ca1fbf0ebf3cb" Jan 21 11:09:15 crc kubenswrapper[4881]: I0121 11:09:15.032033 4881 scope.go:117] "RemoveContainer" containerID="d29c170fdf57f1f4c678e2902443d9ef467ac3157370e997fd5596b4eecfb54e" Jan 21 11:09:15 crc kubenswrapper[4881]: I0121 11:09:15.053654 4881 scope.go:117] "RemoveContainer" containerID="db879e2b1a43a0ae350c4777ee6355bf5b97b1d5977e909ee0968c77852519dd" Jan 21 11:09:15 crc kubenswrapper[4881]: I0121 11:09:15.318541 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e8bb6d97-b3b8-4e31-b704-8e565385ab26" path="/var/lib/kubelet/pods/e8bb6d97-b3b8-4e31-b704-8e565385ab26/volumes" Jan 21 11:09:15 crc kubenswrapper[4881]: I0121 11:09:15.856242 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6zplb" event={"ID":"a91a67db-c0f5-4c55-8e84-bea013d635d8","Type":"ContainerStarted","Data":"a4725048a64c4e17e4af56b9f0f6b04b5a55ef0c14f491c09e2fe39c6be0318d"} Jan 21 11:09:15 crc kubenswrapper[4881]: I0121 11:09:15.856317 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6zplb" 
event={"ID":"a91a67db-c0f5-4c55-8e84-bea013d635d8","Type":"ContainerStarted","Data":"a478e4a8979018f2acec5b4287a08c99b39860a144ac2ddd45e87a9e040109f1"} Jan 21 11:09:15 crc kubenswrapper[4881]: I0121 11:09:15.856342 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6zplb" event={"ID":"a91a67db-c0f5-4c55-8e84-bea013d635d8","Type":"ContainerStarted","Data":"e8c6c84126fdb3b2719f792c2385e8724a341e3996df3de0d5f86a747404a3d3"} Jan 21 11:09:15 crc kubenswrapper[4881]: I0121 11:09:15.856357 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6zplb" event={"ID":"a91a67db-c0f5-4c55-8e84-bea013d635d8","Type":"ContainerStarted","Data":"2144d25eae82c27e114599e7589d6e03f970d068cef8cc80ff9b650beba5440c"} Jan 21 11:09:15 crc kubenswrapper[4881]: I0121 11:09:15.856368 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6zplb" event={"ID":"a91a67db-c0f5-4c55-8e84-bea013d635d8","Type":"ContainerStarted","Data":"9d8b2814d009b89a7b9c947e01341e1d6bf0ba6feb3289ba739ecbc7d693a99a"} Jan 21 11:09:15 crc kubenswrapper[4881]: I0121 11:09:15.856379 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6zplb" event={"ID":"a91a67db-c0f5-4c55-8e84-bea013d635d8","Type":"ContainerStarted","Data":"7024df4a849a9c3072244b84f5effd45379ecc7d07d0dd890f4a027255244eed"} Jan 21 11:09:18 crc kubenswrapper[4881]: I0121 11:09:18.885444 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6zplb" event={"ID":"a91a67db-c0f5-4c55-8e84-bea013d635d8","Type":"ContainerStarted","Data":"7a69d816f1d1650253dc14e4afaa0acc554cf2e4aae031e84fc8be1626d15637"} Jan 21 11:09:21 crc kubenswrapper[4881]: I0121 11:09:21.913312 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6zplb" event={"ID":"a91a67db-c0f5-4c55-8e84-bea013d635d8","Type":"ContainerStarted","Data":"5685e4bcc7bbdf6712541b8ca39fd0d9d2d1d34c28cdeca8299f5c2650fb05c0"} Jan 21 11:09:21 crc kubenswrapper[4881]: I0121 11:09:21.915199 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-6zplb" Jan 21 11:09:21 crc kubenswrapper[4881]: I0121 11:09:21.915241 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-6zplb" Jan 21 11:09:21 crc kubenswrapper[4881]: I0121 11:09:21.915301 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-6zplb" Jan 21 11:09:21 crc kubenswrapper[4881]: I0121 11:09:21.946204 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-6zplb" Jan 21 11:09:21 crc kubenswrapper[4881]: I0121 11:09:21.949568 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-6zplb" podStartSLOduration=7.949541963 podStartE2EDuration="7.949541963s" podCreationTimestamp="2026-01-21 11:09:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 11:09:21.94813276 +0000 UTC m=+749.208089229" watchObservedRunningTime="2026-01-21 11:09:21.949541963 +0000 UTC m=+749.209498432" Jan 21 11:09:21 crc kubenswrapper[4881]: I0121 11:09:21.951773 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-ovn-kubernetes/ovnkube-node-6zplb" Jan 21 11:09:44 crc kubenswrapper[4881]: I0121 11:09:44.436595 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-6zplb" Jan 21 11:09:47 crc kubenswrapper[4881]: I0121 11:09:47.419311 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08dld2x"] Jan 21 11:09:47 crc kubenswrapper[4881]: I0121 11:09:47.420963 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08dld2x" Jan 21 11:09:47 crc kubenswrapper[4881]: I0121 11:09:47.423729 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 21 11:09:47 crc kubenswrapper[4881]: I0121 11:09:47.436585 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08dld2x"] Jan 21 11:09:47 crc kubenswrapper[4881]: I0121 11:09:47.563197 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/31ed4736-a43c-4891-aeb4-e09d573a30b3-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08dld2x\" (UID: \"31ed4736-a43c-4891-aeb4-e09d573a30b3\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08dld2x" Jan 21 11:09:47 crc kubenswrapper[4881]: I0121 11:09:47.563699 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gftjk\" (UniqueName: \"kubernetes.io/projected/31ed4736-a43c-4891-aeb4-e09d573a30b3-kube-api-access-gftjk\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08dld2x\" (UID: \"31ed4736-a43c-4891-aeb4-e09d573a30b3\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08dld2x" Jan 21 11:09:47 crc kubenswrapper[4881]: I0121 11:09:47.564070 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/31ed4736-a43c-4891-aeb4-e09d573a30b3-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08dld2x\" (UID: \"31ed4736-a43c-4891-aeb4-e09d573a30b3\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08dld2x" Jan 21 11:09:47 crc kubenswrapper[4881]: I0121 11:09:47.666081 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/31ed4736-a43c-4891-aeb4-e09d573a30b3-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08dld2x\" (UID: \"31ed4736-a43c-4891-aeb4-e09d573a30b3\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08dld2x" Jan 21 11:09:47 crc kubenswrapper[4881]: I0121 11:09:47.666127 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gftjk\" (UniqueName: \"kubernetes.io/projected/31ed4736-a43c-4891-aeb4-e09d573a30b3-kube-api-access-gftjk\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08dld2x\" (UID: \"31ed4736-a43c-4891-aeb4-e09d573a30b3\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08dld2x" Jan 21 11:09:47 crc kubenswrapper[4881]: I0121 11:09:47.666174 4881 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/31ed4736-a43c-4891-aeb4-e09d573a30b3-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08dld2x\" (UID: \"31ed4736-a43c-4891-aeb4-e09d573a30b3\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08dld2x" Jan 21 11:09:47 crc kubenswrapper[4881]: I0121 11:09:47.666775 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/31ed4736-a43c-4891-aeb4-e09d573a30b3-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08dld2x\" (UID: \"31ed4736-a43c-4891-aeb4-e09d573a30b3\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08dld2x" Jan 21 11:09:47 crc kubenswrapper[4881]: I0121 11:09:47.666806 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/31ed4736-a43c-4891-aeb4-e09d573a30b3-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08dld2x\" (UID: \"31ed4736-a43c-4891-aeb4-e09d573a30b3\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08dld2x" Jan 21 11:09:47 crc kubenswrapper[4881]: I0121 11:09:47.695923 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gftjk\" (UniqueName: \"kubernetes.io/projected/31ed4736-a43c-4891-aeb4-e09d573a30b3-kube-api-access-gftjk\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08dld2x\" (UID: \"31ed4736-a43c-4891-aeb4-e09d573a30b3\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08dld2x" Jan 21 11:09:47 crc kubenswrapper[4881]: I0121 11:09:47.748595 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08dld2x" Jan 21 11:09:48 crc kubenswrapper[4881]: I0121 11:09:48.177114 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08dld2x"] Jan 21 11:09:49 crc kubenswrapper[4881]: I0121 11:09:49.088763 4881 generic.go:334] "Generic (PLEG): container finished" podID="31ed4736-a43c-4891-aeb4-e09d573a30b3" containerID="c2a56a521d759800c9653b77ec0ef19cc98db2ff50ec2ac953c6bdf463eef3f0" exitCode=0 Jan 21 11:09:49 crc kubenswrapper[4881]: I0121 11:09:49.088983 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08dld2x" event={"ID":"31ed4736-a43c-4891-aeb4-e09d573a30b3","Type":"ContainerDied","Data":"c2a56a521d759800c9653b77ec0ef19cc98db2ff50ec2ac953c6bdf463eef3f0"} Jan 21 11:09:49 crc kubenswrapper[4881]: I0121 11:09:49.089114 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08dld2x" event={"ID":"31ed4736-a43c-4891-aeb4-e09d573a30b3","Type":"ContainerStarted","Data":"59eca7aeecaa5488e578bb8d01ce90db7f1786d13aa2b2c8774bd4b63d6ef339"} Jan 21 11:09:49 crc kubenswrapper[4881]: I0121 11:09:49.757470 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-qcdp7"] Jan 21 11:09:49 crc kubenswrapper[4881]: I0121 11:09:49.758731 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-qcdp7" Jan 21 11:09:49 crc kubenswrapper[4881]: I0121 11:09:49.774329 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qcdp7"] Jan 21 11:09:49 crc kubenswrapper[4881]: I0121 11:09:49.898952 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0b0b6a69-9749-44d9-a00e-1e2ab801ffb5-utilities\") pod \"redhat-operators-qcdp7\" (UID: \"0b0b6a69-9749-44d9-a00e-1e2ab801ffb5\") " pod="openshift-marketplace/redhat-operators-qcdp7" Jan 21 11:09:49 crc kubenswrapper[4881]: I0121 11:09:49.899008 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f784f\" (UniqueName: \"kubernetes.io/projected/0b0b6a69-9749-44d9-a00e-1e2ab801ffb5-kube-api-access-f784f\") pod \"redhat-operators-qcdp7\" (UID: \"0b0b6a69-9749-44d9-a00e-1e2ab801ffb5\") " pod="openshift-marketplace/redhat-operators-qcdp7" Jan 21 11:09:49 crc kubenswrapper[4881]: I0121 11:09:49.899036 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0b0b6a69-9749-44d9-a00e-1e2ab801ffb5-catalog-content\") pod \"redhat-operators-qcdp7\" (UID: \"0b0b6a69-9749-44d9-a00e-1e2ab801ffb5\") " pod="openshift-marketplace/redhat-operators-qcdp7" Jan 21 11:09:50 crc kubenswrapper[4881]: I0121 11:09:50.000710 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0b0b6a69-9749-44d9-a00e-1e2ab801ffb5-utilities\") pod \"redhat-operators-qcdp7\" (UID: \"0b0b6a69-9749-44d9-a00e-1e2ab801ffb5\") " pod="openshift-marketplace/redhat-operators-qcdp7" Jan 21 11:09:50 crc kubenswrapper[4881]: I0121 11:09:50.000802 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f784f\" (UniqueName: \"kubernetes.io/projected/0b0b6a69-9749-44d9-a00e-1e2ab801ffb5-kube-api-access-f784f\") pod \"redhat-operators-qcdp7\" (UID: \"0b0b6a69-9749-44d9-a00e-1e2ab801ffb5\") " pod="openshift-marketplace/redhat-operators-qcdp7" Jan 21 11:09:50 crc kubenswrapper[4881]: I0121 11:09:50.000894 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0b0b6a69-9749-44d9-a00e-1e2ab801ffb5-catalog-content\") pod \"redhat-operators-qcdp7\" (UID: \"0b0b6a69-9749-44d9-a00e-1e2ab801ffb5\") " pod="openshift-marketplace/redhat-operators-qcdp7" Jan 21 11:09:50 crc kubenswrapper[4881]: I0121 11:09:50.001426 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0b0b6a69-9749-44d9-a00e-1e2ab801ffb5-utilities\") pod \"redhat-operators-qcdp7\" (UID: \"0b0b6a69-9749-44d9-a00e-1e2ab801ffb5\") " pod="openshift-marketplace/redhat-operators-qcdp7" Jan 21 11:09:50 crc kubenswrapper[4881]: I0121 11:09:50.001485 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0b0b6a69-9749-44d9-a00e-1e2ab801ffb5-catalog-content\") pod \"redhat-operators-qcdp7\" (UID: \"0b0b6a69-9749-44d9-a00e-1e2ab801ffb5\") " pod="openshift-marketplace/redhat-operators-qcdp7" Jan 21 11:09:50 crc kubenswrapper[4881]: I0121 11:09:50.033405 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-f784f\" (UniqueName: \"kubernetes.io/projected/0b0b6a69-9749-44d9-a00e-1e2ab801ffb5-kube-api-access-f784f\") pod \"redhat-operators-qcdp7\" (UID: \"0b0b6a69-9749-44d9-a00e-1e2ab801ffb5\") " pod="openshift-marketplace/redhat-operators-qcdp7" Jan 21 11:09:50 crc kubenswrapper[4881]: I0121 11:09:50.087980 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qcdp7" Jan 21 11:09:50 crc kubenswrapper[4881]: I0121 11:09:50.512195 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qcdp7"] Jan 21 11:09:50 crc kubenswrapper[4881]: W0121 11:09:50.522587 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0b0b6a69_9749_44d9_a00e_1e2ab801ffb5.slice/crio-98532678eeab7c4042478a6c9766f4371541822211024856817d2abded4b5cbf WatchSource:0}: Error finding container 98532678eeab7c4042478a6c9766f4371541822211024856817d2abded4b5cbf: Status 404 returned error can't find the container with id 98532678eeab7c4042478a6c9766f4371541822211024856817d2abded4b5cbf Jan 21 11:09:51 crc kubenswrapper[4881]: I0121 11:09:51.104091 4881 generic.go:334] "Generic (PLEG): container finished" podID="31ed4736-a43c-4891-aeb4-e09d573a30b3" containerID="feaf7a7c35393a3016bf0e0da39270751fc90da64abf56d09a63cf394acffd6d" exitCode=0 Jan 21 11:09:51 crc kubenswrapper[4881]: I0121 11:09:51.104148 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08dld2x" event={"ID":"31ed4736-a43c-4891-aeb4-e09d573a30b3","Type":"ContainerDied","Data":"feaf7a7c35393a3016bf0e0da39270751fc90da64abf56d09a63cf394acffd6d"} Jan 21 11:09:51 crc kubenswrapper[4881]: I0121 11:09:51.106013 4881 generic.go:334] "Generic (PLEG): container finished" podID="0b0b6a69-9749-44d9-a00e-1e2ab801ffb5" containerID="8d96b6ac2acd440f7e60cdd073c30593c6e0c4417e979419134016d123abd969" exitCode=0 Jan 21 11:09:51 crc kubenswrapper[4881]: I0121 11:09:51.106049 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qcdp7" event={"ID":"0b0b6a69-9749-44d9-a00e-1e2ab801ffb5","Type":"ContainerDied","Data":"8d96b6ac2acd440f7e60cdd073c30593c6e0c4417e979419134016d123abd969"} Jan 21 11:09:51 crc kubenswrapper[4881]: I0121 11:09:51.106069 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qcdp7" event={"ID":"0b0b6a69-9749-44d9-a00e-1e2ab801ffb5","Type":"ContainerStarted","Data":"98532678eeab7c4042478a6c9766f4371541822211024856817d2abded4b5cbf"} Jan 21 11:09:52 crc kubenswrapper[4881]: I0121 11:09:52.123827 4881 generic.go:334] "Generic (PLEG): container finished" podID="31ed4736-a43c-4891-aeb4-e09d573a30b3" containerID="ed22a5764e1b97078db6eeb1512ee4dbaf13083258d1f179d89e99f7e3bdd2d4" exitCode=0 Jan 21 11:09:52 crc kubenswrapper[4881]: I0121 11:09:52.123900 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08dld2x" event={"ID":"31ed4736-a43c-4891-aeb4-e09d573a30b3","Type":"ContainerDied","Data":"ed22a5764e1b97078db6eeb1512ee4dbaf13083258d1f179d89e99f7e3bdd2d4"} Jan 21 11:09:52 crc kubenswrapper[4881]: I0121 11:09:52.127591 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qcdp7" 
event={"ID":"0b0b6a69-9749-44d9-a00e-1e2ab801ffb5","Type":"ContainerStarted","Data":"6c72489f579e659d3691891984c6b73c6e38f55451044ec4d36e63d9b6a30869"} Jan 21 11:09:53 crc kubenswrapper[4881]: I0121 11:09:53.731829 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08dld2x" Jan 21 11:09:53 crc kubenswrapper[4881]: I0121 11:09:53.856507 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/31ed4736-a43c-4891-aeb4-e09d573a30b3-util\") pod \"31ed4736-a43c-4891-aeb4-e09d573a30b3\" (UID: \"31ed4736-a43c-4891-aeb4-e09d573a30b3\") " Jan 21 11:09:53 crc kubenswrapper[4881]: I0121 11:09:53.856681 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/31ed4736-a43c-4891-aeb4-e09d573a30b3-bundle\") pod \"31ed4736-a43c-4891-aeb4-e09d573a30b3\" (UID: \"31ed4736-a43c-4891-aeb4-e09d573a30b3\") " Jan 21 11:09:53 crc kubenswrapper[4881]: I0121 11:09:53.856832 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gftjk\" (UniqueName: \"kubernetes.io/projected/31ed4736-a43c-4891-aeb4-e09d573a30b3-kube-api-access-gftjk\") pod \"31ed4736-a43c-4891-aeb4-e09d573a30b3\" (UID: \"31ed4736-a43c-4891-aeb4-e09d573a30b3\") " Jan 21 11:09:53 crc kubenswrapper[4881]: I0121 11:09:53.861299 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/31ed4736-a43c-4891-aeb4-e09d573a30b3-bundle" (OuterVolumeSpecName: "bundle") pod "31ed4736-a43c-4891-aeb4-e09d573a30b3" (UID: "31ed4736-a43c-4891-aeb4-e09d573a30b3"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 11:09:53 crc kubenswrapper[4881]: I0121 11:09:53.873602 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/31ed4736-a43c-4891-aeb4-e09d573a30b3-util" (OuterVolumeSpecName: "util") pod "31ed4736-a43c-4891-aeb4-e09d573a30b3" (UID: "31ed4736-a43c-4891-aeb4-e09d573a30b3"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 11:09:53 crc kubenswrapper[4881]: I0121 11:09:53.912770 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31ed4736-a43c-4891-aeb4-e09d573a30b3-kube-api-access-gftjk" (OuterVolumeSpecName: "kube-api-access-gftjk") pod "31ed4736-a43c-4891-aeb4-e09d573a30b3" (UID: "31ed4736-a43c-4891-aeb4-e09d573a30b3"). InnerVolumeSpecName "kube-api-access-gftjk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:09:53 crc kubenswrapper[4881]: I0121 11:09:53.958989 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gftjk\" (UniqueName: \"kubernetes.io/projected/31ed4736-a43c-4891-aeb4-e09d573a30b3-kube-api-access-gftjk\") on node \"crc\" DevicePath \"\"" Jan 21 11:09:53 crc kubenswrapper[4881]: I0121 11:09:53.959032 4881 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/31ed4736-a43c-4891-aeb4-e09d573a30b3-util\") on node \"crc\" DevicePath \"\"" Jan 21 11:09:53 crc kubenswrapper[4881]: I0121 11:09:53.959041 4881 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/31ed4736-a43c-4891-aeb4-e09d573a30b3-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 11:09:54 crc kubenswrapper[4881]: I0121 11:09:54.243807 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08dld2x" event={"ID":"31ed4736-a43c-4891-aeb4-e09d573a30b3","Type":"ContainerDied","Data":"59eca7aeecaa5488e578bb8d01ce90db7f1786d13aa2b2c8774bd4b63d6ef339"} Jan 21 11:09:54 crc kubenswrapper[4881]: I0121 11:09:54.243858 4881 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="59eca7aeecaa5488e578bb8d01ce90db7f1786d13aa2b2c8774bd4b63d6ef339" Jan 21 11:09:54 crc kubenswrapper[4881]: I0121 11:09:54.243953 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08dld2x" Jan 21 11:09:55 crc kubenswrapper[4881]: I0121 11:09:55.266742 4881 generic.go:334] "Generic (PLEG): container finished" podID="0b0b6a69-9749-44d9-a00e-1e2ab801ffb5" containerID="6c72489f579e659d3691891984c6b73c6e38f55451044ec4d36e63d9b6a30869" exitCode=0 Jan 21 11:09:55 crc kubenswrapper[4881]: I0121 11:09:55.266857 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qcdp7" event={"ID":"0b0b6a69-9749-44d9-a00e-1e2ab801ffb5","Type":"ContainerDied","Data":"6c72489f579e659d3691891984c6b73c6e38f55451044ec4d36e63d9b6a30869"} Jan 21 11:09:56 crc kubenswrapper[4881]: I0121 11:09:56.277833 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qcdp7" event={"ID":"0b0b6a69-9749-44d9-a00e-1e2ab801ffb5","Type":"ContainerStarted","Data":"caff78396a524a2b7173fa89076846a700461a26e3edd64b51c4f8b958b5c232"} Jan 21 11:09:56 crc kubenswrapper[4881]: I0121 11:09:56.302984 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-qcdp7" podStartSLOduration=2.674724336 podStartE2EDuration="7.302964407s" podCreationTimestamp="2026-01-21 11:09:49 +0000 UTC" firstStartedPulling="2026-01-21 11:09:51.107644157 +0000 UTC m=+778.367600616" lastFinishedPulling="2026-01-21 11:09:55.735884218 +0000 UTC m=+782.995840687" observedRunningTime="2026-01-21 11:09:56.301032049 +0000 UTC m=+783.560988518" watchObservedRunningTime="2026-01-21 11:09:56.302964407 +0000 UTC m=+783.562920876" Jan 21 11:09:59 crc kubenswrapper[4881]: I0121 11:09:59.850994 4881 patch_prober.go:28] interesting pod/machine-config-daemon-fb4fr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 11:09:59 crc 
kubenswrapper[4881]: I0121 11:09:59.852349 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 11:10:00 crc kubenswrapper[4881]: I0121 11:10:00.088375 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-qcdp7" Jan 21 11:10:00 crc kubenswrapper[4881]: I0121 11:10:00.089622 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-qcdp7" Jan 21 11:10:01 crc kubenswrapper[4881]: I0121 11:10:01.361090 4881 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-qcdp7" podUID="0b0b6a69-9749-44d9-a00e-1e2ab801ffb5" containerName="registry-server" probeResult="failure" output=< Jan 21 11:10:01 crc kubenswrapper[4881]: timeout: failed to connect service ":50051" within 1s Jan 21 11:10:01 crc kubenswrapper[4881]: > Jan 21 11:10:06 crc kubenswrapper[4881]: I0121 11:10:06.620205 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-rp92p"] Jan 21 11:10:06 crc kubenswrapper[4881]: E0121 11:10:06.621597 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31ed4736-a43c-4891-aeb4-e09d573a30b3" containerName="extract" Jan 21 11:10:06 crc kubenswrapper[4881]: I0121 11:10:06.621676 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="31ed4736-a43c-4891-aeb4-e09d573a30b3" containerName="extract" Jan 21 11:10:06 crc kubenswrapper[4881]: E0121 11:10:06.621738 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31ed4736-a43c-4891-aeb4-e09d573a30b3" containerName="pull" Jan 21 11:10:06 crc kubenswrapper[4881]: I0121 11:10:06.621809 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="31ed4736-a43c-4891-aeb4-e09d573a30b3" containerName="pull" Jan 21 11:10:06 crc kubenswrapper[4881]: E0121 11:10:06.621873 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31ed4736-a43c-4891-aeb4-e09d573a30b3" containerName="util" Jan 21 11:10:06 crc kubenswrapper[4881]: I0121 11:10:06.621930 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="31ed4736-a43c-4891-aeb4-e09d573a30b3" containerName="util" Jan 21 11:10:06 crc kubenswrapper[4881]: I0121 11:10:06.622108 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="31ed4736-a43c-4891-aeb4-e09d573a30b3" containerName="extract" Jan 21 11:10:06 crc kubenswrapper[4881]: I0121 11:10:06.622638 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-rp92p" Jan 21 11:10:06 crc kubenswrapper[4881]: I0121 11:10:06.627443 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-dockercfg-nmb98" Jan 21 11:10:06 crc kubenswrapper[4881]: I0121 11:10:06.627541 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"kube-root-ca.crt" Jan 21 11:10:06 crc kubenswrapper[4881]: I0121 11:10:06.627454 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"openshift-service-ca.crt" Jan 21 11:10:06 crc kubenswrapper[4881]: I0121 11:10:06.634667 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-rp92p"] Jan 21 11:10:06 crc kubenswrapper[4881]: I0121 11:10:06.681305 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rlqxv\" (UniqueName: \"kubernetes.io/projected/999c36a2-9f08-4da1-b14a-859ac888ae38-kube-api-access-rlqxv\") pod \"obo-prometheus-operator-68bc856cb9-rp92p\" (UID: \"999c36a2-9f08-4da1-b14a-859ac888ae38\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-rp92p" Jan 21 11:10:06 crc kubenswrapper[4881]: I0121 11:10:06.782723 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rlqxv\" (UniqueName: \"kubernetes.io/projected/999c36a2-9f08-4da1-b14a-859ac888ae38-kube-api-access-rlqxv\") pod \"obo-prometheus-operator-68bc856cb9-rp92p\" (UID: \"999c36a2-9f08-4da1-b14a-859ac888ae38\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-rp92p" Jan 21 11:10:06 crc kubenswrapper[4881]: I0121 11:10:06.793419 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-75db897d97-n5xvb"] Jan 21 11:10:06 crc kubenswrapper[4881]: I0121 11:10:06.794748 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-75db897d97-n5xvb" Jan 21 11:10:06 crc kubenswrapper[4881]: I0121 11:10:06.799253 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert" Jan 21 11:10:06 crc kubenswrapper[4881]: I0121 11:10:06.800609 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-dockercfg-kbbml" Jan 21 11:10:06 crc kubenswrapper[4881]: I0121 11:10:06.817871 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rlqxv\" (UniqueName: \"kubernetes.io/projected/999c36a2-9f08-4da1-b14a-859ac888ae38-kube-api-access-rlqxv\") pod \"obo-prometheus-operator-68bc856cb9-rp92p\" (UID: \"999c36a2-9f08-4da1-b14a-859ac888ae38\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-rp92p" Jan 21 11:10:06 crc kubenswrapper[4881]: I0121 11:10:06.822518 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-75db897d97-h5vzg"] Jan 21 11:10:06 crc kubenswrapper[4881]: I0121 11:10:06.823466 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-75db897d97-h5vzg" Jan 21 11:10:06 crc kubenswrapper[4881]: I0121 11:10:06.827942 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-75db897d97-n5xvb"] Jan 21 11:10:06 crc kubenswrapper[4881]: I0121 11:10:06.870095 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-75db897d97-h5vzg"] Jan 21 11:10:06 crc kubenswrapper[4881]: I0121 11:10:06.884897 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/952218f5-7dfc-40d5-a1df-2c462e1e4dcc-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-75db897d97-n5xvb\" (UID: \"952218f5-7dfc-40d5-a1df-2c462e1e4dcc\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-75db897d97-n5xvb" Jan 21 11:10:06 crc kubenswrapper[4881]: I0121 11:10:06.885373 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/952218f5-7dfc-40d5-a1df-2c462e1e4dcc-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-75db897d97-n5xvb\" (UID: \"952218f5-7dfc-40d5-a1df-2c462e1e4dcc\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-75db897d97-n5xvb" Jan 21 11:10:06 crc kubenswrapper[4881]: I0121 11:10:06.885516 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c2181303-fd96-43e5-b6f2-158cca65c0b4-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-75db897d97-h5vzg\" (UID: \"c2181303-fd96-43e5-b6f2-158cca65c0b4\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-75db897d97-h5vzg" Jan 21 11:10:06 crc kubenswrapper[4881]: I0121 11:10:06.885648 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c2181303-fd96-43e5-b6f2-158cca65c0b4-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-75db897d97-h5vzg\" (UID: \"c2181303-fd96-43e5-b6f2-158cca65c0b4\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-75db897d97-h5vzg" Jan 21 11:10:06 crc kubenswrapper[4881]: I0121 11:10:06.945886 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-rp92p" Jan 21 11:10:06 crc kubenswrapper[4881]: I0121 11:10:06.987298 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/952218f5-7dfc-40d5-a1df-2c462e1e4dcc-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-75db897d97-n5xvb\" (UID: \"952218f5-7dfc-40d5-a1df-2c462e1e4dcc\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-75db897d97-n5xvb" Jan 21 11:10:06 crc kubenswrapper[4881]: I0121 11:10:06.987373 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/952218f5-7dfc-40d5-a1df-2c462e1e4dcc-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-75db897d97-n5xvb\" (UID: \"952218f5-7dfc-40d5-a1df-2c462e1e4dcc\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-75db897d97-n5xvb" Jan 21 11:10:06 crc kubenswrapper[4881]: I0121 11:10:06.987421 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c2181303-fd96-43e5-b6f2-158cca65c0b4-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-75db897d97-h5vzg\" (UID: \"c2181303-fd96-43e5-b6f2-158cca65c0b4\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-75db897d97-h5vzg" Jan 21 11:10:06 crc kubenswrapper[4881]: I0121 11:10:06.987472 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c2181303-fd96-43e5-b6f2-158cca65c0b4-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-75db897d97-h5vzg\" (UID: \"c2181303-fd96-43e5-b6f2-158cca65c0b4\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-75db897d97-h5vzg" Jan 21 11:10:07 crc kubenswrapper[4881]: I0121 11:10:06.999841 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c2181303-fd96-43e5-b6f2-158cca65c0b4-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-75db897d97-h5vzg\" (UID: \"c2181303-fd96-43e5-b6f2-158cca65c0b4\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-75db897d97-h5vzg" Jan 21 11:10:07 crc kubenswrapper[4881]: I0121 11:10:07.002267 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/952218f5-7dfc-40d5-a1df-2c462e1e4dcc-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-75db897d97-n5xvb\" (UID: \"952218f5-7dfc-40d5-a1df-2c462e1e4dcc\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-75db897d97-n5xvb" Jan 21 11:10:07 crc kubenswrapper[4881]: I0121 11:10:07.012389 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c2181303-fd96-43e5-b6f2-158cca65c0b4-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-75db897d97-h5vzg\" (UID: \"c2181303-fd96-43e5-b6f2-158cca65c0b4\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-75db897d97-h5vzg" Jan 21 11:10:07 crc kubenswrapper[4881]: I0121 11:10:07.017288 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/952218f5-7dfc-40d5-a1df-2c462e1e4dcc-webhook-cert\") pod 
\"obo-prometheus-operator-admission-webhook-75db897d97-n5xvb\" (UID: \"952218f5-7dfc-40d5-a1df-2c462e1e4dcc\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-75db897d97-n5xvb" Jan 21 11:10:07 crc kubenswrapper[4881]: I0121 11:10:07.043216 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-tfzsc"] Jan 21 11:10:07 crc kubenswrapper[4881]: I0121 11:10:07.044572 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-tfzsc" Jan 21 11:10:07 crc kubenswrapper[4881]: I0121 11:10:07.047748 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-tls" Jan 21 11:10:07 crc kubenswrapper[4881]: I0121 11:10:07.050978 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-sa-dockercfg-rj78c" Jan 21 11:10:07 crc kubenswrapper[4881]: I0121 11:10:07.090548 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-tfzsc"] Jan 21 11:10:07 crc kubenswrapper[4881]: I0121 11:10:07.118201 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-75db897d97-n5xvb" Jan 21 11:10:07 crc kubenswrapper[4881]: I0121 11:10:07.150104 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-75db897d97-h5vzg" Jan 21 11:10:07 crc kubenswrapper[4881]: I0121 11:10:07.190552 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/19be64a6-6795-4219-8d58-47f744ef8e13-observability-operator-tls\") pod \"observability-operator-59bdc8b94-tfzsc\" (UID: \"19be64a6-6795-4219-8d58-47f744ef8e13\") " pod="openshift-operators/observability-operator-59bdc8b94-tfzsc" Jan 21 11:10:07 crc kubenswrapper[4881]: I0121 11:10:07.191016 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vtnd9\" (UniqueName: \"kubernetes.io/projected/19be64a6-6795-4219-8d58-47f744ef8e13-kube-api-access-vtnd9\") pod \"observability-operator-59bdc8b94-tfzsc\" (UID: \"19be64a6-6795-4219-8d58-47f744ef8e13\") " pod="openshift-operators/observability-operator-59bdc8b94-tfzsc" Jan 21 11:10:07 crc kubenswrapper[4881]: I0121 11:10:07.248733 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-6srxm"] Jan 21 11:10:07 crc kubenswrapper[4881]: I0121 11:10:07.250756 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-6srxm" Jan 21 11:10:07 crc kubenswrapper[4881]: I0121 11:10:07.255589 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"perses-operator-dockercfg-65rjm" Jan 21 11:10:07 crc kubenswrapper[4881]: I0121 11:10:07.267068 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-6srxm"] Jan 21 11:10:07 crc kubenswrapper[4881]: I0121 11:10:07.293150 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vtnd9\" (UniqueName: \"kubernetes.io/projected/19be64a6-6795-4219-8d58-47f744ef8e13-kube-api-access-vtnd9\") pod \"observability-operator-59bdc8b94-tfzsc\" (UID: \"19be64a6-6795-4219-8d58-47f744ef8e13\") " pod="openshift-operators/observability-operator-59bdc8b94-tfzsc" Jan 21 11:10:07 crc kubenswrapper[4881]: I0121 11:10:07.293231 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/19be64a6-6795-4219-8d58-47f744ef8e13-observability-operator-tls\") pod \"observability-operator-59bdc8b94-tfzsc\" (UID: \"19be64a6-6795-4219-8d58-47f744ef8e13\") " pod="openshift-operators/observability-operator-59bdc8b94-tfzsc" Jan 21 11:10:07 crc kubenswrapper[4881]: I0121 11:10:07.293331 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gcj9n\" (UniqueName: \"kubernetes.io/projected/1cfbfa78-5e7c-4a57-9d98-e11fb36d0f50-kube-api-access-gcj9n\") pod \"perses-operator-5bf474d74f-6srxm\" (UID: \"1cfbfa78-5e7c-4a57-9d98-e11fb36d0f50\") " pod="openshift-operators/perses-operator-5bf474d74f-6srxm" Jan 21 11:10:07 crc kubenswrapper[4881]: I0121 11:10:07.293374 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/1cfbfa78-5e7c-4a57-9d98-e11fb36d0f50-openshift-service-ca\") pod \"perses-operator-5bf474d74f-6srxm\" (UID: \"1cfbfa78-5e7c-4a57-9d98-e11fb36d0f50\") " pod="openshift-operators/perses-operator-5bf474d74f-6srxm" Jan 21 11:10:07 crc kubenswrapper[4881]: I0121 11:10:07.299516 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/19be64a6-6795-4219-8d58-47f744ef8e13-observability-operator-tls\") pod \"observability-operator-59bdc8b94-tfzsc\" (UID: \"19be64a6-6795-4219-8d58-47f744ef8e13\") " pod="openshift-operators/observability-operator-59bdc8b94-tfzsc" Jan 21 11:10:07 crc kubenswrapper[4881]: I0121 11:10:07.345555 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vtnd9\" (UniqueName: \"kubernetes.io/projected/19be64a6-6795-4219-8d58-47f744ef8e13-kube-api-access-vtnd9\") pod \"observability-operator-59bdc8b94-tfzsc\" (UID: \"19be64a6-6795-4219-8d58-47f744ef8e13\") " pod="openshift-operators/observability-operator-59bdc8b94-tfzsc" Jan 21 11:10:07 crc kubenswrapper[4881]: I0121 11:10:07.363972 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-tfzsc" Jan 21 11:10:07 crc kubenswrapper[4881]: I0121 11:10:07.395028 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/1cfbfa78-5e7c-4a57-9d98-e11fb36d0f50-openshift-service-ca\") pod \"perses-operator-5bf474d74f-6srxm\" (UID: \"1cfbfa78-5e7c-4a57-9d98-e11fb36d0f50\") " pod="openshift-operators/perses-operator-5bf474d74f-6srxm" Jan 21 11:10:07 crc kubenswrapper[4881]: I0121 11:10:07.395238 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gcj9n\" (UniqueName: \"kubernetes.io/projected/1cfbfa78-5e7c-4a57-9d98-e11fb36d0f50-kube-api-access-gcj9n\") pod \"perses-operator-5bf474d74f-6srxm\" (UID: \"1cfbfa78-5e7c-4a57-9d98-e11fb36d0f50\") " pod="openshift-operators/perses-operator-5bf474d74f-6srxm" Jan 21 11:10:07 crc kubenswrapper[4881]: I0121 11:10:07.398426 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/1cfbfa78-5e7c-4a57-9d98-e11fb36d0f50-openshift-service-ca\") pod \"perses-operator-5bf474d74f-6srxm\" (UID: \"1cfbfa78-5e7c-4a57-9d98-e11fb36d0f50\") " pod="openshift-operators/perses-operator-5bf474d74f-6srxm" Jan 21 11:10:07 crc kubenswrapper[4881]: I0121 11:10:07.426547 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gcj9n\" (UniqueName: \"kubernetes.io/projected/1cfbfa78-5e7c-4a57-9d98-e11fb36d0f50-kube-api-access-gcj9n\") pod \"perses-operator-5bf474d74f-6srxm\" (UID: \"1cfbfa78-5e7c-4a57-9d98-e11fb36d0f50\") " pod="openshift-operators/perses-operator-5bf474d74f-6srxm" Jan 21 11:10:07 crc kubenswrapper[4881]: I0121 11:10:07.634368 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-6srxm" Jan 21 11:10:08 crc kubenswrapper[4881]: I0121 11:10:08.609658 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-rp92p"] Jan 21 11:10:08 crc kubenswrapper[4881]: I0121 11:10:08.766911 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-6srxm"] Jan 21 11:10:08 crc kubenswrapper[4881]: W0121 11:10:08.768919 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1cfbfa78_5e7c_4a57_9d98_e11fb36d0f50.slice/crio-29d6e582a45a89f893e70dc747c3d30492687e38b9e2a00344cf54adb1b12764 WatchSource:0}: Error finding container 29d6e582a45a89f893e70dc747c3d30492687e38b9e2a00344cf54adb1b12764: Status 404 returned error can't find the container with id 29d6e582a45a89f893e70dc747c3d30492687e38b9e2a00344cf54adb1b12764 Jan 21 11:10:08 crc kubenswrapper[4881]: I0121 11:10:08.835377 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-tfzsc"] Jan 21 11:10:08 crc kubenswrapper[4881]: I0121 11:10:08.927799 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-75db897d97-h5vzg"] Jan 21 11:10:08 crc kubenswrapper[4881]: I0121 11:10:08.970397 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-75db897d97-n5xvb"] Jan 21 11:10:09 crc kubenswrapper[4881]: I0121 11:10:09.701934 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-75db897d97-n5xvb" event={"ID":"952218f5-7dfc-40d5-a1df-2c462e1e4dcc","Type":"ContainerStarted","Data":"ca1991c1fe099cb2d669d1556a3f32de2ee53253fe42bbb64fdbee0199a2c8cf"} Jan 21 11:10:09 crc kubenswrapper[4881]: I0121 11:10:09.703595 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-75db897d97-h5vzg" event={"ID":"c2181303-fd96-43e5-b6f2-158cca65c0b4","Type":"ContainerStarted","Data":"7fcc611037f50df47e76edc764dfdfd5cfaedff64681ab53d2c9269b4961e76c"} Jan 21 11:10:09 crc kubenswrapper[4881]: I0121 11:10:09.704579 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-6srxm" event={"ID":"1cfbfa78-5e7c-4a57-9d98-e11fb36d0f50","Type":"ContainerStarted","Data":"29d6e582a45a89f893e70dc747c3d30492687e38b9e2a00344cf54adb1b12764"} Jan 21 11:10:09 crc kubenswrapper[4881]: I0121 11:10:09.705417 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-rp92p" event={"ID":"999c36a2-9f08-4da1-b14a-859ac888ae38","Type":"ContainerStarted","Data":"65f830bd8d0ac124324c1d731cb461efd52d6fdf91617bb3de2eed67af920956"} Jan 21 11:10:09 crc kubenswrapper[4881]: I0121 11:10:09.706249 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-tfzsc" event={"ID":"19be64a6-6795-4219-8d58-47f744ef8e13","Type":"ContainerStarted","Data":"09322c086c2445daa49e5e3bca74eeb493a75c74b89b9522118b07ac62da1250"} Jan 21 11:10:10 crc kubenswrapper[4881]: I0121 11:10:10.176610 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-qcdp7" Jan 21 11:10:10 crc kubenswrapper[4881]: I0121 11:10:10.262714 4881 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-qcdp7" Jan 21 11:10:10 crc kubenswrapper[4881]: I0121 11:10:10.425129 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-qcdp7"] Jan 21 11:10:11 crc kubenswrapper[4881]: I0121 11:10:11.720530 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-qcdp7" podUID="0b0b6a69-9749-44d9-a00e-1e2ab801ffb5" containerName="registry-server" containerID="cri-o://caff78396a524a2b7173fa89076846a700461a26e3edd64b51c4f8b958b5c232" gracePeriod=2 Jan 21 11:10:12 crc kubenswrapper[4881]: I0121 11:10:12.796630 4881 generic.go:334] "Generic (PLEG): container finished" podID="0b0b6a69-9749-44d9-a00e-1e2ab801ffb5" containerID="caff78396a524a2b7173fa89076846a700461a26e3edd64b51c4f8b958b5c232" exitCode=0 Jan 21 11:10:12 crc kubenswrapper[4881]: I0121 11:10:12.796906 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qcdp7" event={"ID":"0b0b6a69-9749-44d9-a00e-1e2ab801ffb5","Type":"ContainerDied","Data":"caff78396a524a2b7173fa89076846a700461a26e3edd64b51c4f8b958b5c232"} Jan 21 11:10:13 crc kubenswrapper[4881]: I0121 11:10:13.820292 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qcdp7" event={"ID":"0b0b6a69-9749-44d9-a00e-1e2ab801ffb5","Type":"ContainerDied","Data":"98532678eeab7c4042478a6c9766f4371541822211024856817d2abded4b5cbf"} Jan 21 11:10:13 crc kubenswrapper[4881]: I0121 11:10:13.820351 4881 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="98532678eeab7c4042478a6c9766f4371541822211024856817d2abded4b5cbf" Jan 21 11:10:13 crc kubenswrapper[4881]: I0121 11:10:13.905027 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qcdp7" Jan 21 11:10:14 crc kubenswrapper[4881]: I0121 11:10:14.062883 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0b0b6a69-9749-44d9-a00e-1e2ab801ffb5-utilities\") pod \"0b0b6a69-9749-44d9-a00e-1e2ab801ffb5\" (UID: \"0b0b6a69-9749-44d9-a00e-1e2ab801ffb5\") " Jan 21 11:10:14 crc kubenswrapper[4881]: I0121 11:10:14.062951 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0b0b6a69-9749-44d9-a00e-1e2ab801ffb5-catalog-content\") pod \"0b0b6a69-9749-44d9-a00e-1e2ab801ffb5\" (UID: \"0b0b6a69-9749-44d9-a00e-1e2ab801ffb5\") " Jan 21 11:10:14 crc kubenswrapper[4881]: I0121 11:10:14.063100 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f784f\" (UniqueName: \"kubernetes.io/projected/0b0b6a69-9749-44d9-a00e-1e2ab801ffb5-kube-api-access-f784f\") pod \"0b0b6a69-9749-44d9-a00e-1e2ab801ffb5\" (UID: \"0b0b6a69-9749-44d9-a00e-1e2ab801ffb5\") " Jan 21 11:10:14 crc kubenswrapper[4881]: I0121 11:10:14.066048 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0b0b6a69-9749-44d9-a00e-1e2ab801ffb5-utilities" (OuterVolumeSpecName: "utilities") pod "0b0b6a69-9749-44d9-a00e-1e2ab801ffb5" (UID: "0b0b6a69-9749-44d9-a00e-1e2ab801ffb5"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 11:10:14 crc kubenswrapper[4881]: I0121 11:10:14.072345 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b0b6a69-9749-44d9-a00e-1e2ab801ffb5-kube-api-access-f784f" (OuterVolumeSpecName: "kube-api-access-f784f") pod "0b0b6a69-9749-44d9-a00e-1e2ab801ffb5" (UID: "0b0b6a69-9749-44d9-a00e-1e2ab801ffb5"). InnerVolumeSpecName "kube-api-access-f784f". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:10:14 crc kubenswrapper[4881]: I0121 11:10:14.167644 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f784f\" (UniqueName: \"kubernetes.io/projected/0b0b6a69-9749-44d9-a00e-1e2ab801ffb5-kube-api-access-f784f\") on node \"crc\" DevicePath \"\"" Jan 21 11:10:14 crc kubenswrapper[4881]: I0121 11:10:14.167686 4881 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0b0b6a69-9749-44d9-a00e-1e2ab801ffb5-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 11:10:14 crc kubenswrapper[4881]: I0121 11:10:14.237142 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0b0b6a69-9749-44d9-a00e-1e2ab801ffb5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0b0b6a69-9749-44d9-a00e-1e2ab801ffb5" (UID: "0b0b6a69-9749-44d9-a00e-1e2ab801ffb5"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 11:10:14 crc kubenswrapper[4881]: I0121 11:10:14.269860 4881 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0b0b6a69-9749-44d9-a00e-1e2ab801ffb5-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 11:10:14 crc kubenswrapper[4881]: I0121 11:10:14.825606 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-qcdp7" Jan 21 11:10:14 crc kubenswrapper[4881]: I0121 11:10:14.870538 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-qcdp7"] Jan 21 11:10:14 crc kubenswrapper[4881]: I0121 11:10:14.877187 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-qcdp7"] Jan 21 11:10:15 crc kubenswrapper[4881]: I0121 11:10:15.321711 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b0b6a69-9749-44d9-a00e-1e2ab801ffb5" path="/var/lib/kubelet/pods/0b0b6a69-9749-44d9-a00e-1e2ab801ffb5/volumes" Jan 21 11:10:26 crc kubenswrapper[4881]: E0121 11:10:26.634847 4881 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/cluster-observability-operator/perses-rhel9-operator@sha256:b5c8526d2ae660fe092dd8a7acf18ec4957d5c265890a222f55396fc2cdaeed8" Jan 21 11:10:26 crc kubenswrapper[4881]: E0121 11:10:26.635596 4881 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:perses-operator,Image:registry.redhat.io/cluster-observability-operator/perses-rhel9-operator@sha256:b5c8526d2ae660fe092dd8a7acf18ec4957d5c265890a222f55396fc2cdaeed8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:OPERATOR_CONDITION_NAME,Value:cluster-observability-operator.v1.3.1,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{100 -3} {} 100m DecimalSI},memory: {{134217728 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:openshift-service-ca,ReadOnly:true,MountPath:/ca,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gcj9n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000350000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod perses-operator-5bf474d74f-6srxm_openshift-operators(1cfbfa78-5e7c-4a57-9d98-e11fb36d0f50): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context 
canceled" logger="UnhandledError" Jan 21 11:10:26 crc kubenswrapper[4881]: E0121 11:10:26.637045 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"perses-operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-operators/perses-operator-5bf474d74f-6srxm" podUID="1cfbfa78-5e7c-4a57-9d98-e11fb36d0f50" Jan 21 11:10:27 crc kubenswrapper[4881]: E0121 11:10:27.007535 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"perses-operator\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cluster-observability-operator/perses-rhel9-operator@sha256:b5c8526d2ae660fe092dd8a7acf18ec4957d5c265890a222f55396fc2cdaeed8\\\"\"" pod="openshift-operators/perses-operator-5bf474d74f-6srxm" podUID="1cfbfa78-5e7c-4a57-9d98-e11fb36d0f50" Jan 21 11:10:28 crc kubenswrapper[4881]: E0121 11:10:28.222111 4881 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/cluster-observability-operator/obo-prometheus-rhel9-operator@sha256:e7e5f4c5e8ab0ba298ef0295a7137d438a42eb177d9322212cde6ba8f367912a" Jan 21 11:10:28 crc kubenswrapper[4881]: E0121 11:10:28.222509 4881 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:prometheus-operator,Image:registry.redhat.io/cluster-observability-operator/obo-prometheus-rhel9-operator@sha256:e7e5f4c5e8ab0ba298ef0295a7137d438a42eb177d9322212cde6ba8f367912a,Command:[],Args:[--prometheus-config-reloader=$(RELATED_IMAGE_PROMETHEUS_CONFIG_RELOADER) --prometheus-instance-selector=app.kubernetes.io/managed-by=observability-operator --alertmanager-instance-selector=app.kubernetes.io/managed-by=observability-operator --thanos-ruler-instance-selector=app.kubernetes.io/managed-by=observability-operator --watch-referenced-objects-in-all-namespaces=true --disable-unmanaged-prometheus-configuration=true],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:http,HostPort:0,ContainerPort:8080,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:GOGC,Value:30,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_PROMETHEUS_CONFIG_RELOADER,Value:registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-prometheus-config-reloader-rhel9@sha256:9a2097bc5b2e02bc1703f64c452ce8fe4bc6775b732db930ff4770b76ae4653a,ValueFrom:nil,},EnvVar{Name:OPERATOR_CONDITION_NAME,Value:cluster-observability-operator.v1.3.1,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{100 -3} {} 100m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{157286400 0} {} 150Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-rlqxv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod obo-prometheus-operator-68bc856cb9-rp92p_openshift-operators(999c36a2-9f08-4da1-b14a-859ac888ae38): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 21 11:10:28 crc kubenswrapper[4881]: E0121 11:10:28.224043 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"prometheus-operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-rp92p" podUID="999c36a2-9f08-4da1-b14a-859ac888ae38" Jan 21 11:10:29 crc kubenswrapper[4881]: I0121 11:10:29.022993 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-tfzsc" event={"ID":"19be64a6-6795-4219-8d58-47f744ef8e13","Type":"ContainerStarted","Data":"17e0c2d07ce4246619e0344f14e7c92d918936d15766cb45bda2f876e228395c"} Jan 21 11:10:29 crc kubenswrapper[4881]: I0121 11:10:29.023212 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-59bdc8b94-tfzsc" Jan 21 11:10:29 crc kubenswrapper[4881]: I0121 11:10:29.025054 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-75db897d97-n5xvb" event={"ID":"952218f5-7dfc-40d5-a1df-2c462e1e4dcc","Type":"ContainerStarted","Data":"8c7ccbb502e1aab769bdb56a7cbe8b6a680233a33d735a487cdf56a0358129e3"} Jan 21 11:10:29 crc kubenswrapper[4881]: I0121 11:10:29.025349 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-59bdc8b94-tfzsc" Jan 21 11:10:29 crc kubenswrapper[4881]: I0121 11:10:29.028452 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-75db897d97-h5vzg" event={"ID":"c2181303-fd96-43e5-b6f2-158cca65c0b4","Type":"ContainerStarted","Data":"981ce77d2d713b29004c7b615571658e3dfc3bc52d20c3d79bc9e6731e0fc0ca"} Jan 21 11:10:29 crc kubenswrapper[4881]: E0121 11:10:29.030290 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"prometheus-operator\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cluster-observability-operator/obo-prometheus-rhel9-operator@sha256:e7e5f4c5e8ab0ba298ef0295a7137d438a42eb177d9322212cde6ba8f367912a\\\"\"" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-rp92p" podUID="999c36a2-9f08-4da1-b14a-859ac888ae38" Jan 21 11:10:29 crc 
kubenswrapper[4881]: I0121 11:10:29.051015 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-59bdc8b94-tfzsc" podStartSLOduration=3.586385187 podStartE2EDuration="23.050989442s" podCreationTimestamp="2026-01-21 11:10:06 +0000 UTC" firstStartedPulling="2026-01-21 11:10:08.867449957 +0000 UTC m=+796.127406426" lastFinishedPulling="2026-01-21 11:10:28.332054212 +0000 UTC m=+815.592010681" observedRunningTime="2026-01-21 11:10:29.047915896 +0000 UTC m=+816.307872375" watchObservedRunningTime="2026-01-21 11:10:29.050989442 +0000 UTC m=+816.310945911" Jan 21 11:10:29 crc kubenswrapper[4881]: I0121 11:10:29.078155 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-75db897d97-h5vzg" podStartSLOduration=3.774144918 podStartE2EDuration="23.078130737s" podCreationTimestamp="2026-01-21 11:10:06 +0000 UTC" firstStartedPulling="2026-01-21 11:10:09.011272392 +0000 UTC m=+796.271228861" lastFinishedPulling="2026-01-21 11:10:28.315258211 +0000 UTC m=+815.575214680" observedRunningTime="2026-01-21 11:10:29.075490872 +0000 UTC m=+816.335447351" watchObservedRunningTime="2026-01-21 11:10:29.078130737 +0000 UTC m=+816.338087206" Jan 21 11:10:29 crc kubenswrapper[4881]: I0121 11:10:29.131045 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-75db897d97-n5xvb" podStartSLOduration=3.818455015 podStartE2EDuration="23.131020683s" podCreationTimestamp="2026-01-21 11:10:06 +0000 UTC" firstStartedPulling="2026-01-21 11:10:08.988673229 +0000 UTC m=+796.248629698" lastFinishedPulling="2026-01-21 11:10:28.301238897 +0000 UTC m=+815.561195366" observedRunningTime="2026-01-21 11:10:29.12603403 +0000 UTC m=+816.385990499" watchObservedRunningTime="2026-01-21 11:10:29.131020683 +0000 UTC m=+816.390977152" Jan 21 11:10:29 crc kubenswrapper[4881]: I0121 11:10:29.850831 4881 patch_prober.go:28] interesting pod/machine-config-daemon-fb4fr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 11:10:29 crc kubenswrapper[4881]: I0121 11:10:29.850901 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 11:10:42 crc kubenswrapper[4881]: I0121 11:10:42.145678 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-6srxm" event={"ID":"1cfbfa78-5e7c-4a57-9d98-e11fb36d0f50","Type":"ContainerStarted","Data":"7a0d646d4e071851d7ae6efc7bb55b00951ba41c92e4ee17fd7b4e1ccbaa52ce"} Jan 21 11:10:42 crc kubenswrapper[4881]: I0121 11:10:42.147137 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/perses-operator-5bf474d74f-6srxm" Jan 21 11:10:42 crc kubenswrapper[4881]: I0121 11:10:42.165342 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-5bf474d74f-6srxm" podStartSLOduration=3.022655099 podStartE2EDuration="35.165326859s" podCreationTimestamp="2026-01-21 11:10:07 +0000 UTC" 
firstStartedPulling="2026-01-21 11:10:08.774218483 +0000 UTC m=+796.034174952" lastFinishedPulling="2026-01-21 11:10:40.916890223 +0000 UTC m=+828.176846712" observedRunningTime="2026-01-21 11:10:42.161247709 +0000 UTC m=+829.421204178" watchObservedRunningTime="2026-01-21 11:10:42.165326859 +0000 UTC m=+829.425283328" Jan 21 11:10:43 crc kubenswrapper[4881]: I0121 11:10:43.152225 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-rp92p" event={"ID":"999c36a2-9f08-4da1-b14a-859ac888ae38","Type":"ContainerStarted","Data":"4ce79473caabd6c07995b3e5afa25c90af88575ae15a49fe39ef109530a02b1e"} Jan 21 11:10:43 crc kubenswrapper[4881]: I0121 11:10:43.172680 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-rp92p" podStartSLOduration=3.83139522 podStartE2EDuration="37.172653075s" podCreationTimestamp="2026-01-21 11:10:06 +0000 UTC" firstStartedPulling="2026-01-21 11:10:08.642072454 +0000 UTC m=+795.902028923" lastFinishedPulling="2026-01-21 11:10:41.983330309 +0000 UTC m=+829.243286778" observedRunningTime="2026-01-21 11:10:43.167179402 +0000 UTC m=+830.427135901" watchObservedRunningTime="2026-01-21 11:10:43.172653075 +0000 UTC m=+830.432609564" Jan 21 11:10:47 crc kubenswrapper[4881]: I0121 11:10:47.637382 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-5bf474d74f-6srxm" Jan 21 11:10:59 crc kubenswrapper[4881]: I0121 11:10:59.850494 4881 patch_prober.go:28] interesting pod/machine-config-daemon-fb4fr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 11:10:59 crc kubenswrapper[4881]: I0121 11:10:59.850975 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 11:10:59 crc kubenswrapper[4881]: I0121 11:10:59.851027 4881 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" Jan 21 11:10:59 crc kubenswrapper[4881]: I0121 11:10:59.851730 4881 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c61b3d568dcd0ae9a4c5e1f2de21cf5a0db2cf65652a9e217f03473254856b16"} pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 11:10:59 crc kubenswrapper[4881]: I0121 11:10:59.851808 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" containerName="machine-config-daemon" containerID="cri-o://c61b3d568dcd0ae9a4c5e1f2de21cf5a0db2cf65652a9e217f03473254856b16" gracePeriod=600 Jan 21 11:11:01 crc kubenswrapper[4881]: I0121 11:11:01.272180 4881 generic.go:334] "Generic (PLEG): container finished" podID="3687b313-1df2-4274-80db-8c758b51bf2d" containerID="c61b3d568dcd0ae9a4c5e1f2de21cf5a0db2cf65652a9e217f03473254856b16" exitCode=0 Jan 21 11:11:01 crc 
kubenswrapper[4881]: I0121 11:11:01.272215 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" event={"ID":"3687b313-1df2-4274-80db-8c758b51bf2d","Type":"ContainerDied","Data":"c61b3d568dcd0ae9a4c5e1f2de21cf5a0db2cf65652a9e217f03473254856b16"} Jan 21 11:11:01 crc kubenswrapper[4881]: I0121 11:11:01.272583 4881 scope.go:117] "RemoveContainer" containerID="51d484e782c204b0b6011f8d0be626571952d106a910dddde0a66e728028905b" Jan 21 11:11:02 crc kubenswrapper[4881]: I0121 11:11:02.282020 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" event={"ID":"3687b313-1df2-4274-80db-8c758b51bf2d","Type":"ContainerStarted","Data":"abaaf16a1930b4e2e9a1e1d952f2948a8b09bfb0c0f18add47eef44fe07067c5"} Jan 21 11:11:08 crc kubenswrapper[4881]: I0121 11:11:08.400689 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713bpbkq"] Jan 21 11:11:08 crc kubenswrapper[4881]: E0121 11:11:08.401643 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b0b6a69-9749-44d9-a00e-1e2ab801ffb5" containerName="extract-utilities" Jan 21 11:11:08 crc kubenswrapper[4881]: I0121 11:11:08.401659 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b0b6a69-9749-44d9-a00e-1e2ab801ffb5" containerName="extract-utilities" Jan 21 11:11:08 crc kubenswrapper[4881]: E0121 11:11:08.401672 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b0b6a69-9749-44d9-a00e-1e2ab801ffb5" containerName="extract-content" Jan 21 11:11:08 crc kubenswrapper[4881]: I0121 11:11:08.401677 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b0b6a69-9749-44d9-a00e-1e2ab801ffb5" containerName="extract-content" Jan 21 11:11:08 crc kubenswrapper[4881]: E0121 11:11:08.401695 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b0b6a69-9749-44d9-a00e-1e2ab801ffb5" containerName="registry-server" Jan 21 11:11:08 crc kubenswrapper[4881]: I0121 11:11:08.401702 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b0b6a69-9749-44d9-a00e-1e2ab801ffb5" containerName="registry-server" Jan 21 11:11:08 crc kubenswrapper[4881]: I0121 11:11:08.401817 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b0b6a69-9749-44d9-a00e-1e2ab801ffb5" containerName="registry-server" Jan 21 11:11:08 crc kubenswrapper[4881]: I0121 11:11:08.402625 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713bpbkq" Jan 21 11:11:08 crc kubenswrapper[4881]: I0121 11:11:08.404621 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 21 11:11:08 crc kubenswrapper[4881]: I0121 11:11:08.414963 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713bpbkq"] Jan 21 11:11:08 crc kubenswrapper[4881]: I0121 11:11:08.497163 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1bb22c78-c1fd-422e-900a-52c4b73fb451-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713bpbkq\" (UID: \"1bb22c78-c1fd-422e-900a-52c4b73fb451\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713bpbkq" Jan 21 11:11:08 crc kubenswrapper[4881]: I0121 11:11:08.497254 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-69mwj\" (UniqueName: \"kubernetes.io/projected/1bb22c78-c1fd-422e-900a-52c4b73fb451-kube-api-access-69mwj\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713bpbkq\" (UID: \"1bb22c78-c1fd-422e-900a-52c4b73fb451\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713bpbkq" Jan 21 11:11:08 crc kubenswrapper[4881]: I0121 11:11:08.497394 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1bb22c78-c1fd-422e-900a-52c4b73fb451-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713bpbkq\" (UID: \"1bb22c78-c1fd-422e-900a-52c4b73fb451\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713bpbkq" Jan 21 11:11:08 crc kubenswrapper[4881]: I0121 11:11:08.599452 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-69mwj\" (UniqueName: \"kubernetes.io/projected/1bb22c78-c1fd-422e-900a-52c4b73fb451-kube-api-access-69mwj\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713bpbkq\" (UID: \"1bb22c78-c1fd-422e-900a-52c4b73fb451\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713bpbkq" Jan 21 11:11:08 crc kubenswrapper[4881]: I0121 11:11:08.599535 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1bb22c78-c1fd-422e-900a-52c4b73fb451-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713bpbkq\" (UID: \"1bb22c78-c1fd-422e-900a-52c4b73fb451\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713bpbkq" Jan 21 11:11:08 crc kubenswrapper[4881]: I0121 11:11:08.599640 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1bb22c78-c1fd-422e-900a-52c4b73fb451-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713bpbkq\" (UID: \"1bb22c78-c1fd-422e-900a-52c4b73fb451\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713bpbkq" Jan 21 11:11:08 crc kubenswrapper[4881]: I0121 11:11:08.600292 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/1bb22c78-c1fd-422e-900a-52c4b73fb451-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713bpbkq\" (UID: \"1bb22c78-c1fd-422e-900a-52c4b73fb451\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713bpbkq" Jan 21 11:11:08 crc kubenswrapper[4881]: I0121 11:11:08.600388 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1bb22c78-c1fd-422e-900a-52c4b73fb451-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713bpbkq\" (UID: \"1bb22c78-c1fd-422e-900a-52c4b73fb451\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713bpbkq" Jan 21 11:11:08 crc kubenswrapper[4881]: I0121 11:11:08.620208 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-69mwj\" (UniqueName: \"kubernetes.io/projected/1bb22c78-c1fd-422e-900a-52c4b73fb451-kube-api-access-69mwj\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713bpbkq\" (UID: \"1bb22c78-c1fd-422e-900a-52c4b73fb451\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713bpbkq" Jan 21 11:11:08 crc kubenswrapper[4881]: I0121 11:11:08.719166 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713bpbkq" Jan 21 11:11:09 crc kubenswrapper[4881]: I0121 11:11:09.014893 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713bpbkq"] Jan 21 11:11:09 crc kubenswrapper[4881]: I0121 11:11:09.328397 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713bpbkq" event={"ID":"1bb22c78-c1fd-422e-900a-52c4b73fb451","Type":"ContainerStarted","Data":"40bd3a7c64e9ea2a8dc049ad18ecc00565b1a2d412a0f6424dbd722f44e55c77"} Jan 21 11:11:10 crc kubenswrapper[4881]: I0121 11:11:10.337130 4881 generic.go:334] "Generic (PLEG): container finished" podID="1bb22c78-c1fd-422e-900a-52c4b73fb451" containerID="c9ed00009e2a833f1d6678a36314637e6447458f2b1a304bf57edb500bc4e94f" exitCode=0 Jan 21 11:11:10 crc kubenswrapper[4881]: I0121 11:11:10.337216 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713bpbkq" event={"ID":"1bb22c78-c1fd-422e-900a-52c4b73fb451","Type":"ContainerDied","Data":"c9ed00009e2a833f1d6678a36314637e6447458f2b1a304bf57edb500bc4e94f"} Jan 21 11:11:12 crc kubenswrapper[4881]: I0121 11:11:12.352125 4881 generic.go:334] "Generic (PLEG): container finished" podID="1bb22c78-c1fd-422e-900a-52c4b73fb451" containerID="abaae8f4635dfc8073c654713ae4fd8459a0ed4d66141b1f6aaf0e2395aa0f08" exitCode=0 Jan 21 11:11:12 crc kubenswrapper[4881]: I0121 11:11:12.352260 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713bpbkq" event={"ID":"1bb22c78-c1fd-422e-900a-52c4b73fb451","Type":"ContainerDied","Data":"abaae8f4635dfc8073c654713ae4fd8459a0ed4d66141b1f6aaf0e2395aa0f08"} Jan 21 11:11:13 crc kubenswrapper[4881]: I0121 11:11:13.363056 4881 generic.go:334] "Generic (PLEG): container finished" podID="1bb22c78-c1fd-422e-900a-52c4b73fb451" containerID="bf4e152e561f858eb56118ae54e7090f18e80d7b4252fb965ebd4fb6a084de56" exitCode=0 Jan 21 11:11:13 crc kubenswrapper[4881]: I0121 
11:11:13.363113 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713bpbkq" event={"ID":"1bb22c78-c1fd-422e-900a-52c4b73fb451","Type":"ContainerDied","Data":"bf4e152e561f858eb56118ae54e7090f18e80d7b4252fb965ebd4fb6a084de56"} Jan 21 11:11:14 crc kubenswrapper[4881]: I0121 11:11:14.746914 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713bpbkq" Jan 21 11:11:14 crc kubenswrapper[4881]: I0121 11:11:14.892392 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1bb22c78-c1fd-422e-900a-52c4b73fb451-bundle\") pod \"1bb22c78-c1fd-422e-900a-52c4b73fb451\" (UID: \"1bb22c78-c1fd-422e-900a-52c4b73fb451\") " Jan 21 11:11:14 crc kubenswrapper[4881]: I0121 11:11:14.892457 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1bb22c78-c1fd-422e-900a-52c4b73fb451-util\") pod \"1bb22c78-c1fd-422e-900a-52c4b73fb451\" (UID: \"1bb22c78-c1fd-422e-900a-52c4b73fb451\") " Jan 21 11:11:14 crc kubenswrapper[4881]: I0121 11:11:14.892561 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-69mwj\" (UniqueName: \"kubernetes.io/projected/1bb22c78-c1fd-422e-900a-52c4b73fb451-kube-api-access-69mwj\") pod \"1bb22c78-c1fd-422e-900a-52c4b73fb451\" (UID: \"1bb22c78-c1fd-422e-900a-52c4b73fb451\") " Jan 21 11:11:14 crc kubenswrapper[4881]: I0121 11:11:14.893095 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1bb22c78-c1fd-422e-900a-52c4b73fb451-bundle" (OuterVolumeSpecName: "bundle") pod "1bb22c78-c1fd-422e-900a-52c4b73fb451" (UID: "1bb22c78-c1fd-422e-900a-52c4b73fb451"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 11:11:14 crc kubenswrapper[4881]: I0121 11:11:14.897669 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bb22c78-c1fd-422e-900a-52c4b73fb451-kube-api-access-69mwj" (OuterVolumeSpecName: "kube-api-access-69mwj") pod "1bb22c78-c1fd-422e-900a-52c4b73fb451" (UID: "1bb22c78-c1fd-422e-900a-52c4b73fb451"). InnerVolumeSpecName "kube-api-access-69mwj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:11:14 crc kubenswrapper[4881]: I0121 11:11:14.907412 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1bb22c78-c1fd-422e-900a-52c4b73fb451-util" (OuterVolumeSpecName: "util") pod "1bb22c78-c1fd-422e-900a-52c4b73fb451" (UID: "1bb22c78-c1fd-422e-900a-52c4b73fb451"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 11:11:14 crc kubenswrapper[4881]: I0121 11:11:14.994400 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-69mwj\" (UniqueName: \"kubernetes.io/projected/1bb22c78-c1fd-422e-900a-52c4b73fb451-kube-api-access-69mwj\") on node \"crc\" DevicePath \"\"" Jan 21 11:11:14 crc kubenswrapper[4881]: I0121 11:11:14.994447 4881 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1bb22c78-c1fd-422e-900a-52c4b73fb451-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 11:11:14 crc kubenswrapper[4881]: I0121 11:11:14.994461 4881 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1bb22c78-c1fd-422e-900a-52c4b73fb451-util\") on node \"crc\" DevicePath \"\"" Jan 21 11:11:15 crc kubenswrapper[4881]: I0121 11:11:15.378233 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713bpbkq" event={"ID":"1bb22c78-c1fd-422e-900a-52c4b73fb451","Type":"ContainerDied","Data":"40bd3a7c64e9ea2a8dc049ad18ecc00565b1a2d412a0f6424dbd722f44e55c77"} Jan 21 11:11:15 crc kubenswrapper[4881]: I0121 11:11:15.378287 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713bpbkq" Jan 21 11:11:15 crc kubenswrapper[4881]: I0121 11:11:15.378297 4881 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="40bd3a7c64e9ea2a8dc049ad18ecc00565b1a2d412a0f6424dbd722f44e55c77" Jan 21 11:11:16 crc kubenswrapper[4881]: I0121 11:11:16.948574 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-zlxs9"] Jan 21 11:11:16 crc kubenswrapper[4881]: E0121 11:11:16.950720 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1bb22c78-c1fd-422e-900a-52c4b73fb451" containerName="util" Jan 21 11:11:16 crc kubenswrapper[4881]: I0121 11:11:16.950820 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="1bb22c78-c1fd-422e-900a-52c4b73fb451" containerName="util" Jan 21 11:11:16 crc kubenswrapper[4881]: E0121 11:11:16.950900 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1bb22c78-c1fd-422e-900a-52c4b73fb451" containerName="pull" Jan 21 11:11:16 crc kubenswrapper[4881]: I0121 11:11:16.950954 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="1bb22c78-c1fd-422e-900a-52c4b73fb451" containerName="pull" Jan 21 11:11:16 crc kubenswrapper[4881]: E0121 11:11:16.951067 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1bb22c78-c1fd-422e-900a-52c4b73fb451" containerName="extract" Jan 21 11:11:16 crc kubenswrapper[4881]: I0121 11:11:16.951118 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="1bb22c78-c1fd-422e-900a-52c4b73fb451" containerName="extract" Jan 21 11:11:16 crc kubenswrapper[4881]: I0121 11:11:16.951309 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="1bb22c78-c1fd-422e-900a-52c4b73fb451" containerName="extract" Jan 21 11:11:16 crc kubenswrapper[4881]: I0121 11:11:16.951982 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-zlxs9" Jan 21 11:11:16 crc kubenswrapper[4881]: I0121 11:11:16.958112 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Jan 21 11:11:16 crc kubenswrapper[4881]: I0121 11:11:16.958271 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Jan 21 11:11:16 crc kubenswrapper[4881]: I0121 11:11:16.958271 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-tpcqs" Jan 21 11:11:16 crc kubenswrapper[4881]: I0121 11:11:16.967187 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-zlxs9"] Jan 21 11:11:17 crc kubenswrapper[4881]: I0121 11:11:17.055006 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-chnwc\" (UniqueName: \"kubernetes.io/projected/14878b0e-37cc-4c03-89df-ba23d94589a0-kube-api-access-chnwc\") pod \"nmstate-operator-646758c888-zlxs9\" (UID: \"14878b0e-37cc-4c03-89df-ba23d94589a0\") " pod="openshift-nmstate/nmstate-operator-646758c888-zlxs9" Jan 21 11:11:17 crc kubenswrapper[4881]: I0121 11:11:17.156759 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-chnwc\" (UniqueName: \"kubernetes.io/projected/14878b0e-37cc-4c03-89df-ba23d94589a0-kube-api-access-chnwc\") pod \"nmstate-operator-646758c888-zlxs9\" (UID: \"14878b0e-37cc-4c03-89df-ba23d94589a0\") " pod="openshift-nmstate/nmstate-operator-646758c888-zlxs9" Jan 21 11:11:17 crc kubenswrapper[4881]: I0121 11:11:17.198918 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-chnwc\" (UniqueName: \"kubernetes.io/projected/14878b0e-37cc-4c03-89df-ba23d94589a0-kube-api-access-chnwc\") pod \"nmstate-operator-646758c888-zlxs9\" (UID: \"14878b0e-37cc-4c03-89df-ba23d94589a0\") " pod="openshift-nmstate/nmstate-operator-646758c888-zlxs9" Jan 21 11:11:17 crc kubenswrapper[4881]: I0121 11:11:17.271565 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-zlxs9" Jan 21 11:11:17 crc kubenswrapper[4881]: I0121 11:11:17.553363 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-zlxs9"] Jan 21 11:11:18 crc kubenswrapper[4881]: I0121 11:11:18.408437 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-zlxs9" event={"ID":"14878b0e-37cc-4c03-89df-ba23d94589a0","Type":"ContainerStarted","Data":"7ed46c79bb08a2c1612067064decb37ed8b04c6a79956da7192766e827f18ea7"} Jan 21 11:11:20 crc kubenswrapper[4881]: I0121 11:11:20.425860 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-zlxs9" event={"ID":"14878b0e-37cc-4c03-89df-ba23d94589a0","Type":"ContainerStarted","Data":"f6090be0fcc0b7c7a66c51f9657cad982b8158dbaa93ebaf2206d9ce9fc7fccf"} Jan 21 11:11:20 crc kubenswrapper[4881]: I0121 11:11:20.446748 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-646758c888-zlxs9" podStartSLOduration=1.968076664 podStartE2EDuration="4.44673052s" podCreationTimestamp="2026-01-21 11:11:16 +0000 UTC" firstStartedPulling="2026-01-21 11:11:17.564111515 +0000 UTC m=+864.824067984" lastFinishedPulling="2026-01-21 11:11:20.042765371 +0000 UTC m=+867.302721840" observedRunningTime="2026-01-21 11:11:20.441229065 +0000 UTC m=+867.701185534" watchObservedRunningTime="2026-01-21 11:11:20.44673052 +0000 UTC m=+867.706686989" Jan 21 11:11:21 crc kubenswrapper[4881]: I0121 11:11:21.821171 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-ft48b"] Jan 21 11:11:21 crc kubenswrapper[4881]: I0121 11:11:21.822174 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-ft48b" Jan 21 11:11:21 crc kubenswrapper[4881]: I0121 11:11:21.824115 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-2flt2" Jan 21 11:11:21 crc kubenswrapper[4881]: I0121 11:11:21.838946 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-ft48b"] Jan 21 11:11:21 crc kubenswrapper[4881]: I0121 11:11:21.848282 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-qmv5k"] Jan 21 11:11:21 crc kubenswrapper[4881]: I0121 11:11:21.849378 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-qmv5k" Jan 21 11:11:21 crc kubenswrapper[4881]: I0121 11:11:21.850983 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Jan 21 11:11:21 crc kubenswrapper[4881]: I0121 11:11:21.861873 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-qmv5k"] Jan 21 11:11:21 crc kubenswrapper[4881]: I0121 11:11:21.874478 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-b9rcw"] Jan 21 11:11:21 crc kubenswrapper[4881]: I0121 11:11:21.876929 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-b9rcw" Jan 21 11:11:21 crc kubenswrapper[4881]: I0121 11:11:21.966512 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-lgdjc"] Jan 21 11:11:21 crc kubenswrapper[4881]: I0121 11:11:21.967398 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-lgdjc" Jan 21 11:11:21 crc kubenswrapper[4881]: I0121 11:11:21.969085 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Jan 21 11:11:21 crc kubenswrapper[4881]: I0121 11:11:21.971034 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-zgb88" Jan 21 11:11:21 crc kubenswrapper[4881]: I0121 11:11:21.971260 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Jan 21 11:11:21 crc kubenswrapper[4881]: I0121 11:11:21.985901 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-lgdjc"] Jan 21 11:11:22 crc kubenswrapper[4881]: I0121 11:11:22.003578 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/b6262b8c-2531-4008-9bb8-c3beeb66a3ed-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-qmv5k\" (UID: \"b6262b8c-2531-4008-9bb8-c3beeb66a3ed\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-qmv5k" Jan 21 11:11:22 crc kubenswrapper[4881]: I0121 11:11:22.003658 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/5c705c83-efa0-436f-a0b5-9164dbb6b1df-ovs-socket\") pod \"nmstate-handler-b9rcw\" (UID: \"5c705c83-efa0-436f-a0b5-9164dbb6b1df\") " pod="openshift-nmstate/nmstate-handler-b9rcw" Jan 21 11:11:22 crc kubenswrapper[4881]: I0121 11:11:22.003708 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h7mrm\" (UniqueName: \"kubernetes.io/projected/5c705c83-efa0-436f-a0b5-9164dbb6b1df-kube-api-access-h7mrm\") pod \"nmstate-handler-b9rcw\" (UID: \"5c705c83-efa0-436f-a0b5-9164dbb6b1df\") " pod="openshift-nmstate/nmstate-handler-b9rcw" Jan 21 11:11:22 crc kubenswrapper[4881]: I0121 11:11:22.003730 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/5c705c83-efa0-436f-a0b5-9164dbb6b1df-nmstate-lock\") pod \"nmstate-handler-b9rcw\" (UID: \"5c705c83-efa0-436f-a0b5-9164dbb6b1df\") " pod="openshift-nmstate/nmstate-handler-b9rcw" Jan 21 11:11:22 crc kubenswrapper[4881]: I0121 11:11:22.003756 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-54gh7\" (UniqueName: \"kubernetes.io/projected/b6262b8c-2531-4008-9bb8-c3beeb66a3ed-kube-api-access-54gh7\") pod \"nmstate-webhook-8474b5b9d8-qmv5k\" (UID: \"b6262b8c-2531-4008-9bb8-c3beeb66a3ed\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-qmv5k" Jan 21 11:11:22 crc kubenswrapper[4881]: I0121 11:11:22.003779 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/5c705c83-efa0-436f-a0b5-9164dbb6b1df-dbus-socket\") pod \"nmstate-handler-b9rcw\" (UID: 
\"5c705c83-efa0-436f-a0b5-9164dbb6b1df\") " pod="openshift-nmstate/nmstate-handler-b9rcw" Jan 21 11:11:22 crc kubenswrapper[4881]: I0121 11:11:22.003822 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7rdkr\" (UniqueName: \"kubernetes.io/projected/f68408aa-3450-42af-a6f8-b5260973f272-kube-api-access-7rdkr\") pod \"nmstate-metrics-54757c584b-ft48b\" (UID: \"f68408aa-3450-42af-a6f8-b5260973f272\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-ft48b" Jan 21 11:11:22 crc kubenswrapper[4881]: I0121 11:11:22.104611 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/5c705c83-efa0-436f-a0b5-9164dbb6b1df-ovs-socket\") pod \"nmstate-handler-b9rcw\" (UID: \"5c705c83-efa0-436f-a0b5-9164dbb6b1df\") " pod="openshift-nmstate/nmstate-handler-b9rcw" Jan 21 11:11:22 crc kubenswrapper[4881]: I0121 11:11:22.104674 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hvtms\" (UniqueName: \"kubernetes.io/projected/fcdadd73-568f-4ae0-a7bb-9330b2feb835-kube-api-access-hvtms\") pod \"nmstate-console-plugin-7754f76f8b-lgdjc\" (UID: \"fcdadd73-568f-4ae0-a7bb-9330b2feb835\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-lgdjc" Jan 21 11:11:22 crc kubenswrapper[4881]: I0121 11:11:22.104710 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/fcdadd73-568f-4ae0-a7bb-9330b2feb835-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-lgdjc\" (UID: \"fcdadd73-568f-4ae0-a7bb-9330b2feb835\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-lgdjc" Jan 21 11:11:22 crc kubenswrapper[4881]: I0121 11:11:22.104755 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/fcdadd73-568f-4ae0-a7bb-9330b2feb835-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-lgdjc\" (UID: \"fcdadd73-568f-4ae0-a7bb-9330b2feb835\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-lgdjc" Jan 21 11:11:22 crc kubenswrapper[4881]: I0121 11:11:22.104764 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/5c705c83-efa0-436f-a0b5-9164dbb6b1df-ovs-socket\") pod \"nmstate-handler-b9rcw\" (UID: \"5c705c83-efa0-436f-a0b5-9164dbb6b1df\") " pod="openshift-nmstate/nmstate-handler-b9rcw" Jan 21 11:11:22 crc kubenswrapper[4881]: I0121 11:11:22.104980 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h7mrm\" (UniqueName: \"kubernetes.io/projected/5c705c83-efa0-436f-a0b5-9164dbb6b1df-kube-api-access-h7mrm\") pod \"nmstate-handler-b9rcw\" (UID: \"5c705c83-efa0-436f-a0b5-9164dbb6b1df\") " pod="openshift-nmstate/nmstate-handler-b9rcw" Jan 21 11:11:22 crc kubenswrapper[4881]: I0121 11:11:22.105049 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/5c705c83-efa0-436f-a0b5-9164dbb6b1df-nmstate-lock\") pod \"nmstate-handler-b9rcw\" (UID: \"5c705c83-efa0-436f-a0b5-9164dbb6b1df\") " pod="openshift-nmstate/nmstate-handler-b9rcw" Jan 21 11:11:22 crc kubenswrapper[4881]: I0121 11:11:22.105139 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: 
\"kubernetes.io/host-path/5c705c83-efa0-436f-a0b5-9164dbb6b1df-nmstate-lock\") pod \"nmstate-handler-b9rcw\" (UID: \"5c705c83-efa0-436f-a0b5-9164dbb6b1df\") " pod="openshift-nmstate/nmstate-handler-b9rcw" Jan 21 11:11:22 crc kubenswrapper[4881]: I0121 11:11:22.105488 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-54gh7\" (UniqueName: \"kubernetes.io/projected/b6262b8c-2531-4008-9bb8-c3beeb66a3ed-kube-api-access-54gh7\") pod \"nmstate-webhook-8474b5b9d8-qmv5k\" (UID: \"b6262b8c-2531-4008-9bb8-c3beeb66a3ed\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-qmv5k" Jan 21 11:11:22 crc kubenswrapper[4881]: I0121 11:11:22.105538 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/5c705c83-efa0-436f-a0b5-9164dbb6b1df-dbus-socket\") pod \"nmstate-handler-b9rcw\" (UID: \"5c705c83-efa0-436f-a0b5-9164dbb6b1df\") " pod="openshift-nmstate/nmstate-handler-b9rcw" Jan 21 11:11:22 crc kubenswrapper[4881]: I0121 11:11:22.105590 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7rdkr\" (UniqueName: \"kubernetes.io/projected/f68408aa-3450-42af-a6f8-b5260973f272-kube-api-access-7rdkr\") pod \"nmstate-metrics-54757c584b-ft48b\" (UID: \"f68408aa-3450-42af-a6f8-b5260973f272\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-ft48b" Jan 21 11:11:22 crc kubenswrapper[4881]: I0121 11:11:22.105666 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/b6262b8c-2531-4008-9bb8-c3beeb66a3ed-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-qmv5k\" (UID: \"b6262b8c-2531-4008-9bb8-c3beeb66a3ed\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-qmv5k" Jan 21 11:11:22 crc kubenswrapper[4881]: I0121 11:11:22.105912 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/5c705c83-efa0-436f-a0b5-9164dbb6b1df-dbus-socket\") pod \"nmstate-handler-b9rcw\" (UID: \"5c705c83-efa0-436f-a0b5-9164dbb6b1df\") " pod="openshift-nmstate/nmstate-handler-b9rcw" Jan 21 11:11:22 crc kubenswrapper[4881]: I0121 11:11:22.119162 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/b6262b8c-2531-4008-9bb8-c3beeb66a3ed-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-qmv5k\" (UID: \"b6262b8c-2531-4008-9bb8-c3beeb66a3ed\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-qmv5k" Jan 21 11:11:22 crc kubenswrapper[4881]: I0121 11:11:22.122732 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h7mrm\" (UniqueName: \"kubernetes.io/projected/5c705c83-efa0-436f-a0b5-9164dbb6b1df-kube-api-access-h7mrm\") pod \"nmstate-handler-b9rcw\" (UID: \"5c705c83-efa0-436f-a0b5-9164dbb6b1df\") " pod="openshift-nmstate/nmstate-handler-b9rcw" Jan 21 11:11:22 crc kubenswrapper[4881]: I0121 11:11:22.122938 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7rdkr\" (UniqueName: \"kubernetes.io/projected/f68408aa-3450-42af-a6f8-b5260973f272-kube-api-access-7rdkr\") pod \"nmstate-metrics-54757c584b-ft48b\" (UID: \"f68408aa-3450-42af-a6f8-b5260973f272\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-ft48b" Jan 21 11:11:22 crc kubenswrapper[4881]: I0121 11:11:22.123231 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-54gh7\" (UniqueName: 
\"kubernetes.io/projected/b6262b8c-2531-4008-9bb8-c3beeb66a3ed-kube-api-access-54gh7\") pod \"nmstate-webhook-8474b5b9d8-qmv5k\" (UID: \"b6262b8c-2531-4008-9bb8-c3beeb66a3ed\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-qmv5k" Jan 21 11:11:22 crc kubenswrapper[4881]: I0121 11:11:22.140773 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-ft48b" Jan 21 11:11:22 crc kubenswrapper[4881]: I0121 11:11:22.164967 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-qmv5k" Jan 21 11:11:22 crc kubenswrapper[4881]: I0121 11:11:22.180310 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-5948d4cb5-h9dr6"] Jan 21 11:11:22 crc kubenswrapper[4881]: I0121 11:11:22.188999 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-5948d4cb5-h9dr6" Jan 21 11:11:22 crc kubenswrapper[4881]: I0121 11:11:22.201851 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-b9rcw" Jan 21 11:11:22 crc kubenswrapper[4881]: I0121 11:11:22.206516 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hvtms\" (UniqueName: \"kubernetes.io/projected/fcdadd73-568f-4ae0-a7bb-9330b2feb835-kube-api-access-hvtms\") pod \"nmstate-console-plugin-7754f76f8b-lgdjc\" (UID: \"fcdadd73-568f-4ae0-a7bb-9330b2feb835\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-lgdjc" Jan 21 11:11:22 crc kubenswrapper[4881]: I0121 11:11:22.206569 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/fcdadd73-568f-4ae0-a7bb-9330b2feb835-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-lgdjc\" (UID: \"fcdadd73-568f-4ae0-a7bb-9330b2feb835\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-lgdjc" Jan 21 11:11:22 crc kubenswrapper[4881]: I0121 11:11:22.206592 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/fcdadd73-568f-4ae0-a7bb-9330b2feb835-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-lgdjc\" (UID: \"fcdadd73-568f-4ae0-a7bb-9330b2feb835\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-lgdjc" Jan 21 11:11:22 crc kubenswrapper[4881]: I0121 11:11:22.207671 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-5948d4cb5-h9dr6"] Jan 21 11:11:22 crc kubenswrapper[4881]: I0121 11:11:22.208913 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/fcdadd73-568f-4ae0-a7bb-9330b2feb835-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-lgdjc\" (UID: \"fcdadd73-568f-4ae0-a7bb-9330b2feb835\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-lgdjc" Jan 21 11:11:22 crc kubenswrapper[4881]: I0121 11:11:22.210220 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/fcdadd73-568f-4ae0-a7bb-9330b2feb835-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-lgdjc\" (UID: \"fcdadd73-568f-4ae0-a7bb-9330b2feb835\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-lgdjc" Jan 21 11:11:22 crc kubenswrapper[4881]: I0121 11:11:22.234508 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"kube-api-access-hvtms\" (UniqueName: \"kubernetes.io/projected/fcdadd73-568f-4ae0-a7bb-9330b2feb835-kube-api-access-hvtms\") pod \"nmstate-console-plugin-7754f76f8b-lgdjc\" (UID: \"fcdadd73-568f-4ae0-a7bb-9330b2feb835\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-lgdjc" Jan 21 11:11:22 crc kubenswrapper[4881]: I0121 11:11:22.289201 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-lgdjc" Jan 21 11:11:22 crc kubenswrapper[4881]: W0121 11:11:22.300678 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5c705c83_efa0_436f_a0b5_9164dbb6b1df.slice/crio-91f221e0efb5cb7df51ed985fb369e978bfbe0e46f415631ebbcb58009bf1cea WatchSource:0}: Error finding container 91f221e0efb5cb7df51ed985fb369e978bfbe0e46f415631ebbcb58009bf1cea: Status 404 returned error can't find the container with id 91f221e0efb5cb7df51ed985fb369e978bfbe0e46f415631ebbcb58009bf1cea Jan 21 11:11:22 crc kubenswrapper[4881]: I0121 11:11:22.308149 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/e9935505-550d-4eed-9bda-72ec999ff529-console-oauth-config\") pod \"console-5948d4cb5-h9dr6\" (UID: \"e9935505-550d-4eed-9bda-72ec999ff529\") " pod="openshift-console/console-5948d4cb5-h9dr6" Jan 21 11:11:22 crc kubenswrapper[4881]: I0121 11:11:22.308257 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/e9935505-550d-4eed-9bda-72ec999ff529-console-serving-cert\") pod \"console-5948d4cb5-h9dr6\" (UID: \"e9935505-550d-4eed-9bda-72ec999ff529\") " pod="openshift-console/console-5948d4cb5-h9dr6" Jan 21 11:11:22 crc kubenswrapper[4881]: I0121 11:11:22.308299 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x2j6p\" (UniqueName: \"kubernetes.io/projected/e9935505-550d-4eed-9bda-72ec999ff529-kube-api-access-x2j6p\") pod \"console-5948d4cb5-h9dr6\" (UID: \"e9935505-550d-4eed-9bda-72ec999ff529\") " pod="openshift-console/console-5948d4cb5-h9dr6" Jan 21 11:11:22 crc kubenswrapper[4881]: I0121 11:11:22.308336 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/e9935505-550d-4eed-9bda-72ec999ff529-console-config\") pod \"console-5948d4cb5-h9dr6\" (UID: \"e9935505-550d-4eed-9bda-72ec999ff529\") " pod="openshift-console/console-5948d4cb5-h9dr6" Jan 21 11:11:22 crc kubenswrapper[4881]: I0121 11:11:22.308373 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/e9935505-550d-4eed-9bda-72ec999ff529-oauth-serving-cert\") pod \"console-5948d4cb5-h9dr6\" (UID: \"e9935505-550d-4eed-9bda-72ec999ff529\") " pod="openshift-console/console-5948d4cb5-h9dr6" Jan 21 11:11:22 crc kubenswrapper[4881]: I0121 11:11:22.308405 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/e9935505-550d-4eed-9bda-72ec999ff529-service-ca\") pod \"console-5948d4cb5-h9dr6\" (UID: \"e9935505-550d-4eed-9bda-72ec999ff529\") " pod="openshift-console/console-5948d4cb5-h9dr6" Jan 21 11:11:22 crc kubenswrapper[4881]: I0121 
11:11:22.308433 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e9935505-550d-4eed-9bda-72ec999ff529-trusted-ca-bundle\") pod \"console-5948d4cb5-h9dr6\" (UID: \"e9935505-550d-4eed-9bda-72ec999ff529\") " pod="openshift-console/console-5948d4cb5-h9dr6" Jan 21 11:11:22 crc kubenswrapper[4881]: I0121 11:11:22.411677 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/e9935505-550d-4eed-9bda-72ec999ff529-oauth-serving-cert\") pod \"console-5948d4cb5-h9dr6\" (UID: \"e9935505-550d-4eed-9bda-72ec999ff529\") " pod="openshift-console/console-5948d4cb5-h9dr6" Jan 21 11:11:22 crc kubenswrapper[4881]: I0121 11:11:22.412022 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/e9935505-550d-4eed-9bda-72ec999ff529-service-ca\") pod \"console-5948d4cb5-h9dr6\" (UID: \"e9935505-550d-4eed-9bda-72ec999ff529\") " pod="openshift-console/console-5948d4cb5-h9dr6" Jan 21 11:11:22 crc kubenswrapper[4881]: I0121 11:11:22.412051 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e9935505-550d-4eed-9bda-72ec999ff529-trusted-ca-bundle\") pod \"console-5948d4cb5-h9dr6\" (UID: \"e9935505-550d-4eed-9bda-72ec999ff529\") " pod="openshift-console/console-5948d4cb5-h9dr6" Jan 21 11:11:22 crc kubenswrapper[4881]: I0121 11:11:22.412073 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/e9935505-550d-4eed-9bda-72ec999ff529-console-oauth-config\") pod \"console-5948d4cb5-h9dr6\" (UID: \"e9935505-550d-4eed-9bda-72ec999ff529\") " pod="openshift-console/console-5948d4cb5-h9dr6" Jan 21 11:11:22 crc kubenswrapper[4881]: I0121 11:11:22.412095 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/e9935505-550d-4eed-9bda-72ec999ff529-console-serving-cert\") pod \"console-5948d4cb5-h9dr6\" (UID: \"e9935505-550d-4eed-9bda-72ec999ff529\") " pod="openshift-console/console-5948d4cb5-h9dr6" Jan 21 11:11:22 crc kubenswrapper[4881]: I0121 11:11:22.412126 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x2j6p\" (UniqueName: \"kubernetes.io/projected/e9935505-550d-4eed-9bda-72ec999ff529-kube-api-access-x2j6p\") pod \"console-5948d4cb5-h9dr6\" (UID: \"e9935505-550d-4eed-9bda-72ec999ff529\") " pod="openshift-console/console-5948d4cb5-h9dr6" Jan 21 11:11:22 crc kubenswrapper[4881]: I0121 11:11:22.412158 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/e9935505-550d-4eed-9bda-72ec999ff529-console-config\") pod \"console-5948d4cb5-h9dr6\" (UID: \"e9935505-550d-4eed-9bda-72ec999ff529\") " pod="openshift-console/console-5948d4cb5-h9dr6" Jan 21 11:11:22 crc kubenswrapper[4881]: I0121 11:11:22.412886 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/e9935505-550d-4eed-9bda-72ec999ff529-oauth-serving-cert\") pod \"console-5948d4cb5-h9dr6\" (UID: \"e9935505-550d-4eed-9bda-72ec999ff529\") " pod="openshift-console/console-5948d4cb5-h9dr6" Jan 21 11:11:22 crc kubenswrapper[4881]: I0121 
11:11:22.414564 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/e9935505-550d-4eed-9bda-72ec999ff529-service-ca\") pod \"console-5948d4cb5-h9dr6\" (UID: \"e9935505-550d-4eed-9bda-72ec999ff529\") " pod="openshift-console/console-5948d4cb5-h9dr6" Jan 21 11:11:22 crc kubenswrapper[4881]: I0121 11:11:22.416237 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/e9935505-550d-4eed-9bda-72ec999ff529-console-config\") pod \"console-5948d4cb5-h9dr6\" (UID: \"e9935505-550d-4eed-9bda-72ec999ff529\") " pod="openshift-console/console-5948d4cb5-h9dr6" Jan 21 11:11:22 crc kubenswrapper[4881]: I0121 11:11:22.421004 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e9935505-550d-4eed-9bda-72ec999ff529-trusted-ca-bundle\") pod \"console-5948d4cb5-h9dr6\" (UID: \"e9935505-550d-4eed-9bda-72ec999ff529\") " pod="openshift-console/console-5948d4cb5-h9dr6" Jan 21 11:11:22 crc kubenswrapper[4881]: I0121 11:11:22.427625 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/e9935505-550d-4eed-9bda-72ec999ff529-console-oauth-config\") pod \"console-5948d4cb5-h9dr6\" (UID: \"e9935505-550d-4eed-9bda-72ec999ff529\") " pod="openshift-console/console-5948d4cb5-h9dr6" Jan 21 11:11:22 crc kubenswrapper[4881]: I0121 11:11:22.431257 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/e9935505-550d-4eed-9bda-72ec999ff529-console-serving-cert\") pod \"console-5948d4cb5-h9dr6\" (UID: \"e9935505-550d-4eed-9bda-72ec999ff529\") " pod="openshift-console/console-5948d4cb5-h9dr6" Jan 21 11:11:22 crc kubenswrapper[4881]: I0121 11:11:22.443852 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-b9rcw" event={"ID":"5c705c83-efa0-436f-a0b5-9164dbb6b1df","Type":"ContainerStarted","Data":"91f221e0efb5cb7df51ed985fb369e978bfbe0e46f415631ebbcb58009bf1cea"} Jan 21 11:11:22 crc kubenswrapper[4881]: I0121 11:11:22.473012 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x2j6p\" (UniqueName: \"kubernetes.io/projected/e9935505-550d-4eed-9bda-72ec999ff529-kube-api-access-x2j6p\") pod \"console-5948d4cb5-h9dr6\" (UID: \"e9935505-550d-4eed-9bda-72ec999ff529\") " pod="openshift-console/console-5948d4cb5-h9dr6" Jan 21 11:11:22 crc kubenswrapper[4881]: I0121 11:11:22.580399 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-5948d4cb5-h9dr6" Jan 21 11:11:22 crc kubenswrapper[4881]: I0121 11:11:22.603279 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-ft48b"] Jan 21 11:11:22 crc kubenswrapper[4881]: W0121 11:11:22.604010 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf68408aa_3450_42af_a6f8_b5260973f272.slice/crio-75263f711a01743da2eda02df172618a1f70bf3f71d4552a680f3f08dba4b6d1 WatchSource:0}: Error finding container 75263f711a01743da2eda02df172618a1f70bf3f71d4552a680f3f08dba4b6d1: Status 404 returned error can't find the container with id 75263f711a01743da2eda02df172618a1f70bf3f71d4552a680f3f08dba4b6d1 Jan 21 11:11:22 crc kubenswrapper[4881]: I0121 11:11:22.712137 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-lgdjc"] Jan 21 11:11:22 crc kubenswrapper[4881]: W0121 11:11:22.720203 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfcdadd73_568f_4ae0_a7bb_9330b2feb835.slice/crio-2f1c3a1b1622749132028b49619365e095b0384d7bf38678f2f951e18082dadb WatchSource:0}: Error finding container 2f1c3a1b1622749132028b49619365e095b0384d7bf38678f2f951e18082dadb: Status 404 returned error can't find the container with id 2f1c3a1b1622749132028b49619365e095b0384d7bf38678f2f951e18082dadb Jan 21 11:11:22 crc kubenswrapper[4881]: I0121 11:11:22.826348 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-5948d4cb5-h9dr6"] Jan 21 11:11:22 crc kubenswrapper[4881]: W0121 11:11:22.831660 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode9935505_550d_4eed_9bda_72ec999ff529.slice/crio-497d3c7ee19f1d5d0fadd00346965ee64957bc419f14d1e5b93a9b9599deadf7 WatchSource:0}: Error finding container 497d3c7ee19f1d5d0fadd00346965ee64957bc419f14d1e5b93a9b9599deadf7: Status 404 returned error can't find the container with id 497d3c7ee19f1d5d0fadd00346965ee64957bc419f14d1e5b93a9b9599deadf7 Jan 21 11:11:22 crc kubenswrapper[4881]: I0121 11:11:22.864950 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-qmv5k"] Jan 21 11:11:22 crc kubenswrapper[4881]: W0121 11:11:22.872418 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb6262b8c_2531_4008_9bb8_c3beeb66a3ed.slice/crio-3eadffafaaf64fa656cd418fb82245ba3b843b292288022e7307308625165420 WatchSource:0}: Error finding container 3eadffafaaf64fa656cd418fb82245ba3b843b292288022e7307308625165420: Status 404 returned error can't find the container with id 3eadffafaaf64fa656cd418fb82245ba3b843b292288022e7307308625165420 Jan 21 11:11:23 crc kubenswrapper[4881]: I0121 11:11:23.454463 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-lgdjc" event={"ID":"fcdadd73-568f-4ae0-a7bb-9330b2feb835","Type":"ContainerStarted","Data":"2f1c3a1b1622749132028b49619365e095b0384d7bf38678f2f951e18082dadb"} Jan 21 11:11:23 crc kubenswrapper[4881]: I0121 11:11:23.456261 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-qmv5k" 
event={"ID":"b6262b8c-2531-4008-9bb8-c3beeb66a3ed","Type":"ContainerStarted","Data":"3eadffafaaf64fa656cd418fb82245ba3b843b292288022e7307308625165420"} Jan 21 11:11:23 crc kubenswrapper[4881]: I0121 11:11:23.458297 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5948d4cb5-h9dr6" event={"ID":"e9935505-550d-4eed-9bda-72ec999ff529","Type":"ContainerStarted","Data":"206fe5f53965f9042b6d06482e1063a81c74186cfba2d8c918d9f50cbcc3a46a"} Jan 21 11:11:23 crc kubenswrapper[4881]: I0121 11:11:23.458428 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5948d4cb5-h9dr6" event={"ID":"e9935505-550d-4eed-9bda-72ec999ff529","Type":"ContainerStarted","Data":"497d3c7ee19f1d5d0fadd00346965ee64957bc419f14d1e5b93a9b9599deadf7"} Jan 21 11:11:23 crc kubenswrapper[4881]: I0121 11:11:23.460564 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-ft48b" event={"ID":"f68408aa-3450-42af-a6f8-b5260973f272","Type":"ContainerStarted","Data":"75263f711a01743da2eda02df172618a1f70bf3f71d4552a680f3f08dba4b6d1"} Jan 21 11:11:23 crc kubenswrapper[4881]: I0121 11:11:23.476414 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-5948d4cb5-h9dr6" podStartSLOduration=1.47638972 podStartE2EDuration="1.47638972s" podCreationTimestamp="2026-01-21 11:11:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 11:11:23.474993145 +0000 UTC m=+870.734949624" watchObservedRunningTime="2026-01-21 11:11:23.47638972 +0000 UTC m=+870.736346199" Jan 21 11:11:27 crc kubenswrapper[4881]: I0121 11:11:27.552230 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-lgdjc" event={"ID":"fcdadd73-568f-4ae0-a7bb-9330b2feb835","Type":"ContainerStarted","Data":"262c061c6c6cee551071d338125204388b4e9ec2038d211196eb84e0c1b73988"} Jan 21 11:11:27 crc kubenswrapper[4881]: I0121 11:11:27.554092 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-b9rcw" event={"ID":"5c705c83-efa0-436f-a0b5-9164dbb6b1df","Type":"ContainerStarted","Data":"ca467ceadddb4897cca8c993245e98b429120425d599d31934c93bd2c9009863"} Jan 21 11:11:27 crc kubenswrapper[4881]: I0121 11:11:27.554148 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-b9rcw" Jan 21 11:11:27 crc kubenswrapper[4881]: I0121 11:11:27.555574 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-qmv5k" event={"ID":"b6262b8c-2531-4008-9bb8-c3beeb66a3ed","Type":"ContainerStarted","Data":"d414a51aeff912ae63db4bdd3d121d4297af4d1fb98e61e5d54ceef0eb082f61"} Jan 21 11:11:27 crc kubenswrapper[4881]: I0121 11:11:27.556122 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-qmv5k" Jan 21 11:11:27 crc kubenswrapper[4881]: I0121 11:11:27.557959 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-ft48b" event={"ID":"f68408aa-3450-42af-a6f8-b5260973f272","Type":"ContainerStarted","Data":"2695bc9cb695c6c9736deb95547df532c72d2cbde492fc714f3bcb49af8077c8"} Jan 21 11:11:27 crc kubenswrapper[4881]: I0121 11:11:27.572808 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-lgdjc" 
podStartSLOduration=2.134906912 podStartE2EDuration="6.572772851s" podCreationTimestamp="2026-01-21 11:11:21 +0000 UTC" firstStartedPulling="2026-01-21 11:11:22.722684529 +0000 UTC m=+869.982640988" lastFinishedPulling="2026-01-21 11:11:27.160550458 +0000 UTC m=+874.420506927" observedRunningTime="2026-01-21 11:11:27.567028581 +0000 UTC m=+874.826985050" watchObservedRunningTime="2026-01-21 11:11:27.572772851 +0000 UTC m=+874.832729320" Jan 21 11:11:27 crc kubenswrapper[4881]: I0121 11:11:27.614778 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-b9rcw" podStartSLOduration=1.7590103400000001 podStartE2EDuration="6.614760081s" podCreationTimestamp="2026-01-21 11:11:21 +0000 UTC" firstStartedPulling="2026-01-21 11:11:22.306898609 +0000 UTC m=+869.566855078" lastFinishedPulling="2026-01-21 11:11:27.16264835 +0000 UTC m=+874.422604819" observedRunningTime="2026-01-21 11:11:27.588928187 +0000 UTC m=+874.848884656" watchObservedRunningTime="2026-01-21 11:11:27.614760081 +0000 UTC m=+874.874716550" Jan 21 11:11:27 crc kubenswrapper[4881]: I0121 11:11:27.618708 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-qmv5k" podStartSLOduration=2.307992592 podStartE2EDuration="6.618694736s" podCreationTimestamp="2026-01-21 11:11:21 +0000 UTC" firstStartedPulling="2026-01-21 11:11:22.877307078 +0000 UTC m=+870.137263547" lastFinishedPulling="2026-01-21 11:11:27.188009222 +0000 UTC m=+874.447965691" observedRunningTime="2026-01-21 11:11:27.613053068 +0000 UTC m=+874.873009547" watchObservedRunningTime="2026-01-21 11:11:27.618694736 +0000 UTC m=+874.878651205" Jan 21 11:11:31 crc kubenswrapper[4881]: I0121 11:11:31.589379 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-ft48b" event={"ID":"f68408aa-3450-42af-a6f8-b5260973f272","Type":"ContainerStarted","Data":"9fb0de26b7e3a70f0d133614bd136b283ec44db245b6c99779b899a0d4dae022"} Jan 21 11:11:31 crc kubenswrapper[4881]: I0121 11:11:31.611818 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-54757c584b-ft48b" podStartSLOduration=2.813951692 podStartE2EDuration="10.611794906s" podCreationTimestamp="2026-01-21 11:11:21 +0000 UTC" firstStartedPulling="2026-01-21 11:11:22.6068887 +0000 UTC m=+869.866845169" lastFinishedPulling="2026-01-21 11:11:30.404731914 +0000 UTC m=+877.664688383" observedRunningTime="2026-01-21 11:11:31.605850621 +0000 UTC m=+878.865807120" watchObservedRunningTime="2026-01-21 11:11:31.611794906 +0000 UTC m=+878.871751375" Jan 21 11:11:32 crc kubenswrapper[4881]: I0121 11:11:32.226160 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-b9rcw" Jan 21 11:11:32 crc kubenswrapper[4881]: I0121 11:11:32.581914 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-5948d4cb5-h9dr6" Jan 21 11:11:32 crc kubenswrapper[4881]: I0121 11:11:32.582063 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-5948d4cb5-h9dr6" Jan 21 11:11:32 crc kubenswrapper[4881]: I0121 11:11:32.586876 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-5948d4cb5-h9dr6" Jan 21 11:11:32 crc kubenswrapper[4881]: I0121 11:11:32.599729 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-console/console-5948d4cb5-h9dr6" Jan 21 11:11:32 crc kubenswrapper[4881]: I0121 11:11:32.662460 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-qxzd9"] Jan 21 11:11:42 crc kubenswrapper[4881]: I0121 11:11:42.171059 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-qmv5k" Jan 21 11:11:57 crc kubenswrapper[4881]: I0121 11:11:57.709342 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-qxzd9" podUID="bb8fc8b3-9818-40e2-afb2-860e2d1efae1" containerName="console" containerID="cri-o://8f2ac82a3ce8ce5983172b3cbd1e9a6aa27d2f48fa81d54ee2ef2ad283fa8d47" gracePeriod=15 Jan 21 11:11:58 crc kubenswrapper[4881]: I0121 11:11:58.159712 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-qxzd9_bb8fc8b3-9818-40e2-afb2-860e2d1efae1/console/0.log" Jan 21 11:11:58 crc kubenswrapper[4881]: I0121 11:11:58.160226 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-qxzd9" Jan 21 11:11:58 crc kubenswrapper[4881]: I0121 11:11:58.263177 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/bb8fc8b3-9818-40e2-afb2-860e2d1efae1-service-ca\") pod \"bb8fc8b3-9818-40e2-afb2-860e2d1efae1\" (UID: \"bb8fc8b3-9818-40e2-afb2-860e2d1efae1\") " Jan 21 11:11:58 crc kubenswrapper[4881]: I0121 11:11:58.263657 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/bb8fc8b3-9818-40e2-afb2-860e2d1efae1-console-config\") pod \"bb8fc8b3-9818-40e2-afb2-860e2d1efae1\" (UID: \"bb8fc8b3-9818-40e2-afb2-860e2d1efae1\") " Jan 21 11:11:58 crc kubenswrapper[4881]: I0121 11:11:58.264003 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-blg69\" (UniqueName: \"kubernetes.io/projected/bb8fc8b3-9818-40e2-afb2-860e2d1efae1-kube-api-access-blg69\") pod \"bb8fc8b3-9818-40e2-afb2-860e2d1efae1\" (UID: \"bb8fc8b3-9818-40e2-afb2-860e2d1efae1\") " Jan 21 11:11:58 crc kubenswrapper[4881]: I0121 11:11:58.264043 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/bb8fc8b3-9818-40e2-afb2-860e2d1efae1-console-oauth-config\") pod \"bb8fc8b3-9818-40e2-afb2-860e2d1efae1\" (UID: \"bb8fc8b3-9818-40e2-afb2-860e2d1efae1\") " Jan 21 11:11:58 crc kubenswrapper[4881]: I0121 11:11:58.264240 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bb8fc8b3-9818-40e2-afb2-860e2d1efae1-console-config" (OuterVolumeSpecName: "console-config") pod "bb8fc8b3-9818-40e2-afb2-860e2d1efae1" (UID: "bb8fc8b3-9818-40e2-afb2-860e2d1efae1"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:11:58 crc kubenswrapper[4881]: I0121 11:11:58.264263 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bb8fc8b3-9818-40e2-afb2-860e2d1efae1-service-ca" (OuterVolumeSpecName: "service-ca") pod "bb8fc8b3-9818-40e2-afb2-860e2d1efae1" (UID: "bb8fc8b3-9818-40e2-afb2-860e2d1efae1"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:11:58 crc kubenswrapper[4881]: I0121 11:11:58.264604 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/bb8fc8b3-9818-40e2-afb2-860e2d1efae1-oauth-serving-cert\") pod \"bb8fc8b3-9818-40e2-afb2-860e2d1efae1\" (UID: \"bb8fc8b3-9818-40e2-afb2-860e2d1efae1\") " Jan 21 11:11:58 crc kubenswrapper[4881]: I0121 11:11:58.264849 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bb8fc8b3-9818-40e2-afb2-860e2d1efae1-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "bb8fc8b3-9818-40e2-afb2-860e2d1efae1" (UID: "bb8fc8b3-9818-40e2-afb2-860e2d1efae1"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:11:58 crc kubenswrapper[4881]: I0121 11:11:58.265061 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bb8fc8b3-9818-40e2-afb2-860e2d1efae1-trusted-ca-bundle\") pod \"bb8fc8b3-9818-40e2-afb2-860e2d1efae1\" (UID: \"bb8fc8b3-9818-40e2-afb2-860e2d1efae1\") " Jan 21 11:11:58 crc kubenswrapper[4881]: I0121 11:11:58.265274 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/bb8fc8b3-9818-40e2-afb2-860e2d1efae1-console-serving-cert\") pod \"bb8fc8b3-9818-40e2-afb2-860e2d1efae1\" (UID: \"bb8fc8b3-9818-40e2-afb2-860e2d1efae1\") " Jan 21 11:11:58 crc kubenswrapper[4881]: I0121 11:11:58.265520 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bb8fc8b3-9818-40e2-afb2-860e2d1efae1-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "bb8fc8b3-9818-40e2-afb2-860e2d1efae1" (UID: "bb8fc8b3-9818-40e2-afb2-860e2d1efae1"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:11:58 crc kubenswrapper[4881]: I0121 11:11:58.266051 4881 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/bb8fc8b3-9818-40e2-afb2-860e2d1efae1-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 11:11:58 crc kubenswrapper[4881]: I0121 11:11:58.266076 4881 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bb8fc8b3-9818-40e2-afb2-860e2d1efae1-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 11:11:58 crc kubenswrapper[4881]: I0121 11:11:58.266088 4881 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/bb8fc8b3-9818-40e2-afb2-860e2d1efae1-service-ca\") on node \"crc\" DevicePath \"\"" Jan 21 11:11:58 crc kubenswrapper[4881]: I0121 11:11:58.266098 4881 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/bb8fc8b3-9818-40e2-afb2-860e2d1efae1-console-config\") on node \"crc\" DevicePath \"\"" Jan 21 11:11:58 crc kubenswrapper[4881]: I0121 11:11:58.271856 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb8fc8b3-9818-40e2-afb2-860e2d1efae1-kube-api-access-blg69" (OuterVolumeSpecName: "kube-api-access-blg69") pod "bb8fc8b3-9818-40e2-afb2-860e2d1efae1" (UID: "bb8fc8b3-9818-40e2-afb2-860e2d1efae1"). InnerVolumeSpecName "kube-api-access-blg69". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:11:58 crc kubenswrapper[4881]: I0121 11:11:58.275369 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb8fc8b3-9818-40e2-afb2-860e2d1efae1-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "bb8fc8b3-9818-40e2-afb2-860e2d1efae1" (UID: "bb8fc8b3-9818-40e2-afb2-860e2d1efae1"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:11:58 crc kubenswrapper[4881]: I0121 11:11:58.278269 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb8fc8b3-9818-40e2-afb2-860e2d1efae1-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "bb8fc8b3-9818-40e2-afb2-860e2d1efae1" (UID: "bb8fc8b3-9818-40e2-afb2-860e2d1efae1"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:11:58 crc kubenswrapper[4881]: I0121 11:11:58.329335 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-qxzd9_bb8fc8b3-9818-40e2-afb2-860e2d1efae1/console/0.log" Jan 21 11:11:58 crc kubenswrapper[4881]: I0121 11:11:58.330083 4881 generic.go:334] "Generic (PLEG): container finished" podID="bb8fc8b3-9818-40e2-afb2-860e2d1efae1" containerID="8f2ac82a3ce8ce5983172b3cbd1e9a6aa27d2f48fa81d54ee2ef2ad283fa8d47" exitCode=2 Jan 21 11:11:58 crc kubenswrapper[4881]: I0121 11:11:58.330131 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-qxzd9" event={"ID":"bb8fc8b3-9818-40e2-afb2-860e2d1efae1","Type":"ContainerDied","Data":"8f2ac82a3ce8ce5983172b3cbd1e9a6aa27d2f48fa81d54ee2ef2ad283fa8d47"} Jan 21 11:11:58 crc kubenswrapper[4881]: I0121 11:11:58.330168 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-qxzd9" event={"ID":"bb8fc8b3-9818-40e2-afb2-860e2d1efae1","Type":"ContainerDied","Data":"d060bd9f87ed03936c0be9ee17418f9087722140490e6ad49375f3c789b2e023"} Jan 21 11:11:58 crc kubenswrapper[4881]: I0121 11:11:58.330192 4881 scope.go:117] "RemoveContainer" containerID="8f2ac82a3ce8ce5983172b3cbd1e9a6aa27d2f48fa81d54ee2ef2ad283fa8d47" Jan 21 11:11:58 crc kubenswrapper[4881]: I0121 11:11:58.330360 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-qxzd9" Jan 21 11:11:58 crc kubenswrapper[4881]: I0121 11:11:58.365485 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-qxzd9"] Jan 21 11:11:58 crc kubenswrapper[4881]: I0121 11:11:58.368010 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-blg69\" (UniqueName: \"kubernetes.io/projected/bb8fc8b3-9818-40e2-afb2-860e2d1efae1-kube-api-access-blg69\") on node \"crc\" DevicePath \"\"" Jan 21 11:11:58 crc kubenswrapper[4881]: I0121 11:11:58.368045 4881 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/bb8fc8b3-9818-40e2-afb2-860e2d1efae1-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 21 11:11:58 crc kubenswrapper[4881]: I0121 11:11:58.368058 4881 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/bb8fc8b3-9818-40e2-afb2-860e2d1efae1-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 11:11:58 crc kubenswrapper[4881]: I0121 11:11:58.369645 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-qxzd9"] Jan 21 11:11:58 crc kubenswrapper[4881]: I0121 11:11:58.370827 4881 scope.go:117] "RemoveContainer" containerID="8f2ac82a3ce8ce5983172b3cbd1e9a6aa27d2f48fa81d54ee2ef2ad283fa8d47" Jan 21 11:11:58 crc kubenswrapper[4881]: E0121 11:11:58.371407 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8f2ac82a3ce8ce5983172b3cbd1e9a6aa27d2f48fa81d54ee2ef2ad283fa8d47\": container with ID starting with 8f2ac82a3ce8ce5983172b3cbd1e9a6aa27d2f48fa81d54ee2ef2ad283fa8d47 not found: ID does not exist" containerID="8f2ac82a3ce8ce5983172b3cbd1e9a6aa27d2f48fa81d54ee2ef2ad283fa8d47" Jan 21 11:11:58 crc kubenswrapper[4881]: I0121 11:11:58.371467 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8f2ac82a3ce8ce5983172b3cbd1e9a6aa27d2f48fa81d54ee2ef2ad283fa8d47"} err="failed to get container status \"8f2ac82a3ce8ce5983172b3cbd1e9a6aa27d2f48fa81d54ee2ef2ad283fa8d47\": rpc error: code = NotFound desc = could not find container \"8f2ac82a3ce8ce5983172b3cbd1e9a6aa27d2f48fa81d54ee2ef2ad283fa8d47\": container with ID starting with 8f2ac82a3ce8ce5983172b3cbd1e9a6aa27d2f48fa81d54ee2ef2ad283fa8d47 not found: ID does not exist" Jan 21 11:11:59 crc kubenswrapper[4881]: I0121 11:11:59.329361 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bb8fc8b3-9818-40e2-afb2-860e2d1efae1" path="/var/lib/kubelet/pods/bb8fc8b3-9818-40e2-afb2-860e2d1efae1/volumes" Jan 21 11:12:00 crc kubenswrapper[4881]: I0121 11:12:00.525552 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-hd2w7"] Jan 21 11:12:00 crc kubenswrapper[4881]: E0121 11:12:00.526869 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb8fc8b3-9818-40e2-afb2-860e2d1efae1" containerName="console" Jan 21 11:12:00 crc kubenswrapper[4881]: I0121 11:12:00.526893 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb8fc8b3-9818-40e2-afb2-860e2d1efae1" containerName="console" Jan 21 11:12:00 crc kubenswrapper[4881]: I0121 11:12:00.527100 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb8fc8b3-9818-40e2-afb2-860e2d1efae1" containerName="console" Jan 21 11:12:00 crc kubenswrapper[4881]: I0121 11:12:00.529978 4881 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hd2w7" Jan 21 11:12:00 crc kubenswrapper[4881]: I0121 11:12:00.540765 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-hd2w7"] Jan 21 11:12:00 crc kubenswrapper[4881]: I0121 11:12:00.607943 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xcd7r\" (UniqueName: \"kubernetes.io/projected/9873ada5-628e-4b25-b739-4478cbe17296-kube-api-access-xcd7r\") pod \"certified-operators-hd2w7\" (UID: \"9873ada5-628e-4b25-b739-4478cbe17296\") " pod="openshift-marketplace/certified-operators-hd2w7" Jan 21 11:12:00 crc kubenswrapper[4881]: I0121 11:12:00.608007 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9873ada5-628e-4b25-b739-4478cbe17296-catalog-content\") pod \"certified-operators-hd2w7\" (UID: \"9873ada5-628e-4b25-b739-4478cbe17296\") " pod="openshift-marketplace/certified-operators-hd2w7" Jan 21 11:12:00 crc kubenswrapper[4881]: I0121 11:12:00.608046 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9873ada5-628e-4b25-b739-4478cbe17296-utilities\") pod \"certified-operators-hd2w7\" (UID: \"9873ada5-628e-4b25-b739-4478cbe17296\") " pod="openshift-marketplace/certified-operators-hd2w7" Jan 21 11:12:00 crc kubenswrapper[4881]: I0121 11:12:00.709400 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xcd7r\" (UniqueName: \"kubernetes.io/projected/9873ada5-628e-4b25-b739-4478cbe17296-kube-api-access-xcd7r\") pod \"certified-operators-hd2w7\" (UID: \"9873ada5-628e-4b25-b739-4478cbe17296\") " pod="openshift-marketplace/certified-operators-hd2w7" Jan 21 11:12:00 crc kubenswrapper[4881]: I0121 11:12:00.709975 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9873ada5-628e-4b25-b739-4478cbe17296-catalog-content\") pod \"certified-operators-hd2w7\" (UID: \"9873ada5-628e-4b25-b739-4478cbe17296\") " pod="openshift-marketplace/certified-operators-hd2w7" Jan 21 11:12:00 crc kubenswrapper[4881]: I0121 11:12:00.710008 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9873ada5-628e-4b25-b739-4478cbe17296-utilities\") pod \"certified-operators-hd2w7\" (UID: \"9873ada5-628e-4b25-b739-4478cbe17296\") " pod="openshift-marketplace/certified-operators-hd2w7" Jan 21 11:12:00 crc kubenswrapper[4881]: I0121 11:12:00.710547 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9873ada5-628e-4b25-b739-4478cbe17296-catalog-content\") pod \"certified-operators-hd2w7\" (UID: \"9873ada5-628e-4b25-b739-4478cbe17296\") " pod="openshift-marketplace/certified-operators-hd2w7" Jan 21 11:12:00 crc kubenswrapper[4881]: I0121 11:12:00.710829 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9873ada5-628e-4b25-b739-4478cbe17296-utilities\") pod \"certified-operators-hd2w7\" (UID: \"9873ada5-628e-4b25-b739-4478cbe17296\") " pod="openshift-marketplace/certified-operators-hd2w7" Jan 21 11:12:00 crc kubenswrapper[4881]: I0121 
Jan 21 11:12:00 crc kubenswrapper[4881]: I0121 11:12:00.851928 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hd2w7"
Jan 21 11:12:01 crc kubenswrapper[4881]: I0121 11:12:01.636672 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-hd2w7"]
Jan 21 11:12:02 crc kubenswrapper[4881]: I0121 11:12:02.364030 4881 generic.go:334] "Generic (PLEG): container finished" podID="9873ada5-628e-4b25-b739-4478cbe17296" containerID="62a4996a49fa7e70025c2e6c3982db1575edae9d0df4fbdfdba74d92ed4e5ed6" exitCode=0
Jan 21 11:12:02 crc kubenswrapper[4881]: I0121 11:12:02.364175 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hd2w7" event={"ID":"9873ada5-628e-4b25-b739-4478cbe17296","Type":"ContainerDied","Data":"62a4996a49fa7e70025c2e6c3982db1575edae9d0df4fbdfdba74d92ed4e5ed6"}
Jan 21 11:12:02 crc kubenswrapper[4881]: I0121 11:12:02.364419 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hd2w7" event={"ID":"9873ada5-628e-4b25-b739-4478cbe17296","Type":"ContainerStarted","Data":"03078ae198c32837e1314238531f4e1b4ba354a1768bb3f9c6c3700512d7bdc0"}
Jan 21 11:12:03 crc kubenswrapper[4881]: I0121 11:12:03.373491 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hd2w7" event={"ID":"9873ada5-628e-4b25-b739-4478cbe17296","Type":"ContainerStarted","Data":"4449fd6347c9d97dfe10f3a25b7f401eba4d4ff908fbcb0731b3a3e709b1d7fd"}
Jan 21 11:12:05 crc kubenswrapper[4881]: I0121 11:12:05.411414 4881 generic.go:334] "Generic (PLEG): container finished" podID="9873ada5-628e-4b25-b739-4478cbe17296" containerID="4449fd6347c9d97dfe10f3a25b7f401eba4d4ff908fbcb0731b3a3e709b1d7fd" exitCode=0
Jan 21 11:12:05 crc kubenswrapper[4881]: I0121 11:12:05.411504 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hd2w7" event={"ID":"9873ada5-628e-4b25-b739-4478cbe17296","Type":"ContainerDied","Data":"4449fd6347c9d97dfe10f3a25b7f401eba4d4ff908fbcb0731b3a3e709b1d7fd"}
Jan 21 11:12:06 crc kubenswrapper[4881]: I0121 11:12:06.493163 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hd2w7" event={"ID":"9873ada5-628e-4b25-b739-4478cbe17296","Type":"ContainerStarted","Data":"ad72f3f3b967cd853e49b723fe72a51bb988f34ee4eb1f5a8162feb15abaf823"}
Jan 21 11:12:06 crc kubenswrapper[4881]: I0121 11:12:06.516978 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-hd2w7" podStartSLOduration=3.069096171 podStartE2EDuration="6.516958127s" podCreationTimestamp="2026-01-21 11:12:00 +0000 UTC" firstStartedPulling="2026-01-21 11:12:02.366625405 +0000 UTC m=+909.626581874" lastFinishedPulling="2026-01-21 11:12:05.814487351 +0000 UTC m=+913.074443830" observedRunningTime="2026-01-21 11:12:06.516253821 +0000 UTC m=+913.776210300" watchObservedRunningTime="2026-01-21 11:12:06.516958127 +0000 UTC m=+913.776914596"
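The numbers in the "Observed pod startup duration" record above are internally consistent: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp (11:12:06.516958127 − 11:12:00 = 6.516958127s), and podStartSLOduration subtracts the image-pull window, taken from the monotonic readings (m=+913.074443830 − m=+909.626581874 = 3.447861956s), giving 3.069096171s. A quick check:

    package main

    import "fmt"

    func main() {
        // Monotonic readings (the m=+... values) from the two pulling timestamps.
        firstStartedPulling := 909.626581874
        lastFinishedPulling := 913.074443830
        e2e := 6.516958127 // watchObservedRunningTime - podCreationTimestamp, seconds

        slo := e2e - (lastFinishedPulling - firstStartedPulling)
        fmt.Printf("podStartSLOduration = %.9f\n", slo) // 3.069096171
    }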
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcp2lp6"] Jan 21 11:12:07 crc kubenswrapper[4881]: I0121 11:12:07.539963 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcp2lp6" Jan 21 11:12:07 crc kubenswrapper[4881]: I0121 11:12:07.541864 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 21 11:12:07 crc kubenswrapper[4881]: I0121 11:12:07.545526 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcp2lp6"] Jan 21 11:12:07 crc kubenswrapper[4881]: I0121 11:12:07.680088 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5c9dc897-764d-4f6c-ade8-99d7aa2d8d60-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcp2lp6\" (UID: \"5c9dc897-764d-4f6c-ade8-99d7aa2d8d60\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcp2lp6" Jan 21 11:12:07 crc kubenswrapper[4881]: I0121 11:12:07.680167 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5c9dc897-764d-4f6c-ade8-99d7aa2d8d60-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcp2lp6\" (UID: \"5c9dc897-764d-4f6c-ade8-99d7aa2d8d60\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcp2lp6" Jan 21 11:12:07 crc kubenswrapper[4881]: I0121 11:12:07.680253 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bpcpk\" (UniqueName: \"kubernetes.io/projected/5c9dc897-764d-4f6c-ade8-99d7aa2d8d60-kube-api-access-bpcpk\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcp2lp6\" (UID: \"5c9dc897-764d-4f6c-ade8-99d7aa2d8d60\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcp2lp6" Jan 21 11:12:07 crc kubenswrapper[4881]: I0121 11:12:07.781613 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5c9dc897-764d-4f6c-ade8-99d7aa2d8d60-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcp2lp6\" (UID: \"5c9dc897-764d-4f6c-ade8-99d7aa2d8d60\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcp2lp6" Jan 21 11:12:07 crc kubenswrapper[4881]: I0121 11:12:07.781708 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5c9dc897-764d-4f6c-ade8-99d7aa2d8d60-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcp2lp6\" (UID: \"5c9dc897-764d-4f6c-ade8-99d7aa2d8d60\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcp2lp6" Jan 21 11:12:07 crc kubenswrapper[4881]: I0121 11:12:07.781803 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bpcpk\" (UniqueName: \"kubernetes.io/projected/5c9dc897-764d-4f6c-ade8-99d7aa2d8d60-kube-api-access-bpcpk\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcp2lp6\" (UID: \"5c9dc897-764d-4f6c-ade8-99d7aa2d8d60\") " 
pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcp2lp6" Jan 21 11:12:07 crc kubenswrapper[4881]: I0121 11:12:07.782374 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5c9dc897-764d-4f6c-ade8-99d7aa2d8d60-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcp2lp6\" (UID: \"5c9dc897-764d-4f6c-ade8-99d7aa2d8d60\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcp2lp6" Jan 21 11:12:07 crc kubenswrapper[4881]: I0121 11:12:07.782420 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5c9dc897-764d-4f6c-ade8-99d7aa2d8d60-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcp2lp6\" (UID: \"5c9dc897-764d-4f6c-ade8-99d7aa2d8d60\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcp2lp6" Jan 21 11:12:07 crc kubenswrapper[4881]: I0121 11:12:07.805187 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bpcpk\" (UniqueName: \"kubernetes.io/projected/5c9dc897-764d-4f6c-ade8-99d7aa2d8d60-kube-api-access-bpcpk\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcp2lp6\" (UID: \"5c9dc897-764d-4f6c-ade8-99d7aa2d8d60\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcp2lp6" Jan 21 11:12:07 crc kubenswrapper[4881]: I0121 11:12:07.859543 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcp2lp6" Jan 21 11:12:08 crc kubenswrapper[4881]: I0121 11:12:08.399579 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcp2lp6"] Jan 21 11:12:08 crc kubenswrapper[4881]: I0121 11:12:08.508009 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcp2lp6" event={"ID":"5c9dc897-764d-4f6c-ade8-99d7aa2d8d60","Type":"ContainerStarted","Data":"f6649bc9fafdaf55ba8ea4b9308d5ba6f3cee44fcd008de9d317c8c9bf19faaa"} Jan 21 11:12:10 crc kubenswrapper[4881]: I0121 11:12:10.523062 4881 generic.go:334] "Generic (PLEG): container finished" podID="5c9dc897-764d-4f6c-ade8-99d7aa2d8d60" containerID="1882de17e4ad10c71734e26a15c796980da2428ebe8ae69676e484978869d6a9" exitCode=0 Jan 21 11:12:10 crc kubenswrapper[4881]: I0121 11:12:10.523184 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcp2lp6" event={"ID":"5c9dc897-764d-4f6c-ade8-99d7aa2d8d60","Type":"ContainerDied","Data":"1882de17e4ad10c71734e26a15c796980da2428ebe8ae69676e484978869d6a9"} Jan 21 11:12:10 crc kubenswrapper[4881]: I0121 11:12:10.853529 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-hd2w7" Jan 21 11:12:10 crc kubenswrapper[4881]: I0121 11:12:10.853592 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-hd2w7" Jan 21 11:12:10 crc kubenswrapper[4881]: I0121 11:12:10.911313 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-hd2w7" Jan 21 11:12:11 crc kubenswrapper[4881]: I0121 11:12:11.585609 4881 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-hd2w7" Jan 21 11:12:12 crc kubenswrapper[4881]: I0121 11:12:12.538796 4881 generic.go:334] "Generic (PLEG): container finished" podID="5c9dc897-764d-4f6c-ade8-99d7aa2d8d60" containerID="10a548534daadca5f848109f059acff5c67d2840c1dc7cb3bda7e203f29a597a" exitCode=0 Jan 21 11:12:12 crc kubenswrapper[4881]: I0121 11:12:12.538844 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcp2lp6" event={"ID":"5c9dc897-764d-4f6c-ade8-99d7aa2d8d60","Type":"ContainerDied","Data":"10a548534daadca5f848109f059acff5c67d2840c1dc7cb3bda7e203f29a597a"} Jan 21 11:12:12 crc kubenswrapper[4881]: I0121 11:12:12.878365 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-g5nz8"] Jan 21 11:12:12 crc kubenswrapper[4881]: I0121 11:12:12.880254 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-g5nz8" Jan 21 11:12:12 crc kubenswrapper[4881]: I0121 11:12:12.897947 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-g5nz8"] Jan 21 11:12:13 crc kubenswrapper[4881]: I0121 11:12:13.062123 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c6e4d311-b0fc-4051-a125-a6bf330b7f8a-catalog-content\") pod \"community-operators-g5nz8\" (UID: \"c6e4d311-b0fc-4051-a125-a6bf330b7f8a\") " pod="openshift-marketplace/community-operators-g5nz8" Jan 21 11:12:13 crc kubenswrapper[4881]: I0121 11:12:13.062523 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5w2kj\" (UniqueName: \"kubernetes.io/projected/c6e4d311-b0fc-4051-a125-a6bf330b7f8a-kube-api-access-5w2kj\") pod \"community-operators-g5nz8\" (UID: \"c6e4d311-b0fc-4051-a125-a6bf330b7f8a\") " pod="openshift-marketplace/community-operators-g5nz8" Jan 21 11:12:13 crc kubenswrapper[4881]: I0121 11:12:13.062586 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c6e4d311-b0fc-4051-a125-a6bf330b7f8a-utilities\") pod \"community-operators-g5nz8\" (UID: \"c6e4d311-b0fc-4051-a125-a6bf330b7f8a\") " pod="openshift-marketplace/community-operators-g5nz8" Jan 21 11:12:13 crc kubenswrapper[4881]: I0121 11:12:13.164256 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c6e4d311-b0fc-4051-a125-a6bf330b7f8a-catalog-content\") pod \"community-operators-g5nz8\" (UID: \"c6e4d311-b0fc-4051-a125-a6bf330b7f8a\") " pod="openshift-marketplace/community-operators-g5nz8" Jan 21 11:12:13 crc kubenswrapper[4881]: I0121 11:12:13.164337 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5w2kj\" (UniqueName: \"kubernetes.io/projected/c6e4d311-b0fc-4051-a125-a6bf330b7f8a-kube-api-access-5w2kj\") pod \"community-operators-g5nz8\" (UID: \"c6e4d311-b0fc-4051-a125-a6bf330b7f8a\") " pod="openshift-marketplace/community-operators-g5nz8" Jan 21 11:12:13 crc kubenswrapper[4881]: I0121 11:12:13.164396 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c6e4d311-b0fc-4051-a125-a6bf330b7f8a-utilities\") pod 
\"community-operators-g5nz8\" (UID: \"c6e4d311-b0fc-4051-a125-a6bf330b7f8a\") " pod="openshift-marketplace/community-operators-g5nz8" Jan 21 11:12:13 crc kubenswrapper[4881]: I0121 11:12:13.164886 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c6e4d311-b0fc-4051-a125-a6bf330b7f8a-catalog-content\") pod \"community-operators-g5nz8\" (UID: \"c6e4d311-b0fc-4051-a125-a6bf330b7f8a\") " pod="openshift-marketplace/community-operators-g5nz8" Jan 21 11:12:13 crc kubenswrapper[4881]: I0121 11:12:13.164911 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c6e4d311-b0fc-4051-a125-a6bf330b7f8a-utilities\") pod \"community-operators-g5nz8\" (UID: \"c6e4d311-b0fc-4051-a125-a6bf330b7f8a\") " pod="openshift-marketplace/community-operators-g5nz8" Jan 21 11:12:13 crc kubenswrapper[4881]: I0121 11:12:13.190564 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5w2kj\" (UniqueName: \"kubernetes.io/projected/c6e4d311-b0fc-4051-a125-a6bf330b7f8a-kube-api-access-5w2kj\") pod \"community-operators-g5nz8\" (UID: \"c6e4d311-b0fc-4051-a125-a6bf330b7f8a\") " pod="openshift-marketplace/community-operators-g5nz8" Jan 21 11:12:13 crc kubenswrapper[4881]: I0121 11:12:13.194890 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-g5nz8" Jan 21 11:12:13 crc kubenswrapper[4881]: I0121 11:12:13.553502 4881 generic.go:334] "Generic (PLEG): container finished" podID="5c9dc897-764d-4f6c-ade8-99d7aa2d8d60" containerID="d9653f5446680e2092e8263cde31db5cc02cb9168f0736fc9b45955301e3269c" exitCode=0 Jan 21 11:12:13 crc kubenswrapper[4881]: I0121 11:12:13.553543 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcp2lp6" event={"ID":"5c9dc897-764d-4f6c-ade8-99d7aa2d8d60","Type":"ContainerDied","Data":"d9653f5446680e2092e8263cde31db5cc02cb9168f0736fc9b45955301e3269c"} Jan 21 11:12:13 crc kubenswrapper[4881]: I0121 11:12:13.597925 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-g5nz8"] Jan 21 11:12:13 crc kubenswrapper[4881]: W0121 11:12:13.629911 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc6e4d311_b0fc_4051_a125_a6bf330b7f8a.slice/crio-74ed369f908f9a46be16b0bf5cbde512da30460d84d754009e5d06f649c85ef9 WatchSource:0}: Error finding container 74ed369f908f9a46be16b0bf5cbde512da30460d84d754009e5d06f649c85ef9: Status 404 returned error can't find the container with id 74ed369f908f9a46be16b0bf5cbde512da30460d84d754009e5d06f649c85ef9 Jan 21 11:12:14 crc kubenswrapper[4881]: I0121 11:12:14.281812 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-hd2w7"] Jan 21 11:12:14 crc kubenswrapper[4881]: I0121 11:12:14.282173 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-hd2w7" podUID="9873ada5-628e-4b25-b739-4478cbe17296" containerName="registry-server" containerID="cri-o://ad72f3f3b967cd853e49b723fe72a51bb988f34ee4eb1f5a8162feb15abaf823" gracePeriod=2 Jan 21 11:12:14 crc kubenswrapper[4881]: I0121 11:12:14.565749 4881 generic.go:334] "Generic (PLEG): container finished" podID="c6e4d311-b0fc-4051-a125-a6bf330b7f8a" 
containerID="df3854cc5438f398248beabb77d60eeb96def4d61790e2ebbf7c22c19efc8536" exitCode=0 Jan 21 11:12:14 crc kubenswrapper[4881]: I0121 11:12:14.566170 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g5nz8" event={"ID":"c6e4d311-b0fc-4051-a125-a6bf330b7f8a","Type":"ContainerDied","Data":"df3854cc5438f398248beabb77d60eeb96def4d61790e2ebbf7c22c19efc8536"} Jan 21 11:12:14 crc kubenswrapper[4881]: I0121 11:12:14.566205 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g5nz8" event={"ID":"c6e4d311-b0fc-4051-a125-a6bf330b7f8a","Type":"ContainerStarted","Data":"74ed369f908f9a46be16b0bf5cbde512da30460d84d754009e5d06f649c85ef9"} Jan 21 11:12:14 crc kubenswrapper[4881]: I0121 11:12:14.570182 4881 generic.go:334] "Generic (PLEG): container finished" podID="9873ada5-628e-4b25-b739-4478cbe17296" containerID="ad72f3f3b967cd853e49b723fe72a51bb988f34ee4eb1f5a8162feb15abaf823" exitCode=0 Jan 21 11:12:14 crc kubenswrapper[4881]: I0121 11:12:14.570435 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hd2w7" event={"ID":"9873ada5-628e-4b25-b739-4478cbe17296","Type":"ContainerDied","Data":"ad72f3f3b967cd853e49b723fe72a51bb988f34ee4eb1f5a8162feb15abaf823"} Jan 21 11:12:14 crc kubenswrapper[4881]: I0121 11:12:14.695101 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hd2w7" Jan 21 11:12:14 crc kubenswrapper[4881]: I0121 11:12:14.790585 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9873ada5-628e-4b25-b739-4478cbe17296-catalog-content\") pod \"9873ada5-628e-4b25-b739-4478cbe17296\" (UID: \"9873ada5-628e-4b25-b739-4478cbe17296\") " Jan 21 11:12:14 crc kubenswrapper[4881]: I0121 11:12:14.790719 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcd7r\" (UniqueName: \"kubernetes.io/projected/9873ada5-628e-4b25-b739-4478cbe17296-kube-api-access-xcd7r\") pod \"9873ada5-628e-4b25-b739-4478cbe17296\" (UID: \"9873ada5-628e-4b25-b739-4478cbe17296\") " Jan 21 11:12:14 crc kubenswrapper[4881]: I0121 11:12:14.790744 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9873ada5-628e-4b25-b739-4478cbe17296-utilities\") pod \"9873ada5-628e-4b25-b739-4478cbe17296\" (UID: \"9873ada5-628e-4b25-b739-4478cbe17296\") " Jan 21 11:12:14 crc kubenswrapper[4881]: I0121 11:12:14.791837 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9873ada5-628e-4b25-b739-4478cbe17296-utilities" (OuterVolumeSpecName: "utilities") pod "9873ada5-628e-4b25-b739-4478cbe17296" (UID: "9873ada5-628e-4b25-b739-4478cbe17296"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 11:12:14 crc kubenswrapper[4881]: I0121 11:12:14.808592 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9873ada5-628e-4b25-b739-4478cbe17296-kube-api-access-xcd7r" (OuterVolumeSpecName: "kube-api-access-xcd7r") pod "9873ada5-628e-4b25-b739-4478cbe17296" (UID: "9873ada5-628e-4b25-b739-4478cbe17296"). InnerVolumeSpecName "kube-api-access-xcd7r". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:12:14 crc kubenswrapper[4881]: I0121 11:12:14.843263 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9873ada5-628e-4b25-b739-4478cbe17296-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9873ada5-628e-4b25-b739-4478cbe17296" (UID: "9873ada5-628e-4b25-b739-4478cbe17296"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 11:12:14 crc kubenswrapper[4881]: I0121 11:12:14.892854 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcd7r\" (UniqueName: \"kubernetes.io/projected/9873ada5-628e-4b25-b739-4478cbe17296-kube-api-access-xcd7r\") on node \"crc\" DevicePath \"\"" Jan 21 11:12:14 crc kubenswrapper[4881]: I0121 11:12:14.892900 4881 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9873ada5-628e-4b25-b739-4478cbe17296-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 11:12:14 crc kubenswrapper[4881]: I0121 11:12:14.892915 4881 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9873ada5-628e-4b25-b739-4478cbe17296-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 11:12:14 crc kubenswrapper[4881]: I0121 11:12:14.908292 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcp2lp6" Jan 21 11:12:14 crc kubenswrapper[4881]: I0121 11:12:14.994459 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5c9dc897-764d-4f6c-ade8-99d7aa2d8d60-bundle\") pod \"5c9dc897-764d-4f6c-ade8-99d7aa2d8d60\" (UID: \"5c9dc897-764d-4f6c-ade8-99d7aa2d8d60\") " Jan 21 11:12:14 crc kubenswrapper[4881]: I0121 11:12:14.994617 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5c9dc897-764d-4f6c-ade8-99d7aa2d8d60-util\") pod \"5c9dc897-764d-4f6c-ade8-99d7aa2d8d60\" (UID: \"5c9dc897-764d-4f6c-ade8-99d7aa2d8d60\") " Jan 21 11:12:14 crc kubenswrapper[4881]: I0121 11:12:14.994678 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bpcpk\" (UniqueName: \"kubernetes.io/projected/5c9dc897-764d-4f6c-ade8-99d7aa2d8d60-kube-api-access-bpcpk\") pod \"5c9dc897-764d-4f6c-ade8-99d7aa2d8d60\" (UID: \"5c9dc897-764d-4f6c-ade8-99d7aa2d8d60\") " Jan 21 11:12:14 crc kubenswrapper[4881]: I0121 11:12:14.995856 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5c9dc897-764d-4f6c-ade8-99d7aa2d8d60-bundle" (OuterVolumeSpecName: "bundle") pod "5c9dc897-764d-4f6c-ade8-99d7aa2d8d60" (UID: "5c9dc897-764d-4f6c-ade8-99d7aa2d8d60"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 11:12:14 crc kubenswrapper[4881]: I0121 11:12:14.998017 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5c9dc897-764d-4f6c-ade8-99d7aa2d8d60-kube-api-access-bpcpk" (OuterVolumeSpecName: "kube-api-access-bpcpk") pod "5c9dc897-764d-4f6c-ade8-99d7aa2d8d60" (UID: "5c9dc897-764d-4f6c-ade8-99d7aa2d8d60"). InnerVolumeSpecName "kube-api-access-bpcpk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:12:15 crc kubenswrapper[4881]: I0121 11:12:15.010595 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5c9dc897-764d-4f6c-ade8-99d7aa2d8d60-util" (OuterVolumeSpecName: "util") pod "5c9dc897-764d-4f6c-ade8-99d7aa2d8d60" (UID: "5c9dc897-764d-4f6c-ade8-99d7aa2d8d60"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 11:12:15 crc kubenswrapper[4881]: I0121 11:12:15.095938 4881 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5c9dc897-764d-4f6c-ade8-99d7aa2d8d60-util\") on node \"crc\" DevicePath \"\"" Jan 21 11:12:15 crc kubenswrapper[4881]: I0121 11:12:15.095997 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bpcpk\" (UniqueName: \"kubernetes.io/projected/5c9dc897-764d-4f6c-ade8-99d7aa2d8d60-kube-api-access-bpcpk\") on node \"crc\" DevicePath \"\"" Jan 21 11:12:15 crc kubenswrapper[4881]: I0121 11:12:15.096011 4881 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5c9dc897-764d-4f6c-ade8-99d7aa2d8d60-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 11:12:15 crc kubenswrapper[4881]: I0121 11:12:15.583651 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcp2lp6" Jan 21 11:12:15 crc kubenswrapper[4881]: I0121 11:12:15.583660 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcp2lp6" event={"ID":"5c9dc897-764d-4f6c-ade8-99d7aa2d8d60","Type":"ContainerDied","Data":"f6649bc9fafdaf55ba8ea4b9308d5ba6f3cee44fcd008de9d317c8c9bf19faaa"} Jan 21 11:12:15 crc kubenswrapper[4881]: I0121 11:12:15.584066 4881 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f6649bc9fafdaf55ba8ea4b9308d5ba6f3cee44fcd008de9d317c8c9bf19faaa" Jan 21 11:12:15 crc kubenswrapper[4881]: I0121 11:12:15.589217 4881 util.go:48] "No ready sandbox for pod can be found. 
Jan 21 11:12:15 crc kubenswrapper[4881]: I0121 11:12:15.589240 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hd2w7" event={"ID":"9873ada5-628e-4b25-b739-4478cbe17296","Type":"ContainerDied","Data":"03078ae198c32837e1314238531f4e1b4ba354a1768bb3f9c6c3700512d7bdc0"}
Jan 21 11:12:15 crc kubenswrapper[4881]: I0121 11:12:15.589317 4881 scope.go:117] "RemoveContainer" containerID="ad72f3f3b967cd853e49b723fe72a51bb988f34ee4eb1f5a8162feb15abaf823"
Jan 21 11:12:15 crc kubenswrapper[4881]: I0121 11:12:15.594004 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g5nz8" event={"ID":"c6e4d311-b0fc-4051-a125-a6bf330b7f8a","Type":"ContainerStarted","Data":"4e6577a7360cb44c2d5f3b476fb5769c5dfcb1d89663a17a2c75099c7b82351e"}
Jan 21 11:12:15 crc kubenswrapper[4881]: I0121 11:12:15.613072 4881 scope.go:117] "RemoveContainer" containerID="4449fd6347c9d97dfe10f3a25b7f401eba4d4ff908fbcb0731b3a3e709b1d7fd"
Jan 21 11:12:15 crc kubenswrapper[4881]: I0121 11:12:15.613543 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-hd2w7"]
Jan 21 11:12:15 crc kubenswrapper[4881]: I0121 11:12:15.621473 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-hd2w7"]
Jan 21 11:12:15 crc kubenswrapper[4881]: I0121 11:12:15.637621 4881 scope.go:117] "RemoveContainer" containerID="62a4996a49fa7e70025c2e6c3982db1575edae9d0df4fbdfdba74d92ed4e5ed6"
Jan 21 11:12:16 crc kubenswrapper[4881]: I0121 11:12:16.602772 4881 generic.go:334] "Generic (PLEG): container finished" podID="c6e4d311-b0fc-4051-a125-a6bf330b7f8a" containerID="4e6577a7360cb44c2d5f3b476fb5769c5dfcb1d89663a17a2c75099c7b82351e" exitCode=0
Jan 21 11:12:16 crc kubenswrapper[4881]: I0121 11:12:16.603552 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g5nz8" event={"ID":"c6e4d311-b0fc-4051-a125-a6bf330b7f8a","Type":"ContainerDied","Data":"4e6577a7360cb44c2d5f3b476fb5769c5dfcb1d89663a17a2c75099c7b82351e"}
Jan 21 11:12:17 crc kubenswrapper[4881]: I0121 11:12:17.318892 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9873ada5-628e-4b25-b739-4478cbe17296" path="/var/lib/kubelet/pods/9873ada5-628e-4b25-b739-4478cbe17296/volumes"
Jan 21 11:12:17 crc kubenswrapper[4881]: I0121 11:12:17.613191 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g5nz8" event={"ID":"c6e4d311-b0fc-4051-a125-a6bf330b7f8a","Type":"ContainerStarted","Data":"d310d8932ee76007b52918c753c9b1348a7685ba6e304db302f41aad72fcd953"}
Jan 21 11:12:17 crc kubenswrapper[4881]: I0121 11:12:17.654019 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-g5nz8" podStartSLOduration=3.071560317 podStartE2EDuration="5.653992396s" podCreationTimestamp="2026-01-21 11:12:12 +0000 UTC" firstStartedPulling="2026-01-21 11:12:14.567803422 +0000 UTC m=+921.827759891" lastFinishedPulling="2026-01-21 11:12:17.150235511 +0000 UTC m=+924.410191970" observedRunningTime="2026-01-21 11:12:17.649751022 +0000 UTC m=+924.909707491" watchObservedRunningTime="2026-01-21 11:12:17.653992396 +0000 UTC m=+924.913948865"
pod="openshift-marketplace/community-operators-g5nz8" Jan 21 11:12:23 crc kubenswrapper[4881]: I0121 11:12:23.198706 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-g5nz8" Jan 21 11:12:23 crc kubenswrapper[4881]: I0121 11:12:23.267678 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-g5nz8" Jan 21 11:12:23 crc kubenswrapper[4881]: I0121 11:12:23.583364 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-58bd8f8bd-8k4c9"] Jan 21 11:12:23 crc kubenswrapper[4881]: E0121 11:12:23.583617 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9873ada5-628e-4b25-b739-4478cbe17296" containerName="extract-content" Jan 21 11:12:23 crc kubenswrapper[4881]: I0121 11:12:23.583631 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="9873ada5-628e-4b25-b739-4478cbe17296" containerName="extract-content" Jan 21 11:12:23 crc kubenswrapper[4881]: E0121 11:12:23.583643 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9873ada5-628e-4b25-b739-4478cbe17296" containerName="registry-server" Jan 21 11:12:23 crc kubenswrapper[4881]: I0121 11:12:23.583649 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="9873ada5-628e-4b25-b739-4478cbe17296" containerName="registry-server" Jan 21 11:12:23 crc kubenswrapper[4881]: E0121 11:12:23.583661 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c9dc897-764d-4f6c-ade8-99d7aa2d8d60" containerName="pull" Jan 21 11:12:23 crc kubenswrapper[4881]: I0121 11:12:23.583667 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c9dc897-764d-4f6c-ade8-99d7aa2d8d60" containerName="pull" Jan 21 11:12:23 crc kubenswrapper[4881]: E0121 11:12:23.583686 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c9dc897-764d-4f6c-ade8-99d7aa2d8d60" containerName="util" Jan 21 11:12:23 crc kubenswrapper[4881]: I0121 11:12:23.583695 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c9dc897-764d-4f6c-ade8-99d7aa2d8d60" containerName="util" Jan 21 11:12:23 crc kubenswrapper[4881]: E0121 11:12:23.583705 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c9dc897-764d-4f6c-ade8-99d7aa2d8d60" containerName="extract" Jan 21 11:12:23 crc kubenswrapper[4881]: I0121 11:12:23.583711 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c9dc897-764d-4f6c-ade8-99d7aa2d8d60" containerName="extract" Jan 21 11:12:23 crc kubenswrapper[4881]: E0121 11:12:23.583750 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9873ada5-628e-4b25-b739-4478cbe17296" containerName="extract-utilities" Jan 21 11:12:23 crc kubenswrapper[4881]: I0121 11:12:23.583757 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="9873ada5-628e-4b25-b739-4478cbe17296" containerName="extract-utilities" Jan 21 11:12:23 crc kubenswrapper[4881]: I0121 11:12:23.583876 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="9873ada5-628e-4b25-b739-4478cbe17296" containerName="registry-server" Jan 21 11:12:23 crc kubenswrapper[4881]: I0121 11:12:23.583889 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c9dc897-764d-4f6c-ade8-99d7aa2d8d60" containerName="extract" Jan 21 11:12:23 crc kubenswrapper[4881]: I0121 11:12:23.584323 4881 util.go:30] "No sandbox for pod can be found. 
Jan 21 11:12:23 crc kubenswrapper[4881]: I0121 11:12:23.587024 4881 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert"
Jan 21 11:12:23 crc kubenswrapper[4881]: I0121 11:12:23.592635 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt"
Jan 21 11:12:23 crc kubenswrapper[4881]: I0121 11:12:23.593003 4881 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert"
Jan 21 11:12:23 crc kubenswrapper[4881]: I0121 11:12:23.593278 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt"
Jan 21 11:12:23 crc kubenswrapper[4881]: I0121 11:12:23.593445 4881 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-gkkls"
Jan 21 11:12:23 crc kubenswrapper[4881]: I0121 11:12:23.619268 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-58bd8f8bd-8k4c9"]
Jan 21 11:12:23 crc kubenswrapper[4881]: I0121 11:12:23.628523 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f5kxv\" (UniqueName: \"kubernetes.io/projected/769e47b6-bd47-489d-9b99-4f2f0e30c4fd-kube-api-access-f5kxv\") pod \"metallb-operator-controller-manager-58bd8f8bd-8k4c9\" (UID: \"769e47b6-bd47-489d-9b99-4f2f0e30c4fd\") " pod="metallb-system/metallb-operator-controller-manager-58bd8f8bd-8k4c9"
Jan 21 11:12:23 crc kubenswrapper[4881]: I0121 11:12:23.628977 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/769e47b6-bd47-489d-9b99-4f2f0e30c4fd-apiservice-cert\") pod \"metallb-operator-controller-manager-58bd8f8bd-8k4c9\" (UID: \"769e47b6-bd47-489d-9b99-4f2f0e30c4fd\") " pod="metallb-system/metallb-operator-controller-manager-58bd8f8bd-8k4c9"
Jan 21 11:12:23 crc kubenswrapper[4881]: I0121 11:12:23.629201 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/769e47b6-bd47-489d-9b99-4f2f0e30c4fd-webhook-cert\") pod \"metallb-operator-controller-manager-58bd8f8bd-8k4c9\" (UID: \"769e47b6-bd47-489d-9b99-4f2f0e30c4fd\") " pod="metallb-system/metallb-operator-controller-manager-58bd8f8bd-8k4c9"
Jan 21 11:12:23 crc kubenswrapper[4881]: I0121 11:12:23.721804 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-g5nz8"
Jan 21 11:12:23 crc kubenswrapper[4881]: I0121 11:12:23.730516 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/769e47b6-bd47-489d-9b99-4f2f0e30c4fd-apiservice-cert\") pod \"metallb-operator-controller-manager-58bd8f8bd-8k4c9\" (UID: \"769e47b6-bd47-489d-9b99-4f2f0e30c4fd\") " pod="metallb-system/metallb-operator-controller-manager-58bd8f8bd-8k4c9"
Jan 21 11:12:23 crc kubenswrapper[4881]: I0121 11:12:23.730619 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/769e47b6-bd47-489d-9b99-4f2f0e30c4fd-webhook-cert\") pod \"metallb-operator-controller-manager-58bd8f8bd-8k4c9\" (UID: \"769e47b6-bd47-489d-9b99-4f2f0e30c4fd\") " pod="metallb-system/metallb-operator-controller-manager-58bd8f8bd-8k4c9"
\"769e47b6-bd47-489d-9b99-4f2f0e30c4fd\") " pod="metallb-system/metallb-operator-controller-manager-58bd8f8bd-8k4c9" Jan 21 11:12:23 crc kubenswrapper[4881]: I0121 11:12:23.730654 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f5kxv\" (UniqueName: \"kubernetes.io/projected/769e47b6-bd47-489d-9b99-4f2f0e30c4fd-kube-api-access-f5kxv\") pod \"metallb-operator-controller-manager-58bd8f8bd-8k4c9\" (UID: \"769e47b6-bd47-489d-9b99-4f2f0e30c4fd\") " pod="metallb-system/metallb-operator-controller-manager-58bd8f8bd-8k4c9" Jan 21 11:12:23 crc kubenswrapper[4881]: I0121 11:12:23.740875 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/769e47b6-bd47-489d-9b99-4f2f0e30c4fd-apiservice-cert\") pod \"metallb-operator-controller-manager-58bd8f8bd-8k4c9\" (UID: \"769e47b6-bd47-489d-9b99-4f2f0e30c4fd\") " pod="metallb-system/metallb-operator-controller-manager-58bd8f8bd-8k4c9" Jan 21 11:12:23 crc kubenswrapper[4881]: I0121 11:12:23.753839 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/769e47b6-bd47-489d-9b99-4f2f0e30c4fd-webhook-cert\") pod \"metallb-operator-controller-manager-58bd8f8bd-8k4c9\" (UID: \"769e47b6-bd47-489d-9b99-4f2f0e30c4fd\") " pod="metallb-system/metallb-operator-controller-manager-58bd8f8bd-8k4c9" Jan 21 11:12:23 crc kubenswrapper[4881]: I0121 11:12:23.759395 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f5kxv\" (UniqueName: \"kubernetes.io/projected/769e47b6-bd47-489d-9b99-4f2f0e30c4fd-kube-api-access-f5kxv\") pod \"metallb-operator-controller-manager-58bd8f8bd-8k4c9\" (UID: \"769e47b6-bd47-489d-9b99-4f2f0e30c4fd\") " pod="metallb-system/metallb-operator-controller-manager-58bd8f8bd-8k4c9" Jan 21 11:12:23 crc kubenswrapper[4881]: I0121 11:12:23.907045 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-58bd8f8bd-8k4c9" Jan 21 11:12:24 crc kubenswrapper[4881]: I0121 11:12:24.064128 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-5cd4664cfc-6lg4r"] Jan 21 11:12:24 crc kubenswrapper[4881]: I0121 11:12:24.065274 4881 util.go:30] "No sandbox for pod can be found. 
Jan 21 11:12:24 crc kubenswrapper[4881]: I0121 11:12:24.074711 4881 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert"
Jan 21 11:12:24 crc kubenswrapper[4881]: I0121 11:12:24.075152 4881 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert"
Jan 21 11:12:24 crc kubenswrapper[4881]: I0121 11:12:24.077165 4881 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-k6r4l"
Jan 21 11:12:24 crc kubenswrapper[4881]: I0121 11:12:24.078499 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-5cd4664cfc-6lg4r"]
Jan 21 11:12:24 crc kubenswrapper[4881]: I0121 11:12:24.136902 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a194c95e-cbcb-4d7e-a631-d4a14989e985-webhook-cert\") pod \"metallb-operator-webhook-server-5cd4664cfc-6lg4r\" (UID: \"a194c95e-cbcb-4d7e-a631-d4a14989e985\") " pod="metallb-system/metallb-operator-webhook-server-5cd4664cfc-6lg4r"
Jan 21 11:12:24 crc kubenswrapper[4881]: I0121 11:12:24.137019 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a194c95e-cbcb-4d7e-a631-d4a14989e985-apiservice-cert\") pod \"metallb-operator-webhook-server-5cd4664cfc-6lg4r\" (UID: \"a194c95e-cbcb-4d7e-a631-d4a14989e985\") " pod="metallb-system/metallb-operator-webhook-server-5cd4664cfc-6lg4r"
Jan 21 11:12:24 crc kubenswrapper[4881]: I0121 11:12:24.137045 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pgkn4\" (UniqueName: \"kubernetes.io/projected/a194c95e-cbcb-4d7e-a631-d4a14989e985-kube-api-access-pgkn4\") pod \"metallb-operator-webhook-server-5cd4664cfc-6lg4r\" (UID: \"a194c95e-cbcb-4d7e-a631-d4a14989e985\") " pod="metallb-system/metallb-operator-webhook-server-5cd4664cfc-6lg4r"
Jan 21 11:12:24 crc kubenswrapper[4881]: I0121 11:12:24.239246 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a194c95e-cbcb-4d7e-a631-d4a14989e985-webhook-cert\") pod \"metallb-operator-webhook-server-5cd4664cfc-6lg4r\" (UID: \"a194c95e-cbcb-4d7e-a631-d4a14989e985\") " pod="metallb-system/metallb-operator-webhook-server-5cd4664cfc-6lg4r"
Jan 21 11:12:24 crc kubenswrapper[4881]: I0121 11:12:24.239404 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a194c95e-cbcb-4d7e-a631-d4a14989e985-apiservice-cert\") pod \"metallb-operator-webhook-server-5cd4664cfc-6lg4r\" (UID: \"a194c95e-cbcb-4d7e-a631-d4a14989e985\") " pod="metallb-system/metallb-operator-webhook-server-5cd4664cfc-6lg4r"
Jan 21 11:12:24 crc kubenswrapper[4881]: I0121 11:12:24.239437 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pgkn4\" (UniqueName: \"kubernetes.io/projected/a194c95e-cbcb-4d7e-a631-d4a14989e985-kube-api-access-pgkn4\") pod \"metallb-operator-webhook-server-5cd4664cfc-6lg4r\" (UID: \"a194c95e-cbcb-4d7e-a631-d4a14989e985\") " pod="metallb-system/metallb-operator-webhook-server-5cd4664cfc-6lg4r"
Jan 21 11:12:24 crc kubenswrapper[4881]: I0121 11:12:24.246041 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a194c95e-cbcb-4d7e-a631-d4a14989e985-apiservice-cert\") pod \"metallb-operator-webhook-server-5cd4664cfc-6lg4r\" (UID: \"a194c95e-cbcb-4d7e-a631-d4a14989e985\") " pod="metallb-system/metallb-operator-webhook-server-5cd4664cfc-6lg4r"
Jan 21 11:12:24 crc kubenswrapper[4881]: I0121 11:12:24.260345 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a194c95e-cbcb-4d7e-a631-d4a14989e985-webhook-cert\") pod \"metallb-operator-webhook-server-5cd4664cfc-6lg4r\" (UID: \"a194c95e-cbcb-4d7e-a631-d4a14989e985\") " pod="metallb-system/metallb-operator-webhook-server-5cd4664cfc-6lg4r"
Jan 21 11:12:24 crc kubenswrapper[4881]: I0121 11:12:24.264633 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pgkn4\" (UniqueName: \"kubernetes.io/projected/a194c95e-cbcb-4d7e-a631-d4a14989e985-kube-api-access-pgkn4\") pod \"metallb-operator-webhook-server-5cd4664cfc-6lg4r\" (UID: \"a194c95e-cbcb-4d7e-a631-d4a14989e985\") " pod="metallb-system/metallb-operator-webhook-server-5cd4664cfc-6lg4r"
Jan 21 11:12:24 crc kubenswrapper[4881]: I0121 11:12:24.410654 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-5cd4664cfc-6lg4r"
Jan 21 11:12:24 crc kubenswrapper[4881]: I0121 11:12:24.482957 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-58bd8f8bd-8k4c9"]
Jan 21 11:12:24 crc kubenswrapper[4881]: I0121 11:12:24.672254 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-58bd8f8bd-8k4c9" event={"ID":"769e47b6-bd47-489d-9b99-4f2f0e30c4fd","Type":"ContainerStarted","Data":"20e8f1b52592529f288c94fb5f111cf2cc975c6b295bf0d52fff52c2eb16673e"}
Jan 21 11:12:24 crc kubenswrapper[4881]: I0121 11:12:24.848337 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-5cd4664cfc-6lg4r"]
Jan 21 11:12:24 crc kubenswrapper[4881]: W0121 11:12:24.865850 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda194c95e_cbcb_4d7e_a631_d4a14989e985.slice/crio-427169d05c633df7c1574e0313ec71f4482be0ad8692d2b529198fdb6de67c46 WatchSource:0}: Error finding container 427169d05c633df7c1574e0313ec71f4482be0ad8692d2b529198fdb6de67c46: Status 404 returned error can't find the container with id 427169d05c633df7c1574e0313ec71f4482be0ad8692d2b529198fdb6de67c46
Jan 21 11:12:25 crc kubenswrapper[4881]: I0121 11:12:25.682331 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-5cd4664cfc-6lg4r" event={"ID":"a194c95e-cbcb-4d7e-a631-d4a14989e985","Type":"ContainerStarted","Data":"427169d05c633df7c1574e0313ec71f4482be0ad8692d2b529198fdb6de67c46"}
Jan 21 11:12:25 crc kubenswrapper[4881]: I0121 11:12:25.872767 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-g5nz8"]
Jan 21 11:12:25 crc kubenswrapper[4881]: I0121 11:12:25.873153 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-g5nz8" podUID="c6e4d311-b0fc-4051-a125-a6bf330b7f8a" containerName="registry-server" containerID="cri-o://d310d8932ee76007b52918c753c9b1348a7685ba6e304db302f41aad72fcd953" gracePeriod=2
containerID="cri-o://d310d8932ee76007b52918c753c9b1348a7685ba6e304db302f41aad72fcd953" gracePeriod=2 Jan 21 11:12:26 crc kubenswrapper[4881]: I0121 11:12:26.691323 4881 generic.go:334] "Generic (PLEG): container finished" podID="c6e4d311-b0fc-4051-a125-a6bf330b7f8a" containerID="d310d8932ee76007b52918c753c9b1348a7685ba6e304db302f41aad72fcd953" exitCode=0 Jan 21 11:12:26 crc kubenswrapper[4881]: I0121 11:12:26.691395 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g5nz8" event={"ID":"c6e4d311-b0fc-4051-a125-a6bf330b7f8a","Type":"ContainerDied","Data":"d310d8932ee76007b52918c753c9b1348a7685ba6e304db302f41aad72fcd953"} Jan 21 11:12:29 crc kubenswrapper[4881]: I0121 11:12:29.054778 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-g5nz8" Jan 21 11:12:29 crc kubenswrapper[4881]: I0121 11:12:29.155289 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5w2kj\" (UniqueName: \"kubernetes.io/projected/c6e4d311-b0fc-4051-a125-a6bf330b7f8a-kube-api-access-5w2kj\") pod \"c6e4d311-b0fc-4051-a125-a6bf330b7f8a\" (UID: \"c6e4d311-b0fc-4051-a125-a6bf330b7f8a\") " Jan 21 11:12:29 crc kubenswrapper[4881]: I0121 11:12:29.155396 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c6e4d311-b0fc-4051-a125-a6bf330b7f8a-catalog-content\") pod \"c6e4d311-b0fc-4051-a125-a6bf330b7f8a\" (UID: \"c6e4d311-b0fc-4051-a125-a6bf330b7f8a\") " Jan 21 11:12:29 crc kubenswrapper[4881]: I0121 11:12:29.155443 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c6e4d311-b0fc-4051-a125-a6bf330b7f8a-utilities\") pod \"c6e4d311-b0fc-4051-a125-a6bf330b7f8a\" (UID: \"c6e4d311-b0fc-4051-a125-a6bf330b7f8a\") " Jan 21 11:12:29 crc kubenswrapper[4881]: I0121 11:12:29.156634 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c6e4d311-b0fc-4051-a125-a6bf330b7f8a-utilities" (OuterVolumeSpecName: "utilities") pod "c6e4d311-b0fc-4051-a125-a6bf330b7f8a" (UID: "c6e4d311-b0fc-4051-a125-a6bf330b7f8a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 11:12:29 crc kubenswrapper[4881]: I0121 11:12:29.176664 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c6e4d311-b0fc-4051-a125-a6bf330b7f8a-kube-api-access-5w2kj" (OuterVolumeSpecName: "kube-api-access-5w2kj") pod "c6e4d311-b0fc-4051-a125-a6bf330b7f8a" (UID: "c6e4d311-b0fc-4051-a125-a6bf330b7f8a"). InnerVolumeSpecName "kube-api-access-5w2kj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:12:29 crc kubenswrapper[4881]: I0121 11:12:29.228365 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c6e4d311-b0fc-4051-a125-a6bf330b7f8a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c6e4d311-b0fc-4051-a125-a6bf330b7f8a" (UID: "c6e4d311-b0fc-4051-a125-a6bf330b7f8a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 11:12:29 crc kubenswrapper[4881]: I0121 11:12:29.256845 4881 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c6e4d311-b0fc-4051-a125-a6bf330b7f8a-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 11:12:29 crc kubenswrapper[4881]: I0121 11:12:29.256902 4881 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c6e4d311-b0fc-4051-a125-a6bf330b7f8a-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 11:12:29 crc kubenswrapper[4881]: I0121 11:12:29.256917 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5w2kj\" (UniqueName: \"kubernetes.io/projected/c6e4d311-b0fc-4051-a125-a6bf330b7f8a-kube-api-access-5w2kj\") on node \"crc\" DevicePath \"\"" Jan 21 11:12:29 crc kubenswrapper[4881]: I0121 11:12:29.859510 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g5nz8" event={"ID":"c6e4d311-b0fc-4051-a125-a6bf330b7f8a","Type":"ContainerDied","Data":"74ed369f908f9a46be16b0bf5cbde512da30460d84d754009e5d06f649c85ef9"} Jan 21 11:12:29 crc kubenswrapper[4881]: I0121 11:12:29.859870 4881 scope.go:117] "RemoveContainer" containerID="d310d8932ee76007b52918c753c9b1348a7685ba6e304db302f41aad72fcd953" Jan 21 11:12:29 crc kubenswrapper[4881]: I0121 11:12:29.859611 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-g5nz8" Jan 21 11:12:29 crc kubenswrapper[4881]: I0121 11:12:29.890117 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-g5nz8"] Jan 21 11:12:29 crc kubenswrapper[4881]: I0121 11:12:29.903294 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-g5nz8"] Jan 21 11:12:31 crc kubenswrapper[4881]: I0121 11:12:31.320814 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c6e4d311-b0fc-4051-a125-a6bf330b7f8a" path="/var/lib/kubelet/pods/c6e4d311-b0fc-4051-a125-a6bf330b7f8a/volumes" Jan 21 11:12:31 crc kubenswrapper[4881]: I0121 11:12:31.453465 4881 scope.go:117] "RemoveContainer" containerID="4e6577a7360cb44c2d5f3b476fb5769c5dfcb1d89663a17a2c75099c7b82351e" Jan 21 11:12:31 crc kubenswrapper[4881]: I0121 11:12:31.477123 4881 scope.go:117] "RemoveContainer" containerID="df3854cc5438f398248beabb77d60eeb96def4d61790e2ebbf7c22c19efc8536" Jan 21 11:12:32 crc kubenswrapper[4881]: I0121 11:12:32.001147 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-58bd8f8bd-8k4c9" event={"ID":"769e47b6-bd47-489d-9b99-4f2f0e30c4fd","Type":"ContainerStarted","Data":"469d1a84fb7e1143a635a0f240ac0d81c15df0f6c6c64f3850c3a77fe34829fa"} Jan 21 11:12:32 crc kubenswrapper[4881]: I0121 11:12:32.001520 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-58bd8f8bd-8k4c9" Jan 21 11:12:32 crc kubenswrapper[4881]: I0121 11:12:32.002914 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-5cd4664cfc-6lg4r" event={"ID":"a194c95e-cbcb-4d7e-a631-d4a14989e985","Type":"ContainerStarted","Data":"6411a36a2b5fe0479760caffbd2a44059e4f587e831cd6f791fa64032702af1d"} Jan 21 11:12:32 crc kubenswrapper[4881]: I0121 11:12:32.003318 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="metallb-system/metallb-operator-webhook-server-5cd4664cfc-6lg4r" Jan 21 11:12:32 crc kubenswrapper[4881]: I0121 11:12:32.033279 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-58bd8f8bd-8k4c9" podStartSLOduration=2.104741474 podStartE2EDuration="9.033263944s" podCreationTimestamp="2026-01-21 11:12:23 +0000 UTC" firstStartedPulling="2026-01-21 11:12:24.527332943 +0000 UTC m=+931.787289412" lastFinishedPulling="2026-01-21 11:12:31.455855403 +0000 UTC m=+938.715811882" observedRunningTime="2026-01-21 11:12:32.027924842 +0000 UTC m=+939.287881311" watchObservedRunningTime="2026-01-21 11:12:32.033263944 +0000 UTC m=+939.293220413" Jan 21 11:12:32 crc kubenswrapper[4881]: I0121 11:12:32.057872 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-5cd4664cfc-6lg4r" podStartSLOduration=1.2838363529999999 podStartE2EDuration="8.057850206s" podCreationTimestamp="2026-01-21 11:12:24 +0000 UTC" firstStartedPulling="2026-01-21 11:12:24.869605622 +0000 UTC m=+932.129562091" lastFinishedPulling="2026-01-21 11:12:31.643619475 +0000 UTC m=+938.903575944" observedRunningTime="2026-01-21 11:12:32.055058538 +0000 UTC m=+939.315015007" watchObservedRunningTime="2026-01-21 11:12:32.057850206 +0000 UTC m=+939.317806705" Jan 21 11:12:44 crc kubenswrapper[4881]: I0121 11:12:44.420365 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-5cd4664cfc-6lg4r" Jan 21 11:13:03 crc kubenswrapper[4881]: I0121 11:13:03.278699 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-rvptz"] Jan 21 11:13:03 crc kubenswrapper[4881]: E0121 11:13:03.279514 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6e4d311-b0fc-4051-a125-a6bf330b7f8a" containerName="registry-server" Jan 21 11:13:03 crc kubenswrapper[4881]: I0121 11:13:03.279533 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6e4d311-b0fc-4051-a125-a6bf330b7f8a" containerName="registry-server" Jan 21 11:13:03 crc kubenswrapper[4881]: E0121 11:13:03.279561 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6e4d311-b0fc-4051-a125-a6bf330b7f8a" containerName="extract-utilities" Jan 21 11:13:03 crc kubenswrapper[4881]: I0121 11:13:03.279570 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6e4d311-b0fc-4051-a125-a6bf330b7f8a" containerName="extract-utilities" Jan 21 11:13:03 crc kubenswrapper[4881]: E0121 11:13:03.279578 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6e4d311-b0fc-4051-a125-a6bf330b7f8a" containerName="extract-content" Jan 21 11:13:03 crc kubenswrapper[4881]: I0121 11:13:03.279584 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6e4d311-b0fc-4051-a125-a6bf330b7f8a" containerName="extract-content" Jan 21 11:13:03 crc kubenswrapper[4881]: I0121 11:13:03.279700 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6e4d311-b0fc-4051-a125-a6bf330b7f8a" containerName="registry-server" Jan 21 11:13:03 crc kubenswrapper[4881]: I0121 11:13:03.280678 4881 util.go:30] "No sandbox for pod can be found. 
Jan 21 11:13:03 crc kubenswrapper[4881]: I0121 11:13:03.405490 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rvptz"]
Jan 21 11:13:03 crc kubenswrapper[4881]: I0121 11:13:03.455747 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/998c47dc-b621-4357-86b9-f6d08cac4799-utilities\") pod \"redhat-marketplace-rvptz\" (UID: \"998c47dc-b621-4357-86b9-f6d08cac4799\") " pod="openshift-marketplace/redhat-marketplace-rvptz"
Jan 21 11:13:03 crc kubenswrapper[4881]: I0121 11:13:03.455845 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/998c47dc-b621-4357-86b9-f6d08cac4799-catalog-content\") pod \"redhat-marketplace-rvptz\" (UID: \"998c47dc-b621-4357-86b9-f6d08cac4799\") " pod="openshift-marketplace/redhat-marketplace-rvptz"
Jan 21 11:13:03 crc kubenswrapper[4881]: I0121 11:13:03.455933 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7tqdx\" (UniqueName: \"kubernetes.io/projected/998c47dc-b621-4357-86b9-f6d08cac4799-kube-api-access-7tqdx\") pod \"redhat-marketplace-rvptz\" (UID: \"998c47dc-b621-4357-86b9-f6d08cac4799\") " pod="openshift-marketplace/redhat-marketplace-rvptz"
Jan 21 11:13:03 crc kubenswrapper[4881]: I0121 11:13:03.558030 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/998c47dc-b621-4357-86b9-f6d08cac4799-utilities\") pod \"redhat-marketplace-rvptz\" (UID: \"998c47dc-b621-4357-86b9-f6d08cac4799\") " pod="openshift-marketplace/redhat-marketplace-rvptz"
Jan 21 11:13:03 crc kubenswrapper[4881]: I0121 11:13:03.558081 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/998c47dc-b621-4357-86b9-f6d08cac4799-catalog-content\") pod \"redhat-marketplace-rvptz\" (UID: \"998c47dc-b621-4357-86b9-f6d08cac4799\") " pod="openshift-marketplace/redhat-marketplace-rvptz"
Jan 21 11:13:03 crc kubenswrapper[4881]: I0121 11:13:03.558102 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7tqdx\" (UniqueName: \"kubernetes.io/projected/998c47dc-b621-4357-86b9-f6d08cac4799-kube-api-access-7tqdx\") pod \"redhat-marketplace-rvptz\" (UID: \"998c47dc-b621-4357-86b9-f6d08cac4799\") " pod="openshift-marketplace/redhat-marketplace-rvptz"
Jan 21 11:13:03 crc kubenswrapper[4881]: I0121 11:13:03.558685 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/998c47dc-b621-4357-86b9-f6d08cac4799-utilities\") pod \"redhat-marketplace-rvptz\" (UID: \"998c47dc-b621-4357-86b9-f6d08cac4799\") " pod="openshift-marketplace/redhat-marketplace-rvptz"
Jan 21 11:13:03 crc kubenswrapper[4881]: I0121 11:13:03.558752 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/998c47dc-b621-4357-86b9-f6d08cac4799-catalog-content\") pod \"redhat-marketplace-rvptz\" (UID: \"998c47dc-b621-4357-86b9-f6d08cac4799\") " pod="openshift-marketplace/redhat-marketplace-rvptz"
Jan 21 11:13:03 crc kubenswrapper[4881]: I0121 11:13:03.586709 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7tqdx\" (UniqueName: \"kubernetes.io/projected/998c47dc-b621-4357-86b9-f6d08cac4799-kube-api-access-7tqdx\") pod \"redhat-marketplace-rvptz\" (UID: \"998c47dc-b621-4357-86b9-f6d08cac4799\") " pod="openshift-marketplace/redhat-marketplace-rvptz"
succeeded for volume \"kube-api-access-7tqdx\" (UniqueName: \"kubernetes.io/projected/998c47dc-b621-4357-86b9-f6d08cac4799-kube-api-access-7tqdx\") pod \"redhat-marketplace-rvptz\" (UID: \"998c47dc-b621-4357-86b9-f6d08cac4799\") " pod="openshift-marketplace/redhat-marketplace-rvptz" Jan 21 11:13:03 crc kubenswrapper[4881]: I0121 11:13:03.700626 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rvptz" Jan 21 11:13:03 crc kubenswrapper[4881]: I0121 11:13:03.910373 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-58bd8f8bd-8k4c9" Jan 21 11:13:04 crc kubenswrapper[4881]: I0121 11:13:04.471035 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rvptz"] Jan 21 11:13:04 crc kubenswrapper[4881]: I0121 11:13:04.680877 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rvptz" event={"ID":"998c47dc-b621-4357-86b9-f6d08cac4799","Type":"ContainerStarted","Data":"185d2460e59c873ad3336643088425b69bafc3d60d4435c226adb50269ff2c1b"} Jan 21 11:13:04 crc kubenswrapper[4881]: I0121 11:13:04.888384 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-lm54h"] Jan 21 11:13:04 crc kubenswrapper[4881]: I0121 11:13:04.891405 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-lm54h" Jan 21 11:13:04 crc kubenswrapper[4881]: I0121 11:13:04.894043 4881 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Jan 21 11:13:04 crc kubenswrapper[4881]: I0121 11:13:04.894455 4881 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-8pmrf" Jan 21 11:13:04 crc kubenswrapper[4881]: I0121 11:13:04.895029 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Jan 21 11:13:04 crc kubenswrapper[4881]: I0121 11:13:04.900659 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-tzxpk"] Jan 21 11:13:04 crc kubenswrapper[4881]: I0121 11:13:04.901885 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-tzxpk" Jan 21 11:13:04 crc kubenswrapper[4881]: I0121 11:13:04.903597 4881 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Jan 21 11:13:04 crc kubenswrapper[4881]: I0121 11:13:04.929862 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-tzxpk"] Jan 21 11:13:05 crc kubenswrapper[4881]: I0121 11:13:05.004729 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-697j4"] Jan 21 11:13:05 crc kubenswrapper[4881]: I0121 11:13:05.006255 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-697j4" Jan 21 11:13:05 crc kubenswrapper[4881]: I0121 11:13:05.022804 4881 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Jan 21 11:13:05 crc kubenswrapper[4881]: I0121 11:13:05.022830 4881 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Jan 21 11:13:05 crc kubenswrapper[4881]: I0121 11:13:05.023062 4881 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-7hvdd" Jan 21 11:13:05 crc kubenswrapper[4881]: I0121 11:13:05.024362 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Jan 21 11:13:05 crc kubenswrapper[4881]: I0121 11:13:05.035458 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-6968d8fdc4-dmwlt"] Jan 21 11:13:05 crc kubenswrapper[4881]: I0121 11:13:05.036434 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-6968d8fdc4-dmwlt" Jan 21 11:13:05 crc kubenswrapper[4881]: I0121 11:13:05.037772 4881 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Jan 21 11:13:05 crc kubenswrapper[4881]: I0121 11:13:05.059341 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-dmwlt"] Jan 21 11:13:05 crc kubenswrapper[4881]: I0121 11:13:05.080517 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/d055f37b-fab0-4fd0-b683-4a7974b21ad5-metrics\") pod \"frr-k8s-lm54h\" (UID: \"d055f37b-fab0-4fd0-b683-4a7974b21ad5\") " pod="metallb-system/frr-k8s-lm54h" Jan 21 11:13:05 crc kubenswrapper[4881]: I0121 11:13:05.080586 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/d055f37b-fab0-4fd0-b683-4a7974b21ad5-frr-startup\") pod \"frr-k8s-lm54h\" (UID: \"d055f37b-fab0-4fd0-b683-4a7974b21ad5\") " pod="metallb-system/frr-k8s-lm54h" Jan 21 11:13:05 crc kubenswrapper[4881]: I0121 11:13:05.080623 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hvjk8\" (UniqueName: \"kubernetes.io/projected/d055f37b-fab0-4fd0-b683-4a7974b21ad5-kube-api-access-hvjk8\") pod \"frr-k8s-lm54h\" (UID: \"d055f37b-fab0-4fd0-b683-4a7974b21ad5\") " pod="metallb-system/frr-k8s-lm54h" Jan 21 11:13:05 crc kubenswrapper[4881]: I0121 11:13:05.080652 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/eaaea696-21d8-4963-8364-82fa7bbb0e19-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-tzxpk\" (UID: \"eaaea696-21d8-4963-8364-82fa7bbb0e19\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-tzxpk" Jan 21 11:13:05 crc kubenswrapper[4881]: I0121 11:13:05.080677 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/d055f37b-fab0-4fd0-b683-4a7974b21ad5-reloader\") pod \"frr-k8s-lm54h\" (UID: \"d055f37b-fab0-4fd0-b683-4a7974b21ad5\") " pod="metallb-system/frr-k8s-lm54h" Jan 21 11:13:05 crc kubenswrapper[4881]: I0121 11:13:05.080841 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: 
\"kubernetes.io/empty-dir/d055f37b-fab0-4fd0-b683-4a7974b21ad5-frr-sockets\") pod \"frr-k8s-lm54h\" (UID: \"d055f37b-fab0-4fd0-b683-4a7974b21ad5\") " pod="metallb-system/frr-k8s-lm54h" Jan 21 11:13:05 crc kubenswrapper[4881]: I0121 11:13:05.080893 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jtlk4\" (UniqueName: \"kubernetes.io/projected/eaaea696-21d8-4963-8364-82fa7bbb0e19-kube-api-access-jtlk4\") pod \"frr-k8s-webhook-server-7df86c4f6c-tzxpk\" (UID: \"eaaea696-21d8-4963-8364-82fa7bbb0e19\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-tzxpk" Jan 21 11:13:05 crc kubenswrapper[4881]: I0121 11:13:05.080957 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d055f37b-fab0-4fd0-b683-4a7974b21ad5-metrics-certs\") pod \"frr-k8s-lm54h\" (UID: \"d055f37b-fab0-4fd0-b683-4a7974b21ad5\") " pod="metallb-system/frr-k8s-lm54h" Jan 21 11:13:05 crc kubenswrapper[4881]: I0121 11:13:05.080989 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/d055f37b-fab0-4fd0-b683-4a7974b21ad5-frr-conf\") pod \"frr-k8s-lm54h\" (UID: \"d055f37b-fab0-4fd0-b683-4a7974b21ad5\") " pod="metallb-system/frr-k8s-lm54h" Jan 21 11:13:05 crc kubenswrapper[4881]: I0121 11:13:05.183706 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/f265a6e2-ea90-45ea-89c0-178d25243784-metallb-excludel2\") pod \"speaker-697j4\" (UID: \"f265a6e2-ea90-45ea-89c0-178d25243784\") " pod="metallb-system/speaker-697j4" Jan 21 11:13:05 crc kubenswrapper[4881]: I0121 11:13:05.183994 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c4a109b4-26ee-4a46-9333-989cf87c0ff7-cert\") pod \"controller-6968d8fdc4-dmwlt\" (UID: \"c4a109b4-26ee-4a46-9333-989cf87c0ff7\") " pod="metallb-system/controller-6968d8fdc4-dmwlt" Jan 21 11:13:05 crc kubenswrapper[4881]: I0121 11:13:05.184041 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c4a109b4-26ee-4a46-9333-989cf87c0ff7-metrics-certs\") pod \"controller-6968d8fdc4-dmwlt\" (UID: \"c4a109b4-26ee-4a46-9333-989cf87c0ff7\") " pod="metallb-system/controller-6968d8fdc4-dmwlt" Jan 21 11:13:05 crc kubenswrapper[4881]: I0121 11:13:05.184087 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wm47n\" (UniqueName: \"kubernetes.io/projected/f265a6e2-ea90-45ea-89c0-178d25243784-kube-api-access-wm47n\") pod \"speaker-697j4\" (UID: \"f265a6e2-ea90-45ea-89c0-178d25243784\") " pod="metallb-system/speaker-697j4" Jan 21 11:13:05 crc kubenswrapper[4881]: I0121 11:13:05.184110 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/f265a6e2-ea90-45ea-89c0-178d25243784-memberlist\") pod \"speaker-697j4\" (UID: \"f265a6e2-ea90-45ea-89c0-178d25243784\") " pod="metallb-system/speaker-697j4" Jan 21 11:13:05 crc kubenswrapper[4881]: I0121 11:13:05.184208 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: 
\"kubernetes.io/empty-dir/d055f37b-fab0-4fd0-b683-4a7974b21ad5-metrics\") pod \"frr-k8s-lm54h\" (UID: \"d055f37b-fab0-4fd0-b683-4a7974b21ad5\") " pod="metallb-system/frr-k8s-lm54h" Jan 21 11:13:05 crc kubenswrapper[4881]: I0121 11:13:05.184244 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/d055f37b-fab0-4fd0-b683-4a7974b21ad5-frr-startup\") pod \"frr-k8s-lm54h\" (UID: \"d055f37b-fab0-4fd0-b683-4a7974b21ad5\") " pod="metallb-system/frr-k8s-lm54h" Jan 21 11:13:05 crc kubenswrapper[4881]: I0121 11:13:05.184272 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hvjk8\" (UniqueName: \"kubernetes.io/projected/d055f37b-fab0-4fd0-b683-4a7974b21ad5-kube-api-access-hvjk8\") pod \"frr-k8s-lm54h\" (UID: \"d055f37b-fab0-4fd0-b683-4a7974b21ad5\") " pod="metallb-system/frr-k8s-lm54h" Jan 21 11:13:05 crc kubenswrapper[4881]: I0121 11:13:05.184289 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f265a6e2-ea90-45ea-89c0-178d25243784-metrics-certs\") pod \"speaker-697j4\" (UID: \"f265a6e2-ea90-45ea-89c0-178d25243784\") " pod="metallb-system/speaker-697j4" Jan 21 11:13:05 crc kubenswrapper[4881]: I0121 11:13:05.184321 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/eaaea696-21d8-4963-8364-82fa7bbb0e19-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-tzxpk\" (UID: \"eaaea696-21d8-4963-8364-82fa7bbb0e19\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-tzxpk" Jan 21 11:13:05 crc kubenswrapper[4881]: I0121 11:13:05.184343 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/d055f37b-fab0-4fd0-b683-4a7974b21ad5-reloader\") pod \"frr-k8s-lm54h\" (UID: \"d055f37b-fab0-4fd0-b683-4a7974b21ad5\") " pod="metallb-system/frr-k8s-lm54h" Jan 21 11:13:05 crc kubenswrapper[4881]: I0121 11:13:05.184376 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/d055f37b-fab0-4fd0-b683-4a7974b21ad5-frr-sockets\") pod \"frr-k8s-lm54h\" (UID: \"d055f37b-fab0-4fd0-b683-4a7974b21ad5\") " pod="metallb-system/frr-k8s-lm54h" Jan 21 11:13:05 crc kubenswrapper[4881]: I0121 11:13:05.184420 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jtlk4\" (UniqueName: \"kubernetes.io/projected/eaaea696-21d8-4963-8364-82fa7bbb0e19-kube-api-access-jtlk4\") pod \"frr-k8s-webhook-server-7df86c4f6c-tzxpk\" (UID: \"eaaea696-21d8-4963-8364-82fa7bbb0e19\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-tzxpk" Jan 21 11:13:05 crc kubenswrapper[4881]: I0121 11:13:05.184456 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b6d7k\" (UniqueName: \"kubernetes.io/projected/c4a109b4-26ee-4a46-9333-989cf87c0ff7-kube-api-access-b6d7k\") pod \"controller-6968d8fdc4-dmwlt\" (UID: \"c4a109b4-26ee-4a46-9333-989cf87c0ff7\") " pod="metallb-system/controller-6968d8fdc4-dmwlt" Jan 21 11:13:05 crc kubenswrapper[4881]: I0121 11:13:05.184473 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d055f37b-fab0-4fd0-b683-4a7974b21ad5-metrics-certs\") pod \"frr-k8s-lm54h\" (UID: 
\"d055f37b-fab0-4fd0-b683-4a7974b21ad5\") " pod="metallb-system/frr-k8s-lm54h" Jan 21 11:13:05 crc kubenswrapper[4881]: I0121 11:13:05.184501 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/d055f37b-fab0-4fd0-b683-4a7974b21ad5-frr-conf\") pod \"frr-k8s-lm54h\" (UID: \"d055f37b-fab0-4fd0-b683-4a7974b21ad5\") " pod="metallb-system/frr-k8s-lm54h" Jan 21 11:13:05 crc kubenswrapper[4881]: I0121 11:13:05.185001 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/d055f37b-fab0-4fd0-b683-4a7974b21ad5-frr-conf\") pod \"frr-k8s-lm54h\" (UID: \"d055f37b-fab0-4fd0-b683-4a7974b21ad5\") " pod="metallb-system/frr-k8s-lm54h" Jan 21 11:13:05 crc kubenswrapper[4881]: I0121 11:13:05.185253 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/d055f37b-fab0-4fd0-b683-4a7974b21ad5-metrics\") pod \"frr-k8s-lm54h\" (UID: \"d055f37b-fab0-4fd0-b683-4a7974b21ad5\") " pod="metallb-system/frr-k8s-lm54h" Jan 21 11:13:05 crc kubenswrapper[4881]: E0121 11:13:05.185363 4881 secret.go:188] Couldn't get secret metallb-system/frr-k8s-certs-secret: secret "frr-k8s-certs-secret" not found Jan 21 11:13:05 crc kubenswrapper[4881]: E0121 11:13:05.185439 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d055f37b-fab0-4fd0-b683-4a7974b21ad5-metrics-certs podName:d055f37b-fab0-4fd0-b683-4a7974b21ad5 nodeName:}" failed. No retries permitted until 2026-01-21 11:13:05.685397302 +0000 UTC m=+972.945353771 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d055f37b-fab0-4fd0-b683-4a7974b21ad5-metrics-certs") pod "frr-k8s-lm54h" (UID: "d055f37b-fab0-4fd0-b683-4a7974b21ad5") : secret "frr-k8s-certs-secret" not found Jan 21 11:13:05 crc kubenswrapper[4881]: E0121 11:13:05.185464 4881 secret.go:188] Couldn't get secret metallb-system/frr-k8s-webhook-server-cert: secret "frr-k8s-webhook-server-cert" not found Jan 21 11:13:05 crc kubenswrapper[4881]: I0121 11:13:05.185492 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/d055f37b-fab0-4fd0-b683-4a7974b21ad5-reloader\") pod \"frr-k8s-lm54h\" (UID: \"d055f37b-fab0-4fd0-b683-4a7974b21ad5\") " pod="metallb-system/frr-k8s-lm54h" Jan 21 11:13:05 crc kubenswrapper[4881]: E0121 11:13:05.185559 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/eaaea696-21d8-4963-8364-82fa7bbb0e19-cert podName:eaaea696-21d8-4963-8364-82fa7bbb0e19 nodeName:}" failed. No retries permitted until 2026-01-21 11:13:05.685536235 +0000 UTC m=+972.945492704 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/eaaea696-21d8-4963-8364-82fa7bbb0e19-cert") pod "frr-k8s-webhook-server-7df86c4f6c-tzxpk" (UID: "eaaea696-21d8-4963-8364-82fa7bbb0e19") : secret "frr-k8s-webhook-server-cert" not found Jan 21 11:13:05 crc kubenswrapper[4881]: I0121 11:13:05.185662 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/d055f37b-fab0-4fd0-b683-4a7974b21ad5-frr-sockets\") pod \"frr-k8s-lm54h\" (UID: \"d055f37b-fab0-4fd0-b683-4a7974b21ad5\") " pod="metallb-system/frr-k8s-lm54h" Jan 21 11:13:05 crc kubenswrapper[4881]: I0121 11:13:05.186403 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/d055f37b-fab0-4fd0-b683-4a7974b21ad5-frr-startup\") pod \"frr-k8s-lm54h\" (UID: \"d055f37b-fab0-4fd0-b683-4a7974b21ad5\") " pod="metallb-system/frr-k8s-lm54h" Jan 21 11:13:05 crc kubenswrapper[4881]: I0121 11:13:05.212497 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hvjk8\" (UniqueName: \"kubernetes.io/projected/d055f37b-fab0-4fd0-b683-4a7974b21ad5-kube-api-access-hvjk8\") pod \"frr-k8s-lm54h\" (UID: \"d055f37b-fab0-4fd0-b683-4a7974b21ad5\") " pod="metallb-system/frr-k8s-lm54h" Jan 21 11:13:05 crc kubenswrapper[4881]: I0121 11:13:05.213462 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jtlk4\" (UniqueName: \"kubernetes.io/projected/eaaea696-21d8-4963-8364-82fa7bbb0e19-kube-api-access-jtlk4\") pod \"frr-k8s-webhook-server-7df86c4f6c-tzxpk\" (UID: \"eaaea696-21d8-4963-8364-82fa7bbb0e19\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-tzxpk" Jan 21 11:13:05 crc kubenswrapper[4881]: I0121 11:13:05.285746 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b6d7k\" (UniqueName: \"kubernetes.io/projected/c4a109b4-26ee-4a46-9333-989cf87c0ff7-kube-api-access-b6d7k\") pod \"controller-6968d8fdc4-dmwlt\" (UID: \"c4a109b4-26ee-4a46-9333-989cf87c0ff7\") " pod="metallb-system/controller-6968d8fdc4-dmwlt" Jan 21 11:13:05 crc kubenswrapper[4881]: I0121 11:13:05.285902 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/f265a6e2-ea90-45ea-89c0-178d25243784-metallb-excludel2\") pod \"speaker-697j4\" (UID: \"f265a6e2-ea90-45ea-89c0-178d25243784\") " pod="metallb-system/speaker-697j4" Jan 21 11:13:05 crc kubenswrapper[4881]: I0121 11:13:05.285930 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c4a109b4-26ee-4a46-9333-989cf87c0ff7-cert\") pod \"controller-6968d8fdc4-dmwlt\" (UID: \"c4a109b4-26ee-4a46-9333-989cf87c0ff7\") " pod="metallb-system/controller-6968d8fdc4-dmwlt" Jan 21 11:13:05 crc kubenswrapper[4881]: I0121 11:13:05.285952 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c4a109b4-26ee-4a46-9333-989cf87c0ff7-metrics-certs\") pod \"controller-6968d8fdc4-dmwlt\" (UID: \"c4a109b4-26ee-4a46-9333-989cf87c0ff7\") " pod="metallb-system/controller-6968d8fdc4-dmwlt" Jan 21 11:13:05 crc kubenswrapper[4881]: I0121 11:13:05.285977 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wm47n\" (UniqueName: 
\"kubernetes.io/projected/f265a6e2-ea90-45ea-89c0-178d25243784-kube-api-access-wm47n\") pod \"speaker-697j4\" (UID: \"f265a6e2-ea90-45ea-89c0-178d25243784\") " pod="metallb-system/speaker-697j4" Jan 21 11:13:05 crc kubenswrapper[4881]: I0121 11:13:05.285995 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/f265a6e2-ea90-45ea-89c0-178d25243784-memberlist\") pod \"speaker-697j4\" (UID: \"f265a6e2-ea90-45ea-89c0-178d25243784\") " pod="metallb-system/speaker-697j4" Jan 21 11:13:05 crc kubenswrapper[4881]: I0121 11:13:05.286036 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f265a6e2-ea90-45ea-89c0-178d25243784-metrics-certs\") pod \"speaker-697j4\" (UID: \"f265a6e2-ea90-45ea-89c0-178d25243784\") " pod="metallb-system/speaker-697j4" Jan 21 11:13:05 crc kubenswrapper[4881]: E0121 11:13:05.286340 4881 secret.go:188] Couldn't get secret metallb-system/controller-certs-secret: secret "controller-certs-secret" not found Jan 21 11:13:05 crc kubenswrapper[4881]: E0121 11:13:05.286409 4881 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 21 11:13:05 crc kubenswrapper[4881]: E0121 11:13:05.286416 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c4a109b4-26ee-4a46-9333-989cf87c0ff7-metrics-certs podName:c4a109b4-26ee-4a46-9333-989cf87c0ff7 nodeName:}" failed. No retries permitted until 2026-01-21 11:13:05.786399307 +0000 UTC m=+973.046355776 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c4a109b4-26ee-4a46-9333-989cf87c0ff7-metrics-certs") pod "controller-6968d8fdc4-dmwlt" (UID: "c4a109b4-26ee-4a46-9333-989cf87c0ff7") : secret "controller-certs-secret" not found Jan 21 11:13:05 crc kubenswrapper[4881]: E0121 11:13:05.286497 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f265a6e2-ea90-45ea-89c0-178d25243784-memberlist podName:f265a6e2-ea90-45ea-89c0-178d25243784 nodeName:}" failed. No retries permitted until 2026-01-21 11:13:05.786479189 +0000 UTC m=+973.046435848 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/f265a6e2-ea90-45ea-89c0-178d25243784-memberlist") pod "speaker-697j4" (UID: "f265a6e2-ea90-45ea-89c0-178d25243784") : secret "metallb-memberlist" not found Jan 21 11:13:05 crc kubenswrapper[4881]: I0121 11:13:05.286949 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/f265a6e2-ea90-45ea-89c0-178d25243784-metallb-excludel2\") pod \"speaker-697j4\" (UID: \"f265a6e2-ea90-45ea-89c0-178d25243784\") " pod="metallb-system/speaker-697j4" Jan 21 11:13:05 crc kubenswrapper[4881]: I0121 11:13:05.290700 4881 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Jan 21 11:13:05 crc kubenswrapper[4881]: I0121 11:13:05.291968 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f265a6e2-ea90-45ea-89c0-178d25243784-metrics-certs\") pod \"speaker-697j4\" (UID: \"f265a6e2-ea90-45ea-89c0-178d25243784\") " pod="metallb-system/speaker-697j4" Jan 21 11:13:05 crc kubenswrapper[4881]: I0121 11:13:05.300485 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c4a109b4-26ee-4a46-9333-989cf87c0ff7-cert\") pod \"controller-6968d8fdc4-dmwlt\" (UID: \"c4a109b4-26ee-4a46-9333-989cf87c0ff7\") " pod="metallb-system/controller-6968d8fdc4-dmwlt" Jan 21 11:13:05 crc kubenswrapper[4881]: I0121 11:13:05.309238 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wm47n\" (UniqueName: \"kubernetes.io/projected/f265a6e2-ea90-45ea-89c0-178d25243784-kube-api-access-wm47n\") pod \"speaker-697j4\" (UID: \"f265a6e2-ea90-45ea-89c0-178d25243784\") " pod="metallb-system/speaker-697j4" Jan 21 11:13:05 crc kubenswrapper[4881]: I0121 11:13:05.309432 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b6d7k\" (UniqueName: \"kubernetes.io/projected/c4a109b4-26ee-4a46-9333-989cf87c0ff7-kube-api-access-b6d7k\") pod \"controller-6968d8fdc4-dmwlt\" (UID: \"c4a109b4-26ee-4a46-9333-989cf87c0ff7\") " pod="metallb-system/controller-6968d8fdc4-dmwlt" Jan 21 11:13:05 crc kubenswrapper[4881]: I0121 11:13:05.689494 4881 generic.go:334] "Generic (PLEG): container finished" podID="998c47dc-b621-4357-86b9-f6d08cac4799" containerID="00ecec7a68182aee750726e487cfdfc0600f11f9060a5afa0e042e40441982a2" exitCode=0 Jan 21 11:13:05 crc kubenswrapper[4881]: I0121 11:13:05.689565 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rvptz" event={"ID":"998c47dc-b621-4357-86b9-f6d08cac4799","Type":"ContainerDied","Data":"00ecec7a68182aee750726e487cfdfc0600f11f9060a5afa0e042e40441982a2"} Jan 21 11:13:05 crc kubenswrapper[4881]: I0121 11:13:05.692340 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/eaaea696-21d8-4963-8364-82fa7bbb0e19-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-tzxpk\" (UID: \"eaaea696-21d8-4963-8364-82fa7bbb0e19\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-tzxpk" Jan 21 11:13:05 crc kubenswrapper[4881]: I0121 11:13:05.692471 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d055f37b-fab0-4fd0-b683-4a7974b21ad5-metrics-certs\") pod \"frr-k8s-lm54h\" (UID: \"d055f37b-fab0-4fd0-b683-4a7974b21ad5\") " 
pod="metallb-system/frr-k8s-lm54h" Jan 21 11:13:05 crc kubenswrapper[4881]: I0121 11:13:05.697809 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/eaaea696-21d8-4963-8364-82fa7bbb0e19-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-tzxpk\" (UID: \"eaaea696-21d8-4963-8364-82fa7bbb0e19\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-tzxpk" Jan 21 11:13:05 crc kubenswrapper[4881]: I0121 11:13:05.697870 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d055f37b-fab0-4fd0-b683-4a7974b21ad5-metrics-certs\") pod \"frr-k8s-lm54h\" (UID: \"d055f37b-fab0-4fd0-b683-4a7974b21ad5\") " pod="metallb-system/frr-k8s-lm54h" Jan 21 11:13:05 crc kubenswrapper[4881]: I0121 11:13:05.793652 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c4a109b4-26ee-4a46-9333-989cf87c0ff7-metrics-certs\") pod \"controller-6968d8fdc4-dmwlt\" (UID: \"c4a109b4-26ee-4a46-9333-989cf87c0ff7\") " pod="metallb-system/controller-6968d8fdc4-dmwlt" Jan 21 11:13:05 crc kubenswrapper[4881]: I0121 11:13:05.793727 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/f265a6e2-ea90-45ea-89c0-178d25243784-memberlist\") pod \"speaker-697j4\" (UID: \"f265a6e2-ea90-45ea-89c0-178d25243784\") " pod="metallb-system/speaker-697j4" Jan 21 11:13:05 crc kubenswrapper[4881]: E0121 11:13:05.794113 4881 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 21 11:13:05 crc kubenswrapper[4881]: E0121 11:13:05.794292 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f265a6e2-ea90-45ea-89c0-178d25243784-memberlist podName:f265a6e2-ea90-45ea-89c0-178d25243784 nodeName:}" failed. No retries permitted until 2026-01-21 11:13:06.794257503 +0000 UTC m=+974.054213972 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/f265a6e2-ea90-45ea-89c0-178d25243784-memberlist") pod "speaker-697j4" (UID: "f265a6e2-ea90-45ea-89c0-178d25243784") : secret "metallb-memberlist" not found Jan 21 11:13:05 crc kubenswrapper[4881]: I0121 11:13:05.797461 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c4a109b4-26ee-4a46-9333-989cf87c0ff7-metrics-certs\") pod \"controller-6968d8fdc4-dmwlt\" (UID: \"c4a109b4-26ee-4a46-9333-989cf87c0ff7\") " pod="metallb-system/controller-6968d8fdc4-dmwlt" Jan 21 11:13:05 crc kubenswrapper[4881]: I0121 11:13:05.811611 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-lm54h" Jan 21 11:13:05 crc kubenswrapper[4881]: I0121 11:13:05.830380 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-tzxpk" Jan 21 11:13:05 crc kubenswrapper[4881]: I0121 11:13:05.957898 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-6968d8fdc4-dmwlt" Jan 21 11:13:06 crc kubenswrapper[4881]: I0121 11:13:06.160506 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-tzxpk"] Jan 21 11:13:06 crc kubenswrapper[4881]: I0121 11:13:06.416418 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-dmwlt"] Jan 21 11:13:06 crc kubenswrapper[4881]: W0121 11:13:06.430428 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc4a109b4_26ee_4a46_9333_989cf87c0ff7.slice/crio-24f6f21f9b2e4dd8131b07c5470a9e16b9dfebe17a0d82d12012117bced5092e WatchSource:0}: Error finding container 24f6f21f9b2e4dd8131b07c5470a9e16b9dfebe17a0d82d12012117bced5092e: Status 404 returned error can't find the container with id 24f6f21f9b2e4dd8131b07c5470a9e16b9dfebe17a0d82d12012117bced5092e Jan 21 11:13:06 crc kubenswrapper[4881]: I0121 11:13:06.697035 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-dmwlt" event={"ID":"c4a109b4-26ee-4a46-9333-989cf87c0ff7","Type":"ContainerStarted","Data":"a91bd133ef7136c69b92dec15f0d672ed0deb342d0d1dae3dfb907b1b16ba47b"} Jan 21 11:13:06 crc kubenswrapper[4881]: I0121 11:13:06.697406 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-dmwlt" event={"ID":"c4a109b4-26ee-4a46-9333-989cf87c0ff7","Type":"ContainerStarted","Data":"24f6f21f9b2e4dd8131b07c5470a9e16b9dfebe17a0d82d12012117bced5092e"} Jan 21 11:13:06 crc kubenswrapper[4881]: I0121 11:13:06.698057 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-lm54h" event={"ID":"d055f37b-fab0-4fd0-b683-4a7974b21ad5","Type":"ContainerStarted","Data":"ba3b897ddc85e913095024b0a90e493360ed4e2ec3bcac8b299171b6eee171f1"} Jan 21 11:13:06 crc kubenswrapper[4881]: I0121 11:13:06.698841 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-tzxpk" event={"ID":"eaaea696-21d8-4963-8364-82fa7bbb0e19","Type":"ContainerStarted","Data":"fc0338162f9b9cd0a25a9ae9f7c0651b7e1179bdd0e328740478ee12dbddf32f"} Jan 21 11:13:06 crc kubenswrapper[4881]: I0121 11:13:06.810958 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/f265a6e2-ea90-45ea-89c0-178d25243784-memberlist\") pod \"speaker-697j4\" (UID: \"f265a6e2-ea90-45ea-89c0-178d25243784\") " pod="metallb-system/speaker-697j4" Jan 21 11:13:06 crc kubenswrapper[4881]: I0121 11:13:06.817049 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/f265a6e2-ea90-45ea-89c0-178d25243784-memberlist\") pod \"speaker-697j4\" (UID: \"f265a6e2-ea90-45ea-89c0-178d25243784\") " pod="metallb-system/speaker-697j4" Jan 21 11:13:06 crc kubenswrapper[4881]: I0121 11:13:06.850763 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-697j4" Jan 21 11:13:06 crc kubenswrapper[4881]: W0121 11:13:06.883025 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf265a6e2_ea90_45ea_89c0_178d25243784.slice/crio-1b6464c369f82f6432ae53745f95d29b0241cc9ac91966100f6f1b57a49ed3db WatchSource:0}: Error finding container 1b6464c369f82f6432ae53745f95d29b0241cc9ac91966100f6f1b57a49ed3db: Status 404 returned error can't find the container with id 1b6464c369f82f6432ae53745f95d29b0241cc9ac91966100f6f1b57a49ed3db Jan 21 11:13:07 crc kubenswrapper[4881]: I0121 11:13:07.755576 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-697j4" event={"ID":"f265a6e2-ea90-45ea-89c0-178d25243784","Type":"ContainerStarted","Data":"cf6e40113ac1676c1cf69f9415032710d03dc03be9ba5f02d85ea035ca382bd5"} Jan 21 11:13:07 crc kubenswrapper[4881]: I0121 11:13:07.755848 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-697j4" event={"ID":"f265a6e2-ea90-45ea-89c0-178d25243784","Type":"ContainerStarted","Data":"1b6464c369f82f6432ae53745f95d29b0241cc9ac91966100f6f1b57a49ed3db"} Jan 21 11:13:07 crc kubenswrapper[4881]: I0121 11:13:07.760487 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rvptz" event={"ID":"998c47dc-b621-4357-86b9-f6d08cac4799","Type":"ContainerStarted","Data":"c2f36538556042a4c3ef112ac5ba0181ebb2721edcd599559000130ae467ead0"} Jan 21 11:13:07 crc kubenswrapper[4881]: I0121 11:13:07.775898 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-dmwlt" event={"ID":"c4a109b4-26ee-4a46-9333-989cf87c0ff7","Type":"ContainerStarted","Data":"93878269955d9d98c70f249b3d5011b15157e9e8047207419b5ef1c476a12239"} Jan 21 11:13:07 crc kubenswrapper[4881]: I0121 11:13:07.776243 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-6968d8fdc4-dmwlt" Jan 21 11:13:07 crc kubenswrapper[4881]: I0121 11:13:07.832472 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-6968d8fdc4-dmwlt" podStartSLOduration=3.832452704 podStartE2EDuration="3.832452704s" podCreationTimestamp="2026-01-21 11:13:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 11:13:07.826962769 +0000 UTC m=+975.086919258" watchObservedRunningTime="2026-01-21 11:13:07.832452704 +0000 UTC m=+975.092409163" Jan 21 11:13:08 crc kubenswrapper[4881]: I0121 11:13:08.996318 4881 generic.go:334] "Generic (PLEG): container finished" podID="998c47dc-b621-4357-86b9-f6d08cac4799" containerID="c2f36538556042a4c3ef112ac5ba0181ebb2721edcd599559000130ae467ead0" exitCode=0 Jan 21 11:13:08 crc kubenswrapper[4881]: I0121 11:13:08.996409 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rvptz" event={"ID":"998c47dc-b621-4357-86b9-f6d08cac4799","Type":"ContainerDied","Data":"c2f36538556042a4c3ef112ac5ba0181ebb2721edcd599559000130ae467ead0"} Jan 21 11:13:08 crc kubenswrapper[4881]: I0121 11:13:08.998988 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-697j4" event={"ID":"f265a6e2-ea90-45ea-89c0-178d25243784","Type":"ContainerStarted","Data":"118a20b6920d9027d3f333741d5e78a878cf93b17bbd2a13df0fb533425784f2"} Jan 21 11:13:09 crc kubenswrapper[4881]: I0121 11:13:09.039035 4881 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-697j4" podStartSLOduration=5.039017192 podStartE2EDuration="5.039017192s" podCreationTimestamp="2026-01-21 11:13:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 11:13:09.035007164 +0000 UTC m=+976.294963623" watchObservedRunningTime="2026-01-21 11:13:09.039017192 +0000 UTC m=+976.298973661" Jan 21 11:13:10 crc kubenswrapper[4881]: I0121 11:13:10.356841 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-697j4" Jan 21 11:13:11 crc kubenswrapper[4881]: I0121 11:13:11.400092 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rvptz" event={"ID":"998c47dc-b621-4357-86b9-f6d08cac4799","Type":"ContainerStarted","Data":"d0bb3056956d79836bd57985c9844270d4cb4c95a3ec04cb84f31deaf080579b"} Jan 21 11:13:11 crc kubenswrapper[4881]: I0121 11:13:11.443174 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-rvptz" podStartSLOduration=5.818267851 podStartE2EDuration="8.44314499s" podCreationTimestamp="2026-01-21 11:13:03 +0000 UTC" firstStartedPulling="2026-01-21 11:13:05.693731189 +0000 UTC m=+972.953687658" lastFinishedPulling="2026-01-21 11:13:08.318608338 +0000 UTC m=+975.578564797" observedRunningTime="2026-01-21 11:13:11.42720851 +0000 UTC m=+978.687164979" watchObservedRunningTime="2026-01-21 11:13:11.44314499 +0000 UTC m=+978.703101459" Jan 21 11:13:13 crc kubenswrapper[4881]: I0121 11:13:13.701678 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-rvptz" Jan 21 11:13:13 crc kubenswrapper[4881]: I0121 11:13:13.701742 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-rvptz" Jan 21 11:13:13 crc kubenswrapper[4881]: I0121 11:13:13.758652 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-rvptz" Jan 21 11:13:15 crc kubenswrapper[4881]: I0121 11:13:15.158700 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-rvptz" Jan 21 11:13:15 crc kubenswrapper[4881]: I0121 11:13:15.224766 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rvptz"] Jan 21 11:13:17 crc kubenswrapper[4881]: I0121 11:13:17.061136 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-rvptz" podUID="998c47dc-b621-4357-86b9-f6d08cac4799" containerName="registry-server" containerID="cri-o://d0bb3056956d79836bd57985c9844270d4cb4c95a3ec04cb84f31deaf080579b" gracePeriod=2 Jan 21 11:13:18 crc kubenswrapper[4881]: I0121 11:13:18.075514 4881 generic.go:334] "Generic (PLEG): container finished" podID="998c47dc-b621-4357-86b9-f6d08cac4799" containerID="d0bb3056956d79836bd57985c9844270d4cb4c95a3ec04cb84f31deaf080579b" exitCode=0 Jan 21 11:13:18 crc kubenswrapper[4881]: I0121 11:13:18.075562 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rvptz" event={"ID":"998c47dc-b621-4357-86b9-f6d08cac4799","Type":"ContainerDied","Data":"d0bb3056956d79836bd57985c9844270d4cb4c95a3ec04cb84f31deaf080579b"} Jan 21 11:13:20 crc kubenswrapper[4881]: I0121 11:13:20.427148 4881 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rvptz" Jan 21 11:13:20 crc kubenswrapper[4881]: I0121 11:13:20.545616 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/998c47dc-b621-4357-86b9-f6d08cac4799-catalog-content\") pod \"998c47dc-b621-4357-86b9-f6d08cac4799\" (UID: \"998c47dc-b621-4357-86b9-f6d08cac4799\") " Jan 21 11:13:20 crc kubenswrapper[4881]: I0121 11:13:20.545677 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7tqdx\" (UniqueName: \"kubernetes.io/projected/998c47dc-b621-4357-86b9-f6d08cac4799-kube-api-access-7tqdx\") pod \"998c47dc-b621-4357-86b9-f6d08cac4799\" (UID: \"998c47dc-b621-4357-86b9-f6d08cac4799\") " Jan 21 11:13:20 crc kubenswrapper[4881]: I0121 11:13:20.545813 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/998c47dc-b621-4357-86b9-f6d08cac4799-utilities\") pod \"998c47dc-b621-4357-86b9-f6d08cac4799\" (UID: \"998c47dc-b621-4357-86b9-f6d08cac4799\") " Jan 21 11:13:20 crc kubenswrapper[4881]: I0121 11:13:20.547132 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/998c47dc-b621-4357-86b9-f6d08cac4799-utilities" (OuterVolumeSpecName: "utilities") pod "998c47dc-b621-4357-86b9-f6d08cac4799" (UID: "998c47dc-b621-4357-86b9-f6d08cac4799"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 11:13:20 crc kubenswrapper[4881]: I0121 11:13:20.558680 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/998c47dc-b621-4357-86b9-f6d08cac4799-kube-api-access-7tqdx" (OuterVolumeSpecName: "kube-api-access-7tqdx") pod "998c47dc-b621-4357-86b9-f6d08cac4799" (UID: "998c47dc-b621-4357-86b9-f6d08cac4799"). InnerVolumeSpecName "kube-api-access-7tqdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:13:20 crc kubenswrapper[4881]: I0121 11:13:20.574837 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/998c47dc-b621-4357-86b9-f6d08cac4799-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "998c47dc-b621-4357-86b9-f6d08cac4799" (UID: "998c47dc-b621-4357-86b9-f6d08cac4799"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 11:13:20 crc kubenswrapper[4881]: I0121 11:13:20.647551 4881 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/998c47dc-b621-4357-86b9-f6d08cac4799-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 11:13:20 crc kubenswrapper[4881]: I0121 11:13:20.647622 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7tqdx\" (UniqueName: \"kubernetes.io/projected/998c47dc-b621-4357-86b9-f6d08cac4799-kube-api-access-7tqdx\") on node \"crc\" DevicePath \"\"" Jan 21 11:13:20 crc kubenswrapper[4881]: I0121 11:13:20.647636 4881 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/998c47dc-b621-4357-86b9-f6d08cac4799-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 11:13:21 crc kubenswrapper[4881]: I0121 11:13:21.100469 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-tzxpk" event={"ID":"eaaea696-21d8-4963-8364-82fa7bbb0e19","Type":"ContainerStarted","Data":"d43e06f6fdfda916124c7f45ddca7862ea152d5ecb818596e3705da2a15518d1"} Jan 21 11:13:21 crc kubenswrapper[4881]: I0121 11:13:21.100879 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-tzxpk" Jan 21 11:13:21 crc kubenswrapper[4881]: I0121 11:13:21.102488 4881 generic.go:334] "Generic (PLEG): container finished" podID="d055f37b-fab0-4fd0-b683-4a7974b21ad5" containerID="3746b5b9f53d7fdfe487182eb76a95aae4a70045e175b2a0be1c96278628b944" exitCode=0 Jan 21 11:13:21 crc kubenswrapper[4881]: I0121 11:13:21.102691 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-lm54h" event={"ID":"d055f37b-fab0-4fd0-b683-4a7974b21ad5","Type":"ContainerDied","Data":"3746b5b9f53d7fdfe487182eb76a95aae4a70045e175b2a0be1c96278628b944"} Jan 21 11:13:21 crc kubenswrapper[4881]: I0121 11:13:21.105269 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rvptz" event={"ID":"998c47dc-b621-4357-86b9-f6d08cac4799","Type":"ContainerDied","Data":"185d2460e59c873ad3336643088425b69bafc3d60d4435c226adb50269ff2c1b"} Jan 21 11:13:21 crc kubenswrapper[4881]: I0121 11:13:21.105371 4881 scope.go:117] "RemoveContainer" containerID="d0bb3056956d79836bd57985c9844270d4cb4c95a3ec04cb84f31deaf080579b" Jan 21 11:13:21 crc kubenswrapper[4881]: I0121 11:13:21.105343 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rvptz" Jan 21 11:13:21 crc kubenswrapper[4881]: I0121 11:13:21.132873 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-tzxpk" podStartSLOduration=3.011279018 podStartE2EDuration="17.132838009s" podCreationTimestamp="2026-01-21 11:13:04 +0000 UTC" firstStartedPulling="2026-01-21 11:13:06.172747348 +0000 UTC m=+973.432703817" lastFinishedPulling="2026-01-21 11:13:20.294306339 +0000 UTC m=+987.554262808" observedRunningTime="2026-01-21 11:13:21.126589385 +0000 UTC m=+988.386545854" watchObservedRunningTime="2026-01-21 11:13:21.132838009 +0000 UTC m=+988.392794488" Jan 21 11:13:21 crc kubenswrapper[4881]: I0121 11:13:21.149548 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rvptz"] Jan 21 11:13:21 crc kubenswrapper[4881]: I0121 11:13:21.155102 4881 scope.go:117] "RemoveContainer" containerID="c2f36538556042a4c3ef112ac5ba0181ebb2721edcd599559000130ae467ead0" Jan 21 11:13:21 crc kubenswrapper[4881]: I0121 11:13:21.156363 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-rvptz"] Jan 21 11:13:21 crc kubenswrapper[4881]: I0121 11:13:21.219751 4881 scope.go:117] "RemoveContainer" containerID="00ecec7a68182aee750726e487cfdfc0600f11f9060a5afa0e042e40441982a2" Jan 21 11:13:21 crc kubenswrapper[4881]: I0121 11:13:21.414307 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="998c47dc-b621-4357-86b9-f6d08cac4799" path="/var/lib/kubelet/pods/998c47dc-b621-4357-86b9-f6d08cac4799/volumes" Jan 21 11:13:22 crc kubenswrapper[4881]: I0121 11:13:22.113172 4881 generic.go:334] "Generic (PLEG): container finished" podID="d055f37b-fab0-4fd0-b683-4a7974b21ad5" containerID="cc533ffdf1fe3cc98221465f5f7fa5ec0769b8130e1ee2c7bcec6655e3618f56" exitCode=0 Jan 21 11:13:22 crc kubenswrapper[4881]: I0121 11:13:22.113260 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-lm54h" event={"ID":"d055f37b-fab0-4fd0-b683-4a7974b21ad5","Type":"ContainerDied","Data":"cc533ffdf1fe3cc98221465f5f7fa5ec0769b8130e1ee2c7bcec6655e3618f56"} Jan 21 11:13:23 crc kubenswrapper[4881]: I0121 11:13:23.126638 4881 generic.go:334] "Generic (PLEG): container finished" podID="d055f37b-fab0-4fd0-b683-4a7974b21ad5" containerID="a68669dfd67af511bc056281db7a5556d9a70faa9d9b9116e660ec6356a708d9" exitCode=0 Jan 21 11:13:23 crc kubenswrapper[4881]: I0121 11:13:23.126751 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-lm54h" event={"ID":"d055f37b-fab0-4fd0-b683-4a7974b21ad5","Type":"ContainerDied","Data":"a68669dfd67af511bc056281db7a5556d9a70faa9d9b9116e660ec6356a708d9"} Jan 21 11:13:24 crc kubenswrapper[4881]: I0121 11:13:24.140948 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-lm54h" event={"ID":"d055f37b-fab0-4fd0-b683-4a7974b21ad5","Type":"ContainerStarted","Data":"677e4b3919eac7c3150478c52ae85bbe28623e8af9b17d6d1436d08620cb3123"} Jan 21 11:13:24 crc kubenswrapper[4881]: I0121 11:13:24.141244 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-lm54h" event={"ID":"d055f37b-fab0-4fd0-b683-4a7974b21ad5","Type":"ContainerStarted","Data":"7879ae745d39cd51daf63d47f3f53004e405e3baca350d1c1c59a026d40cde2a"} Jan 21 11:13:24 crc kubenswrapper[4881]: I0121 11:13:24.141254 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-lm54h" 
event={"ID":"d055f37b-fab0-4fd0-b683-4a7974b21ad5","Type":"ContainerStarted","Data":"1c339abc1a01b23b06dd105a1305c5d3b86b4f64ea15b284aca2debb9a62ffe4"} Jan 21 11:13:24 crc kubenswrapper[4881]: I0121 11:13:24.141263 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-lm54h" event={"ID":"d055f37b-fab0-4fd0-b683-4a7974b21ad5","Type":"ContainerStarted","Data":"230354f80d8522c72349de08951f7edb532da33e2c1091edcaf49a586219b704"} Jan 21 11:13:24 crc kubenswrapper[4881]: I0121 11:13:24.141271 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-lm54h" event={"ID":"d055f37b-fab0-4fd0-b683-4a7974b21ad5","Type":"ContainerStarted","Data":"27a254648b2c6070da76d6cb8b28bdbbae1cab2c6167b35b9c1f026d61a91c19"} Jan 21 11:13:25 crc kubenswrapper[4881]: I0121 11:13:25.153867 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-lm54h" event={"ID":"d055f37b-fab0-4fd0-b683-4a7974b21ad5","Type":"ContainerStarted","Data":"b41c533276ceeb71e3f4e8063c94eb323347149a9bda0bd23a2f44435925439a"} Jan 21 11:13:25 crc kubenswrapper[4881]: I0121 11:13:25.154158 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-lm54h" Jan 21 11:13:25 crc kubenswrapper[4881]: I0121 11:13:25.812179 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-lm54h" Jan 21 11:13:25 crc kubenswrapper[4881]: I0121 11:13:25.872874 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-lm54h" Jan 21 11:13:25 crc kubenswrapper[4881]: I0121 11:13:25.897452 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-lm54h" podStartSLOduration=8.087414374 podStartE2EDuration="21.897419289s" podCreationTimestamp="2026-01-21 11:13:04 +0000 UTC" firstStartedPulling="2026-01-21 11:13:06.513024257 +0000 UTC m=+973.772980726" lastFinishedPulling="2026-01-21 11:13:20.323029172 +0000 UTC m=+987.582985641" observedRunningTime="2026-01-21 11:13:25.207602632 +0000 UTC m=+992.467559101" watchObservedRunningTime="2026-01-21 11:13:25.897419289 +0000 UTC m=+993.157375758" Jan 21 11:13:25 crc kubenswrapper[4881]: I0121 11:13:25.966602 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-6968d8fdc4-dmwlt" Jan 21 11:13:26 crc kubenswrapper[4881]: I0121 11:13:26.864372 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-697j4" Jan 21 11:13:29 crc kubenswrapper[4881]: I0121 11:13:29.851432 4881 patch_prober.go:28] interesting pod/machine-config-daemon-fb4fr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 11:13:29 crc kubenswrapper[4881]: I0121 11:13:29.851759 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 11:13:30 crc kubenswrapper[4881]: I0121 11:13:30.280745 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-67hkt"] Jan 21 11:13:30 crc kubenswrapper[4881]: E0121 11:13:30.281070 4881 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="998c47dc-b621-4357-86b9-f6d08cac4799" containerName="extract-content" Jan 21 11:13:30 crc kubenswrapper[4881]: I0121 11:13:30.281084 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="998c47dc-b621-4357-86b9-f6d08cac4799" containerName="extract-content" Jan 21 11:13:30 crc kubenswrapper[4881]: E0121 11:13:30.281100 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="998c47dc-b621-4357-86b9-f6d08cac4799" containerName="extract-utilities" Jan 21 11:13:30 crc kubenswrapper[4881]: I0121 11:13:30.281107 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="998c47dc-b621-4357-86b9-f6d08cac4799" containerName="extract-utilities" Jan 21 11:13:30 crc kubenswrapper[4881]: E0121 11:13:30.281133 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="998c47dc-b621-4357-86b9-f6d08cac4799" containerName="registry-server" Jan 21 11:13:30 crc kubenswrapper[4881]: I0121 11:13:30.281140 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="998c47dc-b621-4357-86b9-f6d08cac4799" containerName="registry-server" Jan 21 11:13:30 crc kubenswrapper[4881]: I0121 11:13:30.281261 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="998c47dc-b621-4357-86b9-f6d08cac4799" containerName="registry-server" Jan 21 11:13:30 crc kubenswrapper[4881]: I0121 11:13:30.281873 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-67hkt" Jan 21 11:13:30 crc kubenswrapper[4881]: I0121 11:13:30.285465 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Jan 21 11:13:30 crc kubenswrapper[4881]: I0121 11:13:30.285568 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Jan 21 11:13:30 crc kubenswrapper[4881]: I0121 11:13:30.287765 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-tq8v2" Jan 21 11:13:30 crc kubenswrapper[4881]: I0121 11:13:30.297277 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-67hkt"] Jan 21 11:13:30 crc kubenswrapper[4881]: I0121 11:13:30.445158 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2lnht\" (UniqueName: \"kubernetes.io/projected/7e121e55-2150-44d1-befa-4b94a3103b31-kube-api-access-2lnht\") pod \"openstack-operator-index-67hkt\" (UID: \"7e121e55-2150-44d1-befa-4b94a3103b31\") " pod="openstack-operators/openstack-operator-index-67hkt" Jan 21 11:13:30 crc kubenswrapper[4881]: I0121 11:13:30.547298 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2lnht\" (UniqueName: \"kubernetes.io/projected/7e121e55-2150-44d1-befa-4b94a3103b31-kube-api-access-2lnht\") pod \"openstack-operator-index-67hkt\" (UID: \"7e121e55-2150-44d1-befa-4b94a3103b31\") " pod="openstack-operators/openstack-operator-index-67hkt" Jan 21 11:13:30 crc kubenswrapper[4881]: I0121 11:13:30.575849 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2lnht\" (UniqueName: \"kubernetes.io/projected/7e121e55-2150-44d1-befa-4b94a3103b31-kube-api-access-2lnht\") pod \"openstack-operator-index-67hkt\" (UID: \"7e121e55-2150-44d1-befa-4b94a3103b31\") " pod="openstack-operators/openstack-operator-index-67hkt" Jan 21 11:13:30 crc 
kubenswrapper[4881]: I0121 11:13:30.608321 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-67hkt" Jan 21 11:13:31 crc kubenswrapper[4881]: I0121 11:13:31.133359 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-67hkt"] Jan 21 11:13:31 crc kubenswrapper[4881]: I0121 11:13:31.196448 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-67hkt" event={"ID":"7e121e55-2150-44d1-befa-4b94a3103b31","Type":"ContainerStarted","Data":"0521691acf8b75de45ecf22882ef2ca1bdfabc44c0c161991c4d6c423318f707"} Jan 21 11:13:33 crc kubenswrapper[4881]: I0121 11:13:33.661553 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-67hkt"] Jan 21 11:13:34 crc kubenswrapper[4881]: I0121 11:13:34.268289 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-7vz4j"] Jan 21 11:13:34 crc kubenswrapper[4881]: I0121 11:13:34.270264 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-7vz4j" Jan 21 11:13:34 crc kubenswrapper[4881]: I0121 11:13:34.276065 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-7vz4j"] Jan 21 11:13:34 crc kubenswrapper[4881]: I0121 11:13:34.321675 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pgsrv\" (UniqueName: \"kubernetes.io/projected/0a051fc2-b6e4-463c-bb0a-b565d12b21b4-kube-api-access-pgsrv\") pod \"openstack-operator-index-7vz4j\" (UID: \"0a051fc2-b6e4-463c-bb0a-b565d12b21b4\") " pod="openstack-operators/openstack-operator-index-7vz4j" Jan 21 11:13:34 crc kubenswrapper[4881]: I0121 11:13:34.422694 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pgsrv\" (UniqueName: \"kubernetes.io/projected/0a051fc2-b6e4-463c-bb0a-b565d12b21b4-kube-api-access-pgsrv\") pod \"openstack-operator-index-7vz4j\" (UID: \"0a051fc2-b6e4-463c-bb0a-b565d12b21b4\") " pod="openstack-operators/openstack-operator-index-7vz4j" Jan 21 11:13:34 crc kubenswrapper[4881]: I0121 11:13:34.445701 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pgsrv\" (UniqueName: \"kubernetes.io/projected/0a051fc2-b6e4-463c-bb0a-b565d12b21b4-kube-api-access-pgsrv\") pod \"openstack-operator-index-7vz4j\" (UID: \"0a051fc2-b6e4-463c-bb0a-b565d12b21b4\") " pod="openstack-operators/openstack-operator-index-7vz4j" Jan 21 11:13:34 crc kubenswrapper[4881]: I0121 11:13:34.597020 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-7vz4j" Jan 21 11:13:35 crc kubenswrapper[4881]: I0121 11:13:35.229160 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-67hkt" event={"ID":"7e121e55-2150-44d1-befa-4b94a3103b31","Type":"ContainerStarted","Data":"eff4e9e5eb99949ed0d7b8357150a3132009be33bd7064176c801124401c2a5c"} Jan 21 11:13:35 crc kubenswrapper[4881]: I0121 11:13:35.229334 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-67hkt" podUID="7e121e55-2150-44d1-befa-4b94a3103b31" containerName="registry-server" containerID="cri-o://eff4e9e5eb99949ed0d7b8357150a3132009be33bd7064176c801124401c2a5c" gracePeriod=2 Jan 21 11:13:35 crc kubenswrapper[4881]: I0121 11:13:35.272127 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-67hkt" podStartSLOduration=1.475895634 podStartE2EDuration="5.272102327s" podCreationTimestamp="2026-01-21 11:13:30 +0000 UTC" firstStartedPulling="2026-01-21 11:13:31.147204295 +0000 UTC m=+998.407160764" lastFinishedPulling="2026-01-21 11:13:34.943410988 +0000 UTC m=+1002.203367457" observedRunningTime="2026-01-21 11:13:35.270513848 +0000 UTC m=+1002.530470317" watchObservedRunningTime="2026-01-21 11:13:35.272102327 +0000 UTC m=+1002.532058796" Jan 21 11:13:35 crc kubenswrapper[4881]: I0121 11:13:35.328978 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-7vz4j"] Jan 21 11:13:35 crc kubenswrapper[4881]: I0121 11:13:35.616614 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-67hkt_7e121e55-2150-44d1-befa-4b94a3103b31/registry-server/0.log" Jan 21 11:13:35 crc kubenswrapper[4881]: I0121 11:13:35.617037 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-67hkt" Jan 21 11:13:35 crc kubenswrapper[4881]: I0121 11:13:35.644015 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2lnht\" (UniqueName: \"kubernetes.io/projected/7e121e55-2150-44d1-befa-4b94a3103b31-kube-api-access-2lnht\") pod \"7e121e55-2150-44d1-befa-4b94a3103b31\" (UID: \"7e121e55-2150-44d1-befa-4b94a3103b31\") " Jan 21 11:13:35 crc kubenswrapper[4881]: I0121 11:13:35.656054 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7e121e55-2150-44d1-befa-4b94a3103b31-kube-api-access-2lnht" (OuterVolumeSpecName: "kube-api-access-2lnht") pod "7e121e55-2150-44d1-befa-4b94a3103b31" (UID: "7e121e55-2150-44d1-befa-4b94a3103b31"). InnerVolumeSpecName "kube-api-access-2lnht". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:13:35 crc kubenswrapper[4881]: I0121 11:13:35.745996 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2lnht\" (UniqueName: \"kubernetes.io/projected/7e121e55-2150-44d1-befa-4b94a3103b31-kube-api-access-2lnht\") on node \"crc\" DevicePath \"\"" Jan 21 11:13:35 crc kubenswrapper[4881]: I0121 11:13:35.815388 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-lm54h" Jan 21 11:13:35 crc kubenswrapper[4881]: I0121 11:13:35.840121 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-tzxpk" Jan 21 11:13:36 crc kubenswrapper[4881]: I0121 11:13:36.238775 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-67hkt_7e121e55-2150-44d1-befa-4b94a3103b31/registry-server/0.log" Jan 21 11:13:36 crc kubenswrapper[4881]: I0121 11:13:36.238862 4881 generic.go:334] "Generic (PLEG): container finished" podID="7e121e55-2150-44d1-befa-4b94a3103b31" containerID="eff4e9e5eb99949ed0d7b8357150a3132009be33bd7064176c801124401c2a5c" exitCode=2 Jan 21 11:13:36 crc kubenswrapper[4881]: I0121 11:13:36.238933 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-67hkt" event={"ID":"7e121e55-2150-44d1-befa-4b94a3103b31","Type":"ContainerDied","Data":"eff4e9e5eb99949ed0d7b8357150a3132009be33bd7064176c801124401c2a5c"} Jan 21 11:13:36 crc kubenswrapper[4881]: I0121 11:13:36.238955 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-67hkt" Jan 21 11:13:36 crc kubenswrapper[4881]: I0121 11:13:36.238967 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-67hkt" event={"ID":"7e121e55-2150-44d1-befa-4b94a3103b31","Type":"ContainerDied","Data":"0521691acf8b75de45ecf22882ef2ca1bdfabc44c0c161991c4d6c423318f707"} Jan 21 11:13:36 crc kubenswrapper[4881]: I0121 11:13:36.238992 4881 scope.go:117] "RemoveContainer" containerID="eff4e9e5eb99949ed0d7b8357150a3132009be33bd7064176c801124401c2a5c" Jan 21 11:13:36 crc kubenswrapper[4881]: I0121 11:13:36.241390 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-7vz4j" event={"ID":"0a051fc2-b6e4-463c-bb0a-b565d12b21b4","Type":"ContainerStarted","Data":"1b649bce78bf889841cb871a4ee4082eda5d5cc10688bb8f702507dc432c51ae"} Jan 21 11:13:36 crc kubenswrapper[4881]: I0121 11:13:36.241421 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-7vz4j" event={"ID":"0a051fc2-b6e4-463c-bb0a-b565d12b21b4","Type":"ContainerStarted","Data":"fc784969ca98acbbed6abcceecefb978ca22b1208b7ed890aa07ebbb725298a5"} Jan 21 11:13:36 crc kubenswrapper[4881]: I0121 11:13:36.263108 4881 scope.go:117] "RemoveContainer" containerID="eff4e9e5eb99949ed0d7b8357150a3132009be33bd7064176c801124401c2a5c" Jan 21 11:13:36 crc kubenswrapper[4881]: E0121 11:13:36.266538 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eff4e9e5eb99949ed0d7b8357150a3132009be33bd7064176c801124401c2a5c\": container with ID starting with eff4e9e5eb99949ed0d7b8357150a3132009be33bd7064176c801124401c2a5c not found: ID does not exist" containerID="eff4e9e5eb99949ed0d7b8357150a3132009be33bd7064176c801124401c2a5c" Jan 21 11:13:36 crc 
kubenswrapper[4881]: I0121 11:13:36.266605 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eff4e9e5eb99949ed0d7b8357150a3132009be33bd7064176c801124401c2a5c"} err="failed to get container status \"eff4e9e5eb99949ed0d7b8357150a3132009be33bd7064176c801124401c2a5c\": rpc error: code = NotFound desc = could not find container \"eff4e9e5eb99949ed0d7b8357150a3132009be33bd7064176c801124401c2a5c\": container with ID starting with eff4e9e5eb99949ed0d7b8357150a3132009be33bd7064176c801124401c2a5c not found: ID does not exist" Jan 21 11:13:36 crc kubenswrapper[4881]: I0121 11:13:36.269101 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-7vz4j" podStartSLOduration=2.212353746 podStartE2EDuration="2.269068697s" podCreationTimestamp="2026-01-21 11:13:34 +0000 UTC" firstStartedPulling="2026-01-21 11:13:35.353981685 +0000 UTC m=+1002.613938154" lastFinishedPulling="2026-01-21 11:13:35.410696636 +0000 UTC m=+1002.670653105" observedRunningTime="2026-01-21 11:13:36.265240943 +0000 UTC m=+1003.525197432" watchObservedRunningTime="2026-01-21 11:13:36.269068697 +0000 UTC m=+1003.529025166" Jan 21 11:13:36 crc kubenswrapper[4881]: I0121 11:13:36.286007 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-67hkt"] Jan 21 11:13:36 crc kubenswrapper[4881]: I0121 11:13:36.292673 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-67hkt"] Jan 21 11:13:37 crc kubenswrapper[4881]: I0121 11:13:37.326182 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7e121e55-2150-44d1-befa-4b94a3103b31" path="/var/lib/kubelet/pods/7e121e55-2150-44d1-befa-4b94a3103b31/volumes" Jan 21 11:13:44 crc kubenswrapper[4881]: I0121 11:13:44.598239 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-7vz4j" Jan 21 11:13:44 crc kubenswrapper[4881]: I0121 11:13:44.598920 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-7vz4j" Jan 21 11:13:44 crc kubenswrapper[4881]: I0121 11:13:44.628240 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-7vz4j" Jan 21 11:13:45 crc kubenswrapper[4881]: I0121 11:13:45.340447 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-7vz4j" Jan 21 11:13:46 crc kubenswrapper[4881]: I0121 11:13:46.502812 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/23550c8618544ac9ea89afd4ce99cda9256ff69faea7c95bed8068d414dwp7l"] Jan 21 11:13:46 crc kubenswrapper[4881]: E0121 11:13:46.503320 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e121e55-2150-44d1-befa-4b94a3103b31" containerName="registry-server" Jan 21 11:13:46 crc kubenswrapper[4881]: I0121 11:13:46.503343 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e121e55-2150-44d1-befa-4b94a3103b31" containerName="registry-server" Jan 21 11:13:46 crc kubenswrapper[4881]: I0121 11:13:46.503539 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="7e121e55-2150-44d1-befa-4b94a3103b31" containerName="registry-server" Jan 21 11:13:46 crc kubenswrapper[4881]: I0121 11:13:46.505051 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/23550c8618544ac9ea89afd4ce99cda9256ff69faea7c95bed8068d414dwp7l" Jan 21 11:13:46 crc kubenswrapper[4881]: I0121 11:13:46.508135 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-9qzn5" Jan 21 11:13:46 crc kubenswrapper[4881]: I0121 11:13:46.511524 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/23550c8618544ac9ea89afd4ce99cda9256ff69faea7c95bed8068d414dwp7l"] Jan 21 11:13:46 crc kubenswrapper[4881]: I0121 11:13:46.515029 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lxcp4\" (UniqueName: \"kubernetes.io/projected/1c737afe-a2ad-4075-acd6-9f73aada0e4b-kube-api-access-lxcp4\") pod \"23550c8618544ac9ea89afd4ce99cda9256ff69faea7c95bed8068d414dwp7l\" (UID: \"1c737afe-a2ad-4075-acd6-9f73aada0e4b\") " pod="openstack-operators/23550c8618544ac9ea89afd4ce99cda9256ff69faea7c95bed8068d414dwp7l" Jan 21 11:13:46 crc kubenswrapper[4881]: I0121 11:13:46.515130 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1c737afe-a2ad-4075-acd6-9f73aada0e4b-bundle\") pod \"23550c8618544ac9ea89afd4ce99cda9256ff69faea7c95bed8068d414dwp7l\" (UID: \"1c737afe-a2ad-4075-acd6-9f73aada0e4b\") " pod="openstack-operators/23550c8618544ac9ea89afd4ce99cda9256ff69faea7c95bed8068d414dwp7l" Jan 21 11:13:46 crc kubenswrapper[4881]: I0121 11:13:46.515248 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1c737afe-a2ad-4075-acd6-9f73aada0e4b-util\") pod \"23550c8618544ac9ea89afd4ce99cda9256ff69faea7c95bed8068d414dwp7l\" (UID: \"1c737afe-a2ad-4075-acd6-9f73aada0e4b\") " pod="openstack-operators/23550c8618544ac9ea89afd4ce99cda9256ff69faea7c95bed8068d414dwp7l" Jan 21 11:13:46 crc kubenswrapper[4881]: I0121 11:13:46.616408 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lxcp4\" (UniqueName: \"kubernetes.io/projected/1c737afe-a2ad-4075-acd6-9f73aada0e4b-kube-api-access-lxcp4\") pod \"23550c8618544ac9ea89afd4ce99cda9256ff69faea7c95bed8068d414dwp7l\" (UID: \"1c737afe-a2ad-4075-acd6-9f73aada0e4b\") " pod="openstack-operators/23550c8618544ac9ea89afd4ce99cda9256ff69faea7c95bed8068d414dwp7l" Jan 21 11:13:46 crc kubenswrapper[4881]: I0121 11:13:46.616520 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1c737afe-a2ad-4075-acd6-9f73aada0e4b-bundle\") pod \"23550c8618544ac9ea89afd4ce99cda9256ff69faea7c95bed8068d414dwp7l\" (UID: \"1c737afe-a2ad-4075-acd6-9f73aada0e4b\") " pod="openstack-operators/23550c8618544ac9ea89afd4ce99cda9256ff69faea7c95bed8068d414dwp7l" Jan 21 11:13:46 crc kubenswrapper[4881]: I0121 11:13:46.616597 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1c737afe-a2ad-4075-acd6-9f73aada0e4b-util\") pod \"23550c8618544ac9ea89afd4ce99cda9256ff69faea7c95bed8068d414dwp7l\" (UID: \"1c737afe-a2ad-4075-acd6-9f73aada0e4b\") " pod="openstack-operators/23550c8618544ac9ea89afd4ce99cda9256ff69faea7c95bed8068d414dwp7l" Jan 21 11:13:46 crc kubenswrapper[4881]: I0121 11:13:46.617323 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/1c737afe-a2ad-4075-acd6-9f73aada0e4b-bundle\") pod \"23550c8618544ac9ea89afd4ce99cda9256ff69faea7c95bed8068d414dwp7l\" (UID: \"1c737afe-a2ad-4075-acd6-9f73aada0e4b\") " pod="openstack-operators/23550c8618544ac9ea89afd4ce99cda9256ff69faea7c95bed8068d414dwp7l" Jan 21 11:13:46 crc kubenswrapper[4881]: I0121 11:13:46.617369 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1c737afe-a2ad-4075-acd6-9f73aada0e4b-util\") pod \"23550c8618544ac9ea89afd4ce99cda9256ff69faea7c95bed8068d414dwp7l\" (UID: \"1c737afe-a2ad-4075-acd6-9f73aada0e4b\") " pod="openstack-operators/23550c8618544ac9ea89afd4ce99cda9256ff69faea7c95bed8068d414dwp7l" Jan 21 11:13:46 crc kubenswrapper[4881]: I0121 11:13:46.641083 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lxcp4\" (UniqueName: \"kubernetes.io/projected/1c737afe-a2ad-4075-acd6-9f73aada0e4b-kube-api-access-lxcp4\") pod \"23550c8618544ac9ea89afd4ce99cda9256ff69faea7c95bed8068d414dwp7l\" (UID: \"1c737afe-a2ad-4075-acd6-9f73aada0e4b\") " pod="openstack-operators/23550c8618544ac9ea89afd4ce99cda9256ff69faea7c95bed8068d414dwp7l" Jan 21 11:13:46 crc kubenswrapper[4881]: I0121 11:13:46.829635 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/23550c8618544ac9ea89afd4ce99cda9256ff69faea7c95bed8068d414dwp7l" Jan 21 11:13:47 crc kubenswrapper[4881]: I0121 11:13:47.288724 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/23550c8618544ac9ea89afd4ce99cda9256ff69faea7c95bed8068d414dwp7l"] Jan 21 11:13:47 crc kubenswrapper[4881]: W0121 11:13:47.297593 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1c737afe_a2ad_4075_acd6_9f73aada0e4b.slice/crio-32950c149b73cfc98cb369b7708eaa4070423d894512b36f017ccaec2e114010 WatchSource:0}: Error finding container 32950c149b73cfc98cb369b7708eaa4070423d894512b36f017ccaec2e114010: Status 404 returned error can't find the container with id 32950c149b73cfc98cb369b7708eaa4070423d894512b36f017ccaec2e114010 Jan 21 11:13:47 crc kubenswrapper[4881]: I0121 11:13:47.345487 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/23550c8618544ac9ea89afd4ce99cda9256ff69faea7c95bed8068d414dwp7l" event={"ID":"1c737afe-a2ad-4075-acd6-9f73aada0e4b","Type":"ContainerStarted","Data":"32950c149b73cfc98cb369b7708eaa4070423d894512b36f017ccaec2e114010"} Jan 21 11:13:50 crc kubenswrapper[4881]: I0121 11:13:50.368942 4881 generic.go:334] "Generic (PLEG): container finished" podID="1c737afe-a2ad-4075-acd6-9f73aada0e4b" containerID="507e5bcd4990d6cae98f2c67f74453ce637d733ec2bab01139b31d40784c1782" exitCode=0 Jan 21 11:13:50 crc kubenswrapper[4881]: I0121 11:13:50.369262 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/23550c8618544ac9ea89afd4ce99cda9256ff69faea7c95bed8068d414dwp7l" event={"ID":"1c737afe-a2ad-4075-acd6-9f73aada0e4b","Type":"ContainerDied","Data":"507e5bcd4990d6cae98f2c67f74453ce637d733ec2bab01139b31d40784c1782"} Jan 21 11:13:51 crc kubenswrapper[4881]: I0121 11:13:51.383274 4881 generic.go:334] "Generic (PLEG): container finished" podID="1c737afe-a2ad-4075-acd6-9f73aada0e4b" containerID="8070ffff0d68dc11586cc4bdbf539020f6756380dd8f4480fc2534e1e0554f8a" exitCode=0 Jan 21 11:13:51 crc kubenswrapper[4881]: I0121 11:13:51.383945 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/23550c8618544ac9ea89afd4ce99cda9256ff69faea7c95bed8068d414dwp7l" event={"ID":"1c737afe-a2ad-4075-acd6-9f73aada0e4b","Type":"ContainerDied","Data":"8070ffff0d68dc11586cc4bdbf539020f6756380dd8f4480fc2534e1e0554f8a"} Jan 21 11:13:52 crc kubenswrapper[4881]: I0121 11:13:52.400363 4881 generic.go:334] "Generic (PLEG): container finished" podID="1c737afe-a2ad-4075-acd6-9f73aada0e4b" containerID="0af488c99970619180b117b8819887b079f89bce6ab51b9ed22ffb3bcb2ad111" exitCode=0 Jan 21 11:13:52 crc kubenswrapper[4881]: I0121 11:13:52.400418 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/23550c8618544ac9ea89afd4ce99cda9256ff69faea7c95bed8068d414dwp7l" event={"ID":"1c737afe-a2ad-4075-acd6-9f73aada0e4b","Type":"ContainerDied","Data":"0af488c99970619180b117b8819887b079f89bce6ab51b9ed22ffb3bcb2ad111"} Jan 21 11:13:53 crc kubenswrapper[4881]: I0121 11:13:53.690068 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/23550c8618544ac9ea89afd4ce99cda9256ff69faea7c95bed8068d414dwp7l" Jan 21 11:13:53 crc kubenswrapper[4881]: I0121 11:13:53.845167 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1c737afe-a2ad-4075-acd6-9f73aada0e4b-bundle\") pod \"1c737afe-a2ad-4075-acd6-9f73aada0e4b\" (UID: \"1c737afe-a2ad-4075-acd6-9f73aada0e4b\") " Jan 21 11:13:53 crc kubenswrapper[4881]: I0121 11:13:53.845397 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lxcp4\" (UniqueName: \"kubernetes.io/projected/1c737afe-a2ad-4075-acd6-9f73aada0e4b-kube-api-access-lxcp4\") pod \"1c737afe-a2ad-4075-acd6-9f73aada0e4b\" (UID: \"1c737afe-a2ad-4075-acd6-9f73aada0e4b\") " Jan 21 11:13:53 crc kubenswrapper[4881]: I0121 11:13:53.845501 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1c737afe-a2ad-4075-acd6-9f73aada0e4b-util\") pod \"1c737afe-a2ad-4075-acd6-9f73aada0e4b\" (UID: \"1c737afe-a2ad-4075-acd6-9f73aada0e4b\") " Jan 21 11:13:53 crc kubenswrapper[4881]: I0121 11:13:53.846251 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1c737afe-a2ad-4075-acd6-9f73aada0e4b-bundle" (OuterVolumeSpecName: "bundle") pod "1c737afe-a2ad-4075-acd6-9f73aada0e4b" (UID: "1c737afe-a2ad-4075-acd6-9f73aada0e4b"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 11:13:53 crc kubenswrapper[4881]: I0121 11:13:53.851527 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1c737afe-a2ad-4075-acd6-9f73aada0e4b-kube-api-access-lxcp4" (OuterVolumeSpecName: "kube-api-access-lxcp4") pod "1c737afe-a2ad-4075-acd6-9f73aada0e4b" (UID: "1c737afe-a2ad-4075-acd6-9f73aada0e4b"). InnerVolumeSpecName "kube-api-access-lxcp4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:13:53 crc kubenswrapper[4881]: I0121 11:13:53.859750 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1c737afe-a2ad-4075-acd6-9f73aada0e4b-util" (OuterVolumeSpecName: "util") pod "1c737afe-a2ad-4075-acd6-9f73aada0e4b" (UID: "1c737afe-a2ad-4075-acd6-9f73aada0e4b"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 11:13:53 crc kubenswrapper[4881]: I0121 11:13:53.946828 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lxcp4\" (UniqueName: \"kubernetes.io/projected/1c737afe-a2ad-4075-acd6-9f73aada0e4b-kube-api-access-lxcp4\") on node \"crc\" DevicePath \"\"" Jan 21 11:13:53 crc kubenswrapper[4881]: I0121 11:13:53.946934 4881 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1c737afe-a2ad-4075-acd6-9f73aada0e4b-util\") on node \"crc\" DevicePath \"\"" Jan 21 11:13:53 crc kubenswrapper[4881]: I0121 11:13:53.946948 4881 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1c737afe-a2ad-4075-acd6-9f73aada0e4b-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 11:13:54 crc kubenswrapper[4881]: I0121 11:13:54.416945 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/23550c8618544ac9ea89afd4ce99cda9256ff69faea7c95bed8068d414dwp7l" event={"ID":"1c737afe-a2ad-4075-acd6-9f73aada0e4b","Type":"ContainerDied","Data":"32950c149b73cfc98cb369b7708eaa4070423d894512b36f017ccaec2e114010"} Jan 21 11:13:54 crc kubenswrapper[4881]: I0121 11:13:54.417020 4881 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="32950c149b73cfc98cb369b7708eaa4070423d894512b36f017ccaec2e114010" Jan 21 11:13:54 crc kubenswrapper[4881]: I0121 11:13:54.417024 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/23550c8618544ac9ea89afd4ce99cda9256ff69faea7c95bed8068d414dwp7l" Jan 21 11:13:58 crc kubenswrapper[4881]: I0121 11:13:58.642157 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-766b56994f-7hsc6"] Jan 21 11:13:58 crc kubenswrapper[4881]: E0121 11:13:58.642844 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c737afe-a2ad-4075-acd6-9f73aada0e4b" containerName="extract" Jan 21 11:13:58 crc kubenswrapper[4881]: I0121 11:13:58.642856 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c737afe-a2ad-4075-acd6-9f73aada0e4b" containerName="extract" Jan 21 11:13:58 crc kubenswrapper[4881]: E0121 11:13:58.642877 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c737afe-a2ad-4075-acd6-9f73aada0e4b" containerName="pull" Jan 21 11:13:58 crc kubenswrapper[4881]: I0121 11:13:58.642883 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c737afe-a2ad-4075-acd6-9f73aada0e4b" containerName="pull" Jan 21 11:13:58 crc kubenswrapper[4881]: E0121 11:13:58.642892 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c737afe-a2ad-4075-acd6-9f73aada0e4b" containerName="util" Jan 21 11:13:58 crc kubenswrapper[4881]: I0121 11:13:58.642898 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c737afe-a2ad-4075-acd6-9f73aada0e4b" containerName="util" Jan 21 11:13:58 crc kubenswrapper[4881]: I0121 11:13:58.643024 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c737afe-a2ad-4075-acd6-9f73aada0e4b" containerName="extract" Jan 21 11:13:58 crc kubenswrapper[4881]: I0121 11:13:58.643475 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-766b56994f-7hsc6" Jan 21 11:13:58 crc kubenswrapper[4881]: I0121 11:13:58.646679 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-8fwv9" Jan 21 11:13:58 crc kubenswrapper[4881]: I0121 11:13:58.682940 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-766b56994f-7hsc6"] Jan 21 11:13:58 crc kubenswrapper[4881]: I0121 11:13:58.863232 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jsnnn\" (UniqueName: \"kubernetes.io/projected/3a9a96af-4c4b-45b4-ade0-688a9029cf7b-kube-api-access-jsnnn\") pod \"openstack-operator-controller-init-766b56994f-7hsc6\" (UID: \"3a9a96af-4c4b-45b4-ade0-688a9029cf7b\") " pod="openstack-operators/openstack-operator-controller-init-766b56994f-7hsc6" Jan 21 11:13:58 crc kubenswrapper[4881]: I0121 11:13:58.964602 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jsnnn\" (UniqueName: \"kubernetes.io/projected/3a9a96af-4c4b-45b4-ade0-688a9029cf7b-kube-api-access-jsnnn\") pod \"openstack-operator-controller-init-766b56994f-7hsc6\" (UID: \"3a9a96af-4c4b-45b4-ade0-688a9029cf7b\") " pod="openstack-operators/openstack-operator-controller-init-766b56994f-7hsc6" Jan 21 11:13:58 crc kubenswrapper[4881]: I0121 11:13:58.990413 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jsnnn\" (UniqueName: \"kubernetes.io/projected/3a9a96af-4c4b-45b4-ade0-688a9029cf7b-kube-api-access-jsnnn\") pod \"openstack-operator-controller-init-766b56994f-7hsc6\" (UID: \"3a9a96af-4c4b-45b4-ade0-688a9029cf7b\") " pod="openstack-operators/openstack-operator-controller-init-766b56994f-7hsc6" Jan 21 11:13:59 crc kubenswrapper[4881]: I0121 11:13:59.262905 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-766b56994f-7hsc6" Jan 21 11:13:59 crc kubenswrapper[4881]: I0121 11:13:59.530731 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-766b56994f-7hsc6"] Jan 21 11:13:59 crc kubenswrapper[4881]: I0121 11:13:59.851581 4881 patch_prober.go:28] interesting pod/machine-config-daemon-fb4fr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 11:13:59 crc kubenswrapper[4881]: I0121 11:13:59.851966 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 11:14:00 crc kubenswrapper[4881]: I0121 11:14:00.462983 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-766b56994f-7hsc6" event={"ID":"3a9a96af-4c4b-45b4-ade0-688a9029cf7b","Type":"ContainerStarted","Data":"c3ec15dca0760e651b670417bc72a856967a47424d614b936250fcd519b604ec"} Jan 21 11:14:08 crc kubenswrapper[4881]: I0121 11:14:08.624542 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-766b56994f-7hsc6" event={"ID":"3a9a96af-4c4b-45b4-ade0-688a9029cf7b","Type":"ContainerStarted","Data":"31e53cf03fd9750f0bc0a32053b62a45c1194acd86a68c42b68e667efc242a89"} Jan 21 11:14:08 crc kubenswrapper[4881]: I0121 11:14:08.625286 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-766b56994f-7hsc6" Jan 21 11:14:08 crc kubenswrapper[4881]: I0121 11:14:08.675026 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-766b56994f-7hsc6" podStartSLOduration=3.074652435 podStartE2EDuration="10.675007357s" podCreationTimestamp="2026-01-21 11:13:58 +0000 UTC" firstStartedPulling="2026-01-21 11:13:59.552088025 +0000 UTC m=+1026.812044494" lastFinishedPulling="2026-01-21 11:14:07.152442947 +0000 UTC m=+1034.412399416" observedRunningTime="2026-01-21 11:14:08.67351108 +0000 UTC m=+1035.933467549" watchObservedRunningTime="2026-01-21 11:14:08.675007357 +0000 UTC m=+1035.934963826" Jan 21 11:14:19 crc kubenswrapper[4881]: I0121 11:14:19.266592 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-766b56994f-7hsc6" Jan 21 11:14:29 crc kubenswrapper[4881]: I0121 11:14:29.963439 4881 patch_prober.go:28] interesting pod/machine-config-daemon-fb4fr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 11:14:29 crc kubenswrapper[4881]: I0121 11:14:29.964023 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: 
connection refused" Jan 21 11:14:29 crc kubenswrapper[4881]: I0121 11:14:29.987779 4881 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" Jan 21 11:14:29 crc kubenswrapper[4881]: I0121 11:14:29.988532 4881 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"abaaf16a1930b4e2e9a1e1d952f2948a8b09bfb0c0f18add47eef44fe07067c5"} pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 11:14:29 crc kubenswrapper[4881]: I0121 11:14:29.988599 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" containerName="machine-config-daemon" containerID="cri-o://abaaf16a1930b4e2e9a1e1d952f2948a8b09bfb0c0f18add47eef44fe07067c5" gracePeriod=600 Jan 21 11:14:31 crc kubenswrapper[4881]: I0121 11:14:31.206410 4881 generic.go:334] "Generic (PLEG): container finished" podID="3687b313-1df2-4274-80db-8c758b51bf2d" containerID="abaaf16a1930b4e2e9a1e1d952f2948a8b09bfb0c0f18add47eef44fe07067c5" exitCode=0 Jan 21 11:14:31 crc kubenswrapper[4881]: I0121 11:14:31.206527 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" event={"ID":"3687b313-1df2-4274-80db-8c758b51bf2d","Type":"ContainerDied","Data":"abaaf16a1930b4e2e9a1e1d952f2948a8b09bfb0c0f18add47eef44fe07067c5"} Jan 21 11:14:31 crc kubenswrapper[4881]: I0121 11:14:31.206805 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" event={"ID":"3687b313-1df2-4274-80db-8c758b51bf2d","Type":"ContainerStarted","Data":"d0f3ab6355e31b97e337f7f21fb84796e3dea68bac874475991ce7eb43a93a82"} Jan 21 11:14:31 crc kubenswrapper[4881]: I0121 11:14:31.206841 4881 scope.go:117] "RemoveContainer" containerID="c61b3d568dcd0ae9a4c5e1f2de21cf5a0db2cf65652a9e217f03473254856b16" Jan 21 11:14:39 crc kubenswrapper[4881]: I0121 11:14:39.965478 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7ddb5c749-svq8w"] Jan 21 11:14:39 crc kubenswrapper[4881]: I0121 11:14:39.967277 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7ddb5c749-svq8w" Jan 21 11:14:39 crc kubenswrapper[4881]: I0121 11:14:39.969749 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-njf4m" Jan 21 11:14:39 crc kubenswrapper[4881]: I0121 11:14:39.978927 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7ddb5c749-svq8w"] Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:39.992537 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-9b68f5989-7qgck"] Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:39.993711 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-9b68f5989-7qgck" Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:39.998385 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-rzgzl" Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.011826 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-9f958b845-4wmln"] Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.012751 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-9f958b845-4wmln" Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.016685 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-f8629" Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.021646 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-c6994669c-jv7cr"] Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.022967 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-c6994669c-jv7cr" Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.024519 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-58vbs" Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.100101 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-594c8c9d5d-zmgll"] Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.101621 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-zmgll" Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.102847 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-bv8wz"] Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.103676 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-bv8wz" Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.109050 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-77c48c7859-klgq4"] Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.110180 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-77c48c7859-klgq4" Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.112331 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-9ktfq" Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.112557 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-b77kh" Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.117999 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.142251 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-m6lch" Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.161630 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zvpzz\" (UniqueName: \"kubernetes.io/projected/36e5ddfe-67a4-4721-9ef5-b9459c64bf5c-kube-api-access-zvpzz\") pod \"designate-operator-controller-manager-9f958b845-4wmln\" (UID: \"36e5ddfe-67a4-4721-9ef5-b9459c64bf5c\") " pod="openstack-operators/designate-operator-controller-manager-9f958b845-4wmln" Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.166246 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n7p2p\" (UniqueName: \"kubernetes.io/projected/1f795f92-d385-49bc-bc91-5109734f4d5a-kube-api-access-n7p2p\") pod \"glance-operator-controller-manager-c6994669c-jv7cr\" (UID: \"1f795f92-d385-49bc-bc91-5109734f4d5a\") " pod="openstack-operators/glance-operator-controller-manager-c6994669c-jv7cr" Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.190069 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-znqn9\" (UniqueName: \"kubernetes.io/projected/848fd8db-3bd5-4e22-96ca-f69b181e48be-kube-api-access-znqn9\") pod \"barbican-operator-controller-manager-7ddb5c749-svq8w\" (UID: \"848fd8db-3bd5-4e22-96ca-f69b181e48be\") " pod="openstack-operators/barbican-operator-controller-manager-7ddb5c749-svq8w" Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.200261 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8z9wn\" (UniqueName: \"kubernetes.io/projected/a028dcae-6b9d-414d-8bab-652f301de541-kube-api-access-8z9wn\") pod \"cinder-operator-controller-manager-9b68f5989-7qgck\" (UID: \"a028dcae-6b9d-414d-8bab-652f301de541\") " pod="openstack-operators/cinder-operator-controller-manager-9b68f5989-7qgck" Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.249837 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-78757b4889-5qcms"] Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.251402 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-78757b4889-5qcms" Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.262392 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-767fdc4f47-9zp7h"] Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.263562 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-767fdc4f47-9zp7h" Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.269568 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-8ghks" Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.269726 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-5zkmj" Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.294141 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-864f6b75bf-h6dr4"] Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.295426 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-864f6b75bf-h6dr4" Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.303060 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-9f958b845-4wmln"] Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.303515 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-d5s42" Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.321590 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-594c8c9d5d-zmgll"] Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.330191 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-9b68f5989-7qgck"] Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.341116 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n7p2p\" (UniqueName: \"kubernetes.io/projected/1f795f92-d385-49bc-bc91-5109734f4d5a-kube-api-access-n7p2p\") pod \"glance-operator-controller-manager-c6994669c-jv7cr\" (UID: \"1f795f92-d385-49bc-bc91-5109734f4d5a\") " pod="openstack-operators/glance-operator-controller-manager-c6994669c-jv7cr" Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.341194 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-znqn9\" (UniqueName: \"kubernetes.io/projected/848fd8db-3bd5-4e22-96ca-f69b181e48be-kube-api-access-znqn9\") pod \"barbican-operator-controller-manager-7ddb5c749-svq8w\" (UID: \"848fd8db-3bd5-4e22-96ca-f69b181e48be\") " pod="openstack-operators/barbican-operator-controller-manager-7ddb5c749-svq8w" Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.341238 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jw9tk\" (UniqueName: \"kubernetes.io/projected/bb9b2c3f-4f68-44fc-addf-2cf4421be015-kube-api-access-jw9tk\") pod \"horizon-operator-controller-manager-77d5c5b54f-bv8wz\" (UID: \"bb9b2c3f-4f68-44fc-addf-2cf4421be015\") " pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-bv8wz" Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.341266 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8z9wn\" (UniqueName: \"kubernetes.io/projected/a028dcae-6b9d-414d-8bab-652f301de541-kube-api-access-8z9wn\") pod \"cinder-operator-controller-manager-9b68f5989-7qgck\" (UID: \"a028dcae-6b9d-414d-8bab-652f301de541\") " 
pod="openstack-operators/cinder-operator-controller-manager-9b68f5989-7qgck" Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.341339 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2w757\" (UniqueName: \"kubernetes.io/projected/2fe210a4-2adf-4b55-9a43-c1c390f51b0e-kube-api-access-2w757\") pod \"infra-operator-controller-manager-77c48c7859-klgq4\" (UID: \"2fe210a4-2adf-4b55-9a43-c1c390f51b0e\") " pod="openstack-operators/infra-operator-controller-manager-77c48c7859-klgq4" Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.341365 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-csk5g\" (UniqueName: \"kubernetes.io/projected/efb259b7-934f-4bc3-b502-633472d1a1c5-kube-api-access-csk5g\") pod \"heat-operator-controller-manager-594c8c9d5d-zmgll\" (UID: \"efb259b7-934f-4bc3-b502-633472d1a1c5\") " pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-zmgll" Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.341395 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2fe210a4-2adf-4b55-9a43-c1c390f51b0e-cert\") pod \"infra-operator-controller-manager-77c48c7859-klgq4\" (UID: \"2fe210a4-2adf-4b55-9a43-c1c390f51b0e\") " pod="openstack-operators/infra-operator-controller-manager-77c48c7859-klgq4" Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.341418 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zvpzz\" (UniqueName: \"kubernetes.io/projected/36e5ddfe-67a4-4721-9ef5-b9459c64bf5c-kube-api-access-zvpzz\") pod \"designate-operator-controller-manager-9f958b845-4wmln\" (UID: \"36e5ddfe-67a4-4721-9ef5-b9459c64bf5c\") " pod="openstack-operators/designate-operator-controller-manager-9f958b845-4wmln" Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.350502 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-77c48c7859-klgq4"] Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.359741 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-bv8wz"] Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.372413 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-78757b4889-5qcms"] Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.374529 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n7p2p\" (UniqueName: \"kubernetes.io/projected/1f795f92-d385-49bc-bc91-5109734f4d5a-kube-api-access-n7p2p\") pod \"glance-operator-controller-manager-c6994669c-jv7cr\" (UID: \"1f795f92-d385-49bc-bc91-5109734f4d5a\") " pod="openstack-operators/glance-operator-controller-manager-c6994669c-jv7cr" Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.386399 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-c6994669c-jv7cr"] Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.390552 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-767fdc4f47-9zp7h"] Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.391316 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zvpzz\" (UniqueName: 
\"kubernetes.io/projected/36e5ddfe-67a4-4721-9ef5-b9459c64bf5c-kube-api-access-zvpzz\") pod \"designate-operator-controller-manager-9f958b845-4wmln\" (UID: \"36e5ddfe-67a4-4721-9ef5-b9459c64bf5c\") " pod="openstack-operators/designate-operator-controller-manager-9f958b845-4wmln" Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.394618 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-znqn9\" (UniqueName: \"kubernetes.io/projected/848fd8db-3bd5-4e22-96ca-f69b181e48be-kube-api-access-znqn9\") pod \"barbican-operator-controller-manager-7ddb5c749-svq8w\" (UID: \"848fd8db-3bd5-4e22-96ca-f69b181e48be\") " pod="openstack-operators/barbican-operator-controller-manager-7ddb5c749-svq8w" Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.399451 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-c87fff755-s6gm8"] Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.400034 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8z9wn\" (UniqueName: \"kubernetes.io/projected/a028dcae-6b9d-414d-8bab-652f301de541-kube-api-access-8z9wn\") pod \"cinder-operator-controller-manager-9b68f5989-7qgck\" (UID: \"a028dcae-6b9d-414d-8bab-652f301de541\") " pod="openstack-operators/cinder-operator-controller-manager-9b68f5989-7qgck" Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.400435 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-s6gm8" Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.413869 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-864f6b75bf-h6dr4"] Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.416317 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-cb4666565-ncnww"] Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.417377 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-cb4666565-ncnww" Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.417920 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-g26mn" Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.421246 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-dklr8" Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.443163 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wq74t\" (UniqueName: \"kubernetes.io/projected/b72b2323-5329-4145-9cee-b447d9e2a304-kube-api-access-wq74t\") pod \"manila-operator-controller-manager-864f6b75bf-h6dr4\" (UID: \"b72b2323-5329-4145-9cee-b447d9e2a304\") " pod="openstack-operators/manila-operator-controller-manager-864f6b75bf-h6dr4" Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.443424 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pcnf6\" (UniqueName: \"kubernetes.io/projected/ba9a1249-fc58-4809-a472-d199afa9b52b-kube-api-access-pcnf6\") pod \"keystone-operator-controller-manager-767fdc4f47-9zp7h\" (UID: \"ba9a1249-fc58-4809-a472-d199afa9b52b\") " pod="openstack-operators/keystone-operator-controller-manager-767fdc4f47-9zp7h" Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.443489 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jw9tk\" (UniqueName: \"kubernetes.io/projected/bb9b2c3f-4f68-44fc-addf-2cf4421be015-kube-api-access-jw9tk\") pod \"horizon-operator-controller-manager-77d5c5b54f-bv8wz\" (UID: \"bb9b2c3f-4f68-44fc-addf-2cf4421be015\") " pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-bv8wz" Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.443517 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8r2tn\" (UniqueName: \"kubernetes.io/projected/d0cafd1d-5f37-499a-a531-547a137aae21-kube-api-access-8r2tn\") pod \"ironic-operator-controller-manager-78757b4889-5qcms\" (UID: \"d0cafd1d-5f37-499a-a531-547a137aae21\") " pod="openstack-operators/ironic-operator-controller-manager-78757b4889-5qcms" Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.443685 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2w757\" (UniqueName: \"kubernetes.io/projected/2fe210a4-2adf-4b55-9a43-c1c390f51b0e-kube-api-access-2w757\") pod \"infra-operator-controller-manager-77c48c7859-klgq4\" (UID: \"2fe210a4-2adf-4b55-9a43-c1c390f51b0e\") " pod="openstack-operators/infra-operator-controller-manager-77c48c7859-klgq4" Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.443717 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-csk5g\" (UniqueName: \"kubernetes.io/projected/efb259b7-934f-4bc3-b502-633472d1a1c5-kube-api-access-csk5g\") pod \"heat-operator-controller-manager-594c8c9d5d-zmgll\" (UID: \"efb259b7-934f-4bc3-b502-633472d1a1c5\") " pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-zmgll" Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.443763 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: 
\"kubernetes.io/secret/2fe210a4-2adf-4b55-9a43-c1c390f51b0e-cert\") pod \"infra-operator-controller-manager-77c48c7859-klgq4\" (UID: \"2fe210a4-2adf-4b55-9a43-c1c390f51b0e\") " pod="openstack-operators/infra-operator-controller-manager-77c48c7859-klgq4" Jan 21 11:14:40 crc kubenswrapper[4881]: E0121 11:14:40.443986 4881 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 21 11:14:40 crc kubenswrapper[4881]: E0121 11:14:40.444060 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2fe210a4-2adf-4b55-9a43-c1c390f51b0e-cert podName:2fe210a4-2adf-4b55-9a43-c1c390f51b0e nodeName:}" failed. No retries permitted until 2026-01-21 11:14:40.944036238 +0000 UTC m=+1068.203992697 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/2fe210a4-2adf-4b55-9a43-c1c390f51b0e-cert") pod "infra-operator-controller-manager-77c48c7859-klgq4" (UID: "2fe210a4-2adf-4b55-9a43-c1c390f51b0e") : secret "infra-operator-webhook-server-cert" not found Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.446403 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-cb4666565-ncnww"] Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.460547 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-c87fff755-s6gm8"] Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.477610 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jw9tk\" (UniqueName: \"kubernetes.io/projected/bb9b2c3f-4f68-44fc-addf-2cf4421be015-kube-api-access-jw9tk\") pod \"horizon-operator-controller-manager-77d5c5b54f-bv8wz\" (UID: \"bb9b2c3f-4f68-44fc-addf-2cf4421be015\") " pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-bv8wz" Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.484017 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-65849867d6-798zt"] Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.485283 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-65849867d6-798zt" Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.487370 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-csk5g\" (UniqueName: \"kubernetes.io/projected/efb259b7-934f-4bc3-b502-633472d1a1c5-kube-api-access-csk5g\") pod \"heat-operator-controller-manager-594c8c9d5d-zmgll\" (UID: \"efb259b7-934f-4bc3-b502-633472d1a1c5\") " pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-zmgll" Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.488928 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-bqsg6" Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.501050 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2w757\" (UniqueName: \"kubernetes.io/projected/2fe210a4-2adf-4b55-9a43-c1c390f51b0e-kube-api-access-2w757\") pod \"infra-operator-controller-manager-77c48c7859-klgq4\" (UID: \"2fe210a4-2adf-4b55-9a43-c1c390f51b0e\") " pod="openstack-operators/infra-operator-controller-manager-77c48c7859-klgq4" Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.526588 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-n7kgd"] Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.527944 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-n7kgd" Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.529824 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-m9p9v" Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.545509 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pcnf6\" (UniqueName: \"kubernetes.io/projected/ba9a1249-fc58-4809-a472-d199afa9b52b-kube-api-access-pcnf6\") pod \"keystone-operator-controller-manager-767fdc4f47-9zp7h\" (UID: \"ba9a1249-fc58-4809-a472-d199afa9b52b\") " pod="openstack-operators/keystone-operator-controller-manager-767fdc4f47-9zp7h" Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.545563 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8r2tn\" (UniqueName: \"kubernetes.io/projected/d0cafd1d-5f37-499a-a531-547a137aae21-kube-api-access-8r2tn\") pod \"ironic-operator-controller-manager-78757b4889-5qcms\" (UID: \"d0cafd1d-5f37-499a-a531-547a137aae21\") " pod="openstack-operators/ironic-operator-controller-manager-78757b4889-5qcms" Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.545604 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zt66j\" (UniqueName: \"kubernetes.io/projected/c3b86204-5389-4b6a-bd45-fb6ee23b784e-kube-api-access-zt66j\") pod \"neutron-operator-controller-manager-cb4666565-ncnww\" (UID: \"c3b86204-5389-4b6a-bd45-fb6ee23b784e\") " pod="openstack-operators/neutron-operator-controller-manager-cb4666565-ncnww" Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.545661 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s2qhv\" (UniqueName: \"kubernetes.io/projected/4c2550fe-b3eb-4eef-8ffc-ebb4a9ce1b5f-kube-api-access-s2qhv\") pod 
\"mariadb-operator-controller-manager-c87fff755-s6gm8\" (UID: \"4c2550fe-b3eb-4eef-8ffc-ebb4a9ce1b5f\") " pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-s6gm8" Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.545724 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wq74t\" (UniqueName: \"kubernetes.io/projected/b72b2323-5329-4145-9cee-b447d9e2a304-kube-api-access-wq74t\") pod \"manila-operator-controller-manager-864f6b75bf-h6dr4\" (UID: \"b72b2323-5329-4145-9cee-b447d9e2a304\") " pod="openstack-operators/manila-operator-controller-manager-864f6b75bf-h6dr4" Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.560089 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-65849867d6-798zt"] Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.570556 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wq74t\" (UniqueName: \"kubernetes.io/projected/b72b2323-5329-4145-9cee-b447d9e2a304-kube-api-access-wq74t\") pod \"manila-operator-controller-manager-864f6b75bf-h6dr4\" (UID: \"b72b2323-5329-4145-9cee-b447d9e2a304\") " pod="openstack-operators/manila-operator-controller-manager-864f6b75bf-h6dr4" Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.571714 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pcnf6\" (UniqueName: \"kubernetes.io/projected/ba9a1249-fc58-4809-a472-d199afa9b52b-kube-api-access-pcnf6\") pod \"keystone-operator-controller-manager-767fdc4f47-9zp7h\" (UID: \"ba9a1249-fc58-4809-a472-d199afa9b52b\") " pod="openstack-operators/keystone-operator-controller-manager-767fdc4f47-9zp7h" Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.572546 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-n7kgd"] Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.583090 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8r2tn\" (UniqueName: \"kubernetes.io/projected/d0cafd1d-5f37-499a-a531-547a137aae21-kube-api-access-8r2tn\") pod \"ironic-operator-controller-manager-78757b4889-5qcms\" (UID: \"d0cafd1d-5f37-499a-a531-547a137aae21\") " pod="openstack-operators/ironic-operator-controller-manager-78757b4889-5qcms" Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.595278 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-767fdc4f47-9zp7h" Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.601352 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8544795q"] Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.602396 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7ddb5c749-svq8w" Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.602465 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8544795q" Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.610200 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.610335 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-fzcfv" Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.619037 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-9b68f5989-7qgck" Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.622293 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-864f6b75bf-h6dr4" Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.624735 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-686df47fcb-jh4z9"] Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.626024 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-686df47fcb-jh4z9" Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.627903 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-872n6" Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.631066 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-55db956ddc-vpqw4"] Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.632295 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-vpqw4" Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.635191 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-7h6dm" Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.635941 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-9f958b845-4wmln" Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.638633 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-85dd56d4cc-rk8l8"] Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.639366 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-85dd56d4cc-rk8l8" Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.641067 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-j9ww2" Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.645739 4881 util.go:30] "No sandbox for pod can be found. 
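
Every entry above follows the same shape: a syslog-style timestamp and host ("Jan 21 11:14:40 crc"), the emitting process ("kubenswrapper[4881]"), a klog header (severity I/W/E, MMDD date, time with microseconds, PID, source file:line), then a structured message with key="value" pairs. A minimal Python sketch of that split, useful when grepping this file; the regex and field names are illustrative, not an official format specification:

import re

# Illustrative pattern for the kubenswrapper/klog lines in this log:
#   Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.485283 4881 util.go:30] "..." pod="..."
LINE = re.compile(
    r'^(?P<stamp>\w{3} +\d+ [\d:]+) (?P<host>\S+) (?P<proc>\S+)\[(?P<pid>\d+)\]: '
    r'(?P<sev>[IWE])(?P<klog_date>\d{4}) (?P<klog_time>[\d:.]+) +\d+ '
    r'(?P<src>[\w.]+:\d+)\] (?P<msg>.*)$'
)

def parse(line: str):
    m = LINE.match(line)
    return m.groupdict() if m else None

sample = ('Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.485283 4881 '
          'util.go:30] "No sandbox for pod can be found. Need to start a new one" '
          'pod="openstack-operators/nova-operator-controller-manager-65849867d6-798zt"')
fields = parse(sample)
assert fields and fields["sev"] == "I" and fields["src"] == "util.go:30"
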
Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.647333 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zt66j\" (UniqueName: \"kubernetes.io/projected/c3b86204-5389-4b6a-bd45-fb6ee23b784e-kube-api-access-zt66j\") pod \"neutron-operator-controller-manager-cb4666565-ncnww\" (UID: \"c3b86204-5389-4b6a-bd45-fb6ee23b784e\") " pod="openstack-operators/neutron-operator-controller-manager-cb4666565-ncnww"
Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.647395 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2qhv\" (UniqueName: \"kubernetes.io/projected/4c2550fe-b3eb-4eef-8ffc-ebb4a9ce1b5f-kube-api-access-s2qhv\") pod \"mariadb-operator-controller-manager-c87fff755-s6gm8\" (UID: \"4c2550fe-b3eb-4eef-8ffc-ebb4a9ce1b5f\") " pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-s6gm8"
Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.647467 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-96fmk\" (UniqueName: \"kubernetes.io/projected/761a1a49-e01e-4674-b1f4-da732e1def98-kube-api-access-96fmk\") pod \"nova-operator-controller-manager-65849867d6-798zt\" (UID: \"761a1a49-e01e-4674-b1f4-da732e1def98\") " pod="openstack-operators/nova-operator-controller-manager-65849867d6-798zt"
Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.647496 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nhjc2\" (UniqueName: \"kubernetes.io/projected/340257c4-9218-49b0-8a75-b2a4e0231fe3-kube-api-access-nhjc2\") pod \"octavia-operator-controller-manager-7fc9b76cf6-n7kgd\" (UID: \"340257c4-9218-49b0-8a75-b2a4e0231fe3\") " pod="openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-n7kgd"
Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.666031 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-55db956ddc-vpqw4"]
Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.697830 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zt66j\" (UniqueName: \"kubernetes.io/projected/c3b86204-5389-4b6a-bd45-fb6ee23b784e-kube-api-access-zt66j\") pod \"neutron-operator-controller-manager-cb4666565-ncnww\" (UID: \"c3b86204-5389-4b6a-bd45-fb6ee23b784e\") " pod="openstack-operators/neutron-operator-controller-manager-cb4666565-ncnww"
Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.706833 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-686df47fcb-jh4z9"]
Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.713776 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2qhv\" (UniqueName: \"kubernetes.io/projected/4c2550fe-b3eb-4eef-8ffc-ebb4a9ce1b5f-kube-api-access-s2qhv\") pod \"mariadb-operator-controller-manager-c87fff755-s6gm8\" (UID: \"4c2550fe-b3eb-4eef-8ffc-ebb4a9ce1b5f\") " pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-s6gm8"
Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.741080 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8544795q"]
Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.741847 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-zmgll"
Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.750563 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b1b17be2-e382-4916-8e53-a68c85b5bfc2-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b8544795q\" (UID: \"b1b17be2-e382-4916-8e53-a68c85b5bfc2\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8544795q"
Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.751001 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9b5pw\" (UniqueName: \"kubernetes.io/projected/50cfdf18-6a7e-4b3c-bb0f-5260fc3d42eb-kube-api-access-9b5pw\") pod \"ovn-operator-controller-manager-55db956ddc-vpqw4\" (UID: \"50cfdf18-6a7e-4b3c-bb0f-5260fc3d42eb\") " pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-vpqw4"
Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.751044 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dvvxc\" (UniqueName: \"kubernetes.io/projected/8c504afd-e4e0-4676-b292-b569b638a7dd-kube-api-access-dvvxc\") pod \"swift-operator-controller-manager-85dd56d4cc-rk8l8\" (UID: \"8c504afd-e4e0-4676-b292-b569b638a7dd\") " pod="openstack-operators/swift-operator-controller-manager-85dd56d4cc-rk8l8"
Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.751097 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7x4wn\" (UniqueName: \"kubernetes.io/projected/b1b17be2-e382-4916-8e53-a68c85b5bfc2-kube-api-access-7x4wn\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b8544795q\" (UID: \"b1b17be2-e382-4916-8e53-a68c85b5bfc2\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8544795q"
Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.751153 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-96fmk\" (UniqueName: \"kubernetes.io/projected/761a1a49-e01e-4674-b1f4-da732e1def98-kube-api-access-96fmk\") pod \"nova-operator-controller-manager-65849867d6-798zt\" (UID: \"761a1a49-e01e-4674-b1f4-da732e1def98\") " pod="openstack-operators/nova-operator-controller-manager-65849867d6-798zt"
Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.751201 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nhjc2\" (UniqueName: \"kubernetes.io/projected/340257c4-9218-49b0-8a75-b2a4e0231fe3-kube-api-access-nhjc2\") pod \"octavia-operator-controller-manager-7fc9b76cf6-n7kgd\" (UID: \"340257c4-9218-49b0-8a75-b2a4e0231fe3\") " pod="openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-n7kgd"
Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.751317 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p6hzs\" (UniqueName: \"kubernetes.io/projected/e8e6f423-a07b-4a22-9e39-efa8de22747e-kube-api-access-p6hzs\") pod \"placement-operator-controller-manager-686df47fcb-jh4z9\" (UID: \"e8e6f423-a07b-4a22-9e39-efa8de22747e\") " pod="openstack-operators/placement-operator-controller-manager-686df47fcb-jh4z9"
Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.785203 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nhjc2\" (UniqueName: \"kubernetes.io/projected/340257c4-9218-49b0-8a75-b2a4e0231fe3-kube-api-access-nhjc2\") pod \"octavia-operator-controller-manager-7fc9b76cf6-n7kgd\" (UID: \"340257c4-9218-49b0-8a75-b2a4e0231fe3\") " pod="openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-n7kgd"
Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.793212 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-96fmk\" (UniqueName: \"kubernetes.io/projected/761a1a49-e01e-4674-b1f4-da732e1def98-kube-api-access-96fmk\") pod \"nova-operator-controller-manager-65849867d6-798zt\" (UID: \"761a1a49-e01e-4674-b1f4-da732e1def98\") " pod="openstack-operators/nova-operator-controller-manager-65849867d6-798zt"
Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.803344 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-bv8wz"
Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.820572 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-s6gm8"
Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.826712 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-7cd8bc9dbb-tttcz"]
Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.830565 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-7cd8bc9dbb-tttcz"
Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.837510 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-gjcsh"
Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.849908 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-cb4666565-ncnww"
Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.853336 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9b5pw\" (UniqueName: \"kubernetes.io/projected/50cfdf18-6a7e-4b3c-bb0f-5260fc3d42eb-kube-api-access-9b5pw\") pod \"ovn-operator-controller-manager-55db956ddc-vpqw4\" (UID: \"50cfdf18-6a7e-4b3c-bb0f-5260fc3d42eb\") " pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-vpqw4"
Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.853388 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dvvxc\" (UniqueName: \"kubernetes.io/projected/8c504afd-e4e0-4676-b292-b569b638a7dd-kube-api-access-dvvxc\") pod \"swift-operator-controller-manager-85dd56d4cc-rk8l8\" (UID: \"8c504afd-e4e0-4676-b292-b569b638a7dd\") " pod="openstack-operators/swift-operator-controller-manager-85dd56d4cc-rk8l8"
Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.853413 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7x4wn\" (UniqueName: \"kubernetes.io/projected/b1b17be2-e382-4916-8e53-a68c85b5bfc2-kube-api-access-7x4wn\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b8544795q\" (UID: \"b1b17be2-e382-4916-8e53-a68c85b5bfc2\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8544795q"
Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.853490 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p6hzs\" (UniqueName: \"kubernetes.io/projected/e8e6f423-a07b-4a22-9e39-efa8de22747e-kube-api-access-p6hzs\") pod \"placement-operator-controller-manager-686df47fcb-jh4z9\" (UID: \"e8e6f423-a07b-4a22-9e39-efa8de22747e\") " pod="openstack-operators/placement-operator-controller-manager-686df47fcb-jh4z9"
Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.853542 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b1b17be2-e382-4916-8e53-a68c85b5bfc2-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b8544795q\" (UID: \"b1b17be2-e382-4916-8e53-a68c85b5bfc2\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8544795q"
Jan 21 11:14:40 crc kubenswrapper[4881]: E0121 11:14:40.853695 4881 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found
Jan 21 11:14:40 crc kubenswrapper[4881]: E0121 11:14:40.853773 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b1b17be2-e382-4916-8e53-a68c85b5bfc2-cert podName:b1b17be2-e382-4916-8e53-a68c85b5bfc2 nodeName:}" failed. No retries permitted until 2026-01-21 11:14:41.35375345 +0000 UTC m=+1068.613709919 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/b1b17be2-e382-4916-8e53-a68c85b5bfc2-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b8544795q" (UID: "b1b17be2-e382-4916-8e53-a68c85b5bfc2") : secret "openstack-baremetal-operator-webhook-server-cert" not found
Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.856187 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-fcht4"]
Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.857982 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-7cd8bc9dbb-tttcz"]
Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.858094 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-fcht4"
Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.865497 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-65849867d6-798zt"
Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.874846 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-fcht4"]
Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.883238 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-n7kgd"
Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.883797 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-85dd56d4cc-rk8l8"]
Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.884820 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-78757b4889-5qcms"
Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.905032 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-849fd9b886-k9t7q"]
Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.907566 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-849fd9b886-k9t7q"
Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.926661 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-849fd9b886-k9t7q"]
Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.955106 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pxbkl\" (UniqueName: \"kubernetes.io/projected/2aac430e-3ac8-4624-8575-66386b5c2df3-kube-api-access-pxbkl\") pod \"test-operator-controller-manager-7cd8bc9dbb-tttcz\" (UID: \"2aac430e-3ac8-4624-8575-66386b5c2df3\") " pod="openstack-operators/test-operator-controller-manager-7cd8bc9dbb-tttcz"
Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.955640 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2fe210a4-2adf-4b55-9a43-c1c390f51b0e-cert\") pod \"infra-operator-controller-manager-77c48c7859-klgq4\" (UID: \"2fe210a4-2adf-4b55-9a43-c1c390f51b0e\") " pod="openstack-operators/infra-operator-controller-manager-77c48c7859-klgq4"
Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.955894 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j8mtz\" (UniqueName: \"kubernetes.io/projected/1cebbaaf-6189-409a-8f25-43d7fac77f95-kube-api-access-j8mtz\") pod \"watcher-operator-controller-manager-849fd9b886-k9t7q\" (UID: \"1cebbaaf-6189-409a-8f25-43d7fac77f95\") " pod="openstack-operators/watcher-operator-controller-manager-849fd9b886-k9t7q"
Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.956080 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nm5x4\" (UniqueName: \"kubernetes.io/projected/55ce5ee6-47f4-4874-92dc-6ab78f2ce212-kube-api-access-nm5x4\") pod \"telemetry-operator-controller-manager-5f8f495fcf-fcht4\" (UID: \"55ce5ee6-47f4-4874-92dc-6ab78f2ce212\") " pod="openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-fcht4"
Jan 21 11:14:40 crc kubenswrapper[4881]: E0121 11:14:40.956523 4881 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found
Jan 21 11:14:40 crc kubenswrapper[4881]: E0121 11:14:40.956806 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2fe210a4-2adf-4b55-9a43-c1c390f51b0e-cert podName:2fe210a4-2adf-4b55-9a43-c1c390f51b0e nodeName:}" failed. No retries permitted until 2026-01-21 11:14:41.956762624 +0000 UTC m=+1069.216719283 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/2fe210a4-2adf-4b55-9a43-c1c390f51b0e-cert") pod "infra-operator-controller-manager-77c48c7859-klgq4" (UID: "2fe210a4-2adf-4b55-9a43-c1c390f51b0e") : secret "infra-operator-webhook-server-cert" not found
Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.976502 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-87d6d564b-ktcf8"]
Jan 21 11:14:40 crc kubenswrapper[4881]: I0121 11:14:40.979202 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-87d6d564b-ktcf8"
Jan 21 11:14:41 crc kubenswrapper[4881]: I0121 11:14:41.004973 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-87d6d564b-ktcf8"]
Jan 21 11:14:41 crc kubenswrapper[4881]: I0121 11:14:41.023024 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-76qxc"]
Jan 21 11:14:41 crc kubenswrapper[4881]: I0121 11:14:41.023917 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-76qxc"]
Jan 21 11:14:41 crc kubenswrapper[4881]: I0121 11:14:41.023996 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-76qxc"
Jan 21 11:14:41 crc kubenswrapper[4881]: I0121 11:14:41.075369 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert"
Jan 21 11:14:41 crc kubenswrapper[4881]: I0121 11:14:41.076345 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-t9k6g"
Jan 21 11:14:41 crc kubenswrapper[4881]: I0121 11:14:41.079200 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert"
Jan 21 11:14:41 crc kubenswrapper[4881]: I0121 11:14:41.081284 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7x4wn\" (UniqueName: \"kubernetes.io/projected/b1b17be2-e382-4916-8e53-a68c85b5bfc2-kube-api-access-7x4wn\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b8544795q\" (UID: \"b1b17be2-e382-4916-8e53-a68c85b5bfc2\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8544795q"
Jan 21 11:14:41 crc kubenswrapper[4881]: I0121 11:14:41.083280 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p6hzs\" (UniqueName: \"kubernetes.io/projected/e8e6f423-a07b-4a22-9e39-efa8de22747e-kube-api-access-p6hzs\") pod \"placement-operator-controller-manager-686df47fcb-jh4z9\" (UID: \"e8e6f423-a07b-4a22-9e39-efa8de22747e\") " pod="openstack-operators/placement-operator-controller-manager-686df47fcb-jh4z9"
Jan 21 11:14:41 crc kubenswrapper[4881]: I0121 11:14:41.086908 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9b5pw\" (UniqueName: \"kubernetes.io/projected/50cfdf18-6a7e-4b3c-bb0f-5260fc3d42eb-kube-api-access-9b5pw\") pod \"ovn-operator-controller-manager-55db956ddc-vpqw4\" (UID: \"50cfdf18-6a7e-4b3c-bb0f-5260fc3d42eb\") " pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-vpqw4"
Jan 21 11:14:41 crc kubenswrapper[4881]: I0121 11:14:41.089139 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-jqcjd"
Jan 21 11:14:41 crc kubenswrapper[4881]: I0121 11:14:41.089566 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-s4m4r"
Jan 21 11:14:41 crc kubenswrapper[4881]: I0121 11:14:41.089922 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-zjv4z"
Jan 21 11:14:41 crc kubenswrapper[4881]: I0121 11:14:41.091031 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a55fdb43-cd6c-4415-8ef6-07f6c7da6272-metrics-certs\") pod \"openstack-operator-controller-manager-87d6d564b-ktcf8\" (UID: \"a55fdb43-cd6c-4415-8ef6-07f6c7da6272\") " pod="openstack-operators/openstack-operator-controller-manager-87d6d564b-ktcf8"
Jan 21 11:14:41 crc kubenswrapper[4881]: I0121 11:14:41.091258 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pxbkl\" (UniqueName: \"kubernetes.io/projected/2aac430e-3ac8-4624-8575-66386b5c2df3-kube-api-access-pxbkl\") pod \"test-operator-controller-manager-7cd8bc9dbb-tttcz\" (UID: \"2aac430e-3ac8-4624-8575-66386b5c2df3\") " pod="openstack-operators/test-operator-controller-manager-7cd8bc9dbb-tttcz"
Jan 21 11:14:41 crc kubenswrapper[4881]: I0121 11:14:41.091395 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jfj8h\" (UniqueName: \"kubernetes.io/projected/8c8feeec-377c-499a-b666-895010f8ebeb-kube-api-access-jfj8h\") pod \"rabbitmq-cluster-operator-manager-668c99d594-76qxc\" (UID: \"8c8feeec-377c-499a-b666-895010f8ebeb\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-76qxc"
Jan 21 11:14:41 crc kubenswrapper[4881]: I0121 11:14:41.091605 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j8mtz\" (UniqueName: \"kubernetes.io/projected/1cebbaaf-6189-409a-8f25-43d7fac77f95-kube-api-access-j8mtz\") pod \"watcher-operator-controller-manager-849fd9b886-k9t7q\" (UID: \"1cebbaaf-6189-409a-8f25-43d7fac77f95\") " pod="openstack-operators/watcher-operator-controller-manager-849fd9b886-k9t7q"
Jan 21 11:14:41 crc kubenswrapper[4881]: I0121 11:14:41.091932 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nm5x4\" (UniqueName: \"kubernetes.io/projected/55ce5ee6-47f4-4874-92dc-6ab78f2ce212-kube-api-access-nm5x4\") pod \"telemetry-operator-controller-manager-5f8f495fcf-fcht4\" (UID: \"55ce5ee6-47f4-4874-92dc-6ab78f2ce212\") " pod="openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-fcht4"
Jan 21 11:14:41 crc kubenswrapper[4881]: I0121 11:14:41.092134 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-67nts\" (UniqueName: \"kubernetes.io/projected/a55fdb43-cd6c-4415-8ef6-07f6c7da6272-kube-api-access-67nts\") pod \"openstack-operator-controller-manager-87d6d564b-ktcf8\" (UID: \"a55fdb43-cd6c-4415-8ef6-07f6c7da6272\") " pod="openstack-operators/openstack-operator-controller-manager-87d6d564b-ktcf8"
Jan 21 11:14:41 crc kubenswrapper[4881]: I0121 11:14:41.092345 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/a55fdb43-cd6c-4415-8ef6-07f6c7da6272-webhook-certs\") pod \"openstack-operator-controller-manager-87d6d564b-ktcf8\" (UID: \"a55fdb43-cd6c-4415-8ef6-07f6c7da6272\") " pod="openstack-operators/openstack-operator-controller-manager-87d6d564b-ktcf8"
Jan 21 11:14:41 crc kubenswrapper[4881]: I0121 11:14:41.113304 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dvvxc\" (UniqueName: \"kubernetes.io/projected/8c504afd-e4e0-4676-b292-b569b638a7dd-kube-api-access-dvvxc\") pod \"swift-operator-controller-manager-85dd56d4cc-rk8l8\" (UID: \"8c504afd-e4e0-4676-b292-b569b638a7dd\") " pod="openstack-operators/swift-operator-controller-manager-85dd56d4cc-rk8l8"
Jan 21 11:14:41 crc kubenswrapper[4881]: I0121 11:14:41.145697 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nm5x4\" (UniqueName: \"kubernetes.io/projected/55ce5ee6-47f4-4874-92dc-6ab78f2ce212-kube-api-access-nm5x4\") pod \"telemetry-operator-controller-manager-5f8f495fcf-fcht4\" (UID: \"55ce5ee6-47f4-4874-92dc-6ab78f2ce212\") " pod="openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-fcht4"
Jan 21 11:14:41 crc kubenswrapper[4881]: I0121 11:14:41.146710 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pxbkl\" (UniqueName: \"kubernetes.io/projected/2aac430e-3ac8-4624-8575-66386b5c2df3-kube-api-access-pxbkl\") pod \"test-operator-controller-manager-7cd8bc9dbb-tttcz\" (UID: \"2aac430e-3ac8-4624-8575-66386b5c2df3\") " pod="openstack-operators/test-operator-controller-manager-7cd8bc9dbb-tttcz"
Jan 21 11:14:41 crc kubenswrapper[4881]: I0121 11:14:41.147842 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j8mtz\" (UniqueName: \"kubernetes.io/projected/1cebbaaf-6189-409a-8f25-43d7fac77f95-kube-api-access-j8mtz\") pod \"watcher-operator-controller-manager-849fd9b886-k9t7q\" (UID: \"1cebbaaf-6189-409a-8f25-43d7fac77f95\") " pod="openstack-operators/watcher-operator-controller-manager-849fd9b886-k9t7q"
Jan 21 11:14:41 crc kubenswrapper[4881]: I0121 11:14:41.196250 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-849fd9b886-k9t7q"
Jan 21 11:14:41 crc kubenswrapper[4881]: I0121 11:14:41.227461 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/a55fdb43-cd6c-4415-8ef6-07f6c7da6272-webhook-certs\") pod \"openstack-operator-controller-manager-87d6d564b-ktcf8\" (UID: \"a55fdb43-cd6c-4415-8ef6-07f6c7da6272\") " pod="openstack-operators/openstack-operator-controller-manager-87d6d564b-ktcf8"
Jan 21 11:14:41 crc kubenswrapper[4881]: I0121 11:14:41.227515 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a55fdb43-cd6c-4415-8ef6-07f6c7da6272-metrics-certs\") pod \"openstack-operator-controller-manager-87d6d564b-ktcf8\" (UID: \"a55fdb43-cd6c-4415-8ef6-07f6c7da6272\") " pod="openstack-operators/openstack-operator-controller-manager-87d6d564b-ktcf8"
Jan 21 11:14:41 crc kubenswrapper[4881]: I0121 11:14:41.227538 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jfj8h\" (UniqueName: \"kubernetes.io/projected/8c8feeec-377c-499a-b666-895010f8ebeb-kube-api-access-jfj8h\") pod \"rabbitmq-cluster-operator-manager-668c99d594-76qxc\" (UID: \"8c8feeec-377c-499a-b666-895010f8ebeb\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-76qxc"
Jan 21 11:14:41 crc kubenswrapper[4881]: I0121 11:14:41.227598 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-67nts\" (UniqueName: \"kubernetes.io/projected/a55fdb43-cd6c-4415-8ef6-07f6c7da6272-kube-api-access-67nts\") pod \"openstack-operator-controller-manager-87d6d564b-ktcf8\" (UID: \"a55fdb43-cd6c-4415-8ef6-07f6c7da6272\") " pod="openstack-operators/openstack-operator-controller-manager-87d6d564b-ktcf8"
Jan 21 11:14:41 crc kubenswrapper[4881]: E0121 11:14:41.228050 4881 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found
Jan 21 11:14:41 crc kubenswrapper[4881]: E0121 11:14:41.228103 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a55fdb43-cd6c-4415-8ef6-07f6c7da6272-webhook-certs podName:a55fdb43-cd6c-4415-8ef6-07f6c7da6272 nodeName:}" failed. No retries permitted until 2026-01-21 11:14:41.72808383 +0000 UTC m=+1068.988040299 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/a55fdb43-cd6c-4415-8ef6-07f6c7da6272-webhook-certs") pod "openstack-operator-controller-manager-87d6d564b-ktcf8" (UID: "a55fdb43-cd6c-4415-8ef6-07f6c7da6272") : secret "webhook-server-cert" not found
Jan 21 11:14:41 crc kubenswrapper[4881]: E0121 11:14:41.228265 4881 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found
Jan 21 11:14:41 crc kubenswrapper[4881]: E0121 11:14:41.228294 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a55fdb43-cd6c-4415-8ef6-07f6c7da6272-metrics-certs podName:a55fdb43-cd6c-4415-8ef6-07f6c7da6272 nodeName:}" failed. No retries permitted until 2026-01-21 11:14:41.728284875 +0000 UTC m=+1068.988241354 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a55fdb43-cd6c-4415-8ef6-07f6c7da6272-metrics-certs") pod "openstack-operator-controller-manager-87d6d564b-ktcf8" (UID: "a55fdb43-cd6c-4415-8ef6-07f6c7da6272") : secret "metrics-server-cert" not found
Jan 21 11:14:41 crc kubenswrapper[4881]: I0121 11:14:41.261669 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-686df47fcb-jh4z9"
Jan 21 11:14:41 crc kubenswrapper[4881]: I0121 11:14:41.322086 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-vpqw4"
Jan 21 11:14:41 crc kubenswrapper[4881]: I0121 11:14:41.325750 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-85dd56d4cc-rk8l8"
Jan 21 11:14:41 crc kubenswrapper[4881]: I0121 11:14:41.361082 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b1b17be2-e382-4916-8e53-a68c85b5bfc2-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b8544795q\" (UID: \"b1b17be2-e382-4916-8e53-a68c85b5bfc2\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8544795q"
Jan 21 11:14:41 crc kubenswrapper[4881]: E0121 11:14:41.361980 4881 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found
Jan 21 11:14:41 crc kubenswrapper[4881]: E0121 11:14:41.382507 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b1b17be2-e382-4916-8e53-a68c85b5bfc2-cert podName:b1b17be2-e382-4916-8e53-a68c85b5bfc2 nodeName:}" failed. No retries permitted until 2026-01-21 11:14:42.382457674 +0000 UTC m=+1069.642414153 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/b1b17be2-e382-4916-8e53-a68c85b5bfc2-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b8544795q" (UID: "b1b17be2-e382-4916-8e53-a68c85b5bfc2") : secret "openstack-baremetal-operator-webhook-server-cert" not found
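
The "No retries permitted until" wording above is a gate, not a timer: kubelet records a per-operation deadline, and any attempt on the same volume before that deadline is rejected outright, while a failed attempt arms the next, longer deadline. A toy Python sketch of that bookkeeping; the class and method names are invented for illustration, the real logic lives in nestedpendingoperations.go:

import time

class RetryGate:
    """Toy model of the per-operation backoff gate logged above."""
    def __init__(self, base=0.5, cap=122.0):
        self.base, self.cap = base, cap
        self.failures = {}                    # operation key -> (count, not_before)

    def try_run(self, key, op):
        count, not_before = self.failures.get(key, (0, 0.0))
        now = time.monotonic()
        if now < not_before:
            raise RuntimeError(f"No retries permitted until +{not_before - now:.3f}s")
        try:
            op()
        except Exception:
            count += 1
            delay = min(self.base * 2 ** (count - 1), self.cap)
            self.failures[key] = (count, now + delay)
            raise
        else:
            self.failures.pop(key, None)      # success clears the backoff state

gate = RetryGate()
def mount():                                  # stand-in for MountVolume.SetUp
    raise OSError('secret "infra-operator-webhook-server-cert" not found')

for _ in range(2):
    try:
        gate.try_run("infra-operator cert", mount)
    except Exception as e:
        print(e)
# The first attempt fails and arms a 500ms gate; the immediate second attempt
# is rejected with "No retries permitted until ...", matching the log above.
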
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/b1b17be2-e382-4916-8e53-a68c85b5bfc2-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b8544795q" (UID: "b1b17be2-e382-4916-8e53-a68c85b5bfc2") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 21 11:14:41 crc kubenswrapper[4881]: I0121 11:14:41.393279 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-67nts\" (UniqueName: \"kubernetes.io/projected/a55fdb43-cd6c-4415-8ef6-07f6c7da6272-kube-api-access-67nts\") pod \"openstack-operator-controller-manager-87d6d564b-ktcf8\" (UID: \"a55fdb43-cd6c-4415-8ef6-07f6c7da6272\") " pod="openstack-operators/openstack-operator-controller-manager-87d6d564b-ktcf8" Jan 21 11:14:41 crc kubenswrapper[4881]: I0121 11:14:41.395435 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-7cd8bc9dbb-tttcz" Jan 21 11:14:41 crc kubenswrapper[4881]: I0121 11:14:41.396336 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jfj8h\" (UniqueName: \"kubernetes.io/projected/8c8feeec-377c-499a-b666-895010f8ebeb-kube-api-access-jfj8h\") pod \"rabbitmq-cluster-operator-manager-668c99d594-76qxc\" (UID: \"8c8feeec-377c-499a-b666-895010f8ebeb\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-76qxc" Jan 21 11:14:41 crc kubenswrapper[4881]: I0121 11:14:41.415732 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-fcht4" Jan 21 11:14:41 crc kubenswrapper[4881]: I0121 11:14:41.664886 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-76qxc" Jan 21 11:14:41 crc kubenswrapper[4881]: I0121 11:14:41.767604 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/a55fdb43-cd6c-4415-8ef6-07f6c7da6272-webhook-certs\") pod \"openstack-operator-controller-manager-87d6d564b-ktcf8\" (UID: \"a55fdb43-cd6c-4415-8ef6-07f6c7da6272\") " pod="openstack-operators/openstack-operator-controller-manager-87d6d564b-ktcf8" Jan 21 11:14:41 crc kubenswrapper[4881]: I0121 11:14:41.767762 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a55fdb43-cd6c-4415-8ef6-07f6c7da6272-metrics-certs\") pod \"openstack-operator-controller-manager-87d6d564b-ktcf8\" (UID: \"a55fdb43-cd6c-4415-8ef6-07f6c7da6272\") " pod="openstack-operators/openstack-operator-controller-manager-87d6d564b-ktcf8" Jan 21 11:14:41 crc kubenswrapper[4881]: E0121 11:14:41.768137 4881 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 21 11:14:41 crc kubenswrapper[4881]: E0121 11:14:41.768232 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a55fdb43-cd6c-4415-8ef6-07f6c7da6272-metrics-certs podName:a55fdb43-cd6c-4415-8ef6-07f6c7da6272 nodeName:}" failed. No retries permitted until 2026-01-21 11:14:42.768202159 +0000 UTC m=+1070.028158628 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a55fdb43-cd6c-4415-8ef6-07f6c7da6272-metrics-certs") pod "openstack-operator-controller-manager-87d6d564b-ktcf8" (UID: "a55fdb43-cd6c-4415-8ef6-07f6c7da6272") : secret "metrics-server-cert" not found Jan 21 11:14:41 crc kubenswrapper[4881]: E0121 11:14:41.768818 4881 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 21 11:14:41 crc kubenswrapper[4881]: E0121 11:14:41.768945 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a55fdb43-cd6c-4415-8ef6-07f6c7da6272-webhook-certs podName:a55fdb43-cd6c-4415-8ef6-07f6c7da6272 nodeName:}" failed. No retries permitted until 2026-01-21 11:14:42.768912616 +0000 UTC m=+1070.028869085 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/a55fdb43-cd6c-4415-8ef6-07f6c7da6272-webhook-certs") pod "openstack-operator-controller-manager-87d6d564b-ktcf8" (UID: "a55fdb43-cd6c-4415-8ef6-07f6c7da6272") : secret "webhook-server-cert" not found Jan 21 11:14:41 crc kubenswrapper[4881]: I0121 11:14:41.970541 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2fe210a4-2adf-4b55-9a43-c1c390f51b0e-cert\") pod \"infra-operator-controller-manager-77c48c7859-klgq4\" (UID: \"2fe210a4-2adf-4b55-9a43-c1c390f51b0e\") " pod="openstack-operators/infra-operator-controller-manager-77c48c7859-klgq4" Jan 21 11:14:41 crc kubenswrapper[4881]: E0121 11:14:41.970717 4881 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 21 11:14:41 crc kubenswrapper[4881]: E0121 11:14:41.970765 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2fe210a4-2adf-4b55-9a43-c1c390f51b0e-cert podName:2fe210a4-2adf-4b55-9a43-c1c390f51b0e nodeName:}" failed. No retries permitted until 2026-01-21 11:14:43.970750841 +0000 UTC m=+1071.230707310 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/2fe210a4-2adf-4b55-9a43-c1c390f51b0e-cert") pod "infra-operator-controller-manager-77c48c7859-klgq4" (UID: "2fe210a4-2adf-4b55-9a43-c1c390f51b0e") : secret "infra-operator-webhook-server-cert" not found Jan 21 11:14:42 crc kubenswrapper[4881]: I0121 11:14:42.402637 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b1b17be2-e382-4916-8e53-a68c85b5bfc2-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b8544795q\" (UID: \"b1b17be2-e382-4916-8e53-a68c85b5bfc2\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8544795q" Jan 21 11:14:42 crc kubenswrapper[4881]: E0121 11:14:42.404843 4881 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 21 11:14:42 crc kubenswrapper[4881]: E0121 11:14:42.405231 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b1b17be2-e382-4916-8e53-a68c85b5bfc2-cert podName:b1b17be2-e382-4916-8e53-a68c85b5bfc2 nodeName:}" failed. No retries permitted until 2026-01-21 11:14:44.405202449 +0000 UTC m=+1071.665159068 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/b1b17be2-e382-4916-8e53-a68c85b5bfc2-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b8544795q" (UID: "b1b17be2-e382-4916-8e53-a68c85b5bfc2") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 21 11:14:42 crc kubenswrapper[4881]: I0121 11:14:42.859129 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/a55fdb43-cd6c-4415-8ef6-07f6c7da6272-webhook-certs\") pod \"openstack-operator-controller-manager-87d6d564b-ktcf8\" (UID: \"a55fdb43-cd6c-4415-8ef6-07f6c7da6272\") " pod="openstack-operators/openstack-operator-controller-manager-87d6d564b-ktcf8" Jan 21 11:14:42 crc kubenswrapper[4881]: I0121 11:14:42.859238 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a55fdb43-cd6c-4415-8ef6-07f6c7da6272-metrics-certs\") pod \"openstack-operator-controller-manager-87d6d564b-ktcf8\" (UID: \"a55fdb43-cd6c-4415-8ef6-07f6c7da6272\") " pod="openstack-operators/openstack-operator-controller-manager-87d6d564b-ktcf8" Jan 21 11:14:42 crc kubenswrapper[4881]: E0121 11:14:42.859426 4881 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 21 11:14:42 crc kubenswrapper[4881]: E0121 11:14:42.859488 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a55fdb43-cd6c-4415-8ef6-07f6c7da6272-metrics-certs podName:a55fdb43-cd6c-4415-8ef6-07f6c7da6272 nodeName:}" failed. No retries permitted until 2026-01-21 11:14:44.85946693 +0000 UTC m=+1072.119423399 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a55fdb43-cd6c-4415-8ef6-07f6c7da6272-metrics-certs") pod "openstack-operator-controller-manager-87d6d564b-ktcf8" (UID: "a55fdb43-cd6c-4415-8ef6-07f6c7da6272") : secret "metrics-server-cert" not found Jan 21 11:14:42 crc kubenswrapper[4881]: E0121 11:14:42.860073 4881 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 21 11:14:42 crc kubenswrapper[4881]: E0121 11:14:42.860125 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a55fdb43-cd6c-4415-8ef6-07f6c7da6272-webhook-certs podName:a55fdb43-cd6c-4415-8ef6-07f6c7da6272 nodeName:}" failed. No retries permitted until 2026-01-21 11:14:44.860108066 +0000 UTC m=+1072.120064535 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/a55fdb43-cd6c-4415-8ef6-07f6c7da6272-webhook-certs") pod "openstack-operator-controller-manager-87d6d564b-ktcf8" (UID: "a55fdb43-cd6c-4415-8ef6-07f6c7da6272") : secret "webhook-server-cert" not found Jan 21 11:14:44 crc kubenswrapper[4881]: I0121 11:14:44.053764 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-bv8wz"] Jan 21 11:14:44 crc kubenswrapper[4881]: I0121 11:14:44.060331 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-9b68f5989-7qgck"] Jan 21 11:14:44 crc kubenswrapper[4881]: I0121 11:14:44.062610 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2fe210a4-2adf-4b55-9a43-c1c390f51b0e-cert\") pod \"infra-operator-controller-manager-77c48c7859-klgq4\" (UID: \"2fe210a4-2adf-4b55-9a43-c1c390f51b0e\") " pod="openstack-operators/infra-operator-controller-manager-77c48c7859-klgq4" Jan 21 11:14:44 crc kubenswrapper[4881]: E0121 11:14:44.062876 4881 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 21 11:14:44 crc kubenswrapper[4881]: E0121 11:14:44.062953 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2fe210a4-2adf-4b55-9a43-c1c390f51b0e-cert podName:2fe210a4-2adf-4b55-9a43-c1c390f51b0e nodeName:}" failed. No retries permitted until 2026-01-21 11:14:48.062930616 +0000 UTC m=+1075.322887085 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/2fe210a4-2adf-4b55-9a43-c1c390f51b0e-cert") pod "infra-operator-controller-manager-77c48c7859-klgq4" (UID: "2fe210a4-2adf-4b55-9a43-c1c390f51b0e") : secret "infra-operator-webhook-server-cert" not found Jan 21 11:14:44 crc kubenswrapper[4881]: I0121 11:14:44.069223 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-n7kgd"] Jan 21 11:14:44 crc kubenswrapper[4881]: I0121 11:14:44.076564 4881 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 11:14:44 crc kubenswrapper[4881]: W0121 11:14:44.088532 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod340257c4_9218_49b0_8a75_b2a4e0231fe3.slice/crio-705bc8c9961f2a159bfd5194f6f035adc5ac923dbc26dd216480b551db77a558 WatchSource:0}: Error finding container 705bc8c9961f2a159bfd5194f6f035adc5ac923dbc26dd216480b551db77a558: Status 404 returned error can't find the container with id 705bc8c9961f2a159bfd5194f6f035adc5ac923dbc26dd216480b551db77a558 Jan 21 11:14:44 crc kubenswrapper[4881]: W0121 11:14:44.091193 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda028dcae_6b9d_414d_8bab_652f301de541.slice/crio-34e07c33fca9996b71aec285847fc0e1b6313856e5811d2b7e23d11c855ced9a WatchSource:0}: Error finding container 34e07c33fca9996b71aec285847fc0e1b6313856e5811d2b7e23d11c855ced9a: Status 404 returned error can't find the container with id 34e07c33fca9996b71aec285847fc0e1b6313856e5811d2b7e23d11c855ced9a Jan 21 11:14:44 crc kubenswrapper[4881]: I0121 11:14:44.125107 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/glance-operator-controller-manager-c6994669c-jv7cr"] Jan 21 11:14:44 crc kubenswrapper[4881]: I0121 11:14:44.136188 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-9f958b845-4wmln"] Jan 21 11:14:44 crc kubenswrapper[4881]: I0121 11:14:44.155862 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-849fd9b886-k9t7q"] Jan 21 11:14:44 crc kubenswrapper[4881]: I0121 11:14:44.167436 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-767fdc4f47-9zp7h"] Jan 21 11:14:44 crc kubenswrapper[4881]: I0121 11:14:44.180748 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-594c8c9d5d-zmgll"] Jan 21 11:14:44 crc kubenswrapper[4881]: I0121 11:14:44.191562 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-c87fff755-s6gm8"] Jan 21 11:14:44 crc kubenswrapper[4881]: I0121 11:14:44.203191 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7ddb5c749-svq8w"] Jan 21 11:14:44 crc kubenswrapper[4881]: I0121 11:14:44.220830 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-864f6b75bf-h6dr4"] Jan 21 11:14:44 crc kubenswrapper[4881]: I0121 11:14:44.233115 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-76qxc"] Jan 21 11:14:44 crc kubenswrapper[4881]: E0121 11:14:44.242056 4881 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-jfj8h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-76qxc_openstack-operators(8c8feeec-377c-499a-b666-895010f8ebeb): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 21 11:14:44 crc kubenswrapper[4881]: E0121 11:14:44.241918 4881 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/manila-operator@sha256:fd2e631e747c35a95f083418f5829d06c4b830f1fdb322368ff6190b9887ea32,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wq74t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod manila-operator-controller-manager-864f6b75bf-h6dr4_openstack-operators(b72b2323-5329-4145-9cee-b447d9e2a304): 
ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 21 11:14:44 crc kubenswrapper[4881]: E0121 11:14:44.242736 4881 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/neutron-operator@sha256:0f440bf7dc937ce0135bdd328716686fd2f1320f453a9ac4e11e96383148ad6c,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-zt66j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-cb4666565-ncnww_openstack-operators(c3b86204-5389-4b6a-bd45-fb6ee23b784e): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 21 11:14:44 crc kubenswrapper[4881]: E0121 11:14:44.243163 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-76qxc" podUID="8c8feeec-377c-499a-b666-895010f8ebeb" Jan 21 11:14:44 crc kubenswrapper[4881]: E0121 11:14:44.243340 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/manila-operator-controller-manager-864f6b75bf-h6dr4" podUID="b72b2323-5329-4145-9cee-b447d9e2a304" Jan 21 11:14:44 crc kubenswrapper[4881]: E0121 11:14:44.244544 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" 
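
The "ErrImagePull: pull QPS exceeded" failures above never reach the registry: the kubelet itself rejects the pull because its own registry rate limit is spent. The limit comes from the KubeletConfiguration fields registryPullQPS (default 5) and registryBurst (default 10), enforced with a token-bucket limiter, so roughly twenty operator pods scheduled in the same instant will always push some pulls over budget until tokens refill. A minimal sketch of that gating, using golang.org/x/time/rate in place of the client-go flowcontrol limiter the kubelet actually uses (function and variable names here are illustrative, not kubelet source):

package main

import (
	"errors"
	"fmt"

	"golang.org/x/time/rate"
)

// Mirrors the error string seen in this log.
var errPullQPSExceeded = errors.New("pull QPS exceeded")

// registryPullQPS=5 and registryBurst=10 are the kubelet defaults.
var pullGate = rate.NewLimiter(rate.Limit(5), 10)

func startPull(image string) error {
	// Non-blocking token check: if the bucket is empty the pull fails
	// immediately and the container goes to ErrImagePull.
	if !pullGate.Allow() {
		return errPullQPSExceeded
	}
	fmt.Println("pulling", image)
	return nil
}

func main() {
	// ~20 pulls landing at once, as when every operator pod is scheduled
	// together: the first 10 consume the burst, the rest are rejected.
	for i := 0; i < 20; i++ {
		if err := startPull(fmt.Sprintf("image-%d", i)); err != nil {
			fmt.Printf("image-%d: %v\n", i, err)
		}
	}
}

The rejected pulls are not fatal: the pod workers retry them, which is why the same pods reappear below in ImagePullBackOff rather than staying failed.
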
pod="openstack-operators/neutron-operator-controller-manager-cb4666565-ncnww" podUID="c3b86204-5389-4b6a-bd45-fb6ee23b784e" Jan 21 11:14:44 crc kubenswrapper[4881]: I0121 11:14:44.246559 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-cb4666565-ncnww"] Jan 21 11:14:44 crc kubenswrapper[4881]: I0121 11:14:44.452812 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-55db956ddc-vpqw4"] Jan 21 11:14:44 crc kubenswrapper[4881]: I0121 11:14:44.469549 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-78757b4889-5qcms"] Jan 21 11:14:44 crc kubenswrapper[4881]: I0121 11:14:44.477192 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-686df47fcb-jh4z9"] Jan 21 11:14:44 crc kubenswrapper[4881]: W0121 11:14:44.488098 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd0cafd1d_5f37_499a_a531_547a137aae21.slice/crio-9b78972310c9556c8896a8e1905d8f1256dfa1c5257d16aff20e8e756d472a4c WatchSource:0}: Error finding container 9b78972310c9556c8896a8e1905d8f1256dfa1c5257d16aff20e8e756d472a4c: Status 404 returned error can't find the container with id 9b78972310c9556c8896a8e1905d8f1256dfa1c5257d16aff20e8e756d472a4c Jan 21 11:14:44 crc kubenswrapper[4881]: I0121 11:14:44.488267 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-85dd56d4cc-rk8l8"] Jan 21 11:14:44 crc kubenswrapper[4881]: I0121 11:14:44.490529 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b1b17be2-e382-4916-8e53-a68c85b5bfc2-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b8544795q\" (UID: \"b1b17be2-e382-4916-8e53-a68c85b5bfc2\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8544795q" Jan 21 11:14:44 crc kubenswrapper[4881]: E0121 11:14:44.490730 4881 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 21 11:14:44 crc kubenswrapper[4881]: E0121 11:14:44.490817 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b1b17be2-e382-4916-8e53-a68c85b5bfc2-cert podName:b1b17be2-e382-4916-8e53-a68c85b5bfc2 nodeName:}" failed. No retries permitted until 2026-01-21 11:14:48.490792818 +0000 UTC m=+1075.750749287 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/b1b17be2-e382-4916-8e53-a68c85b5bfc2-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b8544795q" (UID: "b1b17be2-e382-4916-8e53-a68c85b5bfc2") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 21 11:14:44 crc kubenswrapper[4881]: W0121 11:14:44.497410 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode8e6f423_a07b_4a22_9e39_efa8de22747e.slice/crio-ef05d38ff266728a64eb1d01c6a0ea065a58968faf1ec7d3ee5aed5432d604a4 WatchSource:0}: Error finding container ef05d38ff266728a64eb1d01c6a0ea065a58968faf1ec7d3ee5aed5432d604a4: Status 404 returned error can't find the container with id ef05d38ff266728a64eb1d01c6a0ea065a58968faf1ec7d3ee5aed5432d604a4 Jan 21 11:14:44 crc kubenswrapper[4881]: W0121 11:14:44.498477 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod55ce5ee6_47f4_4874_92dc_6ab78f2ce212.slice/crio-c6889bb0a1437b385995f9935900046a8b7e40d8e117c7cf186721da4929aed4 WatchSource:0}: Error finding container c6889bb0a1437b385995f9935900046a8b7e40d8e117c7cf186721da4929aed4: Status 404 returned error can't find the container with id c6889bb0a1437b385995f9935900046a8b7e40d8e117c7cf186721da4929aed4 Jan 21 11:14:44 crc kubenswrapper[4881]: E0121 11:14:44.500416 4881 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:146961cac3291daf96c1ca2bc7bd52bc94d1f4787a0770e23205c2c9beb0d737,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-p6hzs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-686df47fcb-jh4z9_openstack-operators(e8e6f423-a07b-4a22-9e39-efa8de22747e): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 21 11:14:44 crc kubenswrapper[4881]: I0121 11:14:44.500579 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-fcht4"] Jan 21 11:14:44 crc kubenswrapper[4881]: E0121 11:14:44.501844 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/placement-operator-controller-manager-686df47fcb-jh4z9" podUID="e8e6f423-a07b-4a22-9e39-efa8de22747e" Jan 21 11:14:44 crc kubenswrapper[4881]: W0121 11:14:44.502965 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8c504afd_e4e0_4676_b292_b569b638a7dd.slice/crio-1b839dbf4409e9315a7364f6fb7c43674c64cedc21438656b9e761c61a2ba388 WatchSource:0}: Error finding container 1b839dbf4409e9315a7364f6fb7c43674c64cedc21438656b9e761c61a2ba388: Status 404 returned error can't find the container with id 1b839dbf4409e9315a7364f6fb7c43674c64cedc21438656b9e761c61a2ba388 Jan 21 11:14:44 crc kubenswrapper[4881]: W0121 11:14:44.506963 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2aac430e_3ac8_4624_8575_66386b5c2df3.slice/crio-62bdcc15a65f1ed35c94ec3dea6a3c543fa7b28dd41b1fdfa362c736c28501c4 WatchSource:0}: Error finding container 62bdcc15a65f1ed35c94ec3dea6a3c543fa7b28dd41b1fdfa362c736c28501c4: Status 404 returned error can't find the container with id 62bdcc15a65f1ed35c94ec3dea6a3c543fa7b28dd41b1fdfa362c736c28501c4 Jan 21 11:14:44 crc kubenswrapper[4881]: E0121 11:14:44.509085 4881 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/telemetry-operator@sha256:2e89109f5db66abf1afd15ef59bda35a53db40c5e59e020579ac5aa0acea1843,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-nm5x4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-5f8f495fcf-fcht4_openstack-operators(55ce5ee6-47f4-4874-92dc-6ab78f2ce212): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 21 11:14:44 crc kubenswrapper[4881]: I0121 11:14:44.510215 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-7cd8bc9dbb-tttcz"] Jan 21 11:14:44 crc kubenswrapper[4881]: E0121 11:14:44.510286 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-fcht4" podUID="55ce5ee6-47f4-4874-92dc-6ab78f2ce212" Jan 21 11:14:44 crc kubenswrapper[4881]: E0121 11:14:44.513323 4881 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:244a4906353b84899db16a89e1ebb64491c9f85e69327cb2a72b6da0142a6e5e,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-pxbkl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-7cd8bc9dbb-tttcz_openstack-operators(2aac430e-3ac8-4624-8575-66386b5c2df3): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 21 11:14:44 crc kubenswrapper[4881]: E0121 11:14:44.513633 4881 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:9404536bf7cb7c3818e1a0f92b53e4d7c02fe7942324f32894106f02f8fc7e92,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-dvvxc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-85dd56d4cc-rk8l8_openstack-operators(8c504afd-e4e0-4676-b292-b569b638a7dd): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 21 11:14:44 crc kubenswrapper[4881]: E0121 11:14:44.514443 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/test-operator-controller-manager-7cd8bc9dbb-tttcz" podUID="2aac430e-3ac8-4624-8575-66386b5c2df3" Jan 21 11:14:44 crc kubenswrapper[4881]: E0121 11:14:44.515384 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/swift-operator-controller-manager-85dd56d4cc-rk8l8" podUID="8c504afd-e4e0-4676-b292-b569b638a7dd" Jan 21 11:14:44 crc kubenswrapper[4881]: I0121 11:14:44.517350 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-65849867d6-798zt"] Jan 21 11:14:44 crc kubenswrapper[4881]: E0121 11:14:44.535576 4881 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:6defa56fc6a5bfbd5b27d28ff7b1c7bc89b24b2ef956e2a6d97b2726f668a231,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-96fmk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-65849867d6-798zt_openstack-operators(761a1a49-e01e-4674-b1f4-da732e1def98): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 21 11:14:44 crc kubenswrapper[4881]: E0121 11:14:44.536800 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/nova-operator-controller-manager-65849867d6-798zt" podUID="761a1a49-e01e-4674-b1f4-da732e1def98" Jan 21 11:14:44 crc kubenswrapper[4881]: I0121 11:14:44.670054 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7ddb5c749-svq8w" event={"ID":"848fd8db-3bd5-4e22-96ca-f69b181e48be","Type":"ContainerStarted","Data":"d523e709afe6be547fb9649a5bbc2cdef91edff360388c92c5a2498105b386be"} Jan 21 11:14:44 crc kubenswrapper[4881]: I0121 11:14:44.671269 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-7cd8bc9dbb-tttcz" event={"ID":"2aac430e-3ac8-4624-8575-66386b5c2df3","Type":"ContainerStarted","Data":"62bdcc15a65f1ed35c94ec3dea6a3c543fa7b28dd41b1fdfa362c736c28501c4"} Jan 21 11:14:44 crc kubenswrapper[4881]: E0121 11:14:44.672838 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:244a4906353b84899db16a89e1ebb64491c9f85e69327cb2a72b6da0142a6e5e\\\"\"" pod="openstack-operators/test-operator-controller-manager-7cd8bc9dbb-tttcz" podUID="2aac430e-3ac8-4624-8575-66386b5c2df3" Jan 21 11:14:44 crc kubenswrapper[4881]: I0121 11:14:44.674387 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-9f958b845-4wmln" event={"ID":"36e5ddfe-67a4-4721-9ef5-b9459c64bf5c","Type":"ContainerStarted","Data":"a3bf9d1f7f2a3f7faa4275cef20669af63558cfc9bb35df5469246cc5d68128e"} Jan 21 11:14:44 crc kubenswrapper[4881]: I0121 11:14:44.676435 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-9b68f5989-7qgck" event={"ID":"a028dcae-6b9d-414d-8bab-652f301de541","Type":"ContainerStarted","Data":"34e07c33fca9996b71aec285847fc0e1b6313856e5811d2b7e23d11c855ced9a"} Jan 21 11:14:44 crc kubenswrapper[4881]: I0121 11:14:44.678091 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-849fd9b886-k9t7q" event={"ID":"1cebbaaf-6189-409a-8f25-43d7fac77f95","Type":"ContainerStarted","Data":"7a72c1d78ee332762b08b248316b0a5d30c3a405c177d37bf03da637118e6401"} Jan 21 11:14:44 crc 
kubenswrapper[4881]: I0121 11:14:44.681456 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-864f6b75bf-h6dr4" event={"ID":"b72b2323-5329-4145-9cee-b447d9e2a304","Type":"ContainerStarted","Data":"415c3a374607aa36d534fe15022f92cc1c7b8964bc9b8c3dd1323eefbb92219c"} Jan 21 11:14:44 crc kubenswrapper[4881]: E0121 11:14:44.683058 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/manila-operator@sha256:fd2e631e747c35a95f083418f5829d06c4b830f1fdb322368ff6190b9887ea32\\\"\"" pod="openstack-operators/manila-operator-controller-manager-864f6b75bf-h6dr4" podUID="b72b2323-5329-4145-9cee-b447d9e2a304" Jan 21 11:14:44 crc kubenswrapper[4881]: I0121 11:14:44.684410 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-686df47fcb-jh4z9" event={"ID":"e8e6f423-a07b-4a22-9e39-efa8de22747e","Type":"ContainerStarted","Data":"ef05d38ff266728a64eb1d01c6a0ea065a58968faf1ec7d3ee5aed5432d604a4"} Jan 21 11:14:44 crc kubenswrapper[4881]: E0121 11:14:44.685768 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:146961cac3291daf96c1ca2bc7bd52bc94d1f4787a0770e23205c2c9beb0d737\\\"\"" pod="openstack-operators/placement-operator-controller-manager-686df47fcb-jh4z9" podUID="e8e6f423-a07b-4a22-9e39-efa8de22747e" Jan 21 11:14:44 crc kubenswrapper[4881]: I0121 11:14:44.686871 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-85dd56d4cc-rk8l8" event={"ID":"8c504afd-e4e0-4676-b292-b569b638a7dd","Type":"ContainerStarted","Data":"1b839dbf4409e9315a7364f6fb7c43674c64cedc21438656b9e761c61a2ba388"} Jan 21 11:14:44 crc kubenswrapper[4881]: E0121 11:14:44.690044 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:9404536bf7cb7c3818e1a0f92b53e4d7c02fe7942324f32894106f02f8fc7e92\\\"\"" pod="openstack-operators/swift-operator-controller-manager-85dd56d4cc-rk8l8" podUID="8c504afd-e4e0-4676-b292-b569b638a7dd" Jan 21 11:14:44 crc kubenswrapper[4881]: I0121 11:14:44.693411 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-s6gm8" event={"ID":"4c2550fe-b3eb-4eef-8ffc-ebb4a9ce1b5f","Type":"ContainerStarted","Data":"4c84a19765fe7772a94a4cb6d3632ce28346afc6e594da959e1dd40376d118fd"} Jan 21 11:14:44 crc kubenswrapper[4881]: I0121 11:14:44.696548 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-c6994669c-jv7cr" event={"ID":"1f795f92-d385-49bc-bc91-5109734f4d5a","Type":"ContainerStarted","Data":"155c3c510496af1f04966e3427bde8ad8646a8854ad7c215b148b70d32e5a151"} Jan 21 11:14:44 crc kubenswrapper[4881]: I0121 11:14:44.698145 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-78757b4889-5qcms" event={"ID":"d0cafd1d-5f37-499a-a531-547a137aae21","Type":"ContainerStarted","Data":"9b78972310c9556c8896a8e1905d8f1256dfa1c5257d16aff20e8e756d472a4c"} Jan 21 11:14:44 crc kubenswrapper[4881]: I0121 11:14:44.699700 
4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-fcht4" event={"ID":"55ce5ee6-47f4-4874-92dc-6ab78f2ce212","Type":"ContainerStarted","Data":"c6889bb0a1437b385995f9935900046a8b7e40d8e117c7cf186721da4929aed4"} Jan 21 11:14:44 crc kubenswrapper[4881]: E0121 11:14:44.702603 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:2e89109f5db66abf1afd15ef59bda35a53db40c5e59e020579ac5aa0acea1843\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-fcht4" podUID="55ce5ee6-47f4-4874-92dc-6ab78f2ce212" Jan 21 11:14:44 crc kubenswrapper[4881]: I0121 11:14:44.704492 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-65849867d6-798zt" event={"ID":"761a1a49-e01e-4674-b1f4-da732e1def98","Type":"ContainerStarted","Data":"ee6c24e22567787582321ee023eb314186b145ce7792fd58c3ac0bb32ea68bf7"} Jan 21 11:14:44 crc kubenswrapper[4881]: E0121 11:14:44.705893 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:6defa56fc6a5bfbd5b27d28ff7b1c7bc89b24b2ef956e2a6d97b2726f668a231\\\"\"" pod="openstack-operators/nova-operator-controller-manager-65849867d6-798zt" podUID="761a1a49-e01e-4674-b1f4-da732e1def98" Jan 21 11:14:44 crc kubenswrapper[4881]: I0121 11:14:44.706190 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-zmgll" event={"ID":"efb259b7-934f-4bc3-b502-633472d1a1c5","Type":"ContainerStarted","Data":"261098f48f1d26ebb4c75be3cadb08b9b9c660b7de3dd29d9855066e033691d5"} Jan 21 11:14:44 crc kubenswrapper[4881]: I0121 11:14:44.708179 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-vpqw4" event={"ID":"50cfdf18-6a7e-4b3c-bb0f-5260fc3d42eb","Type":"ContainerStarted","Data":"b1578a57aad395e5ece82b0c12158468c4d9f2f5120badf5d29f82f41dc71ce1"} Jan 21 11:14:44 crc kubenswrapper[4881]: I0121 11:14:44.713707 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-76qxc" event={"ID":"8c8feeec-377c-499a-b666-895010f8ebeb","Type":"ContainerStarted","Data":"fef568e9419c19adaca1121cd34af986643033aa54ba8a4f061832377e4d953b"} Jan 21 11:14:44 crc kubenswrapper[4881]: E0121 11:14:44.715203 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-76qxc" podUID="8c8feeec-377c-499a-b666-895010f8ebeb" Jan 21 11:14:44 crc kubenswrapper[4881]: I0121 11:14:44.716769 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-bv8wz" event={"ID":"bb9b2c3f-4f68-44fc-addf-2cf4421be015","Type":"ContainerStarted","Data":"0ac0a28c189579319e2ae1a4cb689567f964d4d85af14aaa79d7b3610635a8bc"} Jan 21 11:14:44 crc kubenswrapper[4881]: I0121 11:14:44.719627 4881 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack-operators/neutron-operator-controller-manager-cb4666565-ncnww" event={"ID":"c3b86204-5389-4b6a-bd45-fb6ee23b784e","Type":"ContainerStarted","Data":"d4b01fff042e17e842cb2aba4844d1807f3e65fc3b3c4a63724b2347d70689a1"} Jan 21 11:14:44 crc kubenswrapper[4881]: E0121 11:14:44.721671 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/neutron-operator@sha256:0f440bf7dc937ce0135bdd328716686fd2f1320f453a9ac4e11e96383148ad6c\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-cb4666565-ncnww" podUID="c3b86204-5389-4b6a-bd45-fb6ee23b784e" Jan 21 11:14:44 crc kubenswrapper[4881]: I0121 11:14:44.723953 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-n7kgd" event={"ID":"340257c4-9218-49b0-8a75-b2a4e0231fe3","Type":"ContainerStarted","Data":"705bc8c9961f2a159bfd5194f6f035adc5ac923dbc26dd216480b551db77a558"} Jan 21 11:14:44 crc kubenswrapper[4881]: I0121 11:14:44.726771 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-767fdc4f47-9zp7h" event={"ID":"ba9a1249-fc58-4809-a472-d199afa9b52b","Type":"ContainerStarted","Data":"6ed1b9a3832f10fdbf3e2449a7b2bb34f9e26dc7a228af18531748da3e06a717"} Jan 21 11:14:44 crc kubenswrapper[4881]: I0121 11:14:44.902957 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/a55fdb43-cd6c-4415-8ef6-07f6c7da6272-webhook-certs\") pod \"openstack-operator-controller-manager-87d6d564b-ktcf8\" (UID: \"a55fdb43-cd6c-4415-8ef6-07f6c7da6272\") " pod="openstack-operators/openstack-operator-controller-manager-87d6d564b-ktcf8" Jan 21 11:14:44 crc kubenswrapper[4881]: I0121 11:14:44.903245 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a55fdb43-cd6c-4415-8ef6-07f6c7da6272-metrics-certs\") pod \"openstack-operator-controller-manager-87d6d564b-ktcf8\" (UID: \"a55fdb43-cd6c-4415-8ef6-07f6c7da6272\") " pod="openstack-operators/openstack-operator-controller-manager-87d6d564b-ktcf8" Jan 21 11:14:44 crc kubenswrapper[4881]: E0121 11:14:44.903397 4881 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 21 11:14:44 crc kubenswrapper[4881]: E0121 11:14:44.903448 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a55fdb43-cd6c-4415-8ef6-07f6c7da6272-metrics-certs podName:a55fdb43-cd6c-4415-8ef6-07f6c7da6272 nodeName:}" failed. No retries permitted until 2026-01-21 11:14:48.903431783 +0000 UTC m=+1076.163388252 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a55fdb43-cd6c-4415-8ef6-07f6c7da6272-metrics-certs") pod "openstack-operator-controller-manager-87d6d564b-ktcf8" (UID: "a55fdb43-cd6c-4415-8ef6-07f6c7da6272") : secret "metrics-server-cert" not found Jan 21 11:14:44 crc kubenswrapper[4881]: E0121 11:14:44.903804 4881 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 21 11:14:44 crc kubenswrapper[4881]: E0121 11:14:44.903827 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a55fdb43-cd6c-4415-8ef6-07f6c7da6272-webhook-certs podName:a55fdb43-cd6c-4415-8ef6-07f6c7da6272 nodeName:}" failed. No retries permitted until 2026-01-21 11:14:48.903819922 +0000 UTC m=+1076.163776391 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/a55fdb43-cd6c-4415-8ef6-07f6c7da6272-webhook-certs") pod "openstack-operator-controller-manager-87d6d564b-ktcf8" (UID: "a55fdb43-cd6c-4415-8ef6-07f6c7da6272") : secret "webhook-server-cert" not found Jan 21 11:14:45 crc kubenswrapper[4881]: E0121 11:14:45.746091 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:2e89109f5db66abf1afd15ef59bda35a53db40c5e59e020579ac5aa0acea1843\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-fcht4" podUID="55ce5ee6-47f4-4874-92dc-6ab78f2ce212" Jan 21 11:14:45 crc kubenswrapper[4881]: E0121 11:14:45.746346 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:6defa56fc6a5bfbd5b27d28ff7b1c7bc89b24b2ef956e2a6d97b2726f668a231\\\"\"" pod="openstack-operators/nova-operator-controller-manager-65849867d6-798zt" podUID="761a1a49-e01e-4674-b1f4-da732e1def98" Jan 21 11:14:45 crc kubenswrapper[4881]: E0121 11:14:45.746459 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/manila-operator@sha256:fd2e631e747c35a95f083418f5829d06c4b830f1fdb322368ff6190b9887ea32\\\"\"" pod="openstack-operators/manila-operator-controller-manager-864f6b75bf-h6dr4" podUID="b72b2323-5329-4145-9cee-b447d9e2a304" Jan 21 11:14:45 crc kubenswrapper[4881]: E0121 11:14:45.746513 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:146961cac3291daf96c1ca2bc7bd52bc94d1f4787a0770e23205c2c9beb0d737\\\"\"" pod="openstack-operators/placement-operator-controller-manager-686df47fcb-jh4z9" podUID="e8e6f423-a07b-4a22-9e39-efa8de22747e" Jan 21 11:14:45 crc kubenswrapper[4881]: E0121 11:14:45.746602 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:244a4906353b84899db16a89e1ebb64491c9f85e69327cb2a72b6da0142a6e5e\\\"\"" pod="openstack-operators/test-operator-controller-manager-7cd8bc9dbb-tttcz" podUID="2aac430e-3ac8-4624-8575-66386b5c2df3" Jan 21 11:14:45 crc kubenswrapper[4881]: 
E0121 11:14:45.746687 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:9404536bf7cb7c3818e1a0f92b53e4d7c02fe7942324f32894106f02f8fc7e92\\\"\"" pod="openstack-operators/swift-operator-controller-manager-85dd56d4cc-rk8l8" podUID="8c504afd-e4e0-4676-b292-b569b638a7dd" Jan 21 11:14:45 crc kubenswrapper[4881]: E0121 11:14:45.746827 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-76qxc" podUID="8c8feeec-377c-499a-b666-895010f8ebeb" Jan 21 11:14:45 crc kubenswrapper[4881]: E0121 11:14:45.749500 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/neutron-operator@sha256:0f440bf7dc937ce0135bdd328716686fd2f1320f453a9ac4e11e96383148ad6c\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-cb4666565-ncnww" podUID="c3b86204-5389-4b6a-bd45-fb6ee23b784e" Jan 21 11:14:48 crc kubenswrapper[4881]: I0121 11:14:48.114170 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2fe210a4-2adf-4b55-9a43-c1c390f51b0e-cert\") pod \"infra-operator-controller-manager-77c48c7859-klgq4\" (UID: \"2fe210a4-2adf-4b55-9a43-c1c390f51b0e\") " pod="openstack-operators/infra-operator-controller-manager-77c48c7859-klgq4" Jan 21 11:14:48 crc kubenswrapper[4881]: E0121 11:14:48.114521 4881 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 21 11:14:48 crc kubenswrapper[4881]: E0121 11:14:48.114565 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2fe210a4-2adf-4b55-9a43-c1c390f51b0e-cert podName:2fe210a4-2adf-4b55-9a43-c1c390f51b0e nodeName:}" failed. No retries permitted until 2026-01-21 11:14:56.114552476 +0000 UTC m=+1083.374508935 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/2fe210a4-2adf-4b55-9a43-c1c390f51b0e-cert") pod "infra-operator-controller-manager-77c48c7859-klgq4" (UID: "2fe210a4-2adf-4b55-9a43-c1c390f51b0e") : secret "infra-operator-webhook-server-cert" not found Jan 21 11:14:48 crc kubenswrapper[4881]: I0121 11:14:48.556320 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b1b17be2-e382-4916-8e53-a68c85b5bfc2-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b8544795q\" (UID: \"b1b17be2-e382-4916-8e53-a68c85b5bfc2\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8544795q" Jan 21 11:14:48 crc kubenswrapper[4881]: E0121 11:14:48.556778 4881 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 21 11:14:48 crc kubenswrapper[4881]: E0121 11:14:48.557291 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b1b17be2-e382-4916-8e53-a68c85b5bfc2-cert podName:b1b17be2-e382-4916-8e53-a68c85b5bfc2 nodeName:}" failed. No retries permitted until 2026-01-21 11:14:56.55726217 +0000 UTC m=+1083.817218639 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/b1b17be2-e382-4916-8e53-a68c85b5bfc2-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b8544795q" (UID: "b1b17be2-e382-4916-8e53-a68c85b5bfc2") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 21 11:14:48 crc kubenswrapper[4881]: I0121 11:14:48.962530 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/a55fdb43-cd6c-4415-8ef6-07f6c7da6272-webhook-certs\") pod \"openstack-operator-controller-manager-87d6d564b-ktcf8\" (UID: \"a55fdb43-cd6c-4415-8ef6-07f6c7da6272\") " pod="openstack-operators/openstack-operator-controller-manager-87d6d564b-ktcf8" Jan 21 11:14:48 crc kubenswrapper[4881]: I0121 11:14:48.962630 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a55fdb43-cd6c-4415-8ef6-07f6c7da6272-metrics-certs\") pod \"openstack-operator-controller-manager-87d6d564b-ktcf8\" (UID: \"a55fdb43-cd6c-4415-8ef6-07f6c7da6272\") " pod="openstack-operators/openstack-operator-controller-manager-87d6d564b-ktcf8" Jan 21 11:14:48 crc kubenswrapper[4881]: E0121 11:14:48.962897 4881 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 21 11:14:48 crc kubenswrapper[4881]: E0121 11:14:48.962962 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a55fdb43-cd6c-4415-8ef6-07f6c7da6272-metrics-certs podName:a55fdb43-cd6c-4415-8ef6-07f6c7da6272 nodeName:}" failed. No retries permitted until 2026-01-21 11:14:56.96294156 +0000 UTC m=+1084.222898039 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a55fdb43-cd6c-4415-8ef6-07f6c7da6272-metrics-certs") pod "openstack-operator-controller-manager-87d6d564b-ktcf8" (UID: "a55fdb43-cd6c-4415-8ef6-07f6c7da6272") : secret "metrics-server-cert" not found Jan 21 11:14:48 crc kubenswrapper[4881]: E0121 11:14:48.963451 4881 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 21 11:14:48 crc kubenswrapper[4881]: E0121 11:14:48.963558 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a55fdb43-cd6c-4415-8ef6-07f6c7da6272-webhook-certs podName:a55fdb43-cd6c-4415-8ef6-07f6c7da6272 nodeName:}" failed. No retries permitted until 2026-01-21 11:14:56.963534446 +0000 UTC m=+1084.223491085 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/a55fdb43-cd6c-4415-8ef6-07f6c7da6272-webhook-certs") pod "openstack-operator-controller-manager-87d6d564b-ktcf8" (UID: "a55fdb43-cd6c-4415-8ef6-07f6c7da6272") : secret "webhook-server-cert" not found Jan 21 11:14:56 crc kubenswrapper[4881]: I0121 11:14:56.288134 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2fe210a4-2adf-4b55-9a43-c1c390f51b0e-cert\") pod \"infra-operator-controller-manager-77c48c7859-klgq4\" (UID: \"2fe210a4-2adf-4b55-9a43-c1c390f51b0e\") " pod="openstack-operators/infra-operator-controller-manager-77c48c7859-klgq4" Jan 21 11:14:56 crc kubenswrapper[4881]: E0121 11:14:56.288332 4881 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 21 11:14:56 crc kubenswrapper[4881]: E0121 11:14:56.289072 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2fe210a4-2adf-4b55-9a43-c1c390f51b0e-cert podName:2fe210a4-2adf-4b55-9a43-c1c390f51b0e nodeName:}" failed. No retries permitted until 2026-01-21 11:15:12.289052364 +0000 UTC m=+1099.549008843 (durationBeforeRetry 16s). 
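
The durationBeforeRetry values across these mount failures trace the kubelet's exponential backoff for failed volume operations: 4s at 11:14:44, 8s at 11:14:48, and 16s here at 11:14:56, doubling on each consecutive failure. That matches the volume manager's exponentialbackoff scheme, which starts at 500ms and caps at 2m2s, so the 4s window corresponds to a fourth consecutive failure (the first three predate this excerpt). A sketch of the schedule; the constants mirror the upstream package, while the function itself is an illustrative re-implementation rather than kubelet code:

package main

import (
	"fmt"
	"time"
)

// Kubelet's volume-operation backoff constants: 500ms initial, x2 per
// failure, capped at 2m2s (illustrative re-implementation).
const (
	initialDurationBeforeRetry = 500 * time.Millisecond
	maxDurationBeforeRetry     = 2*time.Minute + 2*time.Second
)

func durationBeforeRetry(failures int) time.Duration {
	d := initialDurationBeforeRetry
	for i := 1; i < failures; i++ {
		d *= 2
		if d > maxDurationBeforeRetry {
			return maxDurationBeforeRetry
		}
	}
	return d
}

func main() {
	// Failures 4, 5, and 6 give the 4s, 8s, and 16s windows seen here.
	for n := 1; n <= 9; n++ {
		fmt.Printf("failure %d: no retries permitted for %v\n",
			n, durationBeforeRetry(n))
	}
}

Just below, the baremetal cert mount succeeds at 11:14:56.649852 once the secret finally exists, which drops that pod out of the retry loop.
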
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/2fe210a4-2adf-4b55-9a43-c1c390f51b0e-cert") pod "infra-operator-controller-manager-77c48c7859-klgq4" (UID: "2fe210a4-2adf-4b55-9a43-c1c390f51b0e") : secret "infra-operator-webhook-server-cert" not found Jan 21 11:14:56 crc kubenswrapper[4881]: I0121 11:14:56.593827 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b1b17be2-e382-4916-8e53-a68c85b5bfc2-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b8544795q\" (UID: \"b1b17be2-e382-4916-8e53-a68c85b5bfc2\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8544795q" Jan 21 11:14:56 crc kubenswrapper[4881]: I0121 11:14:56.649852 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b1b17be2-e382-4916-8e53-a68c85b5bfc2-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b8544795q\" (UID: \"b1b17be2-e382-4916-8e53-a68c85b5bfc2\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8544795q" Jan 21 11:14:56 crc kubenswrapper[4881]: I0121 11:14:56.839634 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-fzcfv" Jan 21 11:14:56 crc kubenswrapper[4881]: I0121 11:14:56.849033 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8544795q" Jan 21 11:14:57 crc kubenswrapper[4881]: I0121 11:14:57.040976 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/a55fdb43-cd6c-4415-8ef6-07f6c7da6272-webhook-certs\") pod \"openstack-operator-controller-manager-87d6d564b-ktcf8\" (UID: \"a55fdb43-cd6c-4415-8ef6-07f6c7da6272\") " pod="openstack-operators/openstack-operator-controller-manager-87d6d564b-ktcf8" Jan 21 11:14:57 crc kubenswrapper[4881]: I0121 11:14:57.041081 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a55fdb43-cd6c-4415-8ef6-07f6c7da6272-metrics-certs\") pod \"openstack-operator-controller-manager-87d6d564b-ktcf8\" (UID: \"a55fdb43-cd6c-4415-8ef6-07f6c7da6272\") " pod="openstack-operators/openstack-operator-controller-manager-87d6d564b-ktcf8" Jan 21 11:14:57 crc kubenswrapper[4881]: I0121 11:14:57.102409 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/a55fdb43-cd6c-4415-8ef6-07f6c7da6272-webhook-certs\") pod \"openstack-operator-controller-manager-87d6d564b-ktcf8\" (UID: \"a55fdb43-cd6c-4415-8ef6-07f6c7da6272\") " pod="openstack-operators/openstack-operator-controller-manager-87d6d564b-ktcf8" Jan 21 11:14:57 crc kubenswrapper[4881]: I0121 11:14:57.102499 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a55fdb43-cd6c-4415-8ef6-07f6c7da6272-metrics-certs\") pod \"openstack-operator-controller-manager-87d6d564b-ktcf8\" (UID: \"a55fdb43-cd6c-4415-8ef6-07f6c7da6272\") " pod="openstack-operators/openstack-operator-controller-manager-87d6d564b-ktcf8" Jan 21 11:14:57 crc kubenswrapper[4881]: I0121 11:14:57.138543 4881 reflector.go:368] Caches populated for *v1.Secret from 
object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-t9k6g" Jan 21 11:14:57 crc kubenswrapper[4881]: I0121 11:14:57.147088 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-87d6d564b-ktcf8" Jan 21 11:14:57 crc kubenswrapper[4881]: E0121 11:14:57.575358 4881 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ovn-operator@sha256:8b3bfb9e86618b7ac69443939b0968fae28a22cd62ea1e429b599ff9f8a5f8cf" Jan 21 11:14:57 crc kubenswrapper[4881]: E0121 11:14:57.575556 4881 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:8b3bfb9e86618b7ac69443939b0968fae28a22cd62ea1e429b599ff9f8a5f8cf,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-9b5pw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-55db956ddc-vpqw4_openstack-operators(50cfdf18-6a7e-4b3c-bb0f-5260fc3d42eb): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 11:14:57 crc kubenswrapper[4881]: E0121 11:14:57.577774 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-vpqw4" 
podUID="50cfdf18-6a7e-4b3c-bb0f-5260fc3d42eb" Jan 21 11:14:58 crc kubenswrapper[4881]: E0121 11:14:58.070383 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:8b3bfb9e86618b7ac69443939b0968fae28a22cd62ea1e429b599ff9f8a5f8cf\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-vpqw4" podUID="50cfdf18-6a7e-4b3c-bb0f-5260fc3d42eb" Jan 21 11:14:58 crc kubenswrapper[4881]: E0121 11:14:58.408140 4881 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/octavia-operator@sha256:ab629ec4ce57b5cde9cd6d75069e68edca441b97b7b5a3f58804e2e61766b729" Jan 21 11:14:58 crc kubenswrapper[4881]: E0121 11:14:58.408834 4881 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/octavia-operator@sha256:ab629ec4ce57b5cde9cd6d75069e68edca441b97b7b5a3f58804e2e61766b729,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-nhjc2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-7fc9b76cf6-n7kgd_openstack-operators(340257c4-9218-49b0-8a75-b2a4e0231fe3): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 11:14:58 crc kubenswrapper[4881]: E0121 11:14:58.410238 4881 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-n7kgd" podUID="340257c4-9218-49b0-8a75-b2a4e0231fe3" Jan 21 11:14:59 crc kubenswrapper[4881]: E0121 11:14:59.271086 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:ab629ec4ce57b5cde9cd6d75069e68edca441b97b7b5a3f58804e2e61766b729\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-n7kgd" podUID="340257c4-9218-49b0-8a75-b2a4e0231fe3" Jan 21 11:14:59 crc kubenswrapper[4881]: E0121 11:14:59.954255 4881 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/cinder-operator@sha256:ddb59f1a8e3fd0d641405e371e33b3d8c913af08e40e84f390e7e06f0a7f3488" Jan 21 11:14:59 crc kubenswrapper[4881]: E0121 11:14:59.954487 4881 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/cinder-operator@sha256:ddb59f1a8e3fd0d641405e371e33b3d8c913af08e40e84f390e7e06f0a7f3488,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-8z9wn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-operator-controller-manager-9b68f5989-7qgck_openstack-operators(a028dcae-6b9d-414d-8bab-652f301de541): 
ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 11:14:59 crc kubenswrapper[4881]: E0121 11:14:59.955680 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/cinder-operator-controller-manager-9b68f5989-7qgck" podUID="a028dcae-6b9d-414d-8bab-652f301de541" Jan 21 11:15:00 crc kubenswrapper[4881]: I0121 11:15:00.146313 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483235-h6fqb"] Jan 21 11:15:00 crc kubenswrapper[4881]: I0121 11:15:00.147283 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483235-h6fqb" Jan 21 11:15:00 crc kubenswrapper[4881]: I0121 11:15:00.150184 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 21 11:15:00 crc kubenswrapper[4881]: I0121 11:15:00.150316 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 21 11:15:00 crc kubenswrapper[4881]: I0121 11:15:00.155633 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kv8v5\" (UniqueName: \"kubernetes.io/projected/c37f0ee6-fcc1-4663-91a3-ab5e47dad851-kube-api-access-kv8v5\") pod \"collect-profiles-29483235-h6fqb\" (UID: \"c37f0ee6-fcc1-4663-91a3-ab5e47dad851\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483235-h6fqb" Jan 21 11:15:00 crc kubenswrapper[4881]: I0121 11:15:00.155866 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c37f0ee6-fcc1-4663-91a3-ab5e47dad851-secret-volume\") pod \"collect-profiles-29483235-h6fqb\" (UID: \"c37f0ee6-fcc1-4663-91a3-ab5e47dad851\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483235-h6fqb" Jan 21 11:15:00 crc kubenswrapper[4881]: I0121 11:15:00.155941 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c37f0ee6-fcc1-4663-91a3-ab5e47dad851-config-volume\") pod \"collect-profiles-29483235-h6fqb\" (UID: \"c37f0ee6-fcc1-4663-91a3-ab5e47dad851\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483235-h6fqb" Jan 21 11:15:00 crc kubenswrapper[4881]: I0121 11:15:00.156543 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483235-h6fqb"] Jan 21 11:15:00 crc kubenswrapper[4881]: I0121 11:15:00.257421 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c37f0ee6-fcc1-4663-91a3-ab5e47dad851-secret-volume\") pod \"collect-profiles-29483235-h6fqb\" (UID: \"c37f0ee6-fcc1-4663-91a3-ab5e47dad851\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483235-h6fqb" Jan 21 11:15:00 crc kubenswrapper[4881]: I0121 11:15:00.257480 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c37f0ee6-fcc1-4663-91a3-ab5e47dad851-config-volume\") pod \"collect-profiles-29483235-h6fqb\" (UID: 
\"c37f0ee6-fcc1-4663-91a3-ab5e47dad851\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483235-h6fqb" Jan 21 11:15:00 crc kubenswrapper[4881]: I0121 11:15:00.257517 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kv8v5\" (UniqueName: \"kubernetes.io/projected/c37f0ee6-fcc1-4663-91a3-ab5e47dad851-kube-api-access-kv8v5\") pod \"collect-profiles-29483235-h6fqb\" (UID: \"c37f0ee6-fcc1-4663-91a3-ab5e47dad851\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483235-h6fqb" Jan 21 11:15:00 crc kubenswrapper[4881]: I0121 11:15:00.258992 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c37f0ee6-fcc1-4663-91a3-ab5e47dad851-config-volume\") pod \"collect-profiles-29483235-h6fqb\" (UID: \"c37f0ee6-fcc1-4663-91a3-ab5e47dad851\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483235-h6fqb" Jan 21 11:15:00 crc kubenswrapper[4881]: E0121 11:15:00.296268 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/cinder-operator@sha256:ddb59f1a8e3fd0d641405e371e33b3d8c913af08e40e84f390e7e06f0a7f3488\\\"\"" pod="openstack-operators/cinder-operator-controller-manager-9b68f5989-7qgck" podUID="a028dcae-6b9d-414d-8bab-652f301de541" Jan 21 11:15:00 crc kubenswrapper[4881]: I0121 11:15:00.300258 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c37f0ee6-fcc1-4663-91a3-ab5e47dad851-secret-volume\") pod \"collect-profiles-29483235-h6fqb\" (UID: \"c37f0ee6-fcc1-4663-91a3-ab5e47dad851\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483235-h6fqb" Jan 21 11:15:00 crc kubenswrapper[4881]: I0121 11:15:00.317690 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kv8v5\" (UniqueName: \"kubernetes.io/projected/c37f0ee6-fcc1-4663-91a3-ab5e47dad851-kube-api-access-kv8v5\") pod \"collect-profiles-29483235-h6fqb\" (UID: \"c37f0ee6-fcc1-4663-91a3-ab5e47dad851\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483235-h6fqb" Jan 21 11:15:00 crc kubenswrapper[4881]: I0121 11:15:00.477845 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483235-h6fqb" Jan 21 11:15:01 crc kubenswrapper[4881]: E0121 11:15:01.608613 4881 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/designate-operator@sha256:0d59a405f50b37c833e14c0f4987e95c8769d9ab06a7087078bdd02568c18ca8" Jan 21 11:15:01 crc kubenswrapper[4881]: E0121 11:15:01.608902 4881 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/designate-operator@sha256:0d59a405f50b37c833e14c0f4987e95c8769d9ab06a7087078bdd02568c18ca8,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-zvpzz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod designate-operator-controller-manager-9f958b845-4wmln_openstack-operators(36e5ddfe-67a4-4721-9ef5-b9459c64bf5c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 11:15:01 crc kubenswrapper[4881]: E0121 11:15:01.610854 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/designate-operator-controller-manager-9f958b845-4wmln" podUID="36e5ddfe-67a4-4721-9ef5-b9459c64bf5c" Jan 21 11:15:02 crc kubenswrapper[4881]: E0121 11:15:02.357950 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/designate-operator@sha256:0d59a405f50b37c833e14c0f4987e95c8769d9ab06a7087078bdd02568c18ca8\\\"\"" pod="openstack-operators/designate-operator-controller-manager-9f958b845-4wmln" podUID="36e5ddfe-67a4-4721-9ef5-b9459c64bf5c" Jan 21 11:15:02 crc kubenswrapper[4881]: E0121 11:15:02.935125 4881 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/barbican-operator@sha256:f0634d8cf7c2c2919ca248a6883ce43d6ae4ac59252c987a5cfe17643fe7d38a" Jan 21 11:15:02 crc kubenswrapper[4881]: E0121 11:15:02.935703 4881 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/barbican-operator@sha256:f0634d8cf7c2c2919ca248a6883ce43d6ae4ac59252c987a5cfe17643fe7d38a,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-znqn9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-operator-controller-manager-7ddb5c749-svq8w_openstack-operators(848fd8db-3bd5-4e22-96ca-f69b181e48be): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 11:15:02 crc kubenswrapper[4881]: E0121 11:15:02.936935 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack-operators/barbican-operator-controller-manager-7ddb5c749-svq8w" podUID="848fd8db-3bd5-4e22-96ca-f69b181e48be" Jan 21 11:15:03 crc kubenswrapper[4881]: E0121 11:15:03.369718 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/barbican-operator@sha256:f0634d8cf7c2c2919ca248a6883ce43d6ae4ac59252c987a5cfe17643fe7d38a\\\"\"" pod="openstack-operators/barbican-operator-controller-manager-7ddb5c749-svq8w" podUID="848fd8db-3bd5-4e22-96ca-f69b181e48be" Jan 21 11:15:06 crc kubenswrapper[4881]: E0121 11:15:06.284429 4881 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/glance-operator@sha256:d69a68cdac59165797daf1064f3a3b4b14b546bf1c7254070a7ed1238998c028" Jan 21 11:15:06 crc kubenswrapper[4881]: E0121 11:15:06.284926 4881 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/glance-operator@sha256:d69a68cdac59165797daf1064f3a3b4b14b546bf1c7254070a7ed1238998c028,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-n7p2p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-operator-controller-manager-c6994669c-jv7cr_openstack-operators(1f795f92-d385-49bc-bc91-5109734f4d5a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 11:15:06 crc kubenswrapper[4881]: 
E0121 11:15:06.286945 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/glance-operator-controller-manager-c6994669c-jv7cr" podUID="1f795f92-d385-49bc-bc91-5109734f4d5a" Jan 21 11:15:06 crc kubenswrapper[4881]: E0121 11:15:06.393236 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/glance-operator@sha256:d69a68cdac59165797daf1064f3a3b4b14b546bf1c7254070a7ed1238998c028\\\"\"" pod="openstack-operators/glance-operator-controller-manager-c6994669c-jv7cr" podUID="1f795f92-d385-49bc-bc91-5109734f4d5a" Jan 21 11:15:12 crc kubenswrapper[4881]: I0121 11:15:12.276103 4881 trace.go:236] Trace[1986326163]: "Calculate volume metrics of kube-api-access-l24bg for pod cert-manager/cert-manager-cainjector-cf98fcc89-cdm4s" (21-Jan-2026 11:15:11.021) (total time: 1254ms): Jan 21 11:15:12 crc kubenswrapper[4881]: Trace[1986326163]: [1.254933196s] [1.254933196s] END Jan 21 11:15:12 crc kubenswrapper[4881]: I0121 11:15:12.277805 4881 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-rslv2 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.6:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 21 11:15:12 crc kubenswrapper[4881]: I0121 11:15:12.277860 4881 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-rslv2" podUID="537a87a4-8f58-441f-9199-62c5849c693c" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.6:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 21 11:15:12 crc kubenswrapper[4881]: I0121 11:15:12.379276 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2fe210a4-2adf-4b55-9a43-c1c390f51b0e-cert\") pod \"infra-operator-controller-manager-77c48c7859-klgq4\" (UID: \"2fe210a4-2adf-4b55-9a43-c1c390f51b0e\") " pod="openstack-operators/infra-operator-controller-manager-77c48c7859-klgq4" Jan 21 11:15:12 crc kubenswrapper[4881]: I0121 11:15:12.385336 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2fe210a4-2adf-4b55-9a43-c1c390f51b0e-cert\") pod \"infra-operator-controller-manager-77c48c7859-klgq4\" (UID: \"2fe210a4-2adf-4b55-9a43-c1c390f51b0e\") " pod="openstack-operators/infra-operator-controller-manager-77c48c7859-klgq4" Jan 21 11:15:12 crc kubenswrapper[4881]: I0121 11:15:12.599844 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-m6lch" Jan 21 11:15:12 crc kubenswrapper[4881]: I0121 11:15:12.607607 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-77c48c7859-klgq4" Jan 21 11:15:17 crc kubenswrapper[4881]: E0121 11:15:17.576404 4881 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/test-operator@sha256:244a4906353b84899db16a89e1ebb64491c9f85e69327cb2a72b6da0142a6e5e" Jan 21 11:15:17 crc kubenswrapper[4881]: E0121 11:15:17.577452 4881 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:244a4906353b84899db16a89e1ebb64491c9f85e69327cb2a72b6da0142a6e5e,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-pxbkl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-7cd8bc9dbb-tttcz_openstack-operators(2aac430e-3ac8-4624-8575-66386b5c2df3): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 11:15:17 crc kubenswrapper[4881]: E0121 11:15:17.579423 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/test-operator-controller-manager-7cd8bc9dbb-tttcz" podUID="2aac430e-3ac8-4624-8575-66386b5c2df3" Jan 21 11:15:17 crc kubenswrapper[4881]: I0121 11:15:17.608044 4881 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-marketplace/certified-operators-7wxr8" podUID="6e9defc7-ad37-4742-b149-cb71d7ea177a" containerName="registry-server" probeResult="failure" output=< Jan 21 11:15:17 crc kubenswrapper[4881]: timeout: health rpc did not complete within 1s Jan 21 11:15:17 crc kubenswrapper[4881]: > Jan 21 11:15:17 crc kubenswrapper[4881]: I0121 11:15:17.609686 4881 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/certified-operators-7wxr8" podUID="6e9defc7-ad37-4742-b149-cb71d7ea177a" containerName="registry-server" probeResult="failure" output=< Jan 21 11:15:17 crc kubenswrapper[4881]: timeout: health rpc did not complete within 1s Jan 21 11:15:17 crc kubenswrapper[4881]: > Jan 21 11:15:19 crc kubenswrapper[4881]: E0121 11:15:19.894856 4881 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/telemetry-operator@sha256:2e89109f5db66abf1afd15ef59bda35a53db40c5e59e020579ac5aa0acea1843" Jan 21 11:15:19 crc kubenswrapper[4881]: E0121 11:15:19.895374 4881 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/telemetry-operator@sha256:2e89109f5db66abf1afd15ef59bda35a53db40c5e59e020579ac5aa0acea1843,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-nm5x4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-5f8f495fcf-fcht4_openstack-operators(55ce5ee6-47f4-4874-92dc-6ab78f2ce212): ErrImagePull: rpc error: code 
= Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 11:15:19 crc kubenswrapper[4881]: E0121 11:15:19.896847 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-fcht4" podUID="55ce5ee6-47f4-4874-92dc-6ab78f2ce212" Jan 21 11:15:20 crc kubenswrapper[4881]: E0121 11:15:20.698912 4881 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/nova-operator@sha256:6defa56fc6a5bfbd5b27d28ff7b1c7bc89b24b2ef956e2a6d97b2726f668a231" Jan 21 11:15:20 crc kubenswrapper[4881]: E0121 11:15:20.699149 4881 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:6defa56fc6a5bfbd5b27d28ff7b1c7bc89b24b2ef956e2a6d97b2726f668a231,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-96fmk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-65849867d6-798zt_openstack-operators(761a1a49-e01e-4674-b1f4-da732e1def98): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 11:15:20 crc kubenswrapper[4881]: E0121 11:15:20.700586 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with 
ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/nova-operator-controller-manager-65849867d6-798zt" podUID="761a1a49-e01e-4674-b1f4-da732e1def98" Jan 21 11:15:21 crc kubenswrapper[4881]: E0121 11:15:21.540727 4881 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/manila-operator@sha256:fd2e631e747c35a95f083418f5829d06c4b830f1fdb322368ff6190b9887ea32" Jan 21 11:15:21 crc kubenswrapper[4881]: E0121 11:15:21.541186 4881 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/manila-operator@sha256:fd2e631e747c35a95f083418f5829d06c4b830f1fdb322368ff6190b9887ea32,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wq74t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod manila-operator-controller-manager-864f6b75bf-h6dr4_openstack-operators(b72b2323-5329-4145-9cee-b447d9e2a304): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 11:15:21 crc kubenswrapper[4881]: E0121 11:15:21.543298 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/manila-operator-controller-manager-864f6b75bf-h6dr4" podUID="b72b2323-5329-4145-9cee-b447d9e2a304" Jan 21 11:15:22 crc kubenswrapper[4881]: I0121 
11:15:22.084018 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8544795q"] Jan 21 11:15:24 crc kubenswrapper[4881]: I0121 11:15:24.896830 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8544795q" event={"ID":"b1b17be2-e382-4916-8e53-a68c85b5bfc2","Type":"ContainerStarted","Data":"0048c64a89fa99df970b415fb3ce60253d1737b9b9ec85451632d9017fdfac41"} Jan 21 11:15:25 crc kubenswrapper[4881]: I0121 11:15:25.523136 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-87d6d564b-ktcf8"] Jan 21 11:15:25 crc kubenswrapper[4881]: E0121 11:15:25.762413 4881 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2" Jan 21 11:15:25 crc kubenswrapper[4881]: E0121 11:15:25.762672 4881 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-jfj8h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-76qxc_openstack-operators(8c8feeec-377c-499a-b666-895010f8ebeb): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 11:15:25 crc kubenswrapper[4881]: E0121 11:15:25.764506 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-76qxc" podUID="8c8feeec-377c-499a-b666-895010f8ebeb" Jan 21 11:15:25 crc kubenswrapper[4881]: I0121 11:15:25.910830 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-87d6d564b-ktcf8" event={"ID":"a55fdb43-cd6c-4415-8ef6-07f6c7da6272","Type":"ContainerStarted","Data":"5c5727274545ebad33744301076582e79e5dc9cc83c053a0dac5467d5716cb2d"} Jan 21 11:15:26 crc kubenswrapper[4881]: I0121 11:15:26.238697 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483235-h6fqb"] Jan 21 11:15:26 crc kubenswrapper[4881]: I0121 11:15:26.342512 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-77c48c7859-klgq4"] Jan 21 11:15:26 crc kubenswrapper[4881]: I0121 11:15:26.933891 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-78757b4889-5qcms" event={"ID":"d0cafd1d-5f37-499a-a531-547a137aae21","Type":"ContainerStarted","Data":"134803dd77fbcf302659b8f128e932eb1c9179c03abbc1043d52d65470d38ba1"} Jan 21 11:15:26 crc kubenswrapper[4881]: I0121 11:15:26.934480 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-78757b4889-5qcms" Jan 21 11:15:26 crc kubenswrapper[4881]: I0121 11:15:26.937300 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7ddb5c749-svq8w" event={"ID":"848fd8db-3bd5-4e22-96ca-f69b181e48be","Type":"ContainerStarted","Data":"0d5be4fd016179db3483c6888a6d1b657e6fdd493c2a026f0647701c3a1db78c"} Jan 21 11:15:26 crc kubenswrapper[4881]: I0121 11:15:26.937567 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-7ddb5c749-svq8w" Jan 21 11:15:26 crc kubenswrapper[4881]: I0121 11:15:26.938867 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-bv8wz" event={"ID":"bb9b2c3f-4f68-44fc-addf-2cf4421be015","Type":"ContainerStarted","Data":"75cbd7d794f72c24c1153a927c2c056f23e41395f9670194737215511fef8da9"} Jan 21 11:15:26 crc kubenswrapper[4881]: I0121 11:15:26.939020 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-bv8wz" Jan 21 11:15:26 crc kubenswrapper[4881]: I0121 11:15:26.940165 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-77c48c7859-klgq4" event={"ID":"2fe210a4-2adf-4b55-9a43-c1c390f51b0e","Type":"ContainerStarted","Data":"9c3cd6fd76ccb3e1aebf3c144313292f304e07697bda9f59f0bd38c7102cae69"} Jan 21 11:15:26 crc kubenswrapper[4881]: I0121 11:15:26.942832 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-n7kgd" event={"ID":"340257c4-9218-49b0-8a75-b2a4e0231fe3","Type":"ContainerStarted","Data":"7acfecd37cad07ec1dd7df4569586025cfb66a05a725369f74cee260b965c5d6"} Jan 21 11:15:26 crc kubenswrapper[4881]: I0121 11:15:26.943296 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-n7kgd" Jan 21 11:15:26 crc kubenswrapper[4881]: I0121 11:15:26.949180 4881 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-zmgll" event={"ID":"efb259b7-934f-4bc3-b502-633472d1a1c5","Type":"ContainerStarted","Data":"2ffeca8fec4eb946b8f37fa7f383d2a2fa4c9b2c224984d9449590d48df8fbcc"} Jan 21 11:15:26 crc kubenswrapper[4881]: I0121 11:15:26.949360 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-zmgll" Jan 21 11:15:26 crc kubenswrapper[4881]: I0121 11:15:26.953329 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-767fdc4f47-9zp7h" event={"ID":"ba9a1249-fc58-4809-a472-d199afa9b52b","Type":"ContainerStarted","Data":"582cfcc5c1ddf71d8e17d3aabeca2b879f7bd34e3fbf062b6c5a1d8eeddeb7c6"} Jan 21 11:15:26 crc kubenswrapper[4881]: I0121 11:15:26.954893 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-767fdc4f47-9zp7h" Jan 21 11:15:26 crc kubenswrapper[4881]: I0121 11:15:26.971117 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-9f958b845-4wmln" event={"ID":"36e5ddfe-67a4-4721-9ef5-b9459c64bf5c","Type":"ContainerStarted","Data":"d0564b32fc1cc85ec20378db752a0cd98f3ad490e7279922c5cf5b475bee8972"} Jan 21 11:15:26 crc kubenswrapper[4881]: I0121 11:15:26.972376 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-9f958b845-4wmln" Jan 21 11:15:26 crc kubenswrapper[4881]: I0121 11:15:26.979392 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-78757b4889-5qcms" podStartSLOduration=23.256460966 podStartE2EDuration="46.979370683s" podCreationTimestamp="2026-01-21 11:14:40 +0000 UTC" firstStartedPulling="2026-01-21 11:14:44.490070391 +0000 UTC m=+1071.750026850" lastFinishedPulling="2026-01-21 11:15:08.212980098 +0000 UTC m=+1095.472936567" observedRunningTime="2026-01-21 11:15:26.97282632 +0000 UTC m=+1114.232782789" watchObservedRunningTime="2026-01-21 11:15:26.979370683 +0000 UTC m=+1114.239327152" Jan 21 11:15:26 crc kubenswrapper[4881]: I0121 11:15:26.985256 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483235-h6fqb" event={"ID":"c37f0ee6-fcc1-4663-91a3-ab5e47dad851","Type":"ContainerStarted","Data":"b5629bef799bd58fd7c322f334ed2c842d7e326aba733a303f14c5c0f68e0efa"} Jan 21 11:15:26 crc kubenswrapper[4881]: I0121 11:15:26.987110 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-849fd9b886-k9t7q" event={"ID":"1cebbaaf-6189-409a-8f25-43d7fac77f95","Type":"ContainerStarted","Data":"57d2d3483eb94a11159fbf1a965ba524634046511bb7497d4075264dd9f612cc"} Jan 21 11:15:26 crc kubenswrapper[4881]: I0121 11:15:26.988090 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-849fd9b886-k9t7q" Jan 21 11:15:27 crc kubenswrapper[4881]: I0121 11:15:27.003589 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-cb4666565-ncnww" event={"ID":"c3b86204-5389-4b6a-bd45-fb6ee23b784e","Type":"ContainerStarted","Data":"3d993e2c3c267c5fb1d5c8678bfc830cbf513bb2a53348ecdd6965049ed3d807"} Jan 21 11:15:27 crc kubenswrapper[4881]: I0121 11:15:27.004535 
4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-cb4666565-ncnww" Jan 21 11:15:27 crc kubenswrapper[4881]: I0121 11:15:27.008715 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-87d6d564b-ktcf8" event={"ID":"a55fdb43-cd6c-4415-8ef6-07f6c7da6272","Type":"ContainerStarted","Data":"a966ab60808193570083f09ccfb55452509cf01f5e2a2fc1c5f47bae085f504e"} Jan 21 11:15:27 crc kubenswrapper[4881]: I0121 11:15:27.009844 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-87d6d564b-ktcf8" Jan 21 11:15:27 crc kubenswrapper[4881]: I0121 11:15:27.026347 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-s6gm8" event={"ID":"4c2550fe-b3eb-4eef-8ffc-ebb4a9ce1b5f","Type":"ContainerStarted","Data":"593d7c66925d15e46c74a91403fde16bbc659993e2f13211ec8ae807ed8ad22e"} Jan 21 11:15:27 crc kubenswrapper[4881]: I0121 11:15:27.027720 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-s6gm8" Jan 21 11:15:27 crc kubenswrapper[4881]: I0121 11:15:27.075980 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-zmgll" podStartSLOduration=23.904754909 podStartE2EDuration="47.075962499s" podCreationTimestamp="2026-01-21 11:14:40 +0000 UTC" firstStartedPulling="2026-01-21 11:14:44.24143428 +0000 UTC m=+1071.501390749" lastFinishedPulling="2026-01-21 11:15:07.41264186 +0000 UTC m=+1094.672598339" observedRunningTime="2026-01-21 11:15:27.036032745 +0000 UTC m=+1114.295989214" watchObservedRunningTime="2026-01-21 11:15:27.075962499 +0000 UTC m=+1114.335918968" Jan 21 11:15:27 crc kubenswrapper[4881]: I0121 11:15:27.078711 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-7ddb5c749-svq8w" podStartSLOduration=6.503846803 podStartE2EDuration="48.078704807s" podCreationTimestamp="2026-01-21 11:14:39 +0000 UTC" firstStartedPulling="2026-01-21 11:14:44.218917769 +0000 UTC m=+1071.478874238" lastFinishedPulling="2026-01-21 11:15:25.793775773 +0000 UTC m=+1113.053732242" observedRunningTime="2026-01-21 11:15:27.072716748 +0000 UTC m=+1114.332673217" watchObservedRunningTime="2026-01-21 11:15:27.078704807 +0000 UTC m=+1114.338661266" Jan 21 11:15:27 crc kubenswrapper[4881]: I0121 11:15:27.609538 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-767fdc4f47-9zp7h" podStartSLOduration=24.377316083 podStartE2EDuration="47.609517513s" podCreationTimestamp="2026-01-21 11:14:40 +0000 UTC" firstStartedPulling="2026-01-21 11:14:44.180355218 +0000 UTC m=+1071.440311687" lastFinishedPulling="2026-01-21 11:15:07.412556648 +0000 UTC m=+1094.672513117" observedRunningTime="2026-01-21 11:15:27.608458367 +0000 UTC m=+1114.868414836" watchObservedRunningTime="2026-01-21 11:15:27.609517513 +0000 UTC m=+1114.869473982" Jan 21 11:15:27 crc kubenswrapper[4881]: I0121 11:15:27.715366 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-n7kgd" podStartSLOduration=6.008531089 podStartE2EDuration="47.715345539s" 
podCreationTimestamp="2026-01-21 11:14:40 +0000 UTC" firstStartedPulling="2026-01-21 11:14:44.091591879 +0000 UTC m=+1071.351548348" lastFinishedPulling="2026-01-21 11:15:25.798406329 +0000 UTC m=+1113.058362798" observedRunningTime="2026-01-21 11:15:27.712020056 +0000 UTC m=+1114.971976525" watchObservedRunningTime="2026-01-21 11:15:27.715345539 +0000 UTC m=+1114.975302008" Jan 21 11:15:27 crc kubenswrapper[4881]: I0121 11:15:27.853898 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-bv8wz" podStartSLOduration=23.717445864 podStartE2EDuration="47.853861558s" podCreationTimestamp="2026-01-21 11:14:40 +0000 UTC" firstStartedPulling="2026-01-21 11:14:44.076320788 +0000 UTC m=+1071.336277257" lastFinishedPulling="2026-01-21 11:15:08.212736482 +0000 UTC m=+1095.472692951" observedRunningTime="2026-01-21 11:15:27.786190612 +0000 UTC m=+1115.046147091" watchObservedRunningTime="2026-01-21 11:15:27.853861558 +0000 UTC m=+1115.113818027" Jan 21 11:15:28 crc kubenswrapper[4881]: I0121 11:15:28.152925 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-85dd56d4cc-rk8l8" event={"ID":"8c504afd-e4e0-4676-b292-b569b638a7dd","Type":"ContainerStarted","Data":"a83a5499d8117eabf9e4c8defff59361671d700500aed6e9e45489a025b95b6b"} Jan 21 11:15:28 crc kubenswrapper[4881]: I0121 11:15:28.156694 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-85dd56d4cc-rk8l8" Jan 21 11:15:28 crc kubenswrapper[4881]: I0121 11:15:28.164669 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-vpqw4" event={"ID":"50cfdf18-6a7e-4b3c-bb0f-5260fc3d42eb","Type":"ContainerStarted","Data":"1674472cc072745705294b1d7a2ba6968803bca0481f9d4533791647066f7a85"} Jan 21 11:15:28 crc kubenswrapper[4881]: I0121 11:15:28.166914 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-vpqw4" Jan 21 11:15:28 crc kubenswrapper[4881]: I0121 11:15:28.170425 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-686df47fcb-jh4z9" event={"ID":"e8e6f423-a07b-4a22-9e39-efa8de22747e","Type":"ContainerStarted","Data":"7979b5e10538277149f3b9bcc1c010cdc994d994df73f8a7a43087eb64a0f49c"} Jan 21 11:15:28 crc kubenswrapper[4881]: I0121 11:15:28.170918 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-686df47fcb-jh4z9" Jan 21 11:15:28 crc kubenswrapper[4881]: I0121 11:15:28.295818 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-9f958b845-4wmln" podStartSLOduration=7.673196027 podStartE2EDuration="49.295798751s" podCreationTimestamp="2026-01-21 11:14:39 +0000 UTC" firstStartedPulling="2026-01-21 11:14:44.180050981 +0000 UTC m=+1071.440007450" lastFinishedPulling="2026-01-21 11:15:25.802653705 +0000 UTC m=+1113.062610174" observedRunningTime="2026-01-21 11:15:28.290972821 +0000 UTC m=+1115.550929290" watchObservedRunningTime="2026-01-21 11:15:28.295798751 +0000 UTC m=+1115.555755220" Jan 21 11:15:28 crc kubenswrapper[4881]: I0121 11:15:28.345169 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack-operators/swift-operator-controller-manager-85dd56d4cc-rk8l8" podStartSLOduration=7.085110125 podStartE2EDuration="48.345152301s" podCreationTimestamp="2026-01-21 11:14:40 +0000 UTC" firstStartedPulling="2026-01-21 11:14:44.513489343 +0000 UTC m=+1071.773445812" lastFinishedPulling="2026-01-21 11:15:25.773531499 +0000 UTC m=+1113.033487988" observedRunningTime="2026-01-21 11:15:28.33750352 +0000 UTC m=+1115.597459999" watchObservedRunningTime="2026-01-21 11:15:28.345152301 +0000 UTC m=+1115.605108770" Jan 21 11:15:28 crc kubenswrapper[4881]: I0121 11:15:28.561713 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-686df47fcb-jh4z9" podStartSLOduration=7.28816487 podStartE2EDuration="48.561687001s" podCreationTimestamp="2026-01-21 11:14:40 +0000 UTC" firstStartedPulling="2026-01-21 11:14:44.50006539 +0000 UTC m=+1071.760021859" lastFinishedPulling="2026-01-21 11:15:25.773587511 +0000 UTC m=+1113.033543990" observedRunningTime="2026-01-21 11:15:28.539271333 +0000 UTC m=+1115.799227812" watchObservedRunningTime="2026-01-21 11:15:28.561687001 +0000 UTC m=+1115.821643470" Jan 21 11:15:28 crc kubenswrapper[4881]: I0121 11:15:28.567422 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-s6gm8" podStartSLOduration=24.576440632 podStartE2EDuration="48.567401374s" podCreationTimestamp="2026-01-21 11:14:40 +0000 UTC" firstStartedPulling="2026-01-21 11:14:44.220047217 +0000 UTC m=+1071.480003696" lastFinishedPulling="2026-01-21 11:15:08.211007969 +0000 UTC m=+1095.470964438" observedRunningTime="2026-01-21 11:15:28.566121242 +0000 UTC m=+1115.826077711" watchObservedRunningTime="2026-01-21 11:15:28.567401374 +0000 UTC m=+1115.827357843" Jan 21 11:15:28 crc kubenswrapper[4881]: I0121 11:15:28.616359 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-cb4666565-ncnww" podStartSLOduration=7.059489275 podStartE2EDuration="48.616338722s" podCreationTimestamp="2026-01-21 11:14:40 +0000 UTC" firstStartedPulling="2026-01-21 11:14:44.241588433 +0000 UTC m=+1071.501544902" lastFinishedPulling="2026-01-21 11:15:25.79843788 +0000 UTC m=+1113.058394349" observedRunningTime="2026-01-21 11:15:28.607850291 +0000 UTC m=+1115.867806770" watchObservedRunningTime="2026-01-21 11:15:28.616338722 +0000 UTC m=+1115.876295211" Jan 21 11:15:28 crc kubenswrapper[4881]: I0121 11:15:28.782333 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-vpqw4" podStartSLOduration=7.471981286 podStartE2EDuration="48.782309795s" podCreationTimestamp="2026-01-21 11:14:40 +0000 UTC" firstStartedPulling="2026-01-21 11:14:44.488154492 +0000 UTC m=+1071.748110961" lastFinishedPulling="2026-01-21 11:15:25.798483001 +0000 UTC m=+1113.058439470" observedRunningTime="2026-01-21 11:15:28.781621268 +0000 UTC m=+1116.041577737" watchObservedRunningTime="2026-01-21 11:15:28.782309795 +0000 UTC m=+1116.042266264" Jan 21 11:15:28 crc kubenswrapper[4881]: I0121 11:15:28.782859 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-849fd9b886-k9t7q" podStartSLOduration=24.785807954 podStartE2EDuration="48.782852668s" podCreationTimestamp="2026-01-21 11:14:40 +0000 UTC" firstStartedPulling="2026-01-21 11:14:44.213969675 +0000 
UTC m=+1071.473926144" lastFinishedPulling="2026-01-21 11:15:08.211014389 +0000 UTC m=+1095.470970858" observedRunningTime="2026-01-21 11:15:28.670297996 +0000 UTC m=+1115.930254475" watchObservedRunningTime="2026-01-21 11:15:28.782852668 +0000 UTC m=+1116.042809137" Jan 21 11:15:29 crc kubenswrapper[4881]: I0121 11:15:29.188379 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-c6994669c-jv7cr" event={"ID":"1f795f92-d385-49bc-bc91-5109734f4d5a","Type":"ContainerStarted","Data":"b3edf28ac7eef119da54cafded18ce56ede9f57a68a95eec0a79655af9ea1d0d"} Jan 21 11:15:29 crc kubenswrapper[4881]: I0121 11:15:29.188605 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-c6994669c-jv7cr" Jan 21 11:15:29 crc kubenswrapper[4881]: I0121 11:15:29.198389 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-9b68f5989-7qgck" event={"ID":"a028dcae-6b9d-414d-8bab-652f301de541","Type":"ContainerStarted","Data":"829dee12939d6e36d536226ad4cd65d36d606cc10b5d418fb9e9bfbd4a261f34"} Jan 21 11:15:29 crc kubenswrapper[4881]: I0121 11:15:29.199200 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-9b68f5989-7qgck" Jan 21 11:15:29 crc kubenswrapper[4881]: I0121 11:15:29.209308 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483235-h6fqb" event={"ID":"c37f0ee6-fcc1-4663-91a3-ab5e47dad851","Type":"ContainerStarted","Data":"4ef110f660eb1c97d787ba6c2683b1ded92c0cd6a25a9dac3c9da2e19fd3d06a"} Jan 21 11:15:29 crc kubenswrapper[4881]: I0121 11:15:29.222660 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-87d6d564b-ktcf8" podStartSLOduration=49.222632548 podStartE2EDuration="49.222632548s" podCreationTimestamp="2026-01-21 11:14:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 11:15:28.899450181 +0000 UTC m=+1116.159406660" watchObservedRunningTime="2026-01-21 11:15:29.222632548 +0000 UTC m=+1116.482589017" Jan 21 11:15:29 crc kubenswrapper[4881]: I0121 11:15:29.223405 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-c6994669c-jv7cr" podStartSLOduration=6.342991528 podStartE2EDuration="49.223398378s" podCreationTimestamp="2026-01-21 11:14:40 +0000 UTC" firstStartedPulling="2026-01-21 11:14:44.181824656 +0000 UTC m=+1071.441781125" lastFinishedPulling="2026-01-21 11:15:27.062231506 +0000 UTC m=+1114.322187975" observedRunningTime="2026-01-21 11:15:29.218632599 +0000 UTC m=+1116.478589068" watchObservedRunningTime="2026-01-21 11:15:29.223398378 +0000 UTC m=+1116.483354847" Jan 21 11:15:29 crc kubenswrapper[4881]: I0121 11:15:29.993304 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-9b68f5989-7qgck" podStartSLOduration=9.318863134 podStartE2EDuration="50.993279087s" podCreationTimestamp="2026-01-21 11:14:39 +0000 UTC" firstStartedPulling="2026-01-21 11:14:44.10048251 +0000 UTC m=+1071.360438979" lastFinishedPulling="2026-01-21 11:15:25.774898463 +0000 UTC m=+1113.034854932" observedRunningTime="2026-01-21 11:15:29.283538175 +0000 UTC 
m=+1116.543494654" watchObservedRunningTime="2026-01-21 11:15:29.993279087 +0000 UTC m=+1117.253235556" Jan 21 11:15:30 crc kubenswrapper[4881]: I0121 11:15:30.232255 4881 generic.go:334] "Generic (PLEG): container finished" podID="c37f0ee6-fcc1-4663-91a3-ab5e47dad851" containerID="4ef110f660eb1c97d787ba6c2683b1ded92c0cd6a25a9dac3c9da2e19fd3d06a" exitCode=0 Jan 21 11:15:30 crc kubenswrapper[4881]: I0121 11:15:30.233462 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483235-h6fqb" event={"ID":"c37f0ee6-fcc1-4663-91a3-ab5e47dad851","Type":"ContainerDied","Data":"4ef110f660eb1c97d787ba6c2683b1ded92c0cd6a25a9dac3c9da2e19fd3d06a"} Jan 21 11:15:31 crc kubenswrapper[4881]: I0121 11:15:31.225546 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-849fd9b886-k9t7q" Jan 21 11:15:32 crc kubenswrapper[4881]: E0121 11:15:32.350232 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:244a4906353b84899db16a89e1ebb64491c9f85e69327cb2a72b6da0142a6e5e\\\"\"" pod="openstack-operators/test-operator-controller-manager-7cd8bc9dbb-tttcz" podUID="2aac430e-3ac8-4624-8575-66386b5c2df3" Jan 21 11:15:32 crc kubenswrapper[4881]: E0121 11:15:32.350262 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:2e89109f5db66abf1afd15ef59bda35a53db40c5e59e020579ac5aa0acea1843\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-fcht4" podUID="55ce5ee6-47f4-4874-92dc-6ab78f2ce212" Jan 21 11:15:32 crc kubenswrapper[4881]: I0121 11:15:32.694521 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483235-h6fqb" Jan 21 11:15:32 crc kubenswrapper[4881]: I0121 11:15:32.829979 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kv8v5\" (UniqueName: \"kubernetes.io/projected/c37f0ee6-fcc1-4663-91a3-ab5e47dad851-kube-api-access-kv8v5\") pod \"c37f0ee6-fcc1-4663-91a3-ab5e47dad851\" (UID: \"c37f0ee6-fcc1-4663-91a3-ab5e47dad851\") " Jan 21 11:15:32 crc kubenswrapper[4881]: I0121 11:15:32.830178 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c37f0ee6-fcc1-4663-91a3-ab5e47dad851-config-volume\") pod \"c37f0ee6-fcc1-4663-91a3-ab5e47dad851\" (UID: \"c37f0ee6-fcc1-4663-91a3-ab5e47dad851\") " Jan 21 11:15:32 crc kubenswrapper[4881]: I0121 11:15:32.830283 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c37f0ee6-fcc1-4663-91a3-ab5e47dad851-secret-volume\") pod \"c37f0ee6-fcc1-4663-91a3-ab5e47dad851\" (UID: \"c37f0ee6-fcc1-4663-91a3-ab5e47dad851\") " Jan 21 11:15:32 crc kubenswrapper[4881]: I0121 11:15:32.832435 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c37f0ee6-fcc1-4663-91a3-ab5e47dad851-config-volume" (OuterVolumeSpecName: "config-volume") pod "c37f0ee6-fcc1-4663-91a3-ab5e47dad851" (UID: "c37f0ee6-fcc1-4663-91a3-ab5e47dad851"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:15:32 crc kubenswrapper[4881]: I0121 11:15:32.838208 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c37f0ee6-fcc1-4663-91a3-ab5e47dad851-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "c37f0ee6-fcc1-4663-91a3-ab5e47dad851" (UID: "c37f0ee6-fcc1-4663-91a3-ab5e47dad851"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:15:32 crc kubenswrapper[4881]: I0121 11:15:32.838294 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c37f0ee6-fcc1-4663-91a3-ab5e47dad851-kube-api-access-kv8v5" (OuterVolumeSpecName: "kube-api-access-kv8v5") pod "c37f0ee6-fcc1-4663-91a3-ab5e47dad851" (UID: "c37f0ee6-fcc1-4663-91a3-ab5e47dad851"). InnerVolumeSpecName "kube-api-access-kv8v5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:15:33 crc kubenswrapper[4881]: I0121 11:15:33.011272 4881 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c37f0ee6-fcc1-4663-91a3-ab5e47dad851-config-volume\") on node \"crc\" DevicePath \"\"" Jan 21 11:15:33 crc kubenswrapper[4881]: I0121 11:15:33.011305 4881 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c37f0ee6-fcc1-4663-91a3-ab5e47dad851-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 21 11:15:33 crc kubenswrapper[4881]: I0121 11:15:33.011318 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kv8v5\" (UniqueName: \"kubernetes.io/projected/c37f0ee6-fcc1-4663-91a3-ab5e47dad851-kube-api-access-kv8v5\") on node \"crc\" DevicePath \"\"" Jan 21 11:15:33 crc kubenswrapper[4881]: I0121 11:15:33.450257 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483235-h6fqb" event={"ID":"c37f0ee6-fcc1-4663-91a3-ab5e47dad851","Type":"ContainerDied","Data":"b5629bef799bd58fd7c322f334ed2c842d7e326aba733a303f14c5c0f68e0efa"} Jan 21 11:15:33 crc kubenswrapper[4881]: I0121 11:15:33.450532 4881 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b5629bef799bd58fd7c322f334ed2c842d7e326aba733a303f14c5c0f68e0efa" Jan 21 11:15:33 crc kubenswrapper[4881]: I0121 11:15:33.450305 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483235-h6fqb" Jan 21 11:15:34 crc kubenswrapper[4881]: E0121 11:15:34.312218 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:6defa56fc6a5bfbd5b27d28ff7b1c7bc89b24b2ef956e2a6d97b2726f668a231\\\"\"" pod="openstack-operators/nova-operator-controller-manager-65849867d6-798zt" podUID="761a1a49-e01e-4674-b1f4-da732e1def98" Jan 21 11:15:36 crc kubenswrapper[4881]: E0121 11:15:36.314099 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/manila-operator@sha256:fd2e631e747c35a95f083418f5829d06c4b830f1fdb322368ff6190b9887ea32\\\"\"" pod="openstack-operators/manila-operator-controller-manager-864f6b75bf-h6dr4" podUID="b72b2323-5329-4145-9cee-b447d9e2a304" Jan 21 11:15:36 crc kubenswrapper[4881]: I0121 11:15:36.476704 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-77c48c7859-klgq4" event={"ID":"2fe210a4-2adf-4b55-9a43-c1c390f51b0e","Type":"ContainerStarted","Data":"8a0e87d567a41e21b314b35a5d90caf243d4da3f73e353958f6db8df3bcfc112"} Jan 21 11:15:36 crc kubenswrapper[4881]: I0121 11:15:36.476856 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-77c48c7859-klgq4" Jan 21 11:15:36 crc kubenswrapper[4881]: I0121 11:15:36.478218 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8544795q" event={"ID":"b1b17be2-e382-4916-8e53-a68c85b5bfc2","Type":"ContainerStarted","Data":"57e0e7d6fa227adc203daf6f6c58f0611794887404ca6cd9bf60634c2316a2c3"} Jan 21 11:15:36 crc kubenswrapper[4881]: I0121 11:15:36.478392 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8544795q" Jan 21 11:15:36 crc kubenswrapper[4881]: I0121 11:15:36.504881 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-77c48c7859-klgq4" podStartSLOduration=47.535564233 podStartE2EDuration="56.504860269s" podCreationTimestamp="2026-01-21 11:14:40 +0000 UTC" firstStartedPulling="2026-01-21 11:15:27.059712115 +0000 UTC m=+1114.319668584" lastFinishedPulling="2026-01-21 11:15:36.029008151 +0000 UTC m=+1123.288964620" observedRunningTime="2026-01-21 11:15:36.503217398 +0000 UTC m=+1123.763173867" watchObservedRunningTime="2026-01-21 11:15:36.504860269 +0000 UTC m=+1123.764816738" Jan 21 11:15:36 crc kubenswrapper[4881]: I0121 11:15:36.534405 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8544795q" podStartSLOduration=45.378616515 podStartE2EDuration="56.534389214s" podCreationTimestamp="2026-01-21 11:14:40 +0000 UTC" firstStartedPulling="2026-01-21 11:15:24.85666378 +0000 UTC m=+1112.116620249" lastFinishedPulling="2026-01-21 11:15:36.012436479 +0000 UTC m=+1123.272392948" observedRunningTime="2026-01-21 11:15:36.529837151 +0000 UTC m=+1123.789793630" watchObservedRunningTime="2026-01-21 11:15:36.534389214 +0000 UTC m=+1123.794345683" Jan 21 11:15:37 crc kubenswrapper[4881]: 
I0121 11:15:37.404023 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-87d6d564b-ktcf8" Jan 21 11:15:38 crc kubenswrapper[4881]: E0121 11:15:38.312092 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-76qxc" podUID="8c8feeec-377c-499a-b666-895010f8ebeb" Jan 21 11:15:41 crc kubenswrapper[4881]: I0121 11:15:41.021594 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-n7kgd" Jan 21 11:15:41 crc kubenswrapper[4881]: I0121 11:15:41.025110 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-zmgll" Jan 21 11:15:41 crc kubenswrapper[4881]: I0121 11:15:41.026550 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-bv8wz" Jan 21 11:15:41 crc kubenswrapper[4881]: I0121 11:15:41.027164 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-s6gm8" Jan 21 11:15:41 crc kubenswrapper[4881]: I0121 11:15:41.029421 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-c6994669c-jv7cr" Jan 21 11:15:41 crc kubenswrapper[4881]: I0121 11:15:41.030023 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-cb4666565-ncnww" Jan 21 11:15:41 crc kubenswrapper[4881]: I0121 11:15:41.033034 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-78757b4889-5qcms" Jan 21 11:15:41 crc kubenswrapper[4881]: I0121 11:15:41.072358 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-7ddb5c749-svq8w" Jan 21 11:15:41 crc kubenswrapper[4881]: I0121 11:15:41.166030 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-767fdc4f47-9zp7h" Jan 21 11:15:41 crc kubenswrapper[4881]: I0121 11:15:41.265640 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-686df47fcb-jh4z9" Jan 21 11:15:41 crc kubenswrapper[4881]: I0121 11:15:41.328179 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-vpqw4" Jan 21 11:15:41 crc kubenswrapper[4881]: I0121 11:15:41.331140 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-85dd56d4cc-rk8l8" Jan 21 11:15:41 crc kubenswrapper[4881]: I0121 11:15:41.466839 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-9b68f5989-7qgck" Jan 21 11:15:41 crc kubenswrapper[4881]: I0121 11:15:41.467585 4881 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-9f958b845-4wmln" Jan 21 11:15:42 crc kubenswrapper[4881]: I0121 11:15:42.613930 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-77c48c7859-klgq4" Jan 21 11:15:46 crc kubenswrapper[4881]: I0121 11:15:46.858101 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8544795q" Jan 21 11:15:55 crc kubenswrapper[4881]: I0121 11:15:55.812312 4881 scope.go:117] "RemoveContainer" containerID="8d96b6ac2acd440f7e60cdd073c30593c6e0c4417e979419134016d123abd969" Jan 21 11:15:55 crc kubenswrapper[4881]: I0121 11:15:55.852025 4881 scope.go:117] "RemoveContainer" containerID="6c72489f579e659d3691891984c6b73c6e38f55451044ec4d36e63d9b6a30869" Jan 21 11:15:55 crc kubenswrapper[4881]: I0121 11:15:55.873992 4881 scope.go:117] "RemoveContainer" containerID="caff78396a524a2b7173fa89076846a700461a26e3edd64b51c4f8b958b5c232" Jan 21 11:15:59 crc kubenswrapper[4881]: I0121 11:15:59.862874 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-76qxc" event={"ID":"8c8feeec-377c-499a-b666-895010f8ebeb","Type":"ContainerStarted","Data":"9ec8d0919021fe429acf31e4c26796cde20929e0c4a91af67e3f588e7748e32c"} Jan 21 11:15:59 crc kubenswrapper[4881]: I0121 11:15:59.880885 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-fcht4" event={"ID":"55ce5ee6-47f4-4874-92dc-6ab78f2ce212","Type":"ContainerStarted","Data":"cd4f6669f53bcdd461f3289f7839a164427dd1a2eab328184ab161ff72233590"} Jan 21 11:15:59 crc kubenswrapper[4881]: I0121 11:15:59.880926 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-7cd8bc9dbb-tttcz" event={"ID":"2aac430e-3ac8-4624-8575-66386b5c2df3","Type":"ContainerStarted","Data":"f0d8a93ee3a6c1809723ace8d21684a8771c184c59fe96d0c200e76d2b7449bb"} Jan 21 11:15:59 crc kubenswrapper[4881]: I0121 11:15:59.880940 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-65849867d6-798zt" event={"ID":"761a1a49-e01e-4674-b1f4-da732e1def98","Type":"ContainerStarted","Data":"fead8bd9d051fcfdfde9c0e76860cb7fe7f5e2785f04931a88723424452e79bd"} Jan 21 11:15:59 crc kubenswrapper[4881]: I0121 11:15:59.884642 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-65849867d6-798zt" Jan 21 11:15:59 crc kubenswrapper[4881]: I0121 11:15:59.884736 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-7cd8bc9dbb-tttcz" Jan 21 11:15:59 crc kubenswrapper[4881]: I0121 11:15:59.884762 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-fcht4" Jan 21 11:15:59 crc kubenswrapper[4881]: I0121 11:15:59.888554 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-864f6b75bf-h6dr4" event={"ID":"b72b2323-5329-4145-9cee-b447d9e2a304","Type":"ContainerStarted","Data":"65b6350c2a2757964d8fd1a52b1d961e92fcb2f9c327fcc1b8fa9828886fe533"} Jan 21 11:15:59 crc kubenswrapper[4881]: I0121 11:15:59.889762 4881 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-864f6b75bf-h6dr4" Jan 21 11:15:59 crc kubenswrapper[4881]: I0121 11:15:59.916111 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-fcht4" podStartSLOduration=7.254934793 podStartE2EDuration="1m19.916088746s" podCreationTimestamp="2026-01-21 11:14:40 +0000 UTC" firstStartedPulling="2026-01-21 11:14:44.508911329 +0000 UTC m=+1071.768867798" lastFinishedPulling="2026-01-21 11:15:57.170065282 +0000 UTC m=+1144.430021751" observedRunningTime="2026-01-21 11:15:59.910503357 +0000 UTC m=+1147.170459826" watchObservedRunningTime="2026-01-21 11:15:59.916088746 +0000 UTC m=+1147.176045215" Jan 21 11:15:59 crc kubenswrapper[4881]: I0121 11:15:59.933327 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-65849867d6-798zt" podStartSLOduration=12.225210368 podStartE2EDuration="1m19.933308894s" podCreationTimestamp="2026-01-21 11:14:40 +0000 UTC" firstStartedPulling="2026-01-21 11:14:44.53543293 +0000 UTC m=+1071.795389399" lastFinishedPulling="2026-01-21 11:15:52.243531456 +0000 UTC m=+1139.503487925" observedRunningTime="2026-01-21 11:15:59.92991124 +0000 UTC m=+1147.189867709" watchObservedRunningTime="2026-01-21 11:15:59.933308894 +0000 UTC m=+1147.193265363" Jan 21 11:15:59 crc kubenswrapper[4881]: I0121 11:15:59.951528 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-864f6b75bf-h6dr4" podStartSLOduration=6.889058793 podStartE2EDuration="1m19.951503687s" podCreationTimestamp="2026-01-21 11:14:40 +0000 UTC" firstStartedPulling="2026-01-21 11:14:44.241703636 +0000 UTC m=+1071.501660105" lastFinishedPulling="2026-01-21 11:15:57.30414853 +0000 UTC m=+1144.564104999" observedRunningTime="2026-01-21 11:15:59.950111052 +0000 UTC m=+1147.210067521" watchObservedRunningTime="2026-01-21 11:15:59.951503687 +0000 UTC m=+1147.211460156" Jan 21 11:15:59 crc kubenswrapper[4881]: I0121 11:15:59.966391 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-76qxc" podStartSLOduration=7.126640258 podStartE2EDuration="1m19.966368038s" podCreationTimestamp="2026-01-21 11:14:40 +0000 UTC" firstStartedPulling="2026-01-21 11:14:44.241965912 +0000 UTC m=+1071.501922381" lastFinishedPulling="2026-01-21 11:15:57.081693692 +0000 UTC m=+1144.341650161" observedRunningTime="2026-01-21 11:15:59.962532482 +0000 UTC m=+1147.222488971" watchObservedRunningTime="2026-01-21 11:15:59.966368038 +0000 UTC m=+1147.226324507" Jan 21 11:15:59 crc kubenswrapper[4881]: I0121 11:15:59.984381 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-7cd8bc9dbb-tttcz" podStartSLOduration=12.24537667 podStartE2EDuration="1m19.984363616s" podCreationTimestamp="2026-01-21 11:14:40 +0000 UTC" firstStartedPulling="2026-01-21 11:14:44.513071873 +0000 UTC m=+1071.773028342" lastFinishedPulling="2026-01-21 11:15:52.252058819 +0000 UTC m=+1139.512015288" observedRunningTime="2026-01-21 11:15:59.983821412 +0000 UTC m=+1147.243777881" watchObservedRunningTime="2026-01-21 11:15:59.984363616 +0000 UTC m=+1147.244320075" Jan 21 11:16:01 crc kubenswrapper[4881]: I0121 11:16:01.667943 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openstack-operators/test-operator-controller-manager-7cd8bc9dbb-tttcz" Jan 21 11:16:10 crc kubenswrapper[4881]: I0121 11:16:10.626394 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-864f6b75bf-h6dr4" Jan 21 11:16:10 crc kubenswrapper[4881]: I0121 11:16:10.868897 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-65849867d6-798zt" Jan 21 11:16:11 crc kubenswrapper[4881]: I0121 11:16:11.419191 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-fcht4" Jan 21 11:16:33 crc kubenswrapper[4881]: I0121 11:16:33.641657 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5cd6c77d8f-6z4pf"] Jan 21 11:16:33 crc kubenswrapper[4881]: E0121 11:16:33.642635 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c37f0ee6-fcc1-4663-91a3-ab5e47dad851" containerName="collect-profiles" Jan 21 11:16:33 crc kubenswrapper[4881]: I0121 11:16:33.642651 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="c37f0ee6-fcc1-4663-91a3-ab5e47dad851" containerName="collect-profiles" Jan 21 11:16:33 crc kubenswrapper[4881]: I0121 11:16:33.642848 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="c37f0ee6-fcc1-4663-91a3-ab5e47dad851" containerName="collect-profiles" Jan 21 11:16:33 crc kubenswrapper[4881]: I0121 11:16:33.645802 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5cd6c77d8f-6z4pf" Jan 21 11:16:33 crc kubenswrapper[4881]: I0121 11:16:33.651454 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5cd6c77d8f-6z4pf"] Jan 21 11:16:33 crc kubenswrapper[4881]: I0121 11:16:33.655492 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Jan 21 11:16:33 crc kubenswrapper[4881]: I0121 11:16:33.655523 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-q8h4t" Jan 21 11:16:33 crc kubenswrapper[4881]: I0121 11:16:33.657866 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Jan 21 11:16:33 crc kubenswrapper[4881]: I0121 11:16:33.657882 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Jan 21 11:16:33 crc kubenswrapper[4881]: I0121 11:16:33.746037 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z7nlz\" (UniqueName: \"kubernetes.io/projected/ef08c5f4-dc05-46a7-bb1b-8039ba0117aa-kube-api-access-z7nlz\") pod \"dnsmasq-dns-5cd6c77d8f-6z4pf\" (UID: \"ef08c5f4-dc05-46a7-bb1b-8039ba0117aa\") " pod="openstack/dnsmasq-dns-5cd6c77d8f-6z4pf" Jan 21 11:16:33 crc kubenswrapper[4881]: I0121 11:16:33.746168 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ef08c5f4-dc05-46a7-bb1b-8039ba0117aa-config\") pod \"dnsmasq-dns-5cd6c77d8f-6z4pf\" (UID: \"ef08c5f4-dc05-46a7-bb1b-8039ba0117aa\") " pod="openstack/dnsmasq-dns-5cd6c77d8f-6z4pf" Jan 21 11:16:33 crc kubenswrapper[4881]: I0121 11:16:33.816210 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-66b6fdbd65-2qwr2"] Jan 21 11:16:33 crc kubenswrapper[4881]: I0121 11:16:33.817525 4881 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-66b6fdbd65-2qwr2" Jan 21 11:16:33 crc kubenswrapper[4881]: I0121 11:16:33.819491 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Jan 21 11:16:33 crc kubenswrapper[4881]: I0121 11:16:33.825993 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-66b6fdbd65-2qwr2"] Jan 21 11:16:33 crc kubenswrapper[4881]: I0121 11:16:33.847575 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ef08c5f4-dc05-46a7-bb1b-8039ba0117aa-config\") pod \"dnsmasq-dns-5cd6c77d8f-6z4pf\" (UID: \"ef08c5f4-dc05-46a7-bb1b-8039ba0117aa\") " pod="openstack/dnsmasq-dns-5cd6c77d8f-6z4pf" Jan 21 11:16:33 crc kubenswrapper[4881]: I0121 11:16:33.847650 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z7nlz\" (UniqueName: \"kubernetes.io/projected/ef08c5f4-dc05-46a7-bb1b-8039ba0117aa-kube-api-access-z7nlz\") pod \"dnsmasq-dns-5cd6c77d8f-6z4pf\" (UID: \"ef08c5f4-dc05-46a7-bb1b-8039ba0117aa\") " pod="openstack/dnsmasq-dns-5cd6c77d8f-6z4pf" Jan 21 11:16:33 crc kubenswrapper[4881]: I0121 11:16:33.848547 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ef08c5f4-dc05-46a7-bb1b-8039ba0117aa-config\") pod \"dnsmasq-dns-5cd6c77d8f-6z4pf\" (UID: \"ef08c5f4-dc05-46a7-bb1b-8039ba0117aa\") " pod="openstack/dnsmasq-dns-5cd6c77d8f-6z4pf" Jan 21 11:16:33 crc kubenswrapper[4881]: I0121 11:16:33.866383 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z7nlz\" (UniqueName: \"kubernetes.io/projected/ef08c5f4-dc05-46a7-bb1b-8039ba0117aa-kube-api-access-z7nlz\") pod \"dnsmasq-dns-5cd6c77d8f-6z4pf\" (UID: \"ef08c5f4-dc05-46a7-bb1b-8039ba0117aa\") " pod="openstack/dnsmasq-dns-5cd6c77d8f-6z4pf" Jan 21 11:16:33 crc kubenswrapper[4881]: I0121 11:16:33.948756 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gj4sc\" (UniqueName: \"kubernetes.io/projected/5d59d9e0-8dd3-4bbd-ab3c-01e0e4a3b338-kube-api-access-gj4sc\") pod \"dnsmasq-dns-66b6fdbd65-2qwr2\" (UID: \"5d59d9e0-8dd3-4bbd-ab3c-01e0e4a3b338\") " pod="openstack/dnsmasq-dns-66b6fdbd65-2qwr2" Jan 21 11:16:33 crc kubenswrapper[4881]: I0121 11:16:33.949131 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5d59d9e0-8dd3-4bbd-ab3c-01e0e4a3b338-dns-svc\") pod \"dnsmasq-dns-66b6fdbd65-2qwr2\" (UID: \"5d59d9e0-8dd3-4bbd-ab3c-01e0e4a3b338\") " pod="openstack/dnsmasq-dns-66b6fdbd65-2qwr2" Jan 21 11:16:33 crc kubenswrapper[4881]: I0121 11:16:33.949244 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5d59d9e0-8dd3-4bbd-ab3c-01e0e4a3b338-config\") pod \"dnsmasq-dns-66b6fdbd65-2qwr2\" (UID: \"5d59d9e0-8dd3-4bbd-ab3c-01e0e4a3b338\") " pod="openstack/dnsmasq-dns-66b6fdbd65-2qwr2" Jan 21 11:16:33 crc kubenswrapper[4881]: I0121 11:16:33.974942 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5cd6c77d8f-6z4pf" Jan 21 11:16:34 crc kubenswrapper[4881]: I0121 11:16:34.051063 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gj4sc\" (UniqueName: \"kubernetes.io/projected/5d59d9e0-8dd3-4bbd-ab3c-01e0e4a3b338-kube-api-access-gj4sc\") pod \"dnsmasq-dns-66b6fdbd65-2qwr2\" (UID: \"5d59d9e0-8dd3-4bbd-ab3c-01e0e4a3b338\") " pod="openstack/dnsmasq-dns-66b6fdbd65-2qwr2" Jan 21 11:16:34 crc kubenswrapper[4881]: I0121 11:16:34.051382 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5d59d9e0-8dd3-4bbd-ab3c-01e0e4a3b338-dns-svc\") pod \"dnsmasq-dns-66b6fdbd65-2qwr2\" (UID: \"5d59d9e0-8dd3-4bbd-ab3c-01e0e4a3b338\") " pod="openstack/dnsmasq-dns-66b6fdbd65-2qwr2" Jan 21 11:16:34 crc kubenswrapper[4881]: I0121 11:16:34.051430 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5d59d9e0-8dd3-4bbd-ab3c-01e0e4a3b338-config\") pod \"dnsmasq-dns-66b6fdbd65-2qwr2\" (UID: \"5d59d9e0-8dd3-4bbd-ab3c-01e0e4a3b338\") " pod="openstack/dnsmasq-dns-66b6fdbd65-2qwr2" Jan 21 11:16:34 crc kubenswrapper[4881]: I0121 11:16:34.052273 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5d59d9e0-8dd3-4bbd-ab3c-01e0e4a3b338-dns-svc\") pod \"dnsmasq-dns-66b6fdbd65-2qwr2\" (UID: \"5d59d9e0-8dd3-4bbd-ab3c-01e0e4a3b338\") " pod="openstack/dnsmasq-dns-66b6fdbd65-2qwr2" Jan 21 11:16:34 crc kubenswrapper[4881]: I0121 11:16:34.052401 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5d59d9e0-8dd3-4bbd-ab3c-01e0e4a3b338-config\") pod \"dnsmasq-dns-66b6fdbd65-2qwr2\" (UID: \"5d59d9e0-8dd3-4bbd-ab3c-01e0e4a3b338\") " pod="openstack/dnsmasq-dns-66b6fdbd65-2qwr2" Jan 21 11:16:34 crc kubenswrapper[4881]: I0121 11:16:34.075096 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gj4sc\" (UniqueName: \"kubernetes.io/projected/5d59d9e0-8dd3-4bbd-ab3c-01e0e4a3b338-kube-api-access-gj4sc\") pod \"dnsmasq-dns-66b6fdbd65-2qwr2\" (UID: \"5d59d9e0-8dd3-4bbd-ab3c-01e0e4a3b338\") " pod="openstack/dnsmasq-dns-66b6fdbd65-2qwr2" Jan 21 11:16:34 crc kubenswrapper[4881]: I0121 11:16:34.137484 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-66b6fdbd65-2qwr2" Jan 21 11:16:34 crc kubenswrapper[4881]: I0121 11:16:34.337638 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5cd6c77d8f-6z4pf"] Jan 21 11:16:34 crc kubenswrapper[4881]: I0121 11:16:34.694751 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-66b6fdbd65-2qwr2"] Jan 21 11:16:34 crc kubenswrapper[4881]: W0121 11:16:34.703519 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5d59d9e0_8dd3_4bbd_ab3c_01e0e4a3b338.slice/crio-b01fee828c93da9e7f8d614e402f96983135c404e70276a21ff9ec11bf276820 WatchSource:0}: Error finding container b01fee828c93da9e7f8d614e402f96983135c404e70276a21ff9ec11bf276820: Status 404 returned error can't find the container with id b01fee828c93da9e7f8d614e402f96983135c404e70276a21ff9ec11bf276820 Jan 21 11:16:35 crc kubenswrapper[4881]: I0121 11:16:35.249451 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-66b6fdbd65-2qwr2" event={"ID":"5d59d9e0-8dd3-4bbd-ab3c-01e0e4a3b338","Type":"ContainerStarted","Data":"b01fee828c93da9e7f8d614e402f96983135c404e70276a21ff9ec11bf276820"} Jan 21 11:16:35 crc kubenswrapper[4881]: I0121 11:16:35.251422 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5cd6c77d8f-6z4pf" event={"ID":"ef08c5f4-dc05-46a7-bb1b-8039ba0117aa","Type":"ContainerStarted","Data":"385e3ff947423b95dcd5a48ddbdf919434e21551c87e247766e40b37cfc15a72"} Jan 21 11:16:38 crc kubenswrapper[4881]: I0121 11:16:38.141811 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5cd6c77d8f-6z4pf"] Jan 21 11:16:38 crc kubenswrapper[4881]: I0121 11:16:38.177013 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6fc7fbc9b9-cj7zb"] Jan 21 11:16:38 crc kubenswrapper[4881]: I0121 11:16:38.179245 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6fc7fbc9b9-cj7zb" Jan 21 11:16:38 crc kubenswrapper[4881]: I0121 11:16:38.192036 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6fc7fbc9b9-cj7zb"] Jan 21 11:16:38 crc kubenswrapper[4881]: I0121 11:16:38.221602 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eb0e6ce6-181c-4edb-b4b3-d169c41c63a8-dns-svc\") pod \"dnsmasq-dns-6fc7fbc9b9-cj7zb\" (UID: \"eb0e6ce6-181c-4edb-b4b3-d169c41c63a8\") " pod="openstack/dnsmasq-dns-6fc7fbc9b9-cj7zb" Jan 21 11:16:38 crc kubenswrapper[4881]: I0121 11:16:38.221739 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6zwhs\" (UniqueName: \"kubernetes.io/projected/eb0e6ce6-181c-4edb-b4b3-d169c41c63a8-kube-api-access-6zwhs\") pod \"dnsmasq-dns-6fc7fbc9b9-cj7zb\" (UID: \"eb0e6ce6-181c-4edb-b4b3-d169c41c63a8\") " pod="openstack/dnsmasq-dns-6fc7fbc9b9-cj7zb" Jan 21 11:16:38 crc kubenswrapper[4881]: I0121 11:16:38.221848 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eb0e6ce6-181c-4edb-b4b3-d169c41c63a8-config\") pod \"dnsmasq-dns-6fc7fbc9b9-cj7zb\" (UID: \"eb0e6ce6-181c-4edb-b4b3-d169c41c63a8\") " pod="openstack/dnsmasq-dns-6fc7fbc9b9-cj7zb" Jan 21 11:16:38 crc kubenswrapper[4881]: I0121 11:16:38.323707 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eb0e6ce6-181c-4edb-b4b3-d169c41c63a8-config\") pod \"dnsmasq-dns-6fc7fbc9b9-cj7zb\" (UID: \"eb0e6ce6-181c-4edb-b4b3-d169c41c63a8\") " pod="openstack/dnsmasq-dns-6fc7fbc9b9-cj7zb" Jan 21 11:16:38 crc kubenswrapper[4881]: I0121 11:16:38.324055 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eb0e6ce6-181c-4edb-b4b3-d169c41c63a8-dns-svc\") pod \"dnsmasq-dns-6fc7fbc9b9-cj7zb\" (UID: \"eb0e6ce6-181c-4edb-b4b3-d169c41c63a8\") " pod="openstack/dnsmasq-dns-6fc7fbc9b9-cj7zb" Jan 21 11:16:38 crc kubenswrapper[4881]: I0121 11:16:38.324117 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6zwhs\" (UniqueName: \"kubernetes.io/projected/eb0e6ce6-181c-4edb-b4b3-d169c41c63a8-kube-api-access-6zwhs\") pod \"dnsmasq-dns-6fc7fbc9b9-cj7zb\" (UID: \"eb0e6ce6-181c-4edb-b4b3-d169c41c63a8\") " pod="openstack/dnsmasq-dns-6fc7fbc9b9-cj7zb" Jan 21 11:16:38 crc kubenswrapper[4881]: I0121 11:16:38.325330 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eb0e6ce6-181c-4edb-b4b3-d169c41c63a8-config\") pod \"dnsmasq-dns-6fc7fbc9b9-cj7zb\" (UID: \"eb0e6ce6-181c-4edb-b4b3-d169c41c63a8\") " pod="openstack/dnsmasq-dns-6fc7fbc9b9-cj7zb" Jan 21 11:16:38 crc kubenswrapper[4881]: I0121 11:16:38.325491 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eb0e6ce6-181c-4edb-b4b3-d169c41c63a8-dns-svc\") pod \"dnsmasq-dns-6fc7fbc9b9-cj7zb\" (UID: \"eb0e6ce6-181c-4edb-b4b3-d169c41c63a8\") " pod="openstack/dnsmasq-dns-6fc7fbc9b9-cj7zb" Jan 21 11:16:38 crc kubenswrapper[4881]: I0121 11:16:38.357668 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6zwhs\" (UniqueName: 
\"kubernetes.io/projected/eb0e6ce6-181c-4edb-b4b3-d169c41c63a8-kube-api-access-6zwhs\") pod \"dnsmasq-dns-6fc7fbc9b9-cj7zb\" (UID: \"eb0e6ce6-181c-4edb-b4b3-d169c41c63a8\") " pod="openstack/dnsmasq-dns-6fc7fbc9b9-cj7zb" Jan 21 11:16:38 crc kubenswrapper[4881]: I0121 11:16:38.517848 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6fc7fbc9b9-cj7zb" Jan 21 11:16:38 crc kubenswrapper[4881]: I0121 11:16:38.519746 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-66b6fdbd65-2qwr2"] Jan 21 11:16:38 crc kubenswrapper[4881]: I0121 11:16:38.545583 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7457897f45-vkp6c"] Jan 21 11:16:38 crc kubenswrapper[4881]: I0121 11:16:38.548603 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7457897f45-vkp6c" Jan 21 11:16:38 crc kubenswrapper[4881]: I0121 11:16:38.568021 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7457897f45-vkp6c"] Jan 21 11:16:38 crc kubenswrapper[4881]: I0121 11:16:38.634582 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/99aba8a6-cc58-43be-9607-8ae1fcb57257-config\") pod \"dnsmasq-dns-7457897f45-vkp6c\" (UID: \"99aba8a6-cc58-43be-9607-8ae1fcb57257\") " pod="openstack/dnsmasq-dns-7457897f45-vkp6c" Jan 21 11:16:38 crc kubenswrapper[4881]: I0121 11:16:38.634628 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/99aba8a6-cc58-43be-9607-8ae1fcb57257-dns-svc\") pod \"dnsmasq-dns-7457897f45-vkp6c\" (UID: \"99aba8a6-cc58-43be-9607-8ae1fcb57257\") " pod="openstack/dnsmasq-dns-7457897f45-vkp6c" Jan 21 11:16:38 crc kubenswrapper[4881]: I0121 11:16:38.634716 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4gf75\" (UniqueName: \"kubernetes.io/projected/99aba8a6-cc58-43be-9607-8ae1fcb57257-kube-api-access-4gf75\") pod \"dnsmasq-dns-7457897f45-vkp6c\" (UID: \"99aba8a6-cc58-43be-9607-8ae1fcb57257\") " pod="openstack/dnsmasq-dns-7457897f45-vkp6c" Jan 21 11:16:38 crc kubenswrapper[4881]: I0121 11:16:38.736420 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/99aba8a6-cc58-43be-9607-8ae1fcb57257-config\") pod \"dnsmasq-dns-7457897f45-vkp6c\" (UID: \"99aba8a6-cc58-43be-9607-8ae1fcb57257\") " pod="openstack/dnsmasq-dns-7457897f45-vkp6c" Jan 21 11:16:38 crc kubenswrapper[4881]: I0121 11:16:38.736968 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/99aba8a6-cc58-43be-9607-8ae1fcb57257-dns-svc\") pod \"dnsmasq-dns-7457897f45-vkp6c\" (UID: \"99aba8a6-cc58-43be-9607-8ae1fcb57257\") " pod="openstack/dnsmasq-dns-7457897f45-vkp6c" Jan 21 11:16:38 crc kubenswrapper[4881]: I0121 11:16:38.737004 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4gf75\" (UniqueName: \"kubernetes.io/projected/99aba8a6-cc58-43be-9607-8ae1fcb57257-kube-api-access-4gf75\") pod \"dnsmasq-dns-7457897f45-vkp6c\" (UID: \"99aba8a6-cc58-43be-9607-8ae1fcb57257\") " pod="openstack/dnsmasq-dns-7457897f45-vkp6c" Jan 21 11:16:38 crc kubenswrapper[4881]: I0121 11:16:38.738649 4881 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/99aba8a6-cc58-43be-9607-8ae1fcb57257-config\") pod \"dnsmasq-dns-7457897f45-vkp6c\" (UID: \"99aba8a6-cc58-43be-9607-8ae1fcb57257\") " pod="openstack/dnsmasq-dns-7457897f45-vkp6c" Jan 21 11:16:38 crc kubenswrapper[4881]: I0121 11:16:38.739291 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/99aba8a6-cc58-43be-9607-8ae1fcb57257-dns-svc\") pod \"dnsmasq-dns-7457897f45-vkp6c\" (UID: \"99aba8a6-cc58-43be-9607-8ae1fcb57257\") " pod="openstack/dnsmasq-dns-7457897f45-vkp6c" Jan 21 11:16:38 crc kubenswrapper[4881]: I0121 11:16:38.778242 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4gf75\" (UniqueName: \"kubernetes.io/projected/99aba8a6-cc58-43be-9607-8ae1fcb57257-kube-api-access-4gf75\") pod \"dnsmasq-dns-7457897f45-vkp6c\" (UID: \"99aba8a6-cc58-43be-9607-8ae1fcb57257\") " pod="openstack/dnsmasq-dns-7457897f45-vkp6c" Jan 21 11:16:38 crc kubenswrapper[4881]: I0121 11:16:38.895237 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6fc7fbc9b9-cj7zb"] Jan 21 11:16:38 crc kubenswrapper[4881]: I0121 11:16:38.922228 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6557d744f-gt5cx"] Jan 21 11:16:38 crc kubenswrapper[4881]: I0121 11:16:38.924223 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6557d744f-gt5cx" Jan 21 11:16:38 crc kubenswrapper[4881]: I0121 11:16:38.976966 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7457897f45-vkp6c" Jan 21 11:16:39 crc kubenswrapper[4881]: I0121 11:16:39.063760 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6557d744f-gt5cx"] Jan 21 11:16:39 crc kubenswrapper[4881]: I0121 11:16:39.083756 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aec91505-d39a-41cf-90af-1593bcb02e68-config\") pod \"dnsmasq-dns-6557d744f-gt5cx\" (UID: \"aec91505-d39a-41cf-90af-1593bcb02e68\") " pod="openstack/dnsmasq-dns-6557d744f-gt5cx" Jan 21 11:16:39 crc kubenswrapper[4881]: I0121 11:16:39.083898 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/aec91505-d39a-41cf-90af-1593bcb02e68-dns-svc\") pod \"dnsmasq-dns-6557d744f-gt5cx\" (UID: \"aec91505-d39a-41cf-90af-1593bcb02e68\") " pod="openstack/dnsmasq-dns-6557d744f-gt5cx" Jan 21 11:16:39 crc kubenswrapper[4881]: I0121 11:16:39.090324 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dnn2q\" (UniqueName: \"kubernetes.io/projected/aec91505-d39a-41cf-90af-1593bcb02e68-kube-api-access-dnn2q\") pod \"dnsmasq-dns-6557d744f-gt5cx\" (UID: \"aec91505-d39a-41cf-90af-1593bcb02e68\") " pod="openstack/dnsmasq-dns-6557d744f-gt5cx" Jan 21 11:16:39 crc kubenswrapper[4881]: I0121 11:16:39.191981 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aec91505-d39a-41cf-90af-1593bcb02e68-config\") pod \"dnsmasq-dns-6557d744f-gt5cx\" (UID: \"aec91505-d39a-41cf-90af-1593bcb02e68\") " pod="openstack/dnsmasq-dns-6557d744f-gt5cx" Jan 21 11:16:39 crc kubenswrapper[4881]: I0121 11:16:39.192042 4881 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/aec91505-d39a-41cf-90af-1593bcb02e68-dns-svc\") pod \"dnsmasq-dns-6557d744f-gt5cx\" (UID: \"aec91505-d39a-41cf-90af-1593bcb02e68\") " pod="openstack/dnsmasq-dns-6557d744f-gt5cx" Jan 21 11:16:39 crc kubenswrapper[4881]: I0121 11:16:39.192088 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dnn2q\" (UniqueName: \"kubernetes.io/projected/aec91505-d39a-41cf-90af-1593bcb02e68-kube-api-access-dnn2q\") pod \"dnsmasq-dns-6557d744f-gt5cx\" (UID: \"aec91505-d39a-41cf-90af-1593bcb02e68\") " pod="openstack/dnsmasq-dns-6557d744f-gt5cx" Jan 21 11:16:39 crc kubenswrapper[4881]: I0121 11:16:39.193463 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aec91505-d39a-41cf-90af-1593bcb02e68-config\") pod \"dnsmasq-dns-6557d744f-gt5cx\" (UID: \"aec91505-d39a-41cf-90af-1593bcb02e68\") " pod="openstack/dnsmasq-dns-6557d744f-gt5cx" Jan 21 11:16:39 crc kubenswrapper[4881]: I0121 11:16:39.193720 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/aec91505-d39a-41cf-90af-1593bcb02e68-dns-svc\") pod \"dnsmasq-dns-6557d744f-gt5cx\" (UID: \"aec91505-d39a-41cf-90af-1593bcb02e68\") " pod="openstack/dnsmasq-dns-6557d744f-gt5cx" Jan 21 11:16:39 crc kubenswrapper[4881]: I0121 11:16:39.216931 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dnn2q\" (UniqueName: \"kubernetes.io/projected/aec91505-d39a-41cf-90af-1593bcb02e68-kube-api-access-dnn2q\") pod \"dnsmasq-dns-6557d744f-gt5cx\" (UID: \"aec91505-d39a-41cf-90af-1593bcb02e68\") " pod="openstack/dnsmasq-dns-6557d744f-gt5cx" Jan 21 11:16:39 crc kubenswrapper[4881]: I0121 11:16:39.664176 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6557d744f-gt5cx" Jan 21 11:16:39 crc kubenswrapper[4881]: I0121 11:16:39.738196 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Jan 21 11:16:39 crc kubenswrapper[4881]: I0121 11:16:39.739571 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 21 11:16:39 crc kubenswrapper[4881]: I0121 11:16:39.740002 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 21 11:16:39 crc kubenswrapper[4881]: I0121 11:16:39.745099 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 21 11:16:39 crc kubenswrapper[4881]: I0121 11:16:39.745147 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 21 11:16:39 crc kubenswrapper[4881]: I0121 11:16:39.745241 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 21 11:16:39 crc kubenswrapper[4881]: I0121 11:16:39.754124 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Jan 21 11:16:39 crc kubenswrapper[4881]: I0121 11:16:39.754348 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Jan 21 11:16:39 crc kubenswrapper[4881]: I0121 11:16:39.754423 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Jan 21 11:16:39 crc kubenswrapper[4881]: I0121 11:16:39.754603 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Jan 21 11:16:39 crc kubenswrapper[4881]: I0121 11:16:39.754633 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Jan 21 11:16:39 crc kubenswrapper[4881]: I0121 11:16:39.754745 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Jan 21 11:16:39 crc kubenswrapper[4881]: I0121 11:16:39.754774 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Jan 21 11:16:39 crc kubenswrapper[4881]: I0121 11:16:39.754932 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Jan 21 11:16:39 crc kubenswrapper[4881]: I0121 11:16:39.754982 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Jan 21 11:16:39 crc kubenswrapper[4881]: I0121 11:16:39.754940 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Jan 21 11:16:39 crc kubenswrapper[4881]: I0121 11:16:39.755265 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-tt7xn" Jan 21 11:16:39 crc kubenswrapper[4881]: I0121 11:16:39.755402 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Jan 21 11:16:39 crc kubenswrapper[4881]: I0121 11:16:39.755653 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-x9qrf" Jan 21 11:16:39 crc kubenswrapper[4881]: I0121 11:16:39.762908 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Jan 21 11:16:39 crc kubenswrapper[4881]: I0121 11:16:39.769821 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6fc7fbc9b9-cj7zb"] Jan 21 11:16:39 crc kubenswrapper[4881]: W0121 11:16:39.813974 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podeb0e6ce6_181c_4edb_b4b3_d169c41c63a8.slice/crio-8b64289332b9bf6e24ce3af64b2717f89e14cd1b712818252df454ed0a94562c WatchSource:0}: Error finding container 8b64289332b9bf6e24ce3af64b2717f89e14cd1b712818252df454ed0a94562c: Status 404 returned error can't find the container with id 8b64289332b9bf6e24ce3af64b2717f89e14cd1b712818252df454ed0a94562c Jan 21 11:16:39 crc kubenswrapper[4881]: I0121 11:16:39.872122 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/078c2368-b247-49d4-8723-fd93918e99b1-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"078c2368-b247-49d4-8723-fd93918e99b1\") " 
pod="openstack/rabbitmq-cell1-server-0" Jan 21 11:16:39 crc kubenswrapper[4881]: I0121 11:16:39.872673 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"078c2368-b247-49d4-8723-fd93918e99b1\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 11:16:39 crc kubenswrapper[4881]: I0121 11:16:39.872716 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f7e90972-9be1-4d3e-852e-e7f7df6e6623-pod-info\") pod \"rabbitmq-server-0\" (UID: \"f7e90972-9be1-4d3e-852e-e7f7df6e6623\") " pod="openstack/rabbitmq-server-0" Jan 21 11:16:39 crc kubenswrapper[4881]: I0121 11:16:39.872741 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f7e90972-9be1-4d3e-852e-e7f7df6e6623-server-conf\") pod \"rabbitmq-server-0\" (UID: \"f7e90972-9be1-4d3e-852e-e7f7df6e6623\") " pod="openstack/rabbitmq-server-0" Jan 21 11:16:39 crc kubenswrapper[4881]: I0121 11:16:39.872812 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-server-0\" (UID: \"f7e90972-9be1-4d3e-852e-e7f7df6e6623\") " pod="openstack/rabbitmq-server-0" Jan 21 11:16:39 crc kubenswrapper[4881]: I0121 11:16:39.872844 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f7e90972-9be1-4d3e-852e-e7f7df6e6623-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"f7e90972-9be1-4d3e-852e-e7f7df6e6623\") " pod="openstack/rabbitmq-server-0" Jan 21 11:16:39 crc kubenswrapper[4881]: I0121 11:16:39.872894 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f7e90972-9be1-4d3e-852e-e7f7df6e6623-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"f7e90972-9be1-4d3e-852e-e7f7df6e6623\") " pod="openstack/rabbitmq-server-0" Jan 21 11:16:39 crc kubenswrapper[4881]: I0121 11:16:39.872928 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f7e90972-9be1-4d3e-852e-e7f7df6e6623-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"f7e90972-9be1-4d3e-852e-e7f7df6e6623\") " pod="openstack/rabbitmq-server-0" Jan 21 11:16:39 crc kubenswrapper[4881]: I0121 11:16:39.872951 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f7e90972-9be1-4d3e-852e-e7f7df6e6623-config-data\") pod \"rabbitmq-server-0\" (UID: \"f7e90972-9be1-4d3e-852e-e7f7df6e6623\") " pod="openstack/rabbitmq-server-0" Jan 21 11:16:39 crc kubenswrapper[4881]: I0121 11:16:39.872978 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f7e90972-9be1-4d3e-852e-e7f7df6e6623-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"f7e90972-9be1-4d3e-852e-e7f7df6e6623\") " pod="openstack/rabbitmq-server-0" Jan 21 11:16:39 crc kubenswrapper[4881]: I0121 11:16:39.873012 4881 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/078c2368-b247-49d4-8723-fd93918e99b1-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"078c2368-b247-49d4-8723-fd93918e99b1\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 11:16:39 crc kubenswrapper[4881]: I0121 11:16:39.873053 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/078c2368-b247-49d4-8723-fd93918e99b1-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"078c2368-b247-49d4-8723-fd93918e99b1\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 11:16:39 crc kubenswrapper[4881]: I0121 11:16:39.873088 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tjgnd\" (UniqueName: \"kubernetes.io/projected/f7e90972-9be1-4d3e-852e-e7f7df6e6623-kube-api-access-tjgnd\") pod \"rabbitmq-server-0\" (UID: \"f7e90972-9be1-4d3e-852e-e7f7df6e6623\") " pod="openstack/rabbitmq-server-0" Jan 21 11:16:39 crc kubenswrapper[4881]: I0121 11:16:39.873117 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/078c2368-b247-49d4-8723-fd93918e99b1-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"078c2368-b247-49d4-8723-fd93918e99b1\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 11:16:39 crc kubenswrapper[4881]: I0121 11:16:39.873226 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/078c2368-b247-49d4-8723-fd93918e99b1-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"078c2368-b247-49d4-8723-fd93918e99b1\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 11:16:39 crc kubenswrapper[4881]: I0121 11:16:39.873945 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f7e90972-9be1-4d3e-852e-e7f7df6e6623-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"f7e90972-9be1-4d3e-852e-e7f7df6e6623\") " pod="openstack/rabbitmq-server-0" Jan 21 11:16:39 crc kubenswrapper[4881]: I0121 11:16:39.874100 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bmd5s\" (UniqueName: \"kubernetes.io/projected/078c2368-b247-49d4-8723-fd93918e99b1-kube-api-access-bmd5s\") pod \"rabbitmq-cell1-server-0\" (UID: \"078c2368-b247-49d4-8723-fd93918e99b1\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 11:16:39 crc kubenswrapper[4881]: I0121 11:16:39.874150 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/078c2368-b247-49d4-8723-fd93918e99b1-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"078c2368-b247-49d4-8723-fd93918e99b1\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 11:16:39 crc kubenswrapper[4881]: I0121 11:16:39.874180 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/078c2368-b247-49d4-8723-fd93918e99b1-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"078c2368-b247-49d4-8723-fd93918e99b1\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 
11:16:39 crc kubenswrapper[4881]: I0121 11:16:39.874243 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/078c2368-b247-49d4-8723-fd93918e99b1-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"078c2368-b247-49d4-8723-fd93918e99b1\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 11:16:39 crc kubenswrapper[4881]: I0121 11:16:39.874281 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f7e90972-9be1-4d3e-852e-e7f7df6e6623-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"f7e90972-9be1-4d3e-852e-e7f7df6e6623\") " pod="openstack/rabbitmq-server-0" Jan 21 11:16:39 crc kubenswrapper[4881]: I0121 11:16:39.874316 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/078c2368-b247-49d4-8723-fd93918e99b1-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"078c2368-b247-49d4-8723-fd93918e99b1\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 11:16:39 crc kubenswrapper[4881]: I0121 11:16:39.980266 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7457897f45-vkp6c"] Jan 21 11:16:39 crc kubenswrapper[4881]: I0121 11:16:39.982087 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f7e90972-9be1-4d3e-852e-e7f7df6e6623-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"f7e90972-9be1-4d3e-852e-e7f7df6e6623\") " pod="openstack/rabbitmq-server-0" Jan 21 11:16:39 crc kubenswrapper[4881]: I0121 11:16:39.982137 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/078c2368-b247-49d4-8723-fd93918e99b1-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"078c2368-b247-49d4-8723-fd93918e99b1\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 11:16:39 crc kubenswrapper[4881]: I0121 11:16:39.982179 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/078c2368-b247-49d4-8723-fd93918e99b1-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"078c2368-b247-49d4-8723-fd93918e99b1\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 11:16:39 crc kubenswrapper[4881]: I0121 11:16:39.982212 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tjgnd\" (UniqueName: \"kubernetes.io/projected/f7e90972-9be1-4d3e-852e-e7f7df6e6623-kube-api-access-tjgnd\") pod \"rabbitmq-server-0\" (UID: \"f7e90972-9be1-4d3e-852e-e7f7df6e6623\") " pod="openstack/rabbitmq-server-0" Jan 21 11:16:39 crc kubenswrapper[4881]: I0121 11:16:39.982243 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/078c2368-b247-49d4-8723-fd93918e99b1-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"078c2368-b247-49d4-8723-fd93918e99b1\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 11:16:39 crc kubenswrapper[4881]: I0121 11:16:39.982281 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/078c2368-b247-49d4-8723-fd93918e99b1-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"078c2368-b247-49d4-8723-fd93918e99b1\") 
" pod="openstack/rabbitmq-cell1-server-0" Jan 21 11:16:39 crc kubenswrapper[4881]: I0121 11:16:39.982317 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f7e90972-9be1-4d3e-852e-e7f7df6e6623-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"f7e90972-9be1-4d3e-852e-e7f7df6e6623\") " pod="openstack/rabbitmq-server-0" Jan 21 11:16:39 crc kubenswrapper[4881]: I0121 11:16:39.982342 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bmd5s\" (UniqueName: \"kubernetes.io/projected/078c2368-b247-49d4-8723-fd93918e99b1-kube-api-access-bmd5s\") pod \"rabbitmq-cell1-server-0\" (UID: \"078c2368-b247-49d4-8723-fd93918e99b1\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 11:16:39 crc kubenswrapper[4881]: I0121 11:16:39.982368 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/078c2368-b247-49d4-8723-fd93918e99b1-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"078c2368-b247-49d4-8723-fd93918e99b1\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 11:16:39 crc kubenswrapper[4881]: I0121 11:16:39.982391 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/078c2368-b247-49d4-8723-fd93918e99b1-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"078c2368-b247-49d4-8723-fd93918e99b1\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 11:16:39 crc kubenswrapper[4881]: I0121 11:16:39.982420 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/078c2368-b247-49d4-8723-fd93918e99b1-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"078c2368-b247-49d4-8723-fd93918e99b1\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 11:16:39 crc kubenswrapper[4881]: I0121 11:16:39.982447 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f7e90972-9be1-4d3e-852e-e7f7df6e6623-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"f7e90972-9be1-4d3e-852e-e7f7df6e6623\") " pod="openstack/rabbitmq-server-0" Jan 21 11:16:39 crc kubenswrapper[4881]: I0121 11:16:39.982470 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/078c2368-b247-49d4-8723-fd93918e99b1-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"078c2368-b247-49d4-8723-fd93918e99b1\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 11:16:39 crc kubenswrapper[4881]: I0121 11:16:39.982498 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/078c2368-b247-49d4-8723-fd93918e99b1-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"078c2368-b247-49d4-8723-fd93918e99b1\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 11:16:39 crc kubenswrapper[4881]: I0121 11:16:39.982525 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"078c2368-b247-49d4-8723-fd93918e99b1\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 11:16:39 crc kubenswrapper[4881]: I0121 11:16:39.982552 4881 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f7e90972-9be1-4d3e-852e-e7f7df6e6623-pod-info\") pod \"rabbitmq-server-0\" (UID: \"f7e90972-9be1-4d3e-852e-e7f7df6e6623\") " pod="openstack/rabbitmq-server-0" Jan 21 11:16:39 crc kubenswrapper[4881]: I0121 11:16:39.982572 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f7e90972-9be1-4d3e-852e-e7f7df6e6623-server-conf\") pod \"rabbitmq-server-0\" (UID: \"f7e90972-9be1-4d3e-852e-e7f7df6e6623\") " pod="openstack/rabbitmq-server-0" Jan 21 11:16:39 crc kubenswrapper[4881]: I0121 11:16:39.982604 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-server-0\" (UID: \"f7e90972-9be1-4d3e-852e-e7f7df6e6623\") " pod="openstack/rabbitmq-server-0" Jan 21 11:16:39 crc kubenswrapper[4881]: I0121 11:16:39.982633 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f7e90972-9be1-4d3e-852e-e7f7df6e6623-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"f7e90972-9be1-4d3e-852e-e7f7df6e6623\") " pod="openstack/rabbitmq-server-0" Jan 21 11:16:39 crc kubenswrapper[4881]: I0121 11:16:39.982666 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f7e90972-9be1-4d3e-852e-e7f7df6e6623-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"f7e90972-9be1-4d3e-852e-e7f7df6e6623\") " pod="openstack/rabbitmq-server-0" Jan 21 11:16:39 crc kubenswrapper[4881]: I0121 11:16:39.982691 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f7e90972-9be1-4d3e-852e-e7f7df6e6623-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"f7e90972-9be1-4d3e-852e-e7f7df6e6623\") " pod="openstack/rabbitmq-server-0" Jan 21 11:16:39 crc kubenswrapper[4881]: I0121 11:16:39.982883 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f7e90972-9be1-4d3e-852e-e7f7df6e6623-config-data\") pod \"rabbitmq-server-0\" (UID: \"f7e90972-9be1-4d3e-852e-e7f7df6e6623\") " pod="openstack/rabbitmq-server-0" Jan 21 11:16:39 crc kubenswrapper[4881]: I0121 11:16:39.984188 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f7e90972-9be1-4d3e-852e-e7f7df6e6623-config-data\") pod \"rabbitmq-server-0\" (UID: \"f7e90972-9be1-4d3e-852e-e7f7df6e6623\") " pod="openstack/rabbitmq-server-0" Jan 21 11:16:39 crc kubenswrapper[4881]: I0121 11:16:39.985227 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f7e90972-9be1-4d3e-852e-e7f7df6e6623-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"f7e90972-9be1-4d3e-852e-e7f7df6e6623\") " pod="openstack/rabbitmq-server-0" Jan 21 11:16:39 crc kubenswrapper[4881]: I0121 11:16:39.985576 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/078c2368-b247-49d4-8723-fd93918e99b1-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"078c2368-b247-49d4-8723-fd93918e99b1\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 11:16:39 crc 
kubenswrapper[4881]: I0121 11:16:39.986093 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f7e90972-9be1-4d3e-852e-e7f7df6e6623-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"f7e90972-9be1-4d3e-852e-e7f7df6e6623\") " pod="openstack/rabbitmq-server-0" Jan 21 11:16:39 crc kubenswrapper[4881]: I0121 11:16:39.987314 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/078c2368-b247-49d4-8723-fd93918e99b1-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"078c2368-b247-49d4-8723-fd93918e99b1\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 11:16:39 crc kubenswrapper[4881]: I0121 11:16:39.987394 4881 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"078c2368-b247-49d4-8723-fd93918e99b1\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/rabbitmq-cell1-server-0" Jan 21 11:16:39 crc kubenswrapper[4881]: I0121 11:16:39.988241 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/078c2368-b247-49d4-8723-fd93918e99b1-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"078c2368-b247-49d4-8723-fd93918e99b1\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 11:16:39 crc kubenswrapper[4881]: I0121 11:16:39.988276 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f7e90972-9be1-4d3e-852e-e7f7df6e6623-server-conf\") pod \"rabbitmq-server-0\" (UID: \"f7e90972-9be1-4d3e-852e-e7f7df6e6623\") " pod="openstack/rabbitmq-server-0" Jan 21 11:16:39 crc kubenswrapper[4881]: I0121 11:16:39.989413 4881 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-server-0\" (UID: \"f7e90972-9be1-4d3e-852e-e7f7df6e6623\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/rabbitmq-server-0" Jan 21 11:16:40 crc kubenswrapper[4881]: I0121 11:16:39.992461 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/078c2368-b247-49d4-8723-fd93918e99b1-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"078c2368-b247-49d4-8723-fd93918e99b1\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 11:16:40 crc kubenswrapper[4881]: I0121 11:16:39.992862 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f7e90972-9be1-4d3e-852e-e7f7df6e6623-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"f7e90972-9be1-4d3e-852e-e7f7df6e6623\") " pod="openstack/rabbitmq-server-0" Jan 21 11:16:40 crc kubenswrapper[4881]: I0121 11:16:39.992930 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/078c2368-b247-49d4-8723-fd93918e99b1-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"078c2368-b247-49d4-8723-fd93918e99b1\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 11:16:40 crc kubenswrapper[4881]: I0121 11:16:40.005290 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: 
\"kubernetes.io/projected/078c2368-b247-49d4-8723-fd93918e99b1-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"078c2368-b247-49d4-8723-fd93918e99b1\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 11:16:40 crc kubenswrapper[4881]: I0121 11:16:40.013178 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tjgnd\" (UniqueName: \"kubernetes.io/projected/f7e90972-9be1-4d3e-852e-e7f7df6e6623-kube-api-access-tjgnd\") pod \"rabbitmq-server-0\" (UID: \"f7e90972-9be1-4d3e-852e-e7f7df6e6623\") " pod="openstack/rabbitmq-server-0" Jan 21 11:16:40 crc kubenswrapper[4881]: I0121 11:16:40.018683 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f7e90972-9be1-4d3e-852e-e7f7df6e6623-pod-info\") pod \"rabbitmq-server-0\" (UID: \"f7e90972-9be1-4d3e-852e-e7f7df6e6623\") " pod="openstack/rabbitmq-server-0" Jan 21 11:16:40 crc kubenswrapper[4881]: I0121 11:16:40.021837 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bmd5s\" (UniqueName: \"kubernetes.io/projected/078c2368-b247-49d4-8723-fd93918e99b1-kube-api-access-bmd5s\") pod \"rabbitmq-cell1-server-0\" (UID: \"078c2368-b247-49d4-8723-fd93918e99b1\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 11:16:40 crc kubenswrapper[4881]: I0121 11:16:40.036693 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f7e90972-9be1-4d3e-852e-e7f7df6e6623-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"f7e90972-9be1-4d3e-852e-e7f7df6e6623\") " pod="openstack/rabbitmq-server-0" Jan 21 11:16:40 crc kubenswrapper[4881]: I0121 11:16:40.049771 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/078c2368-b247-49d4-8723-fd93918e99b1-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"078c2368-b247-49d4-8723-fd93918e99b1\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 11:16:40 crc kubenswrapper[4881]: I0121 11:16:40.050233 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f7e90972-9be1-4d3e-852e-e7f7df6e6623-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"f7e90972-9be1-4d3e-852e-e7f7df6e6623\") " pod="openstack/rabbitmq-server-0" Jan 21 11:16:40 crc kubenswrapper[4881]: I0121 11:16:40.057654 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f7e90972-9be1-4d3e-852e-e7f7df6e6623-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"f7e90972-9be1-4d3e-852e-e7f7df6e6623\") " pod="openstack/rabbitmq-server-0" Jan 21 11:16:40 crc kubenswrapper[4881]: I0121 11:16:40.064471 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/078c2368-b247-49d4-8723-fd93918e99b1-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"078c2368-b247-49d4-8723-fd93918e99b1\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 11:16:40 crc kubenswrapper[4881]: I0121 11:16:40.074370 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/078c2368-b247-49d4-8723-fd93918e99b1-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"078c2368-b247-49d4-8723-fd93918e99b1\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 11:16:40 crc kubenswrapper[4881]: I0121 
11:16:40.136126 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"078c2368-b247-49d4-8723-fd93918e99b1\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 11:16:40 crc kubenswrapper[4881]: I0121 11:16:40.181234 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-notifications-server-0"] Jan 21 11:16:40 crc kubenswrapper[4881]: I0121 11:16:40.184686 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-notifications-server-0" Jan 21 11:16:40 crc kubenswrapper[4881]: I0121 11:16:40.189926 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 21 11:16:40 crc kubenswrapper[4881]: I0121 11:16:40.196952 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-server-0\" (UID: \"f7e90972-9be1-4d3e-852e-e7f7df6e6623\") " pod="openstack/rabbitmq-server-0" Jan 21 11:16:40 crc kubenswrapper[4881]: I0121 11:16:40.199260 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-notifications-server-conf" Jan 21 11:16:40 crc kubenswrapper[4881]: I0121 11:16:40.199440 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-notifications-config-data" Jan 21 11:16:40 crc kubenswrapper[4881]: I0121 11:16:40.199691 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-notifications-svc" Jan 21 11:16:40 crc kubenswrapper[4881]: I0121 11:16:40.199838 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-notifications-default-user" Jan 21 11:16:40 crc kubenswrapper[4881]: I0121 11:16:40.200469 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-notifications-erlang-cookie" Jan 21 11:16:40 crc kubenswrapper[4881]: I0121 11:16:40.200596 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-notifications-plugins-conf" Jan 21 11:16:40 crc kubenswrapper[4881]: I0121 11:16:40.202449 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-notifications-server-dockercfg-fc7sw" Jan 21 11:16:40 crc kubenswrapper[4881]: I0121 11:16:40.291201 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-notifications-server-0"] Jan 21 11:16:40 crc kubenswrapper[4881]: I0121 11:16:40.292198 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/44bcf219-3358-4596-9d1e-88a51c415266-rabbitmq-plugins\") pod \"rabbitmq-notifications-server-0\" (UID: \"44bcf219-3358-4596-9d1e-88a51c415266\") " pod="openstack/rabbitmq-notifications-server-0" Jan 21 11:16:40 crc kubenswrapper[4881]: I0121 11:16:40.292277 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/44bcf219-3358-4596-9d1e-88a51c415266-erlang-cookie-secret\") pod \"rabbitmq-notifications-server-0\" (UID: \"44bcf219-3358-4596-9d1e-88a51c415266\") " pod="openstack/rabbitmq-notifications-server-0" Jan 21 11:16:40 crc kubenswrapper[4881]: I0121 11:16:40.292316 4881 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/44bcf219-3358-4596-9d1e-88a51c415266-config-data\") pod \"rabbitmq-notifications-server-0\" (UID: \"44bcf219-3358-4596-9d1e-88a51c415266\") " pod="openstack/rabbitmq-notifications-server-0" Jan 21 11:16:40 crc kubenswrapper[4881]: I0121 11:16:40.292343 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/44bcf219-3358-4596-9d1e-88a51c415266-server-conf\") pod \"rabbitmq-notifications-server-0\" (UID: \"44bcf219-3358-4596-9d1e-88a51c415266\") " pod="openstack/rabbitmq-notifications-server-0" Jan 21 11:16:40 crc kubenswrapper[4881]: I0121 11:16:40.292361 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q5n6k\" (UniqueName: \"kubernetes.io/projected/44bcf219-3358-4596-9d1e-88a51c415266-kube-api-access-q5n6k\") pod \"rabbitmq-notifications-server-0\" (UID: \"44bcf219-3358-4596-9d1e-88a51c415266\") " pod="openstack/rabbitmq-notifications-server-0" Jan 21 11:16:40 crc kubenswrapper[4881]: I0121 11:16:40.292383 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/44bcf219-3358-4596-9d1e-88a51c415266-rabbitmq-erlang-cookie\") pod \"rabbitmq-notifications-server-0\" (UID: \"44bcf219-3358-4596-9d1e-88a51c415266\") " pod="openstack/rabbitmq-notifications-server-0" Jan 21 11:16:40 crc kubenswrapper[4881]: I0121 11:16:40.292402 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/44bcf219-3358-4596-9d1e-88a51c415266-rabbitmq-confd\") pod \"rabbitmq-notifications-server-0\" (UID: \"44bcf219-3358-4596-9d1e-88a51c415266\") " pod="openstack/rabbitmq-notifications-server-0" Jan 21 11:16:40 crc kubenswrapper[4881]: I0121 11:16:40.292418 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/44bcf219-3358-4596-9d1e-88a51c415266-rabbitmq-tls\") pod \"rabbitmq-notifications-server-0\" (UID: \"44bcf219-3358-4596-9d1e-88a51c415266\") " pod="openstack/rabbitmq-notifications-server-0" Jan 21 11:16:40 crc kubenswrapper[4881]: I0121 11:16:40.292441 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/44bcf219-3358-4596-9d1e-88a51c415266-plugins-conf\") pod \"rabbitmq-notifications-server-0\" (UID: \"44bcf219-3358-4596-9d1e-88a51c415266\") " pod="openstack/rabbitmq-notifications-server-0" Jan 21 11:16:40 crc kubenswrapper[4881]: I0121 11:16:40.292458 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/44bcf219-3358-4596-9d1e-88a51c415266-pod-info\") pod \"rabbitmq-notifications-server-0\" (UID: \"44bcf219-3358-4596-9d1e-88a51c415266\") " pod="openstack/rabbitmq-notifications-server-0" Jan 21 11:16:40 crc kubenswrapper[4881]: I0121 11:16:40.292480 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-notifications-server-0\" (UID: 
\"44bcf219-3358-4596-9d1e-88a51c415266\") " pod="openstack/rabbitmq-notifications-server-0" Jan 21 11:16:40 crc kubenswrapper[4881]: I0121 11:16:40.297850 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6557d744f-gt5cx"] Jan 21 11:16:40 crc kubenswrapper[4881]: I0121 11:16:40.394306 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/44bcf219-3358-4596-9d1e-88a51c415266-rabbitmq-plugins\") pod \"rabbitmq-notifications-server-0\" (UID: \"44bcf219-3358-4596-9d1e-88a51c415266\") " pod="openstack/rabbitmq-notifications-server-0" Jan 21 11:16:40 crc kubenswrapper[4881]: I0121 11:16:40.394399 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/44bcf219-3358-4596-9d1e-88a51c415266-erlang-cookie-secret\") pod \"rabbitmq-notifications-server-0\" (UID: \"44bcf219-3358-4596-9d1e-88a51c415266\") " pod="openstack/rabbitmq-notifications-server-0" Jan 21 11:16:40 crc kubenswrapper[4881]: I0121 11:16:40.394437 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/44bcf219-3358-4596-9d1e-88a51c415266-config-data\") pod \"rabbitmq-notifications-server-0\" (UID: \"44bcf219-3358-4596-9d1e-88a51c415266\") " pod="openstack/rabbitmq-notifications-server-0" Jan 21 11:16:40 crc kubenswrapper[4881]: I0121 11:16:40.394453 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/44bcf219-3358-4596-9d1e-88a51c415266-server-conf\") pod \"rabbitmq-notifications-server-0\" (UID: \"44bcf219-3358-4596-9d1e-88a51c415266\") " pod="openstack/rabbitmq-notifications-server-0" Jan 21 11:16:40 crc kubenswrapper[4881]: I0121 11:16:40.394472 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q5n6k\" (UniqueName: \"kubernetes.io/projected/44bcf219-3358-4596-9d1e-88a51c415266-kube-api-access-q5n6k\") pod \"rabbitmq-notifications-server-0\" (UID: \"44bcf219-3358-4596-9d1e-88a51c415266\") " pod="openstack/rabbitmq-notifications-server-0" Jan 21 11:16:40 crc kubenswrapper[4881]: I0121 11:16:40.394493 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/44bcf219-3358-4596-9d1e-88a51c415266-rabbitmq-erlang-cookie\") pod \"rabbitmq-notifications-server-0\" (UID: \"44bcf219-3358-4596-9d1e-88a51c415266\") " pod="openstack/rabbitmq-notifications-server-0" Jan 21 11:16:40 crc kubenswrapper[4881]: I0121 11:16:40.394512 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/44bcf219-3358-4596-9d1e-88a51c415266-rabbitmq-confd\") pod \"rabbitmq-notifications-server-0\" (UID: \"44bcf219-3358-4596-9d1e-88a51c415266\") " pod="openstack/rabbitmq-notifications-server-0" Jan 21 11:16:40 crc kubenswrapper[4881]: I0121 11:16:40.394533 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/44bcf219-3358-4596-9d1e-88a51c415266-rabbitmq-tls\") pod \"rabbitmq-notifications-server-0\" (UID: \"44bcf219-3358-4596-9d1e-88a51c415266\") " pod="openstack/rabbitmq-notifications-server-0" Jan 21 11:16:40 crc kubenswrapper[4881]: I0121 11:16:40.394568 4881 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/44bcf219-3358-4596-9d1e-88a51c415266-plugins-conf\") pod \"rabbitmq-notifications-server-0\" (UID: \"44bcf219-3358-4596-9d1e-88a51c415266\") " pod="openstack/rabbitmq-notifications-server-0" Jan 21 11:16:40 crc kubenswrapper[4881]: I0121 11:16:40.394606 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/44bcf219-3358-4596-9d1e-88a51c415266-pod-info\") pod \"rabbitmq-notifications-server-0\" (UID: \"44bcf219-3358-4596-9d1e-88a51c415266\") " pod="openstack/rabbitmq-notifications-server-0" Jan 21 11:16:40 crc kubenswrapper[4881]: I0121 11:16:40.394626 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-notifications-server-0\" (UID: \"44bcf219-3358-4596-9d1e-88a51c415266\") " pod="openstack/rabbitmq-notifications-server-0" Jan 21 11:16:40 crc kubenswrapper[4881]: I0121 11:16:40.394934 4881 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-notifications-server-0\" (UID: \"44bcf219-3358-4596-9d1e-88a51c415266\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/rabbitmq-notifications-server-0" Jan 21 11:16:40 crc kubenswrapper[4881]: I0121 11:16:40.415055 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/44bcf219-3358-4596-9d1e-88a51c415266-config-data\") pod \"rabbitmq-notifications-server-0\" (UID: \"44bcf219-3358-4596-9d1e-88a51c415266\") " pod="openstack/rabbitmq-notifications-server-0" Jan 21 11:16:40 crc kubenswrapper[4881]: I0121 11:16:40.423648 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/44bcf219-3358-4596-9d1e-88a51c415266-rabbitmq-plugins\") pod \"rabbitmq-notifications-server-0\" (UID: \"44bcf219-3358-4596-9d1e-88a51c415266\") " pod="openstack/rabbitmq-notifications-server-0" Jan 21 11:16:40 crc kubenswrapper[4881]: I0121 11:16:40.426164 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/44bcf219-3358-4596-9d1e-88a51c415266-server-conf\") pod \"rabbitmq-notifications-server-0\" (UID: \"44bcf219-3358-4596-9d1e-88a51c415266\") " pod="openstack/rabbitmq-notifications-server-0" Jan 21 11:16:40 crc kubenswrapper[4881]: I0121 11:16:40.426695 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/44bcf219-3358-4596-9d1e-88a51c415266-rabbitmq-erlang-cookie\") pod \"rabbitmq-notifications-server-0\" (UID: \"44bcf219-3358-4596-9d1e-88a51c415266\") " pod="openstack/rabbitmq-notifications-server-0" Jan 21 11:16:40 crc kubenswrapper[4881]: I0121 11:16:40.435636 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/44bcf219-3358-4596-9d1e-88a51c415266-erlang-cookie-secret\") pod \"rabbitmq-notifications-server-0\" (UID: \"44bcf219-3358-4596-9d1e-88a51c415266\") " pod="openstack/rabbitmq-notifications-server-0" Jan 21 11:16:40 crc kubenswrapper[4881]: I0121 11:16:40.436796 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/44bcf219-3358-4596-9d1e-88a51c415266-plugins-conf\") pod \"rabbitmq-notifications-server-0\" (UID: \"44bcf219-3358-4596-9d1e-88a51c415266\") " pod="openstack/rabbitmq-notifications-server-0" Jan 21 11:16:40 crc kubenswrapper[4881]: I0121 11:16:40.437762 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/44bcf219-3358-4596-9d1e-88a51c415266-rabbitmq-tls\") pod \"rabbitmq-notifications-server-0\" (UID: \"44bcf219-3358-4596-9d1e-88a51c415266\") " pod="openstack/rabbitmq-notifications-server-0" Jan 21 11:16:40 crc kubenswrapper[4881]: I0121 11:16:40.449443 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/44bcf219-3358-4596-9d1e-88a51c415266-rabbitmq-confd\") pod \"rabbitmq-notifications-server-0\" (UID: \"44bcf219-3358-4596-9d1e-88a51c415266\") " pod="openstack/rabbitmq-notifications-server-0" Jan 21 11:16:40 crc kubenswrapper[4881]: I0121 11:16:40.451457 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-notifications-server-0\" (UID: \"44bcf219-3358-4596-9d1e-88a51c415266\") " pod="openstack/rabbitmq-notifications-server-0" Jan 21 11:16:40 crc kubenswrapper[4881]: I0121 11:16:40.453729 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q5n6k\" (UniqueName: \"kubernetes.io/projected/44bcf219-3358-4596-9d1e-88a51c415266-kube-api-access-q5n6k\") pod \"rabbitmq-notifications-server-0\" (UID: \"44bcf219-3358-4596-9d1e-88a51c415266\") " pod="openstack/rabbitmq-notifications-server-0" Jan 21 11:16:40 crc kubenswrapper[4881]: I0121 11:16:40.464395 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/44bcf219-3358-4596-9d1e-88a51c415266-pod-info\") pod \"rabbitmq-notifications-server-0\" (UID: \"44bcf219-3358-4596-9d1e-88a51c415266\") " pod="openstack/rabbitmq-notifications-server-0" Jan 21 11:16:40 crc kubenswrapper[4881]: I0121 11:16:40.473623 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 21 11:16:40 crc kubenswrapper[4881]: I0121 11:16:40.584329 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-notifications-server-0" Jan 21 11:16:42 crc kubenswrapper[4881]: I0121 11:16:40.855747 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6557d744f-gt5cx" event={"ID":"aec91505-d39a-41cf-90af-1593bcb02e68","Type":"ContainerStarted","Data":"11e9d0f8032d3e65513f2d8249ce3ac74bc1a4ddfcd269afe6c654eddabc71b8"} Jan 21 11:16:42 crc kubenswrapper[4881]: I0121 11:16:40.859929 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6fc7fbc9b9-cj7zb" event={"ID":"eb0e6ce6-181c-4edb-b4b3-d169c41c63a8","Type":"ContainerStarted","Data":"8b64289332b9bf6e24ce3af64b2717f89e14cd1b712818252df454ed0a94562c"} Jan 21 11:16:42 crc kubenswrapper[4881]: I0121 11:16:40.863777 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7457897f45-vkp6c" event={"ID":"99aba8a6-cc58-43be-9607-8ae1fcb57257","Type":"ContainerStarted","Data":"3ca12aa1fc94ac25d568434ebdd78b6fc24b1d504a1ce7b61d9ef849d50cf128"} Jan 21 11:16:42 crc kubenswrapper[4881]: I0121 11:16:41.260492 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 21 11:16:42 crc kubenswrapper[4881]: W0121 11:16:41.332070 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod078c2368_b247_49d4_8723_fd93918e99b1.slice/crio-cb426b0ea6a917959cdcac6b6915e9a598cb2f51672af4e37994bc672acc84c9 WatchSource:0}: Error finding container cb426b0ea6a917959cdcac6b6915e9a598cb2f51672af4e37994bc672acc84c9: Status 404 returned error can't find the container with id cb426b0ea6a917959cdcac6b6915e9a598cb2f51672af4e37994bc672acc84c9 Jan 21 11:16:42 crc kubenswrapper[4881]: I0121 11:16:41.468699 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Jan 21 11:16:42 crc kubenswrapper[4881]: I0121 11:16:41.470298 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Jan 21 11:16:42 crc kubenswrapper[4881]: I0121 11:16:41.477686 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Jan 21 11:16:42 crc kubenswrapper[4881]: I0121 11:16:41.477957 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Jan 21 11:16:42 crc kubenswrapper[4881]: I0121 11:16:41.478061 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-q8hmw" Jan 21 11:16:42 crc kubenswrapper[4881]: I0121 11:16:41.478481 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Jan 21 11:16:42 crc kubenswrapper[4881]: I0121 11:16:41.483210 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Jan 21 11:16:42 crc kubenswrapper[4881]: I0121 11:16:41.487259 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 21 11:16:42 crc kubenswrapper[4881]: I0121 11:16:41.625137 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"openstack-galera-0\" (UID: \"197dd5bf-f68a-4d9d-b75c-de87a54ed46b\") " pod="openstack/openstack-galera-0" Jan 21 11:16:42 crc kubenswrapper[4881]: I0121 11:16:41.625334 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/197dd5bf-f68a-4d9d-b75c-de87a54ed46b-config-data-default\") pod \"openstack-galera-0\" (UID: \"197dd5bf-f68a-4d9d-b75c-de87a54ed46b\") " pod="openstack/openstack-galera-0" Jan 21 11:16:42 crc kubenswrapper[4881]: I0121 11:16:41.625487 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/197dd5bf-f68a-4d9d-b75c-de87a54ed46b-config-data-generated\") pod \"openstack-galera-0\" (UID: \"197dd5bf-f68a-4d9d-b75c-de87a54ed46b\") " pod="openstack/openstack-galera-0" Jan 21 11:16:42 crc kubenswrapper[4881]: I0121 11:16:41.625824 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/197dd5bf-f68a-4d9d-b75c-de87a54ed46b-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"197dd5bf-f68a-4d9d-b75c-de87a54ed46b\") " pod="openstack/openstack-galera-0" Jan 21 11:16:42 crc kubenswrapper[4881]: I0121 11:16:41.625922 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r44km\" (UniqueName: \"kubernetes.io/projected/197dd5bf-f68a-4d9d-b75c-de87a54ed46b-kube-api-access-r44km\") pod \"openstack-galera-0\" (UID: \"197dd5bf-f68a-4d9d-b75c-de87a54ed46b\") " pod="openstack/openstack-galera-0" Jan 21 11:16:42 crc kubenswrapper[4881]: I0121 11:16:41.626013 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/197dd5bf-f68a-4d9d-b75c-de87a54ed46b-kolla-config\") pod \"openstack-galera-0\" (UID: \"197dd5bf-f68a-4d9d-b75c-de87a54ed46b\") " pod="openstack/openstack-galera-0" Jan 21 11:16:42 crc kubenswrapper[4881]: I0121 11:16:41.626036 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/197dd5bf-f68a-4d9d-b75c-de87a54ed46b-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"197dd5bf-f68a-4d9d-b75c-de87a54ed46b\") " pod="openstack/openstack-galera-0" Jan 21 11:16:42 crc kubenswrapper[4881]: I0121 11:16:41.626210 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/197dd5bf-f68a-4d9d-b75c-de87a54ed46b-operator-scripts\") pod \"openstack-galera-0\" (UID: \"197dd5bf-f68a-4d9d-b75c-de87a54ed46b\") " pod="openstack/openstack-galera-0" Jan 21 11:16:42 crc kubenswrapper[4881]: I0121 11:16:41.728743 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"openstack-galera-0\" (UID: \"197dd5bf-f68a-4d9d-b75c-de87a54ed46b\") " pod="openstack/openstack-galera-0" Jan 21 11:16:42 crc kubenswrapper[4881]: I0121 11:16:41.728868 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/197dd5bf-f68a-4d9d-b75c-de87a54ed46b-config-data-default\") pod \"openstack-galera-0\" (UID: \"197dd5bf-f68a-4d9d-b75c-de87a54ed46b\") " pod="openstack/openstack-galera-0" Jan 21 11:16:42 crc kubenswrapper[4881]: I0121 11:16:41.728946 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/197dd5bf-f68a-4d9d-b75c-de87a54ed46b-config-data-generated\") pod \"openstack-galera-0\" (UID: \"197dd5bf-f68a-4d9d-b75c-de87a54ed46b\") " pod="openstack/openstack-galera-0" Jan 21 11:16:42 crc kubenswrapper[4881]: I0121 11:16:41.729097 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/197dd5bf-f68a-4d9d-b75c-de87a54ed46b-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"197dd5bf-f68a-4d9d-b75c-de87a54ed46b\") " pod="openstack/openstack-galera-0" Jan 21 11:16:42 crc kubenswrapper[4881]: I0121 11:16:41.729093 4881 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"openstack-galera-0\" (UID: \"197dd5bf-f68a-4d9d-b75c-de87a54ed46b\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/openstack-galera-0" Jan 21 11:16:42 crc kubenswrapper[4881]: I0121 11:16:41.729175 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r44km\" (UniqueName: \"kubernetes.io/projected/197dd5bf-f68a-4d9d-b75c-de87a54ed46b-kube-api-access-r44km\") pod \"openstack-galera-0\" (UID: \"197dd5bf-f68a-4d9d-b75c-de87a54ed46b\") " pod="openstack/openstack-galera-0" Jan 21 11:16:42 crc kubenswrapper[4881]: I0121 11:16:41.729250 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/197dd5bf-f68a-4d9d-b75c-de87a54ed46b-kolla-config\") pod \"openstack-galera-0\" (UID: \"197dd5bf-f68a-4d9d-b75c-de87a54ed46b\") " pod="openstack/openstack-galera-0" Jan 21 11:16:42 crc kubenswrapper[4881]: I0121 11:16:41.729276 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/197dd5bf-f68a-4d9d-b75c-de87a54ed46b-galera-tls-certs\") pod \"openstack-galera-0\" (UID: 
\"197dd5bf-f68a-4d9d-b75c-de87a54ed46b\") " pod="openstack/openstack-galera-0" Jan 21 11:16:42 crc kubenswrapper[4881]: I0121 11:16:41.729351 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/197dd5bf-f68a-4d9d-b75c-de87a54ed46b-operator-scripts\") pod \"openstack-galera-0\" (UID: \"197dd5bf-f68a-4d9d-b75c-de87a54ed46b\") " pod="openstack/openstack-galera-0" Jan 21 11:16:42 crc kubenswrapper[4881]: I0121 11:16:41.730031 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/197dd5bf-f68a-4d9d-b75c-de87a54ed46b-config-data-default\") pod \"openstack-galera-0\" (UID: \"197dd5bf-f68a-4d9d-b75c-de87a54ed46b\") " pod="openstack/openstack-galera-0" Jan 21 11:16:42 crc kubenswrapper[4881]: I0121 11:16:41.730294 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/197dd5bf-f68a-4d9d-b75c-de87a54ed46b-kolla-config\") pod \"openstack-galera-0\" (UID: \"197dd5bf-f68a-4d9d-b75c-de87a54ed46b\") " pod="openstack/openstack-galera-0" Jan 21 11:16:42 crc kubenswrapper[4881]: I0121 11:16:41.732019 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/197dd5bf-f68a-4d9d-b75c-de87a54ed46b-operator-scripts\") pod \"openstack-galera-0\" (UID: \"197dd5bf-f68a-4d9d-b75c-de87a54ed46b\") " pod="openstack/openstack-galera-0" Jan 21 11:16:42 crc kubenswrapper[4881]: I0121 11:16:41.733302 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/197dd5bf-f68a-4d9d-b75c-de87a54ed46b-config-data-generated\") pod \"openstack-galera-0\" (UID: \"197dd5bf-f68a-4d9d-b75c-de87a54ed46b\") " pod="openstack/openstack-galera-0" Jan 21 11:16:42 crc kubenswrapper[4881]: I0121 11:16:41.739264 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/197dd5bf-f68a-4d9d-b75c-de87a54ed46b-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"197dd5bf-f68a-4d9d-b75c-de87a54ed46b\") " pod="openstack/openstack-galera-0" Jan 21 11:16:42 crc kubenswrapper[4881]: I0121 11:16:41.742273 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/197dd5bf-f68a-4d9d-b75c-de87a54ed46b-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"197dd5bf-f68a-4d9d-b75c-de87a54ed46b\") " pod="openstack/openstack-galera-0" Jan 21 11:16:42 crc kubenswrapper[4881]: I0121 11:16:41.749499 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r44km\" (UniqueName: \"kubernetes.io/projected/197dd5bf-f68a-4d9d-b75c-de87a54ed46b-kube-api-access-r44km\") pod \"openstack-galera-0\" (UID: \"197dd5bf-f68a-4d9d-b75c-de87a54ed46b\") " pod="openstack/openstack-galera-0" Jan 21 11:16:42 crc kubenswrapper[4881]: I0121 11:16:41.807158 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"openstack-galera-0\" (UID: \"197dd5bf-f68a-4d9d-b75c-de87a54ed46b\") " pod="openstack/openstack-galera-0" Jan 21 11:16:42 crc kubenswrapper[4881]: I0121 11:16:41.900914 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" 
event={"ID":"078c2368-b247-49d4-8723-fd93918e99b1","Type":"ContainerStarted","Data":"cb426b0ea6a917959cdcac6b6915e9a598cb2f51672af4e37994bc672acc84c9"} Jan 21 11:16:42 crc kubenswrapper[4881]: I0121 11:16:42.103161 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Jan 21 11:16:42 crc kubenswrapper[4881]: I0121 11:16:42.432374 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 21 11:16:42 crc kubenswrapper[4881]: I0121 11:16:42.435094 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 21 11:16:42 crc kubenswrapper[4881]: I0121 11:16:42.442724 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Jan 21 11:16:42 crc kubenswrapper[4881]: I0121 11:16:42.442956 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Jan 21 11:16:42 crc kubenswrapper[4881]: I0121 11:16:42.443053 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-mgnz4" Jan 21 11:16:42 crc kubenswrapper[4881]: I0121 11:16:42.443145 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Jan 21 11:16:42 crc kubenswrapper[4881]: I0121 11:16:42.546438 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/cd1973a5-773b-438b-aab7-709fb828324d-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"cd1973a5-773b-438b-aab7-709fb828324d\") " pod="openstack/openstack-cell1-galera-0" Jan 21 11:16:42 crc kubenswrapper[4881]: I0121 11:16:42.546500 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4phxd\" (UniqueName: \"kubernetes.io/projected/cd1973a5-773b-438b-aab7-709fb828324d-kube-api-access-4phxd\") pod \"openstack-cell1-galera-0\" (UID: \"cd1973a5-773b-438b-aab7-709fb828324d\") " pod="openstack/openstack-cell1-galera-0" Jan 21 11:16:42 crc kubenswrapper[4881]: I0121 11:16:42.546551 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/cd1973a5-773b-438b-aab7-709fb828324d-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"cd1973a5-773b-438b-aab7-709fb828324d\") " pod="openstack/openstack-cell1-galera-0" Jan 21 11:16:42 crc kubenswrapper[4881]: I0121 11:16:42.546622 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"openstack-cell1-galera-0\" (UID: \"cd1973a5-773b-438b-aab7-709fb828324d\") " pod="openstack/openstack-cell1-galera-0" Jan 21 11:16:42 crc kubenswrapper[4881]: I0121 11:16:42.546640 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cd1973a5-773b-438b-aab7-709fb828324d-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"cd1973a5-773b-438b-aab7-709fb828324d\") " pod="openstack/openstack-cell1-galera-0" Jan 21 11:16:42 crc kubenswrapper[4881]: I0121 11:16:42.546729 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/cd1973a5-773b-438b-aab7-709fb828324d-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"cd1973a5-773b-438b-aab7-709fb828324d\") " pod="openstack/openstack-cell1-galera-0" Jan 21 11:16:42 crc kubenswrapper[4881]: I0121 11:16:42.546805 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/cd1973a5-773b-438b-aab7-709fb828324d-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"cd1973a5-773b-438b-aab7-709fb828324d\") " pod="openstack/openstack-cell1-galera-0" Jan 21 11:16:42 crc kubenswrapper[4881]: I0121 11:16:42.546842 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd1973a5-773b-438b-aab7-709fb828324d-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"cd1973a5-773b-438b-aab7-709fb828324d\") " pod="openstack/openstack-cell1-galera-0" Jan 21 11:16:42 crc kubenswrapper[4881]: I0121 11:16:42.619794 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 21 11:16:42 crc kubenswrapper[4881]: I0121 11:16:42.777277 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"openstack-cell1-galera-0\" (UID: \"cd1973a5-773b-438b-aab7-709fb828324d\") " pod="openstack/openstack-cell1-galera-0" Jan 21 11:16:42 crc kubenswrapper[4881]: I0121 11:16:42.777826 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cd1973a5-773b-438b-aab7-709fb828324d-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"cd1973a5-773b-438b-aab7-709fb828324d\") " pod="openstack/openstack-cell1-galera-0" Jan 21 11:16:42 crc kubenswrapper[4881]: I0121 11:16:42.777945 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/cd1973a5-773b-438b-aab7-709fb828324d-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"cd1973a5-773b-438b-aab7-709fb828324d\") " pod="openstack/openstack-cell1-galera-0" Jan 21 11:16:42 crc kubenswrapper[4881]: I0121 11:16:42.778100 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/cd1973a5-773b-438b-aab7-709fb828324d-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"cd1973a5-773b-438b-aab7-709fb828324d\") " pod="openstack/openstack-cell1-galera-0" Jan 21 11:16:42 crc kubenswrapper[4881]: I0121 11:16:42.778205 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd1973a5-773b-438b-aab7-709fb828324d-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"cd1973a5-773b-438b-aab7-709fb828324d\") " pod="openstack/openstack-cell1-galera-0" Jan 21 11:16:42 crc kubenswrapper[4881]: I0121 11:16:42.778256 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/cd1973a5-773b-438b-aab7-709fb828324d-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"cd1973a5-773b-438b-aab7-709fb828324d\") " pod="openstack/openstack-cell1-galera-0" Jan 21 11:16:42 crc kubenswrapper[4881]: I0121 11:16:42.778288 4881 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4phxd\" (UniqueName: \"kubernetes.io/projected/cd1973a5-773b-438b-aab7-709fb828324d-kube-api-access-4phxd\") pod \"openstack-cell1-galera-0\" (UID: \"cd1973a5-773b-438b-aab7-709fb828324d\") " pod="openstack/openstack-cell1-galera-0" Jan 21 11:16:42 crc kubenswrapper[4881]: I0121 11:16:42.781130 4881 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"openstack-cell1-galera-0\" (UID: \"cd1973a5-773b-438b-aab7-709fb828324d\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/openstack-cell1-galera-0" Jan 21 11:16:42 crc kubenswrapper[4881]: I0121 11:16:42.788560 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cd1973a5-773b-438b-aab7-709fb828324d-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"cd1973a5-773b-438b-aab7-709fb828324d\") " pod="openstack/openstack-cell1-galera-0" Jan 21 11:16:42 crc kubenswrapper[4881]: I0121 11:16:42.790255 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/cd1973a5-773b-438b-aab7-709fb828324d-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"cd1973a5-773b-438b-aab7-709fb828324d\") " pod="openstack/openstack-cell1-galera-0" Jan 21 11:16:42 crc kubenswrapper[4881]: I0121 11:16:42.790283 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/cd1973a5-773b-438b-aab7-709fb828324d-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"cd1973a5-773b-438b-aab7-709fb828324d\") " pod="openstack/openstack-cell1-galera-0" Jan 21 11:16:42 crc kubenswrapper[4881]: I0121 11:16:42.793448 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/cd1973a5-773b-438b-aab7-709fb828324d-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"cd1973a5-773b-438b-aab7-709fb828324d\") " pod="openstack/openstack-cell1-galera-0" Jan 21 11:16:42 crc kubenswrapper[4881]: I0121 11:16:42.794934 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/cd1973a5-773b-438b-aab7-709fb828324d-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"cd1973a5-773b-438b-aab7-709fb828324d\") " pod="openstack/openstack-cell1-galera-0" Jan 21 11:16:42 crc kubenswrapper[4881]: I0121 11:16:42.796202 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd1973a5-773b-438b-aab7-709fb828324d-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"cd1973a5-773b-438b-aab7-709fb828324d\") " pod="openstack/openstack-cell1-galera-0" Jan 21 11:16:42 crc kubenswrapper[4881]: I0121 11:16:42.796397 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Jan 21 11:16:42 crc kubenswrapper[4881]: I0121 11:16:42.798632 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Jan 21 11:16:42 crc kubenswrapper[4881]: I0121 11:16:42.800394 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/cd1973a5-773b-438b-aab7-709fb828324d-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"cd1973a5-773b-438b-aab7-709fb828324d\") " pod="openstack/openstack-cell1-galera-0" Jan 21 11:16:42 crc kubenswrapper[4881]: I0121 11:16:42.815244 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-t9dg7" Jan 21 11:16:42 crc kubenswrapper[4881]: I0121 11:16:42.815468 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Jan 21 11:16:42 crc kubenswrapper[4881]: I0121 11:16:42.815730 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Jan 21 11:16:42 crc kubenswrapper[4881]: I0121 11:16:42.821213 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"openstack-cell1-galera-0\" (UID: \"cd1973a5-773b-438b-aab7-709fb828324d\") " pod="openstack/openstack-cell1-galera-0" Jan 21 11:16:42 crc kubenswrapper[4881]: I0121 11:16:42.851329 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4phxd\" (UniqueName: \"kubernetes.io/projected/cd1973a5-773b-438b-aab7-709fb828324d-kube-api-access-4phxd\") pod \"openstack-cell1-galera-0\" (UID: \"cd1973a5-773b-438b-aab7-709fb828324d\") " pod="openstack/openstack-cell1-galera-0" Jan 21 11:16:42 crc kubenswrapper[4881]: I0121 11:16:42.851418 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 21 11:16:42 crc kubenswrapper[4881]: I0121 11:16:42.908909 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 21 11:16:42 crc kubenswrapper[4881]: I0121 11:16:42.920236 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 21 11:16:42 crc kubenswrapper[4881]: I0121 11:16:42.953366 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-notifications-server-0"] Jan 21 11:16:42 crc kubenswrapper[4881]: W0121 11:16:42.982172 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod44bcf219_3358_4596_9d1e_88a51c415266.slice/crio-16c5e3afc533af42a0c79aba5b8ac657c33f906308b39274db955a90bb51ea58 WatchSource:0}: Error finding container 16c5e3afc533af42a0c79aba5b8ac657c33f906308b39274db955a90bb51ea58: Status 404 returned error can't find the container with id 16c5e3afc533af42a0c79aba5b8ac657c33f906308b39274db955a90bb51ea58 Jan 21 11:16:42 crc kubenswrapper[4881]: I0121 11:16:42.998957 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/7960c16a-de64-4154-9072-aee49e3bd573-memcached-tls-certs\") pod \"memcached-0\" (UID: \"7960c16a-de64-4154-9072-aee49e3bd573\") " pod="openstack/memcached-0" Jan 21 11:16:42 crc kubenswrapper[4881]: I0121 11:16:42.999024 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/7960c16a-de64-4154-9072-aee49e3bd573-kolla-config\") pod \"memcached-0\" (UID: \"7960c16a-de64-4154-9072-aee49e3bd573\") " pod="openstack/memcached-0" Jan 21 11:16:42 crc kubenswrapper[4881]: I0121 11:16:42.999061 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7960c16a-de64-4154-9072-aee49e3bd573-config-data\") pod \"memcached-0\" (UID: \"7960c16a-de64-4154-9072-aee49e3bd573\") " pod="openstack/memcached-0" Jan 21 11:16:42 crc kubenswrapper[4881]: I0121 11:16:42.999085 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7960c16a-de64-4154-9072-aee49e3bd573-combined-ca-bundle\") pod \"memcached-0\" (UID: \"7960c16a-de64-4154-9072-aee49e3bd573\") " pod="openstack/memcached-0" Jan 21 11:16:42 crc kubenswrapper[4881]: I0121 11:16:42.999108 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g444t\" (UniqueName: \"kubernetes.io/projected/7960c16a-de64-4154-9072-aee49e3bd573-kube-api-access-g444t\") pod \"memcached-0\" (UID: \"7960c16a-de64-4154-9072-aee49e3bd573\") " pod="openstack/memcached-0" Jan 21 11:16:43 crc kubenswrapper[4881]: W0121 11:16:43.037000 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf7e90972_9be1_4d3e_852e_e7f7df6e6623.slice/crio-0407be0eb8897677e11cb341e14b52b133b745f624185504d845fdccc7ff50c4 WatchSource:0}: Error finding container 0407be0eb8897677e11cb341e14b52b133b745f624185504d845fdccc7ff50c4: Status 404 returned error can't find the container with id 0407be0eb8897677e11cb341e14b52b133b745f624185504d845fdccc7ff50c4 Jan 21 11:16:43 crc kubenswrapper[4881]: I0121 11:16:43.100418 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/7960c16a-de64-4154-9072-aee49e3bd573-config-data\") pod \"memcached-0\" (UID: \"7960c16a-de64-4154-9072-aee49e3bd573\") " pod="openstack/memcached-0" Jan 21 11:16:43 crc kubenswrapper[4881]: I0121 11:16:43.100618 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7960c16a-de64-4154-9072-aee49e3bd573-combined-ca-bundle\") pod \"memcached-0\" (UID: \"7960c16a-de64-4154-9072-aee49e3bd573\") " pod="openstack/memcached-0" Jan 21 11:16:43 crc kubenswrapper[4881]: I0121 11:16:43.100726 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g444t\" (UniqueName: \"kubernetes.io/projected/7960c16a-de64-4154-9072-aee49e3bd573-kube-api-access-g444t\") pod \"memcached-0\" (UID: \"7960c16a-de64-4154-9072-aee49e3bd573\") " pod="openstack/memcached-0" Jan 21 11:16:43 crc kubenswrapper[4881]: I0121 11:16:43.102611 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7960c16a-de64-4154-9072-aee49e3bd573-config-data\") pod \"memcached-0\" (UID: \"7960c16a-de64-4154-9072-aee49e3bd573\") " pod="openstack/memcached-0" Jan 21 11:16:43 crc kubenswrapper[4881]: I0121 11:16:43.102709 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/7960c16a-de64-4154-9072-aee49e3bd573-memcached-tls-certs\") pod \"memcached-0\" (UID: \"7960c16a-de64-4154-9072-aee49e3bd573\") " pod="openstack/memcached-0" Jan 21 11:16:43 crc kubenswrapper[4881]: I0121 11:16:43.102796 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/7960c16a-de64-4154-9072-aee49e3bd573-kolla-config\") pod \"memcached-0\" (UID: \"7960c16a-de64-4154-9072-aee49e3bd573\") " pod="openstack/memcached-0" Jan 21 11:16:43 crc kubenswrapper[4881]: I0121 11:16:43.104216 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/7960c16a-de64-4154-9072-aee49e3bd573-kolla-config\") pod \"memcached-0\" (UID: \"7960c16a-de64-4154-9072-aee49e3bd573\") " pod="openstack/memcached-0" Jan 21 11:16:43 crc kubenswrapper[4881]: I0121 11:16:43.109803 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7960c16a-de64-4154-9072-aee49e3bd573-combined-ca-bundle\") pod \"memcached-0\" (UID: \"7960c16a-de64-4154-9072-aee49e3bd573\") " pod="openstack/memcached-0" Jan 21 11:16:43 crc kubenswrapper[4881]: I0121 11:16:43.111005 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/7960c16a-de64-4154-9072-aee49e3bd573-memcached-tls-certs\") pod \"memcached-0\" (UID: \"7960c16a-de64-4154-9072-aee49e3bd573\") " pod="openstack/memcached-0" Jan 21 11:16:43 crc kubenswrapper[4881]: I0121 11:16:43.144956 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g444t\" (UniqueName: \"kubernetes.io/projected/7960c16a-de64-4154-9072-aee49e3bd573-kube-api-access-g444t\") pod \"memcached-0\" (UID: \"7960c16a-de64-4154-9072-aee49e3bd573\") " pod="openstack/memcached-0" Jan 21 11:16:43 crc kubenswrapper[4881]: I0121 11:16:43.243677 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Jan 21 11:16:43 crc kubenswrapper[4881]: I0121 11:16:43.451574 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 21 11:16:43 crc kubenswrapper[4881]: W0121 11:16:43.751846 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod197dd5bf_f68a_4d9d_b75c_de87a54ed46b.slice/crio-8a08ae4a936f9bbaf1abb307c032317c77dd53689a6e37ad792df8ddb1603258 WatchSource:0}: Error finding container 8a08ae4a936f9bbaf1abb307c032317c77dd53689a6e37ad792df8ddb1603258: Status 404 returned error can't find the container with id 8a08ae4a936f9bbaf1abb307c032317c77dd53689a6e37ad792df8ddb1603258 Jan 21 11:16:43 crc kubenswrapper[4881]: I0121 11:16:43.844820 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 21 11:16:43 crc kubenswrapper[4881]: I0121 11:16:43.990776 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"f7e90972-9be1-4d3e-852e-e7f7df6e6623","Type":"ContainerStarted","Data":"0407be0eb8897677e11cb341e14b52b133b745f624185504d845fdccc7ff50c4"} Jan 21 11:16:43 crc kubenswrapper[4881]: I0121 11:16:43.995183 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"197dd5bf-f68a-4d9d-b75c-de87a54ed46b","Type":"ContainerStarted","Data":"8a08ae4a936f9bbaf1abb307c032317c77dd53689a6e37ad792df8ddb1603258"} Jan 21 11:16:44 crc kubenswrapper[4881]: I0121 11:16:44.006059 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"cd1973a5-773b-438b-aab7-709fb828324d","Type":"ContainerStarted","Data":"32b56190a0a8319e5d34df079d4aefc4527f4f97d92ba67b2ab0a2552ab4c75b"} Jan 21 11:16:44 crc kubenswrapper[4881]: I0121 11:16:44.028843 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-notifications-server-0" event={"ID":"44bcf219-3358-4596-9d1e-88a51c415266","Type":"ContainerStarted","Data":"16c5e3afc533af42a0c79aba5b8ac657c33f906308b39274db955a90bb51ea58"} Jan 21 11:16:44 crc kubenswrapper[4881]: I0121 11:16:44.381507 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 21 11:16:44 crc kubenswrapper[4881]: W0121 11:16:44.398765 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7960c16a_de64_4154_9072_aee49e3bd573.slice/crio-6823e1cf605be543d2ea341657a2ff74c8a83ab32d1b0fd041ebf61158f070cf WatchSource:0}: Error finding container 6823e1cf605be543d2ea341657a2ff74c8a83ab32d1b0fd041ebf61158f070cf: Status 404 returned error can't find the container with id 6823e1cf605be543d2ea341657a2ff74c8a83ab32d1b0fd041ebf61158f070cf Jan 21 11:16:44 crc kubenswrapper[4881]: I0121 11:16:44.578501 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Jan 21 11:16:44 crc kubenswrapper[4881]: I0121 11:16:44.583341 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 21 11:16:44 crc kubenswrapper[4881]: I0121 11:16:44.586888 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-bs89w" Jan 21 11:16:44 crc kubenswrapper[4881]: I0121 11:16:44.598016 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 21 11:16:44 crc kubenswrapper[4881]: I0121 11:16:44.606093 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-25992\" (UniqueName: \"kubernetes.io/projected/c5b6c25e-e882-4ea4-a284-6f55bfe75093-kube-api-access-25992\") pod \"kube-state-metrics-0\" (UID: \"c5b6c25e-e882-4ea4-a284-6f55bfe75093\") " pod="openstack/kube-state-metrics-0" Jan 21 11:16:44 crc kubenswrapper[4881]: I0121 11:16:44.710252 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-25992\" (UniqueName: \"kubernetes.io/projected/c5b6c25e-e882-4ea4-a284-6f55bfe75093-kube-api-access-25992\") pod \"kube-state-metrics-0\" (UID: \"c5b6c25e-e882-4ea4-a284-6f55bfe75093\") " pod="openstack/kube-state-metrics-0" Jan 21 11:16:44 crc kubenswrapper[4881]: I0121 11:16:44.765872 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-25992\" (UniqueName: \"kubernetes.io/projected/c5b6c25e-e882-4ea4-a284-6f55bfe75093-kube-api-access-25992\") pod \"kube-state-metrics-0\" (UID: \"c5b6c25e-e882-4ea4-a284-6f55bfe75093\") " pod="openstack/kube-state-metrics-0" Jan 21 11:16:44 crc kubenswrapper[4881]: I0121 11:16:44.945065 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 21 11:16:45 crc kubenswrapper[4881]: I0121 11:16:45.048876 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"7960c16a-de64-4154-9072-aee49e3bd573","Type":"ContainerStarted","Data":"6823e1cf605be543d2ea341657a2ff74c8a83ab32d1b0fd041ebf61158f070cf"} Jan 21 11:16:45 crc kubenswrapper[4881]: I0121 11:16:45.934267 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 21 11:16:45 crc kubenswrapper[4881]: I0121 11:16:45.939475 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 21 11:16:45 crc kubenswrapper[4881]: I0121 11:16:45.945370 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 21 11:16:45 crc kubenswrapper[4881]: I0121 11:16:45.947457 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Jan 21 11:16:45 crc kubenswrapper[4881]: I0121 11:16:45.947750 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Jan 21 11:16:45 crc kubenswrapper[4881]: I0121 11:16:45.947733 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2" Jan 21 11:16:45 crc kubenswrapper[4881]: I0121 11:16:45.948264 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Jan 21 11:16:45 crc kubenswrapper[4881]: I0121 11:16:45.947903 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Jan 21 11:16:45 crc kubenswrapper[4881]: I0121 11:16:45.948880 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-jwvdx" Jan 21 11:16:45 crc kubenswrapper[4881]: I0121 11:16:45.949069 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Jan 21 11:16:45 crc kubenswrapper[4881]: I0121 11:16:45.949210 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1" Jan 21 11:16:46 crc kubenswrapper[4881]: I0121 11:16:46.021266 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/75733567-f2a6-4331-bdea-147126213437-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"75733567-f2a6-4331-bdea-147126213437\") " pod="openstack/prometheus-metric-storage-0" Jan 21 11:16:46 crc kubenswrapper[4881]: I0121 11:16:46.021544 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/75733567-f2a6-4331-bdea-147126213437-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"75733567-f2a6-4331-bdea-147126213437\") " pod="openstack/prometheus-metric-storage-0" Jan 21 11:16:46 crc kubenswrapper[4881]: I0121 11:16:46.021608 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/75733567-f2a6-4331-bdea-147126213437-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"75733567-f2a6-4331-bdea-147126213437\") " pod="openstack/prometheus-metric-storage-0" Jan 21 11:16:46 crc kubenswrapper[4881]: I0121 11:16:46.022016 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/75733567-f2a6-4331-bdea-147126213437-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"75733567-f2a6-4331-bdea-147126213437\") " pod="openstack/prometheus-metric-storage-0" Jan 21 11:16:46 crc kubenswrapper[4881]: I0121 11:16:46.022107 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"config\" (UniqueName: \"kubernetes.io/secret/75733567-f2a6-4331-bdea-147126213437-config\") pod \"prometheus-metric-storage-0\" (UID: \"75733567-f2a6-4331-bdea-147126213437\") " pod="openstack/prometheus-metric-storage-0" Jan 21 11:16:46 crc kubenswrapper[4881]: I0121 11:16:46.022301 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/75733567-f2a6-4331-bdea-147126213437-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"75733567-f2a6-4331-bdea-147126213437\") " pod="openstack/prometheus-metric-storage-0" Jan 21 11:16:46 crc kubenswrapper[4881]: I0121 11:16:46.022417 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/75733567-f2a6-4331-bdea-147126213437-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"75733567-f2a6-4331-bdea-147126213437\") " pod="openstack/prometheus-metric-storage-0" Jan 21 11:16:46 crc kubenswrapper[4881]: I0121 11:16:46.022563 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n2vkg\" (UniqueName: \"kubernetes.io/projected/75733567-f2a6-4331-bdea-147126213437-kube-api-access-n2vkg\") pod \"prometheus-metric-storage-0\" (UID: \"75733567-f2a6-4331-bdea-147126213437\") " pod="openstack/prometheus-metric-storage-0" Jan 21 11:16:46 crc kubenswrapper[4881]: I0121 11:16:46.022666 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-c8add5c8-5d24-439f-b2da-5ddabecb671a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c8add5c8-5d24-439f-b2da-5ddabecb671a\") pod \"prometheus-metric-storage-0\" (UID: \"75733567-f2a6-4331-bdea-147126213437\") " pod="openstack/prometheus-metric-storage-0" Jan 21 11:16:46 crc kubenswrapper[4881]: I0121 11:16:46.022722 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/75733567-f2a6-4331-bdea-147126213437-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"75733567-f2a6-4331-bdea-147126213437\") " pod="openstack/prometheus-metric-storage-0" Jan 21 11:16:46 crc kubenswrapper[4881]: I0121 11:16:46.102230 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 21 11:16:46 crc kubenswrapper[4881]: I0121 11:16:46.128745 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n2vkg\" (UniqueName: \"kubernetes.io/projected/75733567-f2a6-4331-bdea-147126213437-kube-api-access-n2vkg\") pod \"prometheus-metric-storage-0\" (UID: \"75733567-f2a6-4331-bdea-147126213437\") " pod="openstack/prometheus-metric-storage-0" Jan 21 11:16:46 crc kubenswrapper[4881]: I0121 11:16:46.128845 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-c8add5c8-5d24-439f-b2da-5ddabecb671a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c8add5c8-5d24-439f-b2da-5ddabecb671a\") pod \"prometheus-metric-storage-0\" (UID: \"75733567-f2a6-4331-bdea-147126213437\") " pod="openstack/prometheus-metric-storage-0" Jan 21 11:16:46 crc kubenswrapper[4881]: I0121 11:16:46.128877 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" 
(UniqueName: \"kubernetes.io/projected/75733567-f2a6-4331-bdea-147126213437-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"75733567-f2a6-4331-bdea-147126213437\") " pod="openstack/prometheus-metric-storage-0" Jan 21 11:16:46 crc kubenswrapper[4881]: I0121 11:16:46.128908 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/75733567-f2a6-4331-bdea-147126213437-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"75733567-f2a6-4331-bdea-147126213437\") " pod="openstack/prometheus-metric-storage-0" Jan 21 11:16:46 crc kubenswrapper[4881]: I0121 11:16:46.128961 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/75733567-f2a6-4331-bdea-147126213437-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"75733567-f2a6-4331-bdea-147126213437\") " pod="openstack/prometheus-metric-storage-0" Jan 21 11:16:46 crc kubenswrapper[4881]: I0121 11:16:46.128979 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/75733567-f2a6-4331-bdea-147126213437-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"75733567-f2a6-4331-bdea-147126213437\") " pod="openstack/prometheus-metric-storage-0" Jan 21 11:16:46 crc kubenswrapper[4881]: I0121 11:16:46.129011 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/75733567-f2a6-4331-bdea-147126213437-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"75733567-f2a6-4331-bdea-147126213437\") " pod="openstack/prometheus-metric-storage-0" Jan 21 11:16:46 crc kubenswrapper[4881]: I0121 11:16:46.129031 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/75733567-f2a6-4331-bdea-147126213437-config\") pod \"prometheus-metric-storage-0\" (UID: \"75733567-f2a6-4331-bdea-147126213437\") " pod="openstack/prometheus-metric-storage-0" Jan 21 11:16:46 crc kubenswrapper[4881]: I0121 11:16:46.129060 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/75733567-f2a6-4331-bdea-147126213437-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"75733567-f2a6-4331-bdea-147126213437\") " pod="openstack/prometheus-metric-storage-0" Jan 21 11:16:46 crc kubenswrapper[4881]: I0121 11:16:46.129085 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/75733567-f2a6-4331-bdea-147126213437-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"75733567-f2a6-4331-bdea-147126213437\") " pod="openstack/prometheus-metric-storage-0" Jan 21 11:16:46 crc kubenswrapper[4881]: I0121 11:16:46.130748 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/75733567-f2a6-4331-bdea-147126213437-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"75733567-f2a6-4331-bdea-147126213437\") " pod="openstack/prometheus-metric-storage-0" Jan 21 11:16:46 crc kubenswrapper[4881]: I0121 11:16:46.133578 
4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/75733567-f2a6-4331-bdea-147126213437-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"75733567-f2a6-4331-bdea-147126213437\") " pod="openstack/prometheus-metric-storage-0" Jan 21 11:16:46 crc kubenswrapper[4881]: I0121 11:16:46.134087 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/75733567-f2a6-4331-bdea-147126213437-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"75733567-f2a6-4331-bdea-147126213437\") " pod="openstack/prometheus-metric-storage-0" Jan 21 11:16:46 crc kubenswrapper[4881]: I0121 11:16:46.150970 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/75733567-f2a6-4331-bdea-147126213437-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"75733567-f2a6-4331-bdea-147126213437\") " pod="openstack/prometheus-metric-storage-0" Jan 21 11:16:46 crc kubenswrapper[4881]: I0121 11:16:46.151678 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/75733567-f2a6-4331-bdea-147126213437-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"75733567-f2a6-4331-bdea-147126213437\") " pod="openstack/prometheus-metric-storage-0" Jan 21 11:16:46 crc kubenswrapper[4881]: I0121 11:16:46.158414 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/75733567-f2a6-4331-bdea-147126213437-config\") pod \"prometheus-metric-storage-0\" (UID: \"75733567-f2a6-4331-bdea-147126213437\") " pod="openstack/prometheus-metric-storage-0" Jan 21 11:16:46 crc kubenswrapper[4881]: I0121 11:16:46.177981 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n2vkg\" (UniqueName: \"kubernetes.io/projected/75733567-f2a6-4331-bdea-147126213437-kube-api-access-n2vkg\") pod \"prometheus-metric-storage-0\" (UID: \"75733567-f2a6-4331-bdea-147126213437\") " pod="openstack/prometheus-metric-storage-0" Jan 21 11:16:46 crc kubenswrapper[4881]: I0121 11:16:46.187558 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/75733567-f2a6-4331-bdea-147126213437-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"75733567-f2a6-4331-bdea-147126213437\") " pod="openstack/prometheus-metric-storage-0" Jan 21 11:16:46 crc kubenswrapper[4881]: I0121 11:16:46.209872 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/75733567-f2a6-4331-bdea-147126213437-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"75733567-f2a6-4331-bdea-147126213437\") " pod="openstack/prometheus-metric-storage-0" Jan 21 11:16:46 crc kubenswrapper[4881]: I0121 11:16:46.218997 4881 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 21 11:16:46 crc kubenswrapper[4881]: I0121 11:16:46.219064 4881 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-c8add5c8-5d24-439f-b2da-5ddabecb671a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c8add5c8-5d24-439f-b2da-5ddabecb671a\") pod \"prometheus-metric-storage-0\" (UID: \"75733567-f2a6-4331-bdea-147126213437\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/3c91253029fdcc57c7bcc13c4ee1dc503079fe71761fa62e5d04837e0b8b075e/globalmount\"" pod="openstack/prometheus-metric-storage-0" Jan 21 11:16:46 crc kubenswrapper[4881]: I0121 11:16:46.366609 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-c8add5c8-5d24-439f-b2da-5ddabecb671a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c8add5c8-5d24-439f-b2da-5ddabecb671a\") pod \"prometheus-metric-storage-0\" (UID: \"75733567-f2a6-4331-bdea-147126213437\") " pod="openstack/prometheus-metric-storage-0" Jan 21 11:16:46 crc kubenswrapper[4881]: I0121 11:16:46.623056 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 21 11:16:47 crc kubenswrapper[4881]: I0121 11:16:47.147528 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"c5b6c25e-e882-4ea4-a284-6f55bfe75093","Type":"ContainerStarted","Data":"a902e47db0ad78d4b1a0c530458a8cc5f24a6bbadf9cb6042572a73fad768c2d"} Jan 21 11:16:48 crc kubenswrapper[4881]: I0121 11:16:48.129122 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 21 11:16:48 crc kubenswrapper[4881]: W0121 11:16:48.160924 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod75733567_f2a6_4331_bdea_147126213437.slice/crio-648f9884533415a5c2309f4dd9efc2ccd6cbaeb098dca1475cdb0221de466d52 WatchSource:0}: Error finding container 648f9884533415a5c2309f4dd9efc2ccd6cbaeb098dca1475cdb0221de466d52: Status 404 returned error can't find the container with id 648f9884533415a5c2309f4dd9efc2ccd6cbaeb098dca1475cdb0221de466d52 Jan 21 11:16:48 crc kubenswrapper[4881]: I0121 11:16:48.559776 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-s642n"] Jan 21 11:16:48 crc kubenswrapper[4881]: I0121 11:16:48.561087 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-s642n" Jan 21 11:16:48 crc kubenswrapper[4881]: I0121 11:16:48.565016 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-kxx24" Jan 21 11:16:48 crc kubenswrapper[4881]: I0121 11:16:48.565208 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Jan 21 11:16:48 crc kubenswrapper[4881]: I0121 11:16:48.565310 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-2rtl8"] Jan 21 11:16:48 crc kubenswrapper[4881]: I0121 11:16:48.566729 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-2rtl8" Jan 21 11:16:48 crc kubenswrapper[4881]: I0121 11:16:48.567026 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Jan 21 11:16:48 crc kubenswrapper[4881]: I0121 11:16:48.571588 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-s642n"] Jan 21 11:16:48 crc kubenswrapper[4881]: I0121 11:16:48.625372 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/256e0b4a-baac-415c-94c6-09f08fa09c7c-ovn-controller-tls-certs\") pod \"ovn-controller-s642n\" (UID: \"256e0b4a-baac-415c-94c6-09f08fa09c7c\") " pod="openstack/ovn-controller-s642n" Jan 21 11:16:48 crc kubenswrapper[4881]: I0121 11:16:48.625433 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/9ff4a63e-40e5-4133-967e-9ba083f3603b-etc-ovs\") pod \"ovn-controller-ovs-2rtl8\" (UID: \"9ff4a63e-40e5-4133-967e-9ba083f3603b\") " pod="openstack/ovn-controller-ovs-2rtl8" Jan 21 11:16:48 crc kubenswrapper[4881]: I0121 11:16:48.625464 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bnx8p\" (UniqueName: \"kubernetes.io/projected/9ff4a63e-40e5-4133-967e-9ba083f3603b-kube-api-access-bnx8p\") pod \"ovn-controller-ovs-2rtl8\" (UID: \"9ff4a63e-40e5-4133-967e-9ba083f3603b\") " pod="openstack/ovn-controller-ovs-2rtl8" Jan 21 11:16:48 crc kubenswrapper[4881]: I0121 11:16:48.625497 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/256e0b4a-baac-415c-94c6-09f08fa09c7c-var-run\") pod \"ovn-controller-s642n\" (UID: \"256e0b4a-baac-415c-94c6-09f08fa09c7c\") " pod="openstack/ovn-controller-s642n" Jan 21 11:16:48 crc kubenswrapper[4881]: I0121 11:16:48.625513 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/9ff4a63e-40e5-4133-967e-9ba083f3603b-var-log\") pod \"ovn-controller-ovs-2rtl8\" (UID: \"9ff4a63e-40e5-4133-967e-9ba083f3603b\") " pod="openstack/ovn-controller-ovs-2rtl8" Jan 21 11:16:48 crc kubenswrapper[4881]: I0121 11:16:48.625527 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/256e0b4a-baac-415c-94c6-09f08fa09c7c-var-run-ovn\") pod \"ovn-controller-s642n\" (UID: \"256e0b4a-baac-415c-94c6-09f08fa09c7c\") " pod="openstack/ovn-controller-s642n" Jan 21 11:16:48 crc kubenswrapper[4881]: I0121 11:16:48.625544 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/256e0b4a-baac-415c-94c6-09f08fa09c7c-var-log-ovn\") pod \"ovn-controller-s642n\" (UID: \"256e0b4a-baac-415c-94c6-09f08fa09c7c\") " pod="openstack/ovn-controller-s642n" Jan 21 11:16:48 crc kubenswrapper[4881]: I0121 11:16:48.625572 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kcpzd\" (UniqueName: \"kubernetes.io/projected/256e0b4a-baac-415c-94c6-09f08fa09c7c-kube-api-access-kcpzd\") pod \"ovn-controller-s642n\" (UID: \"256e0b4a-baac-415c-94c6-09f08fa09c7c\") " pod="openstack/ovn-controller-s642n" Jan 21 
11:16:48 crc kubenswrapper[4881]: I0121 11:16:48.625596 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/9ff4a63e-40e5-4133-967e-9ba083f3603b-var-lib\") pod \"ovn-controller-ovs-2rtl8\" (UID: \"9ff4a63e-40e5-4133-967e-9ba083f3603b\") " pod="openstack/ovn-controller-ovs-2rtl8" Jan 21 11:16:48 crc kubenswrapper[4881]: I0121 11:16:48.625632 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/9ff4a63e-40e5-4133-967e-9ba083f3603b-var-run\") pod \"ovn-controller-ovs-2rtl8\" (UID: \"9ff4a63e-40e5-4133-967e-9ba083f3603b\") " pod="openstack/ovn-controller-ovs-2rtl8" Jan 21 11:16:48 crc kubenswrapper[4881]: I0121 11:16:48.625648 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9ff4a63e-40e5-4133-967e-9ba083f3603b-scripts\") pod \"ovn-controller-ovs-2rtl8\" (UID: \"9ff4a63e-40e5-4133-967e-9ba083f3603b\") " pod="openstack/ovn-controller-ovs-2rtl8" Jan 21 11:16:48 crc kubenswrapper[4881]: I0121 11:16:48.625665 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/256e0b4a-baac-415c-94c6-09f08fa09c7c-scripts\") pod \"ovn-controller-s642n\" (UID: \"256e0b4a-baac-415c-94c6-09f08fa09c7c\") " pod="openstack/ovn-controller-s642n" Jan 21 11:16:48 crc kubenswrapper[4881]: I0121 11:16:48.625718 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/256e0b4a-baac-415c-94c6-09f08fa09c7c-combined-ca-bundle\") pod \"ovn-controller-s642n\" (UID: \"256e0b4a-baac-415c-94c6-09f08fa09c7c\") " pod="openstack/ovn-controller-s642n" Jan 21 11:16:48 crc kubenswrapper[4881]: I0121 11:16:48.648563 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-2rtl8"] Jan 21 11:16:48 crc kubenswrapper[4881]: I0121 11:16:48.727839 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/9ff4a63e-40e5-4133-967e-9ba083f3603b-var-lib\") pod \"ovn-controller-ovs-2rtl8\" (UID: \"9ff4a63e-40e5-4133-967e-9ba083f3603b\") " pod="openstack/ovn-controller-ovs-2rtl8" Jan 21 11:16:48 crc kubenswrapper[4881]: I0121 11:16:48.728053 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/9ff4a63e-40e5-4133-967e-9ba083f3603b-var-run\") pod \"ovn-controller-ovs-2rtl8\" (UID: \"9ff4a63e-40e5-4133-967e-9ba083f3603b\") " pod="openstack/ovn-controller-ovs-2rtl8" Jan 21 11:16:48 crc kubenswrapper[4881]: I0121 11:16:48.728086 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9ff4a63e-40e5-4133-967e-9ba083f3603b-scripts\") pod \"ovn-controller-ovs-2rtl8\" (UID: \"9ff4a63e-40e5-4133-967e-9ba083f3603b\") " pod="openstack/ovn-controller-ovs-2rtl8" Jan 21 11:16:48 crc kubenswrapper[4881]: I0121 11:16:48.728135 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/256e0b4a-baac-415c-94c6-09f08fa09c7c-scripts\") pod \"ovn-controller-s642n\" (UID: \"256e0b4a-baac-415c-94c6-09f08fa09c7c\") " pod="openstack/ovn-controller-s642n" Jan 21 
11:16:48 crc kubenswrapper[4881]: I0121 11:16:48.728160 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/256e0b4a-baac-415c-94c6-09f08fa09c7c-combined-ca-bundle\") pod \"ovn-controller-s642n\" (UID: \"256e0b4a-baac-415c-94c6-09f08fa09c7c\") " pod="openstack/ovn-controller-s642n" Jan 21 11:16:48 crc kubenswrapper[4881]: I0121 11:16:48.729154 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/256e0b4a-baac-415c-94c6-09f08fa09c7c-ovn-controller-tls-certs\") pod \"ovn-controller-s642n\" (UID: \"256e0b4a-baac-415c-94c6-09f08fa09c7c\") " pod="openstack/ovn-controller-s642n" Jan 21 11:16:48 crc kubenswrapper[4881]: I0121 11:16:48.729317 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/9ff4a63e-40e5-4133-967e-9ba083f3603b-etc-ovs\") pod \"ovn-controller-ovs-2rtl8\" (UID: \"9ff4a63e-40e5-4133-967e-9ba083f3603b\") " pod="openstack/ovn-controller-ovs-2rtl8" Jan 21 11:16:48 crc kubenswrapper[4881]: I0121 11:16:48.729450 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bnx8p\" (UniqueName: \"kubernetes.io/projected/9ff4a63e-40e5-4133-967e-9ba083f3603b-kube-api-access-bnx8p\") pod \"ovn-controller-ovs-2rtl8\" (UID: \"9ff4a63e-40e5-4133-967e-9ba083f3603b\") " pod="openstack/ovn-controller-ovs-2rtl8" Jan 21 11:16:48 crc kubenswrapper[4881]: I0121 11:16:48.729570 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/256e0b4a-baac-415c-94c6-09f08fa09c7c-var-run\") pod \"ovn-controller-s642n\" (UID: \"256e0b4a-baac-415c-94c6-09f08fa09c7c\") " pod="openstack/ovn-controller-s642n" Jan 21 11:16:48 crc kubenswrapper[4881]: I0121 11:16:48.729657 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/256e0b4a-baac-415c-94c6-09f08fa09c7c-var-run-ovn\") pod \"ovn-controller-s642n\" (UID: \"256e0b4a-baac-415c-94c6-09f08fa09c7c\") " pod="openstack/ovn-controller-s642n" Jan 21 11:16:48 crc kubenswrapper[4881]: I0121 11:16:48.729696 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/9ff4a63e-40e5-4133-967e-9ba083f3603b-var-log\") pod \"ovn-controller-ovs-2rtl8\" (UID: \"9ff4a63e-40e5-4133-967e-9ba083f3603b\") " pod="openstack/ovn-controller-ovs-2rtl8" Jan 21 11:16:48 crc kubenswrapper[4881]: I0121 11:16:48.729740 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/256e0b4a-baac-415c-94c6-09f08fa09c7c-var-log-ovn\") pod \"ovn-controller-s642n\" (UID: \"256e0b4a-baac-415c-94c6-09f08fa09c7c\") " pod="openstack/ovn-controller-s642n" Jan 21 11:16:48 crc kubenswrapper[4881]: I0121 11:16:48.729900 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kcpzd\" (UniqueName: \"kubernetes.io/projected/256e0b4a-baac-415c-94c6-09f08fa09c7c-kube-api-access-kcpzd\") pod \"ovn-controller-s642n\" (UID: \"256e0b4a-baac-415c-94c6-09f08fa09c7c\") " pod="openstack/ovn-controller-s642n" Jan 21 11:16:48 crc kubenswrapper[4881]: I0121 11:16:48.731439 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: 
\"kubernetes.io/host-path/9ff4a63e-40e5-4133-967e-9ba083f3603b-var-lib\") pod \"ovn-controller-ovs-2rtl8\" (UID: \"9ff4a63e-40e5-4133-967e-9ba083f3603b\") " pod="openstack/ovn-controller-ovs-2rtl8" Jan 21 11:16:48 crc kubenswrapper[4881]: I0121 11:16:48.884238 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/9ff4a63e-40e5-4133-967e-9ba083f3603b-etc-ovs\") pod \"ovn-controller-ovs-2rtl8\" (UID: \"9ff4a63e-40e5-4133-967e-9ba083f3603b\") " pod="openstack/ovn-controller-ovs-2rtl8" Jan 21 11:16:48 crc kubenswrapper[4881]: I0121 11:16:48.886442 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9ff4a63e-40e5-4133-967e-9ba083f3603b-scripts\") pod \"ovn-controller-ovs-2rtl8\" (UID: \"9ff4a63e-40e5-4133-967e-9ba083f3603b\") " pod="openstack/ovn-controller-ovs-2rtl8" Jan 21 11:16:48 crc kubenswrapper[4881]: I0121 11:16:48.886914 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/9ff4a63e-40e5-4133-967e-9ba083f3603b-var-log\") pod \"ovn-controller-ovs-2rtl8\" (UID: \"9ff4a63e-40e5-4133-967e-9ba083f3603b\") " pod="openstack/ovn-controller-ovs-2rtl8" Jan 21 11:16:48 crc kubenswrapper[4881]: I0121 11:16:48.890044 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/256e0b4a-baac-415c-94c6-09f08fa09c7c-var-log-ovn\") pod \"ovn-controller-s642n\" (UID: \"256e0b4a-baac-415c-94c6-09f08fa09c7c\") " pod="openstack/ovn-controller-s642n" Jan 21 11:16:48 crc kubenswrapper[4881]: I0121 11:16:48.891337 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/256e0b4a-baac-415c-94c6-09f08fa09c7c-scripts\") pod \"ovn-controller-s642n\" (UID: \"256e0b4a-baac-415c-94c6-09f08fa09c7c\") " pod="openstack/ovn-controller-s642n" Jan 21 11:16:48 crc kubenswrapper[4881]: I0121 11:16:48.901371 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/256e0b4a-baac-415c-94c6-09f08fa09c7c-ovn-controller-tls-certs\") pod \"ovn-controller-s642n\" (UID: \"256e0b4a-baac-415c-94c6-09f08fa09c7c\") " pod="openstack/ovn-controller-s642n" Jan 21 11:16:48 crc kubenswrapper[4881]: I0121 11:16:48.909861 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 21 11:16:48 crc kubenswrapper[4881]: I0121 11:16:48.913521 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 21 11:16:48 crc kubenswrapper[4881]: I0121 11:16:48.925283 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kcpzd\" (UniqueName: \"kubernetes.io/projected/256e0b4a-baac-415c-94c6-09f08fa09c7c-kube-api-access-kcpzd\") pod \"ovn-controller-s642n\" (UID: \"256e0b4a-baac-415c-94c6-09f08fa09c7c\") " pod="openstack/ovn-controller-s642n" Jan 21 11:16:48 crc kubenswrapper[4881]: I0121 11:16:48.925500 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bnx8p\" (UniqueName: \"kubernetes.io/projected/9ff4a63e-40e5-4133-967e-9ba083f3603b-kube-api-access-bnx8p\") pod \"ovn-controller-ovs-2rtl8\" (UID: \"9ff4a63e-40e5-4133-967e-9ba083f3603b\") " pod="openstack/ovn-controller-ovs-2rtl8" Jan 21 11:16:48 crc kubenswrapper[4881]: I0121 11:16:48.925586 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Jan 21 11:16:48 crc kubenswrapper[4881]: I0121 11:16:48.925801 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Jan 21 11:16:48 crc kubenswrapper[4881]: I0121 11:16:48.926078 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Jan 21 11:16:48 crc kubenswrapper[4881]: I0121 11:16:48.925836 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-4pbz9" Jan 21 11:16:48 crc kubenswrapper[4881]: I0121 11:16:48.925872 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Jan 21 11:16:48 crc kubenswrapper[4881]: I0121 11:16:48.952395 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/256e0b4a-baac-415c-94c6-09f08fa09c7c-combined-ca-bundle\") pod \"ovn-controller-s642n\" (UID: \"256e0b4a-baac-415c-94c6-09f08fa09c7c\") " pod="openstack/ovn-controller-s642n" Jan 21 11:16:48 crc kubenswrapper[4881]: I0121 11:16:48.956711 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 21 11:16:49 crc kubenswrapper[4881]: I0121 11:16:49.053203 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/9ff4a63e-40e5-4133-967e-9ba083f3603b-var-run\") pod \"ovn-controller-ovs-2rtl8\" (UID: \"9ff4a63e-40e5-4133-967e-9ba083f3603b\") " pod="openstack/ovn-controller-ovs-2rtl8" Jan 21 11:16:49 crc kubenswrapper[4881]: I0121 11:16:49.053232 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/256e0b4a-baac-415c-94c6-09f08fa09c7c-var-run\") pod \"ovn-controller-s642n\" (UID: \"256e0b4a-baac-415c-94c6-09f08fa09c7c\") " pod="openstack/ovn-controller-s642n" Jan 21 11:16:49 crc kubenswrapper[4881]: I0121 11:16:49.053307 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/256e0b4a-baac-415c-94c6-09f08fa09c7c-var-run-ovn\") pod \"ovn-controller-s642n\" (UID: \"256e0b4a-baac-415c-94c6-09f08fa09c7c\") " pod="openstack/ovn-controller-s642n" Jan 21 11:16:49 crc kubenswrapper[4881]: I0121 11:16:49.125502 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/24136f67-aca3-4e40-b3c2-b36b7623475f-ovsdb-rundir\") pod 
\"ovsdbserver-nb-0\" (UID: \"24136f67-aca3-4e40-b3c2-b36b7623475f\") " pod="openstack/ovsdbserver-nb-0" Jan 21 11:16:49 crc kubenswrapper[4881]: I0121 11:16:49.125562 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8ldh7\" (UniqueName: \"kubernetes.io/projected/24136f67-aca3-4e40-b3c2-b36b7623475f-kube-api-access-8ldh7\") pod \"ovsdbserver-nb-0\" (UID: \"24136f67-aca3-4e40-b3c2-b36b7623475f\") " pod="openstack/ovsdbserver-nb-0" Jan 21 11:16:49 crc kubenswrapper[4881]: I0121 11:16:49.125643 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/24136f67-aca3-4e40-b3c2-b36b7623475f-config\") pod \"ovsdbserver-nb-0\" (UID: \"24136f67-aca3-4e40-b3c2-b36b7623475f\") " pod="openstack/ovsdbserver-nb-0" Jan 21 11:16:49 crc kubenswrapper[4881]: I0121 11:16:49.125676 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/24136f67-aca3-4e40-b3c2-b36b7623475f-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"24136f67-aca3-4e40-b3c2-b36b7623475f\") " pod="openstack/ovsdbserver-nb-0" Jan 21 11:16:49 crc kubenswrapper[4881]: I0121 11:16:49.125771 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/24136f67-aca3-4e40-b3c2-b36b7623475f-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"24136f67-aca3-4e40-b3c2-b36b7623475f\") " pod="openstack/ovsdbserver-nb-0" Jan 21 11:16:49 crc kubenswrapper[4881]: I0121 11:16:49.125916 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/24136f67-aca3-4e40-b3c2-b36b7623475f-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"24136f67-aca3-4e40-b3c2-b36b7623475f\") " pod="openstack/ovsdbserver-nb-0" Jan 21 11:16:49 crc kubenswrapper[4881]: I0121 11:16:49.126258 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"ovsdbserver-nb-0\" (UID: \"24136f67-aca3-4e40-b3c2-b36b7623475f\") " pod="openstack/ovsdbserver-nb-0" Jan 21 11:16:49 crc kubenswrapper[4881]: I0121 11:16:49.126354 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24136f67-aca3-4e40-b3c2-b36b7623475f-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"24136f67-aca3-4e40-b3c2-b36b7623475f\") " pod="openstack/ovsdbserver-nb-0" Jan 21 11:16:49 crc kubenswrapper[4881]: I0121 11:16:49.228628 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/24136f67-aca3-4e40-b3c2-b36b7623475f-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"24136f67-aca3-4e40-b3c2-b36b7623475f\") " pod="openstack/ovsdbserver-nb-0" Jan 21 11:16:49 crc kubenswrapper[4881]: I0121 11:16:49.228763 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/24136f67-aca3-4e40-b3c2-b36b7623475f-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"24136f67-aca3-4e40-b3c2-b36b7623475f\") " pod="openstack/ovsdbserver-nb-0" Jan 21 11:16:49 crc 
kubenswrapper[4881]: I0121 11:16:49.228836 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/24136f67-aca3-4e40-b3c2-b36b7623475f-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"24136f67-aca3-4e40-b3c2-b36b7623475f\") " pod="openstack/ovsdbserver-nb-0" Jan 21 11:16:49 crc kubenswrapper[4881]: I0121 11:16:49.228872 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"ovsdbserver-nb-0\" (UID: \"24136f67-aca3-4e40-b3c2-b36b7623475f\") " pod="openstack/ovsdbserver-nb-0" Jan 21 11:16:49 crc kubenswrapper[4881]: I0121 11:16:49.228900 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24136f67-aca3-4e40-b3c2-b36b7623475f-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"24136f67-aca3-4e40-b3c2-b36b7623475f\") " pod="openstack/ovsdbserver-nb-0" Jan 21 11:16:49 crc kubenswrapper[4881]: I0121 11:16:49.229051 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/24136f67-aca3-4e40-b3c2-b36b7623475f-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"24136f67-aca3-4e40-b3c2-b36b7623475f\") " pod="openstack/ovsdbserver-nb-0" Jan 21 11:16:49 crc kubenswrapper[4881]: I0121 11:16:49.229094 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8ldh7\" (UniqueName: \"kubernetes.io/projected/24136f67-aca3-4e40-b3c2-b36b7623475f-kube-api-access-8ldh7\") pod \"ovsdbserver-nb-0\" (UID: \"24136f67-aca3-4e40-b3c2-b36b7623475f\") " pod="openstack/ovsdbserver-nb-0" Jan 21 11:16:49 crc kubenswrapper[4881]: I0121 11:16:49.229146 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/24136f67-aca3-4e40-b3c2-b36b7623475f-config\") pod \"ovsdbserver-nb-0\" (UID: \"24136f67-aca3-4e40-b3c2-b36b7623475f\") " pod="openstack/ovsdbserver-nb-0" Jan 21 11:16:49 crc kubenswrapper[4881]: I0121 11:16:49.230532 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/24136f67-aca3-4e40-b3c2-b36b7623475f-config\") pod \"ovsdbserver-nb-0\" (UID: \"24136f67-aca3-4e40-b3c2-b36b7623475f\") " pod="openstack/ovsdbserver-nb-0" Jan 21 11:16:49 crc kubenswrapper[4881]: I0121 11:16:49.231799 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/24136f67-aca3-4e40-b3c2-b36b7623475f-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"24136f67-aca3-4e40-b3c2-b36b7623475f\") " pod="openstack/ovsdbserver-nb-0" Jan 21 11:16:49 crc kubenswrapper[4881]: I0121 11:16:49.237956 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/24136f67-aca3-4e40-b3c2-b36b7623475f-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"24136f67-aca3-4e40-b3c2-b36b7623475f\") " pod="openstack/ovsdbserver-nb-0" Jan 21 11:16:49 crc kubenswrapper[4881]: I0121 11:16:49.238447 4881 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"ovsdbserver-nb-0\" (UID: \"24136f67-aca3-4e40-b3c2-b36b7623475f\") device mount path \"/mnt/openstack/pv06\"" 
pod="openstack/ovsdbserver-nb-0" Jan 21 11:16:49 crc kubenswrapper[4881]: I0121 11:16:49.264128 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-s642n" Jan 21 11:16:49 crc kubenswrapper[4881]: I0121 11:16:49.265078 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/24136f67-aca3-4e40-b3c2-b36b7623475f-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"24136f67-aca3-4e40-b3c2-b36b7623475f\") " pod="openstack/ovsdbserver-nb-0" Jan 21 11:16:49 crc kubenswrapper[4881]: I0121 11:16:49.265550 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24136f67-aca3-4e40-b3c2-b36b7623475f-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"24136f67-aca3-4e40-b3c2-b36b7623475f\") " pod="openstack/ovsdbserver-nb-0" Jan 21 11:16:49 crc kubenswrapper[4881]: I0121 11:16:49.273183 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"75733567-f2a6-4331-bdea-147126213437","Type":"ContainerStarted","Data":"648f9884533415a5c2309f4dd9efc2ccd6cbaeb098dca1475cdb0221de466d52"} Jan 21 11:16:49 crc kubenswrapper[4881]: I0121 11:16:49.275887 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8ldh7\" (UniqueName: \"kubernetes.io/projected/24136f67-aca3-4e40-b3c2-b36b7623475f-kube-api-access-8ldh7\") pod \"ovsdbserver-nb-0\" (UID: \"24136f67-aca3-4e40-b3c2-b36b7623475f\") " pod="openstack/ovsdbserver-nb-0" Jan 21 11:16:49 crc kubenswrapper[4881]: I0121 11:16:49.279338 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"ovsdbserver-nb-0\" (UID: \"24136f67-aca3-4e40-b3c2-b36b7623475f\") " pod="openstack/ovsdbserver-nb-0" Jan 21 11:16:49 crc kubenswrapper[4881]: I0121 11:16:49.280013 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/24136f67-aca3-4e40-b3c2-b36b7623475f-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"24136f67-aca3-4e40-b3c2-b36b7623475f\") " pod="openstack/ovsdbserver-nb-0" Jan 21 11:16:49 crc kubenswrapper[4881]: I0121 11:16:49.299967 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-2rtl8" Jan 21 11:16:49 crc kubenswrapper[4881]: I0121 11:16:49.375711 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 21 11:16:50 crc kubenswrapper[4881]: I0121 11:16:50.785919 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-s642n"] Jan 21 11:16:50 crc kubenswrapper[4881]: W0121 11:16:50.804014 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod256e0b4a_baac_415c_94c6_09f08fa09c7c.slice/crio-792637da0f41910247ec89409c055d88e952498fc8631144ebd9d17e5ca5afee WatchSource:0}: Error finding container 792637da0f41910247ec89409c055d88e952498fc8631144ebd9d17e5ca5afee: Status 404 returned error can't find the container with id 792637da0f41910247ec89409c055d88e952498fc8631144ebd9d17e5ca5afee Jan 21 11:16:51 crc kubenswrapper[4881]: I0121 11:16:51.011864 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 21 11:16:51 crc kubenswrapper[4881]: W0121 11:16:51.054629 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod24136f67_aca3_4e40_b3c2_b36b7623475f.slice/crio-0c0a112b00c037b00e1b246da95812e106e2db48db41ce77888ffd489bdc7c92 WatchSource:0}: Error finding container 0c0a112b00c037b00e1b246da95812e106e2db48db41ce77888ffd489bdc7c92: Status 404 returned error can't find the container with id 0c0a112b00c037b00e1b246da95812e106e2db48db41ce77888ffd489bdc7c92 Jan 21 11:16:51 crc kubenswrapper[4881]: I0121 11:16:51.263248 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-2rtl8"] Jan 21 11:16:51 crc kubenswrapper[4881]: I0121 11:16:51.371522 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"24136f67-aca3-4e40-b3c2-b36b7623475f","Type":"ContainerStarted","Data":"0c0a112b00c037b00e1b246da95812e106e2db48db41ce77888ffd489bdc7c92"} Jan 21 11:16:51 crc kubenswrapper[4881]: I0121 11:16:51.376156 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-s642n" event={"ID":"256e0b4a-baac-415c-94c6-09f08fa09c7c","Type":"ContainerStarted","Data":"792637da0f41910247ec89409c055d88e952498fc8631144ebd9d17e5ca5afee"} Jan 21 11:16:51 crc kubenswrapper[4881]: I0121 11:16:51.382367 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-2rtl8" event={"ID":"9ff4a63e-40e5-4133-967e-9ba083f3603b","Type":"ContainerStarted","Data":"d1dcf19190c032a44507986d2f5617f115b9bb86905eadaa8c6882cc529a7d3c"} Jan 21 11:16:51 crc kubenswrapper[4881]: I0121 11:16:51.652255 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 21 11:16:51 crc kubenswrapper[4881]: I0121 11:16:51.656687 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 21 11:16:51 crc kubenswrapper[4881]: I0121 11:16:51.664191 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Jan 21 11:16:51 crc kubenswrapper[4881]: I0121 11:16:51.664860 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Jan 21 11:16:51 crc kubenswrapper[4881]: I0121 11:16:51.665413 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-zdddp" Jan 21 11:16:51 crc kubenswrapper[4881]: I0121 11:16:51.667580 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Jan 21 11:16:51 crc kubenswrapper[4881]: I0121 11:16:51.679072 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 21 11:16:51 crc kubenswrapper[4881]: I0121 11:16:51.738590 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-5dzhr"] Jan 21 11:16:51 crc kubenswrapper[4881]: I0121 11:16:51.740087 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-5dzhr" Jan 21 11:16:51 crc kubenswrapper[4881]: I0121 11:16:51.749806 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Jan 21 11:16:51 crc kubenswrapper[4881]: I0121 11:16:51.759257 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-5dzhr"] Jan 21 11:16:51 crc kubenswrapper[4881]: I0121 11:16:51.898214 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c3884c64-25d6-42b5-a309-7eafa170719e-config\") pod \"ovsdbserver-sb-0\" (UID: \"c3884c64-25d6-42b5-a309-7eafa170719e\") " pod="openstack/ovsdbserver-sb-0" Jan 21 11:16:51 crc kubenswrapper[4881]: I0121 11:16:51.898374 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9bd229b-588d-477e-8501-cd976b539e3a-combined-ca-bundle\") pod \"ovn-controller-metrics-5dzhr\" (UID: \"b9bd229b-588d-477e-8501-cd976b539e3a\") " pod="openstack/ovn-controller-metrics-5dzhr" Jan 21 11:16:51 crc kubenswrapper[4881]: I0121 11:16:51.898406 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/b9bd229b-588d-477e-8501-cd976b539e3a-ovs-rundir\") pod \"ovn-controller-metrics-5dzhr\" (UID: \"b9bd229b-588d-477e-8501-cd976b539e3a\") " pod="openstack/ovn-controller-metrics-5dzhr" Jan 21 11:16:51 crc kubenswrapper[4881]: I0121 11:16:51.898510 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3884c64-25d6-42b5-a309-7eafa170719e-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"c3884c64-25d6-42b5-a309-7eafa170719e\") " pod="openstack/ovsdbserver-sb-0" Jan 21 11:16:51 crc kubenswrapper[4881]: I0121 11:16:51.898535 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7tlx9\" (UniqueName: \"kubernetes.io/projected/b9bd229b-588d-477e-8501-cd976b539e3a-kube-api-access-7tlx9\") pod \"ovn-controller-metrics-5dzhr\" (UID: \"b9bd229b-588d-477e-8501-cd976b539e3a\") " 
pod="openstack/ovn-controller-metrics-5dzhr" Jan 21 11:16:51 crc kubenswrapper[4881]: I0121 11:16:51.898673 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/b9bd229b-588d-477e-8501-cd976b539e3a-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-5dzhr\" (UID: \"b9bd229b-588d-477e-8501-cd976b539e3a\") " pod="openstack/ovn-controller-metrics-5dzhr" Jan 21 11:16:51 crc kubenswrapper[4881]: I0121 11:16:51.898721 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vgk6r\" (UniqueName: \"kubernetes.io/projected/c3884c64-25d6-42b5-a309-7eafa170719e-kube-api-access-vgk6r\") pod \"ovsdbserver-sb-0\" (UID: \"c3884c64-25d6-42b5-a309-7eafa170719e\") " pod="openstack/ovsdbserver-sb-0" Jan 21 11:16:51 crc kubenswrapper[4881]: I0121 11:16:51.898801 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"ovsdbserver-sb-0\" (UID: \"c3884c64-25d6-42b5-a309-7eafa170719e\") " pod="openstack/ovsdbserver-sb-0" Jan 21 11:16:51 crc kubenswrapper[4881]: I0121 11:16:51.899158 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c3884c64-25d6-42b5-a309-7eafa170719e-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"c3884c64-25d6-42b5-a309-7eafa170719e\") " pod="openstack/ovsdbserver-sb-0" Jan 21 11:16:51 crc kubenswrapper[4881]: I0121 11:16:51.899254 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/c3884c64-25d6-42b5-a309-7eafa170719e-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"c3884c64-25d6-42b5-a309-7eafa170719e\") " pod="openstack/ovsdbserver-sb-0" Jan 21 11:16:51 crc kubenswrapper[4881]: I0121 11:16:51.899312 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/b9bd229b-588d-477e-8501-cd976b539e3a-ovn-rundir\") pod \"ovn-controller-metrics-5dzhr\" (UID: \"b9bd229b-588d-477e-8501-cd976b539e3a\") " pod="openstack/ovn-controller-metrics-5dzhr" Jan 21 11:16:51 crc kubenswrapper[4881]: I0121 11:16:51.899355 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c3884c64-25d6-42b5-a309-7eafa170719e-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"c3884c64-25d6-42b5-a309-7eafa170719e\") " pod="openstack/ovsdbserver-sb-0" Jan 21 11:16:51 crc kubenswrapper[4881]: I0121 11:16:51.899408 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b9bd229b-588d-477e-8501-cd976b539e3a-config\") pod \"ovn-controller-metrics-5dzhr\" (UID: \"b9bd229b-588d-477e-8501-cd976b539e3a\") " pod="openstack/ovn-controller-metrics-5dzhr" Jan 21 11:16:51 crc kubenswrapper[4881]: I0121 11:16:51.899485 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c3884c64-25d6-42b5-a309-7eafa170719e-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"c3884c64-25d6-42b5-a309-7eafa170719e\") " 
pod="openstack/ovsdbserver-sb-0" Jan 21 11:16:52 crc kubenswrapper[4881]: I0121 11:16:52.003888 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3884c64-25d6-42b5-a309-7eafa170719e-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"c3884c64-25d6-42b5-a309-7eafa170719e\") " pod="openstack/ovsdbserver-sb-0" Jan 21 11:16:52 crc kubenswrapper[4881]: I0121 11:16:52.003950 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7tlx9\" (UniqueName: \"kubernetes.io/projected/b9bd229b-588d-477e-8501-cd976b539e3a-kube-api-access-7tlx9\") pod \"ovn-controller-metrics-5dzhr\" (UID: \"b9bd229b-588d-477e-8501-cd976b539e3a\") " pod="openstack/ovn-controller-metrics-5dzhr" Jan 21 11:16:52 crc kubenswrapper[4881]: I0121 11:16:52.004023 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/b9bd229b-588d-477e-8501-cd976b539e3a-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-5dzhr\" (UID: \"b9bd229b-588d-477e-8501-cd976b539e3a\") " pod="openstack/ovn-controller-metrics-5dzhr" Jan 21 11:16:52 crc kubenswrapper[4881]: I0121 11:16:52.004057 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vgk6r\" (UniqueName: \"kubernetes.io/projected/c3884c64-25d6-42b5-a309-7eafa170719e-kube-api-access-vgk6r\") pod \"ovsdbserver-sb-0\" (UID: \"c3884c64-25d6-42b5-a309-7eafa170719e\") " pod="openstack/ovsdbserver-sb-0" Jan 21 11:16:52 crc kubenswrapper[4881]: I0121 11:16:52.004102 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"ovsdbserver-sb-0\" (UID: \"c3884c64-25d6-42b5-a309-7eafa170719e\") " pod="openstack/ovsdbserver-sb-0" Jan 21 11:16:52 crc kubenswrapper[4881]: I0121 11:16:52.004143 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c3884c64-25d6-42b5-a309-7eafa170719e-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"c3884c64-25d6-42b5-a309-7eafa170719e\") " pod="openstack/ovsdbserver-sb-0" Jan 21 11:16:52 crc kubenswrapper[4881]: I0121 11:16:52.004180 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/c3884c64-25d6-42b5-a309-7eafa170719e-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"c3884c64-25d6-42b5-a309-7eafa170719e\") " pod="openstack/ovsdbserver-sb-0" Jan 21 11:16:52 crc kubenswrapper[4881]: I0121 11:16:52.004254 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/b9bd229b-588d-477e-8501-cd976b539e3a-ovn-rundir\") pod \"ovn-controller-metrics-5dzhr\" (UID: \"b9bd229b-588d-477e-8501-cd976b539e3a\") " pod="openstack/ovn-controller-metrics-5dzhr" Jan 21 11:16:52 crc kubenswrapper[4881]: I0121 11:16:52.004285 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c3884c64-25d6-42b5-a309-7eafa170719e-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"c3884c64-25d6-42b5-a309-7eafa170719e\") " pod="openstack/ovsdbserver-sb-0" Jan 21 11:16:52 crc kubenswrapper[4881]: I0121 11:16:52.004318 4881 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b9bd229b-588d-477e-8501-cd976b539e3a-config\") pod \"ovn-controller-metrics-5dzhr\" (UID: \"b9bd229b-588d-477e-8501-cd976b539e3a\") " pod="openstack/ovn-controller-metrics-5dzhr" Jan 21 11:16:52 crc kubenswrapper[4881]: I0121 11:16:52.004376 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c3884c64-25d6-42b5-a309-7eafa170719e-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"c3884c64-25d6-42b5-a309-7eafa170719e\") " pod="openstack/ovsdbserver-sb-0" Jan 21 11:16:52 crc kubenswrapper[4881]: I0121 11:16:52.004417 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c3884c64-25d6-42b5-a309-7eafa170719e-config\") pod \"ovsdbserver-sb-0\" (UID: \"c3884c64-25d6-42b5-a309-7eafa170719e\") " pod="openstack/ovsdbserver-sb-0" Jan 21 11:16:52 crc kubenswrapper[4881]: I0121 11:16:52.004460 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9bd229b-588d-477e-8501-cd976b539e3a-combined-ca-bundle\") pod \"ovn-controller-metrics-5dzhr\" (UID: \"b9bd229b-588d-477e-8501-cd976b539e3a\") " pod="openstack/ovn-controller-metrics-5dzhr" Jan 21 11:16:52 crc kubenswrapper[4881]: I0121 11:16:52.004484 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/b9bd229b-588d-477e-8501-cd976b539e3a-ovs-rundir\") pod \"ovn-controller-metrics-5dzhr\" (UID: \"b9bd229b-588d-477e-8501-cd976b539e3a\") " pod="openstack/ovn-controller-metrics-5dzhr" Jan 21 11:16:52 crc kubenswrapper[4881]: I0121 11:16:52.004957 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/b9bd229b-588d-477e-8501-cd976b539e3a-ovs-rundir\") pod \"ovn-controller-metrics-5dzhr\" (UID: \"b9bd229b-588d-477e-8501-cd976b539e3a\") " pod="openstack/ovn-controller-metrics-5dzhr" Jan 21 11:16:52 crc kubenswrapper[4881]: I0121 11:16:52.006349 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/c3884c64-25d6-42b5-a309-7eafa170719e-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"c3884c64-25d6-42b5-a309-7eafa170719e\") " pod="openstack/ovsdbserver-sb-0" Jan 21 11:16:52 crc kubenswrapper[4881]: I0121 11:16:52.006901 4881 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"ovsdbserver-sb-0\" (UID: \"c3884c64-25d6-42b5-a309-7eafa170719e\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/ovsdbserver-sb-0" Jan 21 11:16:52 crc kubenswrapper[4881]: I0121 11:16:52.007139 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6557d744f-gt5cx"] Jan 21 11:16:52 crc kubenswrapper[4881]: I0121 11:16:52.007366 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b9bd229b-588d-477e-8501-cd976b539e3a-config\") pod \"ovn-controller-metrics-5dzhr\" (UID: \"b9bd229b-588d-477e-8501-cd976b539e3a\") " pod="openstack/ovn-controller-metrics-5dzhr" Jan 21 11:16:52 crc kubenswrapper[4881]: I0121 11:16:52.009883 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: 
\"kubernetes.io/host-path/b9bd229b-588d-477e-8501-cd976b539e3a-ovn-rundir\") pod \"ovn-controller-metrics-5dzhr\" (UID: \"b9bd229b-588d-477e-8501-cd976b539e3a\") " pod="openstack/ovn-controller-metrics-5dzhr" Jan 21 11:16:52 crc kubenswrapper[4881]: I0121 11:16:52.010823 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c3884c64-25d6-42b5-a309-7eafa170719e-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"c3884c64-25d6-42b5-a309-7eafa170719e\") " pod="openstack/ovsdbserver-sb-0" Jan 21 11:16:52 crc kubenswrapper[4881]: I0121 11:16:52.012046 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c3884c64-25d6-42b5-a309-7eafa170719e-config\") pod \"ovsdbserver-sb-0\" (UID: \"c3884c64-25d6-42b5-a309-7eafa170719e\") " pod="openstack/ovsdbserver-sb-0" Jan 21 11:16:52 crc kubenswrapper[4881]: I0121 11:16:52.029214 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/b9bd229b-588d-477e-8501-cd976b539e3a-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-5dzhr\" (UID: \"b9bd229b-588d-477e-8501-cd976b539e3a\") " pod="openstack/ovn-controller-metrics-5dzhr" Jan 21 11:16:52 crc kubenswrapper[4881]: I0121 11:16:52.029933 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c3884c64-25d6-42b5-a309-7eafa170719e-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"c3884c64-25d6-42b5-a309-7eafa170719e\") " pod="openstack/ovsdbserver-sb-0" Jan 21 11:16:52 crc kubenswrapper[4881]: I0121 11:16:52.031095 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c3884c64-25d6-42b5-a309-7eafa170719e-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"c3884c64-25d6-42b5-a309-7eafa170719e\") " pod="openstack/ovsdbserver-sb-0" Jan 21 11:16:52 crc kubenswrapper[4881]: I0121 11:16:52.031493 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9bd229b-588d-477e-8501-cd976b539e3a-combined-ca-bundle\") pod \"ovn-controller-metrics-5dzhr\" (UID: \"b9bd229b-588d-477e-8501-cd976b539e3a\") " pod="openstack/ovn-controller-metrics-5dzhr" Jan 21 11:16:52 crc kubenswrapper[4881]: I0121 11:16:52.060059 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7tlx9\" (UniqueName: \"kubernetes.io/projected/b9bd229b-588d-477e-8501-cd976b539e3a-kube-api-access-7tlx9\") pod \"ovn-controller-metrics-5dzhr\" (UID: \"b9bd229b-588d-477e-8501-cd976b539e3a\") " pod="openstack/ovn-controller-metrics-5dzhr" Jan 21 11:16:52 crc kubenswrapper[4881]: I0121 11:16:52.093316 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vgk6r\" (UniqueName: \"kubernetes.io/projected/c3884c64-25d6-42b5-a309-7eafa170719e-kube-api-access-vgk6r\") pod \"ovsdbserver-sb-0\" (UID: \"c3884c64-25d6-42b5-a309-7eafa170719e\") " pod="openstack/ovsdbserver-sb-0" Jan 21 11:16:52 crc kubenswrapper[4881]: I0121 11:16:52.101365 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-fd8d879fc-flqh9"] Jan 21 11:16:52 crc kubenswrapper[4881]: I0121 11:16:52.103093 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-fd8d879fc-flqh9" Jan 21 11:16:52 crc kubenswrapper[4881]: I0121 11:16:52.108396 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Jan 21 11:16:52 crc kubenswrapper[4881]: I0121 11:16:52.113165 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"ovsdbserver-sb-0\" (UID: \"c3884c64-25d6-42b5-a309-7eafa170719e\") " pod="openstack/ovsdbserver-sb-0" Jan 21 11:16:52 crc kubenswrapper[4881]: I0121 11:16:52.168681 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-fd8d879fc-flqh9"] Jan 21 11:16:52 crc kubenswrapper[4881]: I0121 11:16:52.263077 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-5dzhr" Jan 21 11:16:52 crc kubenswrapper[4881]: I0121 11:16:52.263460 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3884c64-25d6-42b5-a309-7eafa170719e-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"c3884c64-25d6-42b5-a309-7eafa170719e\") " pod="openstack/ovsdbserver-sb-0" Jan 21 11:16:52 crc kubenswrapper[4881]: I0121 11:16:52.332618 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 21 11:16:52 crc kubenswrapper[4881]: I0121 11:16:52.394188 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/42132c17-6a2d-48d1-a636-3eae7558d55c-config\") pod \"dnsmasq-dns-fd8d879fc-flqh9\" (UID: \"42132c17-6a2d-48d1-a636-3eae7558d55c\") " pod="openstack/dnsmasq-dns-fd8d879fc-flqh9" Jan 21 11:16:52 crc kubenswrapper[4881]: I0121 11:16:52.394515 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/42132c17-6a2d-48d1-a636-3eae7558d55c-ovsdbserver-nb\") pod \"dnsmasq-dns-fd8d879fc-flqh9\" (UID: \"42132c17-6a2d-48d1-a636-3eae7558d55c\") " pod="openstack/dnsmasq-dns-fd8d879fc-flqh9" Jan 21 11:16:52 crc kubenswrapper[4881]: I0121 11:16:52.394591 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x4lhq\" (UniqueName: \"kubernetes.io/projected/42132c17-6a2d-48d1-a636-3eae7558d55c-kube-api-access-x4lhq\") pod \"dnsmasq-dns-fd8d879fc-flqh9\" (UID: \"42132c17-6a2d-48d1-a636-3eae7558d55c\") " pod="openstack/dnsmasq-dns-fd8d879fc-flqh9" Jan 21 11:16:52 crc kubenswrapper[4881]: I0121 11:16:52.394676 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/42132c17-6a2d-48d1-a636-3eae7558d55c-dns-svc\") pod \"dnsmasq-dns-fd8d879fc-flqh9\" (UID: \"42132c17-6a2d-48d1-a636-3eae7558d55c\") " pod="openstack/dnsmasq-dns-fd8d879fc-flqh9" Jan 21 11:16:52 crc kubenswrapper[4881]: I0121 11:16:52.551697 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/42132c17-6a2d-48d1-a636-3eae7558d55c-config\") pod \"dnsmasq-dns-fd8d879fc-flqh9\" (UID: \"42132c17-6a2d-48d1-a636-3eae7558d55c\") " pod="openstack/dnsmasq-dns-fd8d879fc-flqh9" Jan 21 11:16:52 crc kubenswrapper[4881]: I0121 11:16:52.552284 4881 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/42132c17-6a2d-48d1-a636-3eae7558d55c-ovsdbserver-nb\") pod \"dnsmasq-dns-fd8d879fc-flqh9\" (UID: \"42132c17-6a2d-48d1-a636-3eae7558d55c\") " pod="openstack/dnsmasq-dns-fd8d879fc-flqh9" Jan 21 11:16:52 crc kubenswrapper[4881]: I0121 11:16:52.552341 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x4lhq\" (UniqueName: \"kubernetes.io/projected/42132c17-6a2d-48d1-a636-3eae7558d55c-kube-api-access-x4lhq\") pod \"dnsmasq-dns-fd8d879fc-flqh9\" (UID: \"42132c17-6a2d-48d1-a636-3eae7558d55c\") " pod="openstack/dnsmasq-dns-fd8d879fc-flqh9" Jan 21 11:16:52 crc kubenswrapper[4881]: I0121 11:16:52.552369 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/42132c17-6a2d-48d1-a636-3eae7558d55c-dns-svc\") pod \"dnsmasq-dns-fd8d879fc-flqh9\" (UID: \"42132c17-6a2d-48d1-a636-3eae7558d55c\") " pod="openstack/dnsmasq-dns-fd8d879fc-flqh9" Jan 21 11:16:52 crc kubenswrapper[4881]: I0121 11:16:52.553558 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/42132c17-6a2d-48d1-a636-3eae7558d55c-dns-svc\") pod \"dnsmasq-dns-fd8d879fc-flqh9\" (UID: \"42132c17-6a2d-48d1-a636-3eae7558d55c\") " pod="openstack/dnsmasq-dns-fd8d879fc-flqh9" Jan 21 11:16:52 crc kubenswrapper[4881]: I0121 11:16:52.558902 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/42132c17-6a2d-48d1-a636-3eae7558d55c-config\") pod \"dnsmasq-dns-fd8d879fc-flqh9\" (UID: \"42132c17-6a2d-48d1-a636-3eae7558d55c\") " pod="openstack/dnsmasq-dns-fd8d879fc-flqh9" Jan 21 11:16:52 crc kubenswrapper[4881]: I0121 11:16:52.559501 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/42132c17-6a2d-48d1-a636-3eae7558d55c-ovsdbserver-nb\") pod \"dnsmasq-dns-fd8d879fc-flqh9\" (UID: \"42132c17-6a2d-48d1-a636-3eae7558d55c\") " pod="openstack/dnsmasq-dns-fd8d879fc-flqh9" Jan 21 11:16:52 crc kubenswrapper[4881]: I0121 11:16:52.581975 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x4lhq\" (UniqueName: \"kubernetes.io/projected/42132c17-6a2d-48d1-a636-3eae7558d55c-kube-api-access-x4lhq\") pod \"dnsmasq-dns-fd8d879fc-flqh9\" (UID: \"42132c17-6a2d-48d1-a636-3eae7558d55c\") " pod="openstack/dnsmasq-dns-fd8d879fc-flqh9" Jan 21 11:16:52 crc kubenswrapper[4881]: I0121 11:16:52.607252 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-fd8d879fc-flqh9" Jan 21 11:16:55 crc kubenswrapper[4881]: I0121 11:16:55.114764 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 21 11:16:55 crc kubenswrapper[4881]: I0121 11:16:55.604777 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-5dzhr"] Jan 21 11:16:58 crc kubenswrapper[4881]: W0121 11:16:58.378670 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc3884c64_25d6_42b5_a309_7eafa170719e.slice/crio-e70f087787468da8f67f380f8c1a171bd117d7c55ff0c085df1f8c6975cbc30b WatchSource:0}: Error finding container e70f087787468da8f67f380f8c1a171bd117d7c55ff0c085df1f8c6975cbc30b: Status 404 returned error can't find the container with id e70f087787468da8f67f380f8c1a171bd117d7c55ff0c085df1f8c6975cbc30b Jan 21 11:16:59 crc kubenswrapper[4881]: I0121 11:16:59.274682 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"c3884c64-25d6-42b5-a309-7eafa170719e","Type":"ContainerStarted","Data":"e70f087787468da8f67f380f8c1a171bd117d7c55ff0c085df1f8c6975cbc30b"} Jan 21 11:16:59 crc kubenswrapper[4881]: I0121 11:16:59.392585 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-fd8d879fc-flqh9"] Jan 21 11:16:59 crc kubenswrapper[4881]: I0121 11:16:59.850942 4881 patch_prober.go:28] interesting pod/machine-config-daemon-fb4fr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 11:16:59 crc kubenswrapper[4881]: I0121 11:16:59.851010 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 11:17:15 crc kubenswrapper[4881]: W0121 11:17:15.800395 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb9bd229b_588d_477e_8501_cd976b539e3a.slice/crio-be2a0d6b1ba15f8d0d2b6045bf47f1d37d53e641d993a41a219ad2098fcd13ed WatchSource:0}: Error finding container be2a0d6b1ba15f8d0d2b6045bf47f1d37d53e641d993a41a219ad2098fcd13ed: Status 404 returned error can't find the container with id be2a0d6b1ba15f8d0d2b6045bf47f1d37d53e641d993a41a219ad2098fcd13ed Jan 21 11:17:15 crc kubenswrapper[4881]: W0121 11:17:15.954373 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod42132c17_6a2d_48d1_a636_3eae7558d55c.slice/crio-0a92f372c9af6d73af85424fa74f5bca2b7445ea9a9d2271fd330b7797ed5b0d WatchSource:0}: Error finding container 0a92f372c9af6d73af85424fa74f5bca2b7445ea9a9d2271fd330b7797ed5b0d: Status 404 returned error can't find the container with id 0a92f372c9af6d73af85424fa74f5bca2b7445ea9a9d2271fd330b7797ed5b0d Jan 21 11:17:16 crc kubenswrapper[4881]: I0121 11:17:16.427191 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-5dzhr" event={"ID":"b9bd229b-588d-477e-8501-cd976b539e3a","Type":"ContainerStarted","Data":"be2a0d6b1ba15f8d0d2b6045bf47f1d37d53e641d993a41a219ad2098fcd13ed"} Jan 21 11:17:16 
Jan 21 11:17:16 crc kubenswrapper[4881]: I0121 11:17:16.429031 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fd8d879fc-flqh9" event={"ID":"42132c17-6a2d-48d1-a636-3eae7558d55c","Type":"ContainerStarted","Data":"0a92f372c9af6d73af85424fa74f5bca2b7445ea9a9d2271fd330b7797ed5b0d"}
Jan 21 11:17:16 crc kubenswrapper[4881]: E0121 11:17:16.555265 4881 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = reading blob sha256:b1970a675905d0a72c5f2ca8159fa3f2ae8bf77ab674ec2f465e7e95d0e8167b: Get \"http://38.102.83.182:5001/v2/podified-master-centos10/openstack-rabbitmq/blobs/sha256:b1970a675905d0a72c5f2ca8159fa3f2ae8bf77ab674ec2f465e7e95d0e8167b\": context canceled" image="38.102.83.182:5001/podified-master-centos10/openstack-rabbitmq:watcher_latest"
Jan 21 11:17:16 crc kubenswrapper[4881]: E0121 11:17:16.555336 4881 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = reading blob sha256:b1970a675905d0a72c5f2ca8159fa3f2ae8bf77ab674ec2f465e7e95d0e8167b: Get \"http://38.102.83.182:5001/v2/podified-master-centos10/openstack-rabbitmq/blobs/sha256:b1970a675905d0a72c5f2ca8159fa3f2ae8bf77ab674ec2f465e7e95d0e8167b\": context canceled" image="38.102.83.182:5001/podified-master-centos10/openstack-rabbitmq:watcher_latest"
Jan 21 11:17:16 crc kubenswrapper[4881]: E0121 11:17:16.555547 4881 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:38.102.83.182:5001/podified-master-centos10/openstack-rabbitmq:watcher_latest,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bmd5s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cell1-server-0_openstack(078c2368-b247-49d4-8723-fd93918e99b1): ErrImagePull: rpc error: code = Canceled desc = reading blob sha256:b1970a675905d0a72c5f2ca8159fa3f2ae8bf77ab674ec2f465e7e95d0e8167b: Get \"http://38.102.83.182:5001/v2/podified-master-centos10/openstack-rabbitmq/blobs/sha256:b1970a675905d0a72c5f2ca8159fa3f2ae8bf77ab674ec2f465e7e95d0e8167b\": context canceled" logger="UnhandledError"
Jan 21 11:17:16 crc kubenswrapper[4881]: E0121 11:17:16.556891 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = reading blob sha256:b1970a675905d0a72c5f2ca8159fa3f2ae8bf77ab674ec2f465e7e95d0e8167b: Get \\\"http://38.102.83.182:5001/v2/podified-master-centos10/openstack-rabbitmq/blobs/sha256:b1970a675905d0a72c5f2ca8159fa3f2ae8bf77ab674ec2f465e7e95d0e8167b\\\": context canceled\"" pod="openstack/rabbitmq-cell1-server-0" podUID="078c2368-b247-49d4-8723-fd93918e99b1"
Jan 21 11:17:17 crc kubenswrapper[4881]: E0121 11:17:17.442753 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.182:5001/podified-master-centos10/openstack-rabbitmq:watcher_latest\\\"\"" pod="openstack/rabbitmq-cell1-server-0" podUID="078c2368-b247-49d4-8723-fd93918e99b1"
Jan 21 11:17:18 crc kubenswrapper[4881]: E0121 11:17:18.025380 4881 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = reading blob sha256:0961b6750dea9d7809f870d1b513a1f88673a4f8bb098afb340a90426edbefe5: Get \"http://38.102.83.182:5001/v2/podified-master-centos10/openstack-ovn-nb-db-server/blobs/sha256:0961b6750dea9d7809f870d1b513a1f88673a4f8bb098afb340a90426edbefe5\": context canceled" image="38.102.83.182:5001/podified-master-centos10/openstack-ovn-nb-db-server:watcher_latest"
Jan 21 11:17:18 crc kubenswrapper[4881]: E0121 11:17:18.025479 4881 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = reading blob sha256:0961b6750dea9d7809f870d1b513a1f88673a4f8bb098afb340a90426edbefe5: Get \"http://38.102.83.182:5001/v2/podified-master-centos10/openstack-ovn-nb-db-server/blobs/sha256:0961b6750dea9d7809f870d1b513a1f88673a4f8bb098afb340a90426edbefe5\": context canceled" image="38.102.83.182:5001/podified-master-centos10/openstack-ovn-nb-db-server:watcher_latest"
Jan 21 11:17:18 crc kubenswrapper[4881]: E0121 11:17:18.025667 4881 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ovsdbserver-nb,Image:38.102.83.182:5001/podified-master-centos10/openstack-ovn-nb-db-server:watcher_latest,Command:[/usr/bin/dumb-init],Args:[/usr/local/bin/container-scripts/setup.sh],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ncbh5ffh56dh7chbdh75h58h5d4h5bfh596h576h5ddh7bh86h56dh677h58dh687h66bh676h67ch55ch667h68hf4h78h555h79h5fch67bh95h698q,ValueFrom:nil,},EnvVar{Name:OVN_LOGDIR,Value:/tmp,ValueFrom:nil,},EnvVar{Name:OVN_RUNDIR,Value:/tmp,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovndbcluster-nb-etc-ovn,ReadOnly:false,MountPath:/etc/ovn,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdb-rundir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-nb-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndb.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-nb-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/private/ovndb.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-nb-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndbca.crt,SubPath:ca.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8ldh7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/pidof ovsdb-server],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/pidof ovsdb-server],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:&Lifecycle{PostStart:nil,PreStop:&LifecycleHandler{Exec:&ExecAction{Command:[/usr/local/bin/container-scripts/cleanup.sh],},HTTPGet:nil,TCPSocket:nil,Sleep:nil,},},TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/pidof ovsdb-server],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:20,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovsdbserver-nb-0_openstack(24136f67-aca3-4e40-b3c2-b36b7623475f): ErrImagePull: rpc error: code = Canceled desc = reading blob sha256:0961b6750dea9d7809f870d1b513a1f88673a4f8bb098afb340a90426edbefe5: Get \"http://38.102.83.182:5001/v2/podified-master-centos10/openstack-ovn-nb-db-server/blobs/sha256:0961b6750dea9d7809f870d1b513a1f88673a4f8bb098afb340a90426edbefe5\": context canceled" logger="UnhandledError"
Jan 21 11:17:18 crc kubenswrapper[4881]: E0121 11:17:18.039916 4881 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.182:5001/podified-master-centos10/openstack-mariadb:watcher_latest"
Jan 21 11:17:18 crc kubenswrapper[4881]: E0121 11:17:18.039999 4881 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.182:5001/podified-master-centos10/openstack-mariadb:watcher_latest"
Jan 21 11:17:18 crc kubenswrapper[4881]: E0121 11:17:18.040133 4881 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:mysql-bootstrap,Image:38.102.83.182:5001/podified-master-centos10/openstack-mariadb:watcher_latest,Command:[bash /var/lib/operator-scripts/mysql_bootstrap.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:True,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mysql-db,ReadOnly:false,MountPath:/var/lib/mysql,SubPath:mysql,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-default,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-generated,ReadOnly:false,MountPath:/var/lib/config-data/generated,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:operator-scripts,ReadOnly:true,MountPath:/var/lib/operator-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-r44km,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-galera-0_openstack(197dd5bf-f68a-4d9d-b75c-de87a54ed46b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 21 11:17:18 crc kubenswrapper[4881]: E0121 11:17:18.041413 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstack-galera-0" podUID="197dd5bf-f68a-4d9d-b75c-de87a54ed46b"
Jan 21 11:17:18 crc kubenswrapper[4881]: E0121 11:17:18.056200 4881 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.182:5001/podified-master-centos10/openstack-mariadb:watcher_latest"
Jan 21 11:17:18 crc kubenswrapper[4881]: E0121 11:17:18.056292 4881 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.182:5001/podified-master-centos10/openstack-mariadb:watcher_latest"
Jan 21 11:17:18 crc kubenswrapper[4881]: E0121 11:17:18.056551 4881 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:mysql-bootstrap,Image:38.102.83.182:5001/podified-master-centos10/openstack-mariadb:watcher_latest,Command:[bash /var/lib/operator-scripts/mysql_bootstrap.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:True,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mysql-db,ReadOnly:false,MountPath:/var/lib/mysql,SubPath:mysql,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-default,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-generated,ReadOnly:false,MountPath:/var/lib/config-data/generated,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:operator-scripts,ReadOnly:true,MountPath:/var/lib/operator-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4phxd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-cell1-galera-0_openstack(cd1973a5-773b-438b-aab7-709fb828324d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 21 11:17:18 crc kubenswrapper[4881]: E0121 11:17:18.057899 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstack-cell1-galera-0" podUID="cd1973a5-773b-438b-aab7-709fb828324d"
Jan 21 11:17:18 crc kubenswrapper[4881]: E0121 11:17:18.452174 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.182:5001/podified-master-centos10/openstack-mariadb:watcher_latest\\\"\"" pod="openstack/openstack-galera-0" podUID="197dd5bf-f68a-4d9d-b75c-de87a54ed46b"
Jan 21 11:17:18 crc kubenswrapper[4881]: E0121 11:17:18.453442 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.182:5001/podified-master-centos10/openstack-mariadb:watcher_latest\\\"\"" pod="openstack/openstack-cell1-galera-0" podUID="cd1973a5-773b-438b-aab7-709fb828324d"
Jan 21 11:17:18 crc kubenswrapper[4881]: E0121 11:17:18.675230 4881 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-prometheus-config-reloader-rhel9@sha256:9a2097bc5b2e02bc1703f64c452ce8fe4bc6775b732db930ff4770b76ae4653a"
Jan 21 11:17:18 crc kubenswrapper[4881]: E0121 11:17:18.675556 4881 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init-config-reloader,Image:registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-prometheus-config-reloader-rhel9@sha256:9a2097bc5b2e02bc1703f64c452ce8fe4bc6775b732db930ff4770b76ae4653a,Command:[/bin/prometheus-config-reloader],Args:[--watch-interval=0 --listen-address=:8081 --config-file=/etc/prometheus/config/prometheus.yaml.gz --config-envsubst-file=/etc/prometheus/config_out/prometheus.env.yaml --watched-dir=/etc/prometheus/rules/prometheus-metric-storage-rulefiles-0 --watched-dir=/etc/prometheus/rules/prometheus-metric-storage-rulefiles-1 --watched-dir=/etc/prometheus/rules/prometheus-metric-storage-rulefiles-2],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:reloader-init,HostPort:0,ContainerPort:8081,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:SHARD,Value:0,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:false,MountPath:/etc/prometheus/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-out,ReadOnly:false,MountPath:/etc/prometheus/config_out,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:prometheus-metric-storage-rulefiles-0,ReadOnly:false,MountPath:/etc/prometheus/rules/prometheus-metric-storage-rulefiles-0,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:prometheus-metric-storage-rulefiles-1,ReadOnly:false,MountPath:/etc/prometheus/rules/prometheus-metric-storage-rulefiles-1,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:prometheus-metric-storage-rulefiles-2,ReadOnly:false,MountPath:/etc/prometheus/rules/prometheus-metric-storage-rulefiles-2,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-n2vkg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod prometheus-metric-storage-0_openstack(75733567-f2a6-4331-bdea-147126213437): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Jan 21 11:17:18 crc kubenswrapper[4881]: E0121 11:17:18.676970 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init-config-reloader\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openstack/prometheus-metric-storage-0" podUID="75733567-f2a6-4331-bdea-147126213437"
Jan 21 11:17:19 crc kubenswrapper[4881]: E0121 11:17:19.460174 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init-config-reloader\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-prometheus-config-reloader-rhel9@sha256:9a2097bc5b2e02bc1703f64c452ce8fe4bc6775b732db930ff4770b76ae4653a\\\"\"" pod="openstack/prometheus-metric-storage-0" podUID="75733567-f2a6-4331-bdea-147126213437"
Jan 21 11:17:23 crc kubenswrapper[4881]: E0121 11:17:23.329886 4881 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.182:5001/podified-master-centos10/openstack-rabbitmq:watcher_latest"
Jan 21 11:17:23 crc kubenswrapper[4881]: E0121 11:17:23.330251 4881 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.182:5001/podified-master-centos10/openstack-rabbitmq:watcher_latest"
Jan 21 11:17:23 crc kubenswrapper[4881]: E0121 11:17:23.330436 4881 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:38.102.83.182:5001/podified-master-centos10/openstack-rabbitmq:watcher_latest,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tjgnd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-server-0_openstack(f7e90972-9be1-4d3e-852e-e7f7df6e6623): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 21 11:17:23 crc kubenswrapper[4881]: E0121 11:17:23.331654 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-server-0" podUID="f7e90972-9be1-4d3e-852e-e7f7df6e6623"
Jan 21 11:17:23 crc kubenswrapper[4881]: E0121 11:17:23.499646 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.182:5001/podified-master-centos10/openstack-rabbitmq:watcher_latest\\\"\"" pod="openstack/rabbitmq-server-0" podUID="f7e90972-9be1-4d3e-852e-e7f7df6e6623"
Jan 21 11:17:24 crc kubenswrapper[4881]: E0121 11:17:24.019539 4881 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.182:5001/podified-master-centos10/openstack-memcached:watcher_latest"
Jan 21 11:17:24 crc kubenswrapper[4881]: E0121 11:17:24.019603 4881 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.182:5001/podified-master-centos10/openstack-memcached:watcher_latest"
Jan 21 11:17:24 crc kubenswrapper[4881]: E0121 11:17:24.019768 4881 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:memcached,Image:38.102.83.182:5001/podified-master-centos10/openstack-memcached:watcher_latest,Command:[/usr/bin/dumb-init -- /usr/local/bin/kolla_start],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:memcached,HostPort:0,ContainerPort:11211,Protocol:TCP,HostIP:,},ContainerPort{Name:memcached-tls,HostPort:0,ContainerPort:11212,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:POD_IPS,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIPs,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:CONFIG_HASH,Value:n55fh5bch5f7hc7h556h5d5h95h678h54dh7fh6bh5b7h95h59bh65h66ch89hc4h599hbbh685h676hd8hf4h84h5b7h686h55bh65h55ch5c8h658q,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/src,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:memcached-tls-certs,ReadOnly:true,MountPath:/var/lib/config-data/tls/certs/memcached.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:memcached-tls-certs,ReadOnly:true,MountPath:/var/lib/config-data/tls/private/memcached.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-g444t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 11211 },Host:,},GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 11211 },Host:,},GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42457,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42457,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod memcached-0_openstack(7960c16a-de64-4154-9072-aee49e3bd573): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 21 11:17:24 crc kubenswrapper[4881]: E0121 11:17:24.021351 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"memcached\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/memcached-0" podUID="7960c16a-de64-4154-9072-aee49e3bd573"
Jan 21 11:17:24 crc kubenswrapper[4881]: E0121 11:17:24.237183 4881 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.182:5001/podified-master-centos10/openstack-rabbitmq:watcher_latest"
Jan 21 11:17:24 crc kubenswrapper[4881]: E0121 11:17:24.237253 4881 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.182:5001/podified-master-centos10/openstack-rabbitmq:watcher_latest"
Jan 21 11:17:24 crc kubenswrapper[4881]: E0121 11:17:24.237499 4881 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:38.102.83.182:5001/podified-master-centos10/openstack-rabbitmq:watcher_latest,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-q5n6k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-notifications-server-0_openstack(44bcf219-3358-4596-9d1e-88a51c415266): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 21 11:17:24 crc kubenswrapper[4881]: E0121 11:17:24.239916 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to
\"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-notifications-server-0" podUID="44bcf219-3358-4596-9d1e-88a51c415266" Jan 21 11:17:24 crc kubenswrapper[4881]: E0121 11:17:24.505886 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"memcached\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.182:5001/podified-master-centos10/openstack-memcached:watcher_latest\\\"\"" pod="openstack/memcached-0" podUID="7960c16a-de64-4154-9072-aee49e3bd573" Jan 21 11:17:24 crc kubenswrapper[4881]: E0121 11:17:24.506007 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.182:5001/podified-master-centos10/openstack-rabbitmq:watcher_latest\\\"\"" pod="openstack/rabbitmq-notifications-server-0" podUID="44bcf219-3358-4596-9d1e-88a51c415266" Jan 21 11:17:25 crc kubenswrapper[4881]: E0121 11:17:25.474007 4881 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.182:5001/podified-master-centos10/openstack-ovn-base:watcher_latest" Jan 21 11:17:25 crc kubenswrapper[4881]: E0121 11:17:25.474123 4881 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.182:5001/podified-master-centos10/openstack-ovn-base:watcher_latest" Jan 21 11:17:25 crc kubenswrapper[4881]: E0121 11:17:25.474304 4881 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:ovsdb-server-init,Image:38.102.83.182:5001/podified-master-centos10/openstack-ovn-base:watcher_latest,Command:[/usr/local/bin/container-scripts/init-ovsdb-server.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n7h694h5f6h59bh566h87h9h7h686h54fhbfh668h599h596hbfh595h5bfh65ch54fh8bh64bh587h559h569hcdhddh54dh56bh5c8hfdh65dh57dq,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-ovs,ReadOnly:false,MountPath:/etc/openvswitch,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-run,ReadOnly:false,MountPath:/var/run/openvswitch,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-log,ReadOnly:false,MountPath:/var/log/openvswitch,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-lib,ReadOnly:false,MountPath:/var/lib/openvswitch,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bnx8p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[NET_ADMIN SYS_ADMIN 
SYS_NICE],Drop:[],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-controller-ovs-2rtl8_openstack(9ff4a63e-40e5-4133-967e-9ba083f3603b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 11:17:25 crc kubenswrapper[4881]: E0121 11:17:25.476251 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovsdb-server-init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ovn-controller-ovs-2rtl8" podUID="9ff4a63e-40e5-4133-967e-9ba083f3603b" Jan 21 11:17:25 crc kubenswrapper[4881]: E0121 11:17:25.511328 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovsdb-server-init\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.182:5001/podified-master-centos10/openstack-ovn-base:watcher_latest\\\"\"" pod="openstack/ovn-controller-ovs-2rtl8" podUID="9ff4a63e-40e5-4133-967e-9ba083f3603b" Jan 21 11:17:29 crc kubenswrapper[4881]: I0121 11:17:29.852171 4881 patch_prober.go:28] interesting pod/machine-config-daemon-fb4fr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 11:17:29 crc kubenswrapper[4881]: I0121 11:17:29.852765 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 11:17:35 crc kubenswrapper[4881]: E0121 11:17:35.683810 4881 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.182:5001/podified-master-centos10/openstack-neutron-server:watcher_latest" Jan 21 11:17:35 crc kubenswrapper[4881]: E0121 11:17:35.684318 4881 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.182:5001/podified-master-centos10/openstack-neutron-server:watcher_latest" Jan 21 11:17:35 crc kubenswrapper[4881]: E0121 11:17:35.684447 4881 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:38.102.83.182:5001/podified-master-centos10/openstack-neutron-server:watcher_latest,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gj4sc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-66b6fdbd65-2qwr2_openstack(5d59d9e0-8dd3-4bbd-ab3c-01e0e4a3b338): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 11:17:35 crc kubenswrapper[4881]: E0121 11:17:35.685653 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-66b6fdbd65-2qwr2" podUID="5d59d9e0-8dd3-4bbd-ab3c-01e0e4a3b338" Jan 21 11:17:35 crc kubenswrapper[4881]: E0121 11:17:35.800411 4881 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.182:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest" Jan 21 11:17:35 crc kubenswrapper[4881]: E0121 11:17:35.800476 4881 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.182:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest" Jan 21 11:17:35 crc kubenswrapper[4881]: E0121 11:17:35.800657 4881 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ovn-controller,Image:38.102.83.182:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest,Command:[ovn-controller --pidfile unix:/run/openvswitch/db.sock --certificate=/etc/pki/tls/certs/ovndb.crt --private-key=/etc/pki/tls/private/ovndb.key 
--ca-cert=/etc/pki/tls/certs/ovndbca.crt],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n7h694h5f6h59bh566h87h9h7h686h54fhbfh668h599h596hbfh595h5bfh65ch54fh8bh64bh587h559h569hcdhddh54dh56bh5c8hfdh65dh57dq,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:var-run,ReadOnly:false,MountPath:/var/run/openvswitch,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-run-ovn,ReadOnly:false,MountPath:/var/run/ovn,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-log-ovn,ReadOnly:false,MountPath:/var/log/ovn,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovn-controller-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndb.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovn-controller-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/private/ovndb.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovn-controller-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndbca.crt,SubPath:ca.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kcpzd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/local/bin/container-scripts/ovn_controller_liveness.sh],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:30,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/local/bin/container-scripts/ovn_controller_readiness.sh],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:30,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:&Lifecycle{PostStart:nil,PreStop:&LifecycleHandler{Exec:&ExecAction{Command:[/usr/share/ovn/scripts/ovn-ctl stop_controller],},HTTPGet:nil,TCPSocket:nil,Sleep:nil,},},TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[NET_ADMIN SYS_ADMIN SYS_NICE],Drop:[],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-controller-s642n_openstack(256e0b4a-baac-415c-94c6-09f08fa09c7c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 11:17:35 crc kubenswrapper[4881]: E0121 11:17:35.801905 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"ovn-controller\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ovn-controller-s642n" podUID="256e0b4a-baac-415c-94c6-09f08fa09c7c" Jan 21 11:17:35 crc kubenswrapper[4881]: E0121 11:17:35.846712 4881 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.182:5001/podified-master-centos10/openstack-neutron-server:watcher_latest" Jan 21 11:17:35 crc kubenswrapper[4881]: E0121 11:17:35.846768 4881 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.182:5001/podified-master-centos10/openstack-neutron-server:watcher_latest" Jan 21 11:17:35 crc kubenswrapper[4881]: E0121 11:17:35.846916 4881 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:38.102.83.182:5001/podified-master-centos10/openstack-neutron-server:watcher_latest,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n659h4h664hbh658h587h67ch89h587h8fh679hc6hf9h55fh644h5d5h698h68dh5cdh5ffh669h54ch9h689hb8hd4h5bfhd8h5d7h5fh665h574q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4gf75,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-7457897f45-vkp6c_openstack(99aba8a6-cc58-43be-9607-8ae1fcb57257): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 11:17:35 crc kubenswrapper[4881]: E0121 11:17:35.848076 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-7457897f45-vkp6c" podUID="99aba8a6-cc58-43be-9607-8ae1fcb57257" Jan 
21 11:17:35 crc kubenswrapper[4881]: E0121 11:17:35.849938 4881 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.182:5001/podified-master-centos10/openstack-neutron-server:watcher_latest" Jan 21 11:17:35 crc kubenswrapper[4881]: E0121 11:17:35.849973 4881 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.182:5001/podified-master-centos10/openstack-neutron-server:watcher_latest" Jan 21 11:17:35 crc kubenswrapper[4881]: E0121 11:17:35.850076 4881 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:38.102.83.182:5001/podified-master-centos10/openstack-neutron-server:watcher_latest,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n59dh59h578h67chf9h6h5cch694h9ch677h67fh657h5bfh65dh67fhb8h68dh5dfhf9h55bhcfh84h698h549h5b9h59bh5c8h647h557h9dh57bh5d5q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-nb,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/ovsdbserver-nb,SubPath:ovsdbserver-nb,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-x4lhq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-fd8d879fc-flqh9_openstack(42132c17-6a2d-48d1-a636-3eae7558d55c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 11:17:35 crc kubenswrapper[4881]: E0121 11:17:35.851513 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-fd8d879fc-flqh9" podUID="42132c17-6a2d-48d1-a636-3eae7558d55c" Jan 21 11:17:36 crc kubenswrapper[4881]: E0121 
11:17:36.206961 4881 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.182:5001/podified-master-centos10/openstack-ovn-sb-db-server:watcher_latest" Jan 21 11:17:36 crc kubenswrapper[4881]: E0121 11:17:36.207026 4881 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.182:5001/podified-master-centos10/openstack-ovn-sb-db-server:watcher_latest" Jan 21 11:17:36 crc kubenswrapper[4881]: E0121 11:17:36.207194 4881 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ovsdbserver-sb,Image:38.102.83.182:5001/podified-master-centos10/openstack-ovn-sb-db-server:watcher_latest,Command:[/usr/bin/dumb-init],Args:[/usr/local/bin/container-scripts/setup.sh],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nf7h674h67dh544h54ch679h557h59ch545h59ch547h69hfch5f8h5f7h575h57fh79h5d7h8ch569h679h5cch5fh5cch56ch5d4hdch645h596h66hd6q,ValueFrom:nil,},EnvVar{Name:OVN_LOGDIR,Value:/tmp,ValueFrom:nil,},EnvVar{Name:OVN_RUNDIR,Value:/tmp,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovndbcluster-sb-etc-ovn,ReadOnly:false,MountPath:/etc/ovn,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdb-rundir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-sb-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndb.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-sb-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/private/ovndb.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-sb-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndbca.crt,SubPath:ca.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vgk6r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/pidof ovsdb-server],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/pidof 
ovsdb-server],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:&Lifecycle{PostStart:nil,PreStop:&LifecycleHandler{Exec:&ExecAction{Command:[/usr/local/bin/container-scripts/cleanup.sh],},HTTPGet:nil,TCPSocket:nil,Sleep:nil,},},TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/pidof ovsdb-server],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:20,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovsdbserver-sb-0_openstack(c3884c64-25d6-42b5-a309-7eafa170719e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 11:17:36 crc kubenswrapper[4881]: E0121 11:17:36.251445 4881 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.182:5001/podified-master-centos10/openstack-neutron-server:watcher_latest" Jan 21 11:17:36 crc kubenswrapper[4881]: E0121 11:17:36.251585 4881 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.182:5001/podified-master-centos10/openstack-neutron-server:watcher_latest" Jan 21 11:17:36 crc kubenswrapper[4881]: E0121 11:17:36.251726 4881 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:38.102.83.182:5001/podified-master-centos10/openstack-neutron-server:watcher_latest,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n5c7h56dh5cfh8bh54fhbbhf4h5b9hdch67fhd7h55fh55fh6ch9h548h54ch665h647h6h8fhd6h5dfh5cdh58bh577h66fh695h5fbh55h77h5fcq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dnn2q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-6557d744f-gt5cx_openstack(aec91505-d39a-41cf-90af-1593bcb02e68): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 11:17:36 crc kubenswrapper[4881]: E0121 11:17:36.253005 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-6557d744f-gt5cx" podUID="aec91505-d39a-41cf-90af-1593bcb02e68" Jan 21 11:17:36 crc kubenswrapper[4881]: E0121 11:17:36.400854 4881 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified" Jan 21 11:17:36 crc kubenswrapper[4881]: E0121 11:17:36.401024 4881 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:openstack-network-exporter,Image:quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified,Command:[/app/openstack-network-exporter],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:OPENSTACK_NETWORK_EXPORTER_YAML,Value:/etc/config/openstack-network-exporter.yaml,ValueFrom:nil,},EnvVar{Name:CONFIG_HASH,Value:nc9h8ch67h5bdh5fch589h98h67bh99h548h59ch558h7ch65fh76hf9hf9h99h5h5fh56bhd9hd7h64h67ch65hb9h65bh76h569h6bhcfq,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovs-rundir,ReadOnly:true,MountPath:/var/run/openvswitch,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovn-rundir,ReadOnly:true,MountPath:/var/run/ovn,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-certs-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovnmetrics.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-certs-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/private/ovnmetrics.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-certs-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndbca.crt,SubPath:ca.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7tlx9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[NET_ADMIN SYS_ADMIN SYS_NICE],Drop:[],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-controller-metrics-5dzhr_openstack(b9bd229b-588d-477e-8501-cd976b539e3a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 11:17:36 crc kubenswrapper[4881]: E0121 11:17:36.402215 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstack-network-exporter\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ovn-controller-metrics-5dzhr" podUID="b9bd229b-588d-477e-8501-cd976b539e3a" Jan 21 11:17:36 crc kubenswrapper[4881]: E0121 11:17:36.426984 4881 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified" Jan 21 11:17:36 crc kubenswrapper[4881]: E0121 11:17:36.427183 4881 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:openstack-network-exporter,Image:quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified,Command:[/app/openstack-network-exporter],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:OPENSTACK_NETWORK_EXPORTER_YAML,Value:/etc/config/openstack-network-exporter.yaml,ValueFrom:nil,},EnvVar{Name:CONFIG_HASH,Value:ncbh5ffh56dh7chbdh75h58h5d4h5bfh596h576h5ddh7bh86h56dh677h58dh687h66bh676h67ch55ch667h68hf4h78h555h79h5fch67bh95h698q,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovsdb-rundir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-certs-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovnmetrics.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-certs-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/private/ovnmetrics.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-certs-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndbca.crt,SubPath:ca.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8ldh7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovsdbserver-nb-0_openstack(24136f67-aca3-4e40-b3c2-b36b7623475f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 11:17:36 crc kubenswrapper[4881]: E0121 11:17:36.428389 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"ovsdbserver-nb\" with ErrImagePull: \"rpc error: code = Canceled desc = reading blob sha256:0961b6750dea9d7809f870d1b513a1f88673a4f8bb098afb340a90426edbefe5: Get \\\"http://38.102.83.182:5001/v2/podified-master-centos10/openstack-ovn-nb-db-server/blobs/sha256:0961b6750dea9d7809f870d1b513a1f88673a4f8bb098afb340a90426edbefe5\\\": context canceled\", failed to \"StartContainer\" for \"openstack-network-exporter\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"]" pod="openstack/ovsdbserver-nb-0" podUID="24136f67-aca3-4e40-b3c2-b36b7623475f" Jan 21 11:17:36 crc kubenswrapper[4881]: E0121 11:17:36.431176 4881 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.182:5001/podified-master-centos10/openstack-neutron-server:watcher_latest" Jan 21 11:17:36 crc kubenswrapper[4881]: E0121 11:17:36.431209 4881 kuberuntime_image.go:55] "Failed to pull image" err="rpc 
error: code = Canceled desc = copying config: context canceled" image="38.102.83.182:5001/podified-master-centos10/openstack-neutron-server:watcher_latest" Jan 21 11:17:36 crc kubenswrapper[4881]: E0121 11:17:36.431291 4881 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:38.102.83.182:5001/podified-master-centos10/openstack-neutron-server:watcher_latest,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n68chd6h679hbfh55fhc6h5ffh5d8h94h56ch589hb4hc5h57bh677hcdh655h8dh667h675h654h66ch567h8fh659h5b4h675h566h55bh54h67dh6dq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6zwhs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-6fc7fbc9b9-cj7zb_openstack(eb0e6ce6-181c-4edb-b4b3-d169c41c63a8): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 11:17:36 crc kubenswrapper[4881]: E0121 11:17:36.432545 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-6fc7fbc9b9-cj7zb" podUID="eb0e6ce6-181c-4edb-b4b3-d169c41c63a8" Jan 21 11:17:36 crc kubenswrapper[4881]: E0121 11:17:36.449808 4881 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.182:5001/podified-master-centos10/openstack-neutron-server:watcher_latest" Jan 21 11:17:36 crc kubenswrapper[4881]: E0121 11:17:36.449868 4881 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.182:5001/podified-master-centos10/openstack-neutron-server:watcher_latest" Jan 21 11:17:36 crc kubenswrapper[4881]: E0121 11:17:36.450002 
4881 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:38.102.83.182:5001/podified-master-centos10/openstack-neutron-server:watcher_latest,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z7nlz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-5cd6c77d8f-6z4pf_openstack(ef08c5f4-dc05-46a7-bb1b-8039ba0117aa): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 11:17:36 crc kubenswrapper[4881]: E0121 11:17:36.451197 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-5cd6c77d8f-6z4pf" podUID="ef08c5f4-dc05-46a7-bb1b-8039ba0117aa" Jan 21 11:17:36 crc kubenswrapper[4881]: E0121 11:17:36.609355 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.182:5001/podified-master-centos10/openstack-neutron-server:watcher_latest\\\"\"" pod="openstack/dnsmasq-dns-7457897f45-vkp6c" podUID="99aba8a6-cc58-43be-9607-8ae1fcb57257" Jan 21 11:17:36 crc kubenswrapper[4881]: E0121 11:17:36.609935 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.182:5001/podified-master-centos10/openstack-neutron-server:watcher_latest\\\"\"" pod="openstack/dnsmasq-dns-fd8d879fc-flqh9" podUID="42132c17-6a2d-48d1-a636-3eae7558d55c" Jan 21 11:17:36 crc kubenswrapper[4881]: E0121 11:17:36.609997 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstack-network-exporter\" with 
ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified\\\"\"" pod="openstack/ovn-controller-metrics-5dzhr" podUID="b9bd229b-588d-477e-8501-cd976b539e3a" Jan 21 11:17:36 crc kubenswrapper[4881]: E0121 11:17:36.610381 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovn-controller\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.182:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest\\\"\"" pod="openstack/ovn-controller-s642n" podUID="256e0b4a-baac-415c-94c6-09f08fa09c7c" Jan 21 11:17:37 crc kubenswrapper[4881]: E0121 11:17:37.268155 4881 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0" Jan 21 11:17:37 crc kubenswrapper[4881]: E0121 11:17:37.268898 4881 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0" Jan 21 11:17:37 crc kubenswrapper[4881]: E0121 11:17:37.269103 4881 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-state-metrics,Image:registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0,Command:[],Args:[--resources=pods --namespaces=openstack],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:http-metrics,HostPort:0,ContainerPort:8080,Protocol:TCP,HostIP:,},ContainerPort{Name:telemetry,HostPort:0,ContainerPort:8081,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-25992,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{0 8080 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod kube-state-metrics-0_openstack(c5b6c25e-e882-4ea4-a284-6f55bfe75093): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 21 11:17:37 
crc kubenswrapper[4881]: E0121 11:17:37.270481 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-state-metrics\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openstack/kube-state-metrics-0" podUID="c5b6c25e-e882-4ea4-a284-6f55bfe75093"
Jan 21 11:17:37 crc kubenswrapper[4881]: I0121 11:17:37.454887 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6fc7fbc9b9-cj7zb"
Jan 21 11:17:37 crc kubenswrapper[4881]: I0121 11:17:37.550180 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6zwhs\" (UniqueName: \"kubernetes.io/projected/eb0e6ce6-181c-4edb-b4b3-d169c41c63a8-kube-api-access-6zwhs\") pod \"eb0e6ce6-181c-4edb-b4b3-d169c41c63a8\" (UID: \"eb0e6ce6-181c-4edb-b4b3-d169c41c63a8\") "
Jan 21 11:17:37 crc kubenswrapper[4881]: I0121 11:17:37.550289 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eb0e6ce6-181c-4edb-b4b3-d169c41c63a8-config\") pod \"eb0e6ce6-181c-4edb-b4b3-d169c41c63a8\" (UID: \"eb0e6ce6-181c-4edb-b4b3-d169c41c63a8\") "
Jan 21 11:17:37 crc kubenswrapper[4881]: I0121 11:17:37.550822 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eb0e6ce6-181c-4edb-b4b3-d169c41c63a8-config" (OuterVolumeSpecName: "config") pod "eb0e6ce6-181c-4edb-b4b3-d169c41c63a8" (UID: "eb0e6ce6-181c-4edb-b4b3-d169c41c63a8"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 11:17:37 crc kubenswrapper[4881]: I0121 11:17:37.553720 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb0e6ce6-181c-4edb-b4b3-d169c41c63a8-kube-api-access-6zwhs" (OuterVolumeSpecName: "kube-api-access-6zwhs") pod "eb0e6ce6-181c-4edb-b4b3-d169c41c63a8" (UID: "eb0e6ce6-181c-4edb-b4b3-d169c41c63a8"). InnerVolumeSpecName "kube-api-access-6zwhs". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 11:17:37 crc kubenswrapper[4881]: I0121 11:17:37.616630 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6557d744f-gt5cx" event={"ID":"aec91505-d39a-41cf-90af-1593bcb02e68","Type":"ContainerDied","Data":"11e9d0f8032d3e65513f2d8249ce3ac74bc1a4ddfcd269afe6c654eddabc71b8"}
Jan 21 11:17:37 crc kubenswrapper[4881]: I0121 11:17:37.616681 4881 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="11e9d0f8032d3e65513f2d8249ce3ac74bc1a4ddfcd269afe6c654eddabc71b8"
Jan 21 11:17:37 crc kubenswrapper[4881]: I0121 11:17:37.618191 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6fc7fbc9b9-cj7zb" event={"ID":"eb0e6ce6-181c-4edb-b4b3-d169c41c63a8","Type":"ContainerDied","Data":"8b64289332b9bf6e24ce3af64b2717f89e14cd1b712818252df454ed0a94562c"}
Jan 21 11:17:37 crc kubenswrapper[4881]: I0121 11:17:37.618213 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6fc7fbc9b9-cj7zb"
Jan 21 11:17:37 crc kubenswrapper[4881]: I0121 11:17:37.619876 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-66b6fdbd65-2qwr2" event={"ID":"5d59d9e0-8dd3-4bbd-ab3c-01e0e4a3b338","Type":"ContainerDied","Data":"b01fee828c93da9e7f8d614e402f96983135c404e70276a21ff9ec11bf276820"}
Jan 21 11:17:37 crc kubenswrapper[4881]: I0121 11:17:37.620637 4881 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b01fee828c93da9e7f8d614e402f96983135c404e70276a21ff9ec11bf276820"
Jan 21 11:17:37 crc kubenswrapper[4881]: I0121 11:17:37.621836 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5cd6c77d8f-6z4pf" event={"ID":"ef08c5f4-dc05-46a7-bb1b-8039ba0117aa","Type":"ContainerDied","Data":"385e3ff947423b95dcd5a48ddbdf919434e21551c87e247766e40b37cfc15a72"}
Jan 21 11:17:37 crc kubenswrapper[4881]: I0121 11:17:37.621874 4881 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="385e3ff947423b95dcd5a48ddbdf919434e21551c87e247766e40b37cfc15a72"
Jan 21 11:17:37 crc kubenswrapper[4881]: E0121 11:17:37.623357 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-state-metrics\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0\\\"\"" pod="openstack/kube-state-metrics-0" podUID="c5b6c25e-e882-4ea4-a284-6f55bfe75093"
Jan 21 11:17:37 crc kubenswrapper[4881]: I0121 11:17:37.651904 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eb0e6ce6-181c-4edb-b4b3-d169c41c63a8-dns-svc\") pod \"eb0e6ce6-181c-4edb-b4b3-d169c41c63a8\" (UID: \"eb0e6ce6-181c-4edb-b4b3-d169c41c63a8\") "
Jan 21 11:17:37 crc kubenswrapper[4881]: I0121 11:17:37.652289 4881 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eb0e6ce6-181c-4edb-b4b3-d169c41c63a8-config\") on node \"crc\" DevicePath \"\""
Jan 21 11:17:37 crc kubenswrapper[4881]: I0121 11:17:37.652306 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6zwhs\" (UniqueName: \"kubernetes.io/projected/eb0e6ce6-181c-4edb-b4b3-d169c41c63a8-kube-api-access-6zwhs\") on node \"crc\" DevicePath \"\""
Jan 21 11:17:37 crc kubenswrapper[4881]: I0121 11:17:37.652511 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eb0e6ce6-181c-4edb-b4b3-d169c41c63a8-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "eb0e6ce6-181c-4edb-b4b3-d169c41c63a8" (UID: "eb0e6ce6-181c-4edb-b4b3-d169c41c63a8"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 11:17:37 crc kubenswrapper[4881]: I0121 11:17:37.728796 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-66b6fdbd65-2qwr2"
Jan 21 11:17:37 crc kubenswrapper[4881]: I0121 11:17:37.753146 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5d59d9e0-8dd3-4bbd-ab3c-01e0e4a3b338-dns-svc\") pod \"5d59d9e0-8dd3-4bbd-ab3c-01e0e4a3b338\" (UID: \"5d59d9e0-8dd3-4bbd-ab3c-01e0e4a3b338\") "
Jan 21 11:17:37 crc kubenswrapper[4881]: I0121 11:17:37.753247 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gj4sc\" (UniqueName: \"kubernetes.io/projected/5d59d9e0-8dd3-4bbd-ab3c-01e0e4a3b338-kube-api-access-gj4sc\") pod \"5d59d9e0-8dd3-4bbd-ab3c-01e0e4a3b338\" (UID: \"5d59d9e0-8dd3-4bbd-ab3c-01e0e4a3b338\") "
Jan 21 11:17:37 crc kubenswrapper[4881]: I0121 11:17:37.753271 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5d59d9e0-8dd3-4bbd-ab3c-01e0e4a3b338-config\") pod \"5d59d9e0-8dd3-4bbd-ab3c-01e0e4a3b338\" (UID: \"5d59d9e0-8dd3-4bbd-ab3c-01e0e4a3b338\") "
Jan 21 11:17:37 crc kubenswrapper[4881]: I0121 11:17:37.753550 4881 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eb0e6ce6-181c-4edb-b4b3-d169c41c63a8-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 21 11:17:37 crc kubenswrapper[4881]: I0121 11:17:37.753797 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5d59d9e0-8dd3-4bbd-ab3c-01e0e4a3b338-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "5d59d9e0-8dd3-4bbd-ab3c-01e0e4a3b338" (UID: "5d59d9e0-8dd3-4bbd-ab3c-01e0e4a3b338"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 11:17:37 crc kubenswrapper[4881]: I0121 11:17:37.753947 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5d59d9e0-8dd3-4bbd-ab3c-01e0e4a3b338-config" (OuterVolumeSpecName: "config") pod "5d59d9e0-8dd3-4bbd-ab3c-01e0e4a3b338" (UID: "5d59d9e0-8dd3-4bbd-ab3c-01e0e4a3b338"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 11:17:37 crc kubenswrapper[4881]: I0121 11:17:37.760858 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5d59d9e0-8dd3-4bbd-ab3c-01e0e4a3b338-kube-api-access-gj4sc" (OuterVolumeSpecName: "kube-api-access-gj4sc") pod "5d59d9e0-8dd3-4bbd-ab3c-01e0e4a3b338" (UID: "5d59d9e0-8dd3-4bbd-ab3c-01e0e4a3b338"). InnerVolumeSpecName "kube-api-access-gj4sc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 11:17:37 crc kubenswrapper[4881]: I0121 11:17:37.854611 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gj4sc\" (UniqueName: \"kubernetes.io/projected/5d59d9e0-8dd3-4bbd-ab3c-01e0e4a3b338-kube-api-access-gj4sc\") on node \"crc\" DevicePath \"\""
Jan 21 11:17:37 crc kubenswrapper[4881]: I0121 11:17:37.854916 4881 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5d59d9e0-8dd3-4bbd-ab3c-01e0e4a3b338-config\") on node \"crc\" DevicePath \"\""
Jan 21 11:17:37 crc kubenswrapper[4881]: I0121 11:17:37.854929 4881 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5d59d9e0-8dd3-4bbd-ab3c-01e0e4a3b338-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 21 11:17:37 crc kubenswrapper[4881]: I0121 11:17:37.961172 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6557d744f-gt5cx"
Jan 21 11:17:38 crc kubenswrapper[4881]: I0121 11:17:38.010554 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5cd6c77d8f-6z4pf"
Jan 21 11:17:38 crc kubenswrapper[4881]: I0121 11:17:38.058359 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ef08c5f4-dc05-46a7-bb1b-8039ba0117aa-config\") pod \"ef08c5f4-dc05-46a7-bb1b-8039ba0117aa\" (UID: \"ef08c5f4-dc05-46a7-bb1b-8039ba0117aa\") "
Jan 21 11:17:38 crc kubenswrapper[4881]: I0121 11:17:38.058684 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dnn2q\" (UniqueName: \"kubernetes.io/projected/aec91505-d39a-41cf-90af-1593bcb02e68-kube-api-access-dnn2q\") pod \"aec91505-d39a-41cf-90af-1593bcb02e68\" (UID: \"aec91505-d39a-41cf-90af-1593bcb02e68\") "
Jan 21 11:17:38 crc kubenswrapper[4881]: I0121 11:17:38.058926 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aec91505-d39a-41cf-90af-1593bcb02e68-config\") pod \"aec91505-d39a-41cf-90af-1593bcb02e68\" (UID: \"aec91505-d39a-41cf-90af-1593bcb02e68\") "
Jan 21 11:17:38 crc kubenswrapper[4881]: I0121 11:17:38.059011 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z7nlz\" (UniqueName: \"kubernetes.io/projected/ef08c5f4-dc05-46a7-bb1b-8039ba0117aa-kube-api-access-z7nlz\") pod \"ef08c5f4-dc05-46a7-bb1b-8039ba0117aa\" (UID: \"ef08c5f4-dc05-46a7-bb1b-8039ba0117aa\") "
Jan 21 11:17:38 crc kubenswrapper[4881]: I0121 11:17:38.059091 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ef08c5f4-dc05-46a7-bb1b-8039ba0117aa-config" (OuterVolumeSpecName: "config") pod "ef08c5f4-dc05-46a7-bb1b-8039ba0117aa" (UID: "ef08c5f4-dc05-46a7-bb1b-8039ba0117aa"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 11:17:38 crc kubenswrapper[4881]: I0121 11:17:38.059260 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6fc7fbc9b9-cj7zb"]
Jan 21 11:17:38 crc kubenswrapper[4881]: I0121 11:17:38.060394 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aec91505-d39a-41cf-90af-1593bcb02e68-config" (OuterVolumeSpecName: "config") pod "aec91505-d39a-41cf-90af-1593bcb02e68" (UID: "aec91505-d39a-41cf-90af-1593bcb02e68"). 
InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:17:38 crc kubenswrapper[4881]: I0121 11:17:38.060507 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/aec91505-d39a-41cf-90af-1593bcb02e68-dns-svc\") pod \"aec91505-d39a-41cf-90af-1593bcb02e68\" (UID: \"aec91505-d39a-41cf-90af-1593bcb02e68\") " Jan 21 11:17:38 crc kubenswrapper[4881]: I0121 11:17:38.062717 4881 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aec91505-d39a-41cf-90af-1593bcb02e68-config\") on node \"crc\" DevicePath \"\"" Jan 21 11:17:38 crc kubenswrapper[4881]: I0121 11:17:38.062751 4881 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ef08c5f4-dc05-46a7-bb1b-8039ba0117aa-config\") on node \"crc\" DevicePath \"\"" Jan 21 11:17:38 crc kubenswrapper[4881]: I0121 11:17:38.063512 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aec91505-d39a-41cf-90af-1593bcb02e68-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "aec91505-d39a-41cf-90af-1593bcb02e68" (UID: "aec91505-d39a-41cf-90af-1593bcb02e68"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:17:38 crc kubenswrapper[4881]: I0121 11:17:38.067650 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ef08c5f4-dc05-46a7-bb1b-8039ba0117aa-kube-api-access-z7nlz" (OuterVolumeSpecName: "kube-api-access-z7nlz") pod "ef08c5f4-dc05-46a7-bb1b-8039ba0117aa" (UID: "ef08c5f4-dc05-46a7-bb1b-8039ba0117aa"). InnerVolumeSpecName "kube-api-access-z7nlz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:17:38 crc kubenswrapper[4881]: E0121 11:17:38.070058 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovsdbserver-sb\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ovsdbserver-sb-0" podUID="c3884c64-25d6-42b5-a309-7eafa170719e" Jan 21 11:17:38 crc kubenswrapper[4881]: I0121 11:17:38.070564 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6fc7fbc9b9-cj7zb"] Jan 21 11:17:38 crc kubenswrapper[4881]: I0121 11:17:38.165689 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z7nlz\" (UniqueName: \"kubernetes.io/projected/ef08c5f4-dc05-46a7-bb1b-8039ba0117aa-kube-api-access-z7nlz\") on node \"crc\" DevicePath \"\"" Jan 21 11:17:38 crc kubenswrapper[4881]: I0121 11:17:38.165730 4881 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/aec91505-d39a-41cf-90af-1593bcb02e68-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 21 11:17:38 crc kubenswrapper[4881]: I0121 11:17:38.185187 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aec91505-d39a-41cf-90af-1593bcb02e68-kube-api-access-dnn2q" (OuterVolumeSpecName: "kube-api-access-dnn2q") pod "aec91505-d39a-41cf-90af-1593bcb02e68" (UID: "aec91505-d39a-41cf-90af-1593bcb02e68"). InnerVolumeSpecName "kube-api-access-dnn2q". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:17:38 crc kubenswrapper[4881]: I0121 11:17:38.269357 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dnn2q\" (UniqueName: \"kubernetes.io/projected/aec91505-d39a-41cf-90af-1593bcb02e68-kube-api-access-dnn2q\") on node \"crc\" DevicePath \"\"" Jan 21 11:17:38 crc kubenswrapper[4881]: I0121 11:17:38.631127 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"cd1973a5-773b-438b-aab7-709fb828324d","Type":"ContainerStarted","Data":"c99268feb4be13da4c28dce5e7226cf0ad72747240ed4a74ebf64b92b1589637"} Jan 21 11:17:38 crc kubenswrapper[4881]: I0121 11:17:38.635249 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"24136f67-aca3-4e40-b3c2-b36b7623475f","Type":"ContainerStarted","Data":"46db8c0233464dda2d06ac7ab4fb2083b484520aa4d757acf2a0f0cfdf7dba09"} Jan 21 11:17:38 crc kubenswrapper[4881]: I0121 11:17:38.635301 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"24136f67-aca3-4e40-b3c2-b36b7623475f","Type":"ContainerStarted","Data":"36a3d53d3d86579821540be368a11ff270a5f9c5df2f78eb854b7b4d9a92c5fc"} Jan 21 11:17:38 crc kubenswrapper[4881]: I0121 11:17:38.639361 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"c3884c64-25d6-42b5-a309-7eafa170719e","Type":"ContainerStarted","Data":"bef679e00f68571570a88bad8e19d777782851e71aebb0a71fcd128786dbe4c6"} Jan 21 11:17:38 crc kubenswrapper[4881]: E0121 11:17:38.640504 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovsdbserver-sb\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.182:5001/podified-master-centos10/openstack-ovn-sb-db-server:watcher_latest\\\"\"" pod="openstack/ovsdbserver-sb-0" podUID="c3884c64-25d6-42b5-a309-7eafa170719e" Jan 21 11:17:38 crc kubenswrapper[4881]: I0121 11:17:38.643250 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"197dd5bf-f68a-4d9d-b75c-de87a54ed46b","Type":"ContainerStarted","Data":"66e36374643a43e11b9a7ebef5758dd162f141744e75e5606bc7931a3eae58b2"} Jan 21 11:17:38 crc kubenswrapper[4881]: I0121 11:17:38.643296 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-66b6fdbd65-2qwr2" Jan 21 11:17:38 crc kubenswrapper[4881]: I0121 11:17:38.643377 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6557d744f-gt5cx" Jan 21 11:17:38 crc kubenswrapper[4881]: I0121 11:17:38.643538 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5cd6c77d8f-6z4pf" Jan 21 11:17:38 crc kubenswrapper[4881]: I0121 11:17:38.739706 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=5.046302884 podStartE2EDuration="51.739679685s" podCreationTimestamp="2026-01-21 11:16:47 +0000 UTC" firstStartedPulling="2026-01-21 11:16:51.109261447 +0000 UTC m=+1198.369217916" lastFinishedPulling="2026-01-21 11:17:37.802638248 +0000 UTC m=+1245.062594717" observedRunningTime="2026-01-21 11:17:38.736990827 +0000 UTC m=+1245.996947296" watchObservedRunningTime="2026-01-21 11:17:38.739679685 +0000 UTC m=+1245.999636154" Jan 21 11:17:38 crc kubenswrapper[4881]: I0121 11:17:38.909101 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-66b6fdbd65-2qwr2"] Jan 21 11:17:38 crc kubenswrapper[4881]: I0121 11:17:38.920700 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-66b6fdbd65-2qwr2"] Jan 21 11:17:38 crc kubenswrapper[4881]: I0121 11:17:38.971563 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6557d744f-gt5cx"] Jan 21 11:17:38 crc kubenswrapper[4881]: I0121 11:17:38.986401 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6557d744f-gt5cx"] Jan 21 11:17:39 crc kubenswrapper[4881]: I0121 11:17:39.003484 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5cd6c77d8f-6z4pf"] Jan 21 11:17:39 crc kubenswrapper[4881]: I0121 11:17:39.010057 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5cd6c77d8f-6z4pf"] Jan 21 11:17:39 crc kubenswrapper[4881]: I0121 11:17:39.332828 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5d59d9e0-8dd3-4bbd-ab3c-01e0e4a3b338" path="/var/lib/kubelet/pods/5d59d9e0-8dd3-4bbd-ab3c-01e0e4a3b338/volumes" Jan 21 11:17:39 crc kubenswrapper[4881]: I0121 11:17:39.335625 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aec91505-d39a-41cf-90af-1593bcb02e68" path="/var/lib/kubelet/pods/aec91505-d39a-41cf-90af-1593bcb02e68/volumes" Jan 21 11:17:39 crc kubenswrapper[4881]: I0121 11:17:39.337076 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eb0e6ce6-181c-4edb-b4b3-d169c41c63a8" path="/var/lib/kubelet/pods/eb0e6ce6-181c-4edb-b4b3-d169c41c63a8/volumes" Jan 21 11:17:39 crc kubenswrapper[4881]: I0121 11:17:39.339853 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ef08c5f4-dc05-46a7-bb1b-8039ba0117aa" path="/var/lib/kubelet/pods/ef08c5f4-dc05-46a7-bb1b-8039ba0117aa/volumes" Jan 21 11:17:39 crc kubenswrapper[4881]: I0121 11:17:39.376209 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Jan 21 11:17:39 crc kubenswrapper[4881]: E0121 11:17:39.654043 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovsdbserver-sb\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.182:5001/podified-master-centos10/openstack-ovn-sb-db-server:watcher_latest\\\"\"" pod="openstack/ovsdbserver-sb-0" podUID="c3884c64-25d6-42b5-a309-7eafa170719e" Jan 21 11:17:40 crc kubenswrapper[4881]: I0121 11:17:40.376382 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Jan 21 11:17:41 crc kubenswrapper[4881]: I0121 11:17:41.668123 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" 
event={"ID":"7960c16a-de64-4154-9072-aee49e3bd573","Type":"ContainerStarted","Data":"bf654632f9f8c849b39eb3984824a19d60064f46a9fcc4111fd748206bfe3c81"} Jan 21 11:17:41 crc kubenswrapper[4881]: I0121 11:17:41.670024 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"078c2368-b247-49d4-8723-fd93918e99b1","Type":"ContainerStarted","Data":"26f697deade0e9783aed3c09129f2f0589fbb10b53e3501c212b7fcc5f5b5d86"} Jan 21 11:17:42 crc kubenswrapper[4881]: I0121 11:17:42.681092 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-notifications-server-0" event={"ID":"44bcf219-3358-4596-9d1e-88a51c415266","Type":"ContainerStarted","Data":"49c33a525e9cb9bae99d4cbbbfd17980a01d8ffda81efc8033434da5404beb26"} Jan 21 11:17:42 crc kubenswrapper[4881]: I0121 11:17:42.683595 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"75733567-f2a6-4331-bdea-147126213437","Type":"ContainerStarted","Data":"3d2c36495c41eb6152a1fc9a05412fce52a5f353e0b59004227d5efed6039fb6"} Jan 21 11:17:42 crc kubenswrapper[4881]: I0121 11:17:42.685685 4881 generic.go:334] "Generic (PLEG): container finished" podID="9ff4a63e-40e5-4133-967e-9ba083f3603b" containerID="d08adb83e3199d21288d6a66e8b2fdb972f8aa4b701580661048ab458692f76e" exitCode=0 Jan 21 11:17:42 crc kubenswrapper[4881]: I0121 11:17:42.685754 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-2rtl8" event={"ID":"9ff4a63e-40e5-4133-967e-9ba083f3603b","Type":"ContainerDied","Data":"d08adb83e3199d21288d6a66e8b2fdb972f8aa4b701580661048ab458692f76e"} Jan 21 11:17:42 crc kubenswrapper[4881]: I0121 11:17:42.689115 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"f7e90972-9be1-4d3e-852e-e7f7df6e6623","Type":"ContainerStarted","Data":"b30e547e2506fcebf2f8ac627808ad3f0382510a160b2079a570164ee838adfc"} Jan 21 11:17:42 crc kubenswrapper[4881]: I0121 11:17:42.689299 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Jan 21 11:17:42 crc kubenswrapper[4881]: I0121 11:17:42.763347 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=6.719856925 podStartE2EDuration="1m0.763305656s" podCreationTimestamp="2026-01-21 11:16:42 +0000 UTC" firstStartedPulling="2026-01-21 11:16:44.403327444 +0000 UTC m=+1191.663283913" lastFinishedPulling="2026-01-21 11:17:38.446776175 +0000 UTC m=+1245.706732644" observedRunningTime="2026-01-21 11:17:42.761182443 +0000 UTC m=+1250.021138932" watchObservedRunningTime="2026-01-21 11:17:42.763305656 +0000 UTC m=+1250.023262145" Jan 21 11:17:43 crc kubenswrapper[4881]: I0121 11:17:43.429846 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Jan 21 11:17:43 crc kubenswrapper[4881]: I0121 11:17:43.473492 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Jan 21 11:17:43 crc kubenswrapper[4881]: I0121 11:17:43.709745 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-2rtl8" event={"ID":"9ff4a63e-40e5-4133-967e-9ba083f3603b","Type":"ContainerStarted","Data":"c71bf5326117e72b17dca906525ab6979082c71793baa1784d2c5afcb9955660"} Jan 21 11:17:43 crc kubenswrapper[4881]: I0121 11:17:43.709823 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-2rtl8" 
event={"ID":"9ff4a63e-40e5-4133-967e-9ba083f3603b","Type":"ContainerStarted","Data":"1f99aca2252816b539bcce6eac5a0cfde8f99abcbc456e54343721aa5860f099"} Jan 21 11:17:43 crc kubenswrapper[4881]: I0121 11:17:43.712617 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-2rtl8" Jan 21 11:17:43 crc kubenswrapper[4881]: I0121 11:17:43.712652 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-2rtl8" Jan 21 11:17:43 crc kubenswrapper[4881]: I0121 11:17:43.812016 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-2rtl8" podStartSLOduration=8.704535212 podStartE2EDuration="55.811991611s" podCreationTimestamp="2026-01-21 11:16:48 +0000 UTC" firstStartedPulling="2026-01-21 11:16:51.339840899 +0000 UTC m=+1198.599797368" lastFinishedPulling="2026-01-21 11:17:38.447297298 +0000 UTC m=+1245.707253767" observedRunningTime="2026-01-21 11:17:43.73543605 +0000 UTC m=+1250.995392519" watchObservedRunningTime="2026-01-21 11:17:43.811991611 +0000 UTC m=+1251.071948080" Jan 21 11:17:46 crc kubenswrapper[4881]: I0121 11:17:46.732163 4881 generic.go:334] "Generic (PLEG): container finished" podID="197dd5bf-f68a-4d9d-b75c-de87a54ed46b" containerID="66e36374643a43e11b9a7ebef5758dd162f141744e75e5606bc7931a3eae58b2" exitCode=0 Jan 21 11:17:46 crc kubenswrapper[4881]: I0121 11:17:46.732268 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"197dd5bf-f68a-4d9d-b75c-de87a54ed46b","Type":"ContainerDied","Data":"66e36374643a43e11b9a7ebef5758dd162f141744e75e5606bc7931a3eae58b2"} Jan 21 11:17:47 crc kubenswrapper[4881]: I0121 11:17:47.742214 4881 generic.go:334] "Generic (PLEG): container finished" podID="99aba8a6-cc58-43be-9607-8ae1fcb57257" containerID="1e57b157cf3ee5972a66bda532a4febde866d6c3d74c1e97f0eda2d339b8bfd2" exitCode=0 Jan 21 11:17:47 crc kubenswrapper[4881]: I0121 11:17:47.742291 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7457897f45-vkp6c" event={"ID":"99aba8a6-cc58-43be-9607-8ae1fcb57257","Type":"ContainerDied","Data":"1e57b157cf3ee5972a66bda532a4febde866d6c3d74c1e97f0eda2d339b8bfd2"} Jan 21 11:17:47 crc kubenswrapper[4881]: I0121 11:17:47.746264 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"197dd5bf-f68a-4d9d-b75c-de87a54ed46b","Type":"ContainerStarted","Data":"20fb37ae9dffc2e25ae633ff1ba434f72c1307a7af1496049c2520d4028c8da9"} Jan 21 11:17:47 crc kubenswrapper[4881]: I0121 11:17:47.749026 4881 generic.go:334] "Generic (PLEG): container finished" podID="cd1973a5-773b-438b-aab7-709fb828324d" containerID="c99268feb4be13da4c28dce5e7226cf0ad72747240ed4a74ebf64b92b1589637" exitCode=0 Jan 21 11:17:47 crc kubenswrapper[4881]: I0121 11:17:47.749069 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"cd1973a5-773b-438b-aab7-709fb828324d","Type":"ContainerDied","Data":"c99268feb4be13da4c28dce5e7226cf0ad72747240ed4a74ebf64b92b1589637"} Jan 21 11:17:47 crc kubenswrapper[4881]: I0121 11:17:47.816993 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=14.325754327 podStartE2EDuration="1m7.816967228s" podCreationTimestamp="2026-01-21 11:16:40 +0000 UTC" firstStartedPulling="2026-01-21 11:16:43.771286439 +0000 UTC m=+1191.031243068" lastFinishedPulling="2026-01-21 11:17:37.2624995 +0000 UTC 
m=+1244.522455969" observedRunningTime="2026-01-21 11:17:47.808171469 +0000 UTC m=+1255.068127948" watchObservedRunningTime="2026-01-21 11:17:47.816967228 +0000 UTC m=+1255.076923697" Jan 21 11:17:48 crc kubenswrapper[4881]: I0121 11:17:48.245183 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Jan 21 11:17:48 crc kubenswrapper[4881]: I0121 11:17:48.759856 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"cd1973a5-773b-438b-aab7-709fb828324d","Type":"ContainerStarted","Data":"df9cea89f7c13797a23ebce6211650407ff192590f3a5f152f0c4ad0510a66d9"} Jan 21 11:17:48 crc kubenswrapper[4881]: I0121 11:17:48.762314 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7457897f45-vkp6c" event={"ID":"99aba8a6-cc58-43be-9607-8ae1fcb57257","Type":"ContainerStarted","Data":"3b550ef95b5c642befe5d47915b7748fa9b72e7044ab0f6f21d753c37168b189"} Jan 21 11:17:48 crc kubenswrapper[4881]: I0121 11:17:48.763289 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7457897f45-vkp6c" Jan 21 11:17:48 crc kubenswrapper[4881]: I0121 11:17:48.792001 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=14.406460386 podStartE2EDuration="1m7.791984545s" podCreationTimestamp="2026-01-21 11:16:41 +0000 UTC" firstStartedPulling="2026-01-21 11:16:43.87696748 +0000 UTC m=+1191.136923949" lastFinishedPulling="2026-01-21 11:17:37.262491639 +0000 UTC m=+1244.522448108" observedRunningTime="2026-01-21 11:17:48.783857801 +0000 UTC m=+1256.043814260" watchObservedRunningTime="2026-01-21 11:17:48.791984545 +0000 UTC m=+1256.051941004" Jan 21 11:17:48 crc kubenswrapper[4881]: I0121 11:17:48.814099 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7457897f45-vkp6c" podStartSLOduration=3.546233435 podStartE2EDuration="1m10.814077356s" podCreationTimestamp="2026-01-21 11:16:38 +0000 UTC" firstStartedPulling="2026-01-21 11:16:40.112647669 +0000 UTC m=+1187.372604138" lastFinishedPulling="2026-01-21 11:17:47.38049159 +0000 UTC m=+1254.640448059" observedRunningTime="2026-01-21 11:17:48.808260031 +0000 UTC m=+1256.068216500" watchObservedRunningTime="2026-01-21 11:17:48.814077356 +0000 UTC m=+1256.074033835" Jan 21 11:17:49 crc kubenswrapper[4881]: I0121 11:17:49.773436 4881 generic.go:334] "Generic (PLEG): container finished" podID="75733567-f2a6-4331-bdea-147126213437" containerID="3d2c36495c41eb6152a1fc9a05412fce52a5f353e0b59004227d5efed6039fb6" exitCode=0 Jan 21 11:17:49 crc kubenswrapper[4881]: I0121 11:17:49.774879 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"75733567-f2a6-4331-bdea-147126213437","Type":"ContainerDied","Data":"3d2c36495c41eb6152a1fc9a05412fce52a5f353e0b59004227d5efed6039fb6"} Jan 21 11:17:50 crc kubenswrapper[4881]: I0121 11:17:50.795983 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-5dzhr" event={"ID":"b9bd229b-588d-477e-8501-cd976b539e3a","Type":"ContainerStarted","Data":"fb18542d1e8bd27716d9eec28470aaccf2304a790f5a134063b4326b705bf1f8"} Jan 21 11:17:50 crc kubenswrapper[4881]: I0121 11:17:50.829752 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-5dzhr" podStartSLOduration=-9223371977.025053 podStartE2EDuration="59.829722458s" 
podCreationTimestamp="2026-01-21 11:16:51 +0000 UTC" firstStartedPulling="2026-01-21 11:17:15.942306475 +0000 UTC m=+1223.202262944" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 11:17:50.825670527 +0000 UTC m=+1258.085627006" watchObservedRunningTime="2026-01-21 11:17:50.829722458 +0000 UTC m=+1258.089678927" Jan 21 11:17:51 crc kubenswrapper[4881]: I0121 11:17:51.456752 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7457897f45-vkp6c"] Jan 21 11:17:51 crc kubenswrapper[4881]: I0121 11:17:51.457012 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7457897f45-vkp6c" podUID="99aba8a6-cc58-43be-9607-8ae1fcb57257" containerName="dnsmasq-dns" containerID="cri-o://3b550ef95b5c642befe5d47915b7748fa9b72e7044ab0f6f21d753c37168b189" gracePeriod=10 Jan 21 11:17:51 crc kubenswrapper[4881]: I0121 11:17:51.518337 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5bbbc7b58c-8f8v7"] Jan 21 11:17:51 crc kubenswrapper[4881]: I0121 11:17:51.523749 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bbbc7b58c-8f8v7" Jan 21 11:17:51 crc kubenswrapper[4881]: I0121 11:17:51.528328 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Jan 21 11:17:51 crc kubenswrapper[4881]: I0121 11:17:51.556287 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5bbbc7b58c-8f8v7"] Jan 21 11:17:51 crc kubenswrapper[4881]: I0121 11:17:51.675276 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/efbfd001-4602-47b8-8c93-750ee3526e9e-ovsdbserver-nb\") pod \"dnsmasq-dns-5bbbc7b58c-8f8v7\" (UID: \"efbfd001-4602-47b8-8c93-750ee3526e9e\") " pod="openstack/dnsmasq-dns-5bbbc7b58c-8f8v7" Jan 21 11:17:51 crc kubenswrapper[4881]: I0121 11:17:51.675332 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/efbfd001-4602-47b8-8c93-750ee3526e9e-config\") pod \"dnsmasq-dns-5bbbc7b58c-8f8v7\" (UID: \"efbfd001-4602-47b8-8c93-750ee3526e9e\") " pod="openstack/dnsmasq-dns-5bbbc7b58c-8f8v7" Jan 21 11:17:51 crc kubenswrapper[4881]: I0121 11:17:51.675354 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/efbfd001-4602-47b8-8c93-750ee3526e9e-dns-svc\") pod \"dnsmasq-dns-5bbbc7b58c-8f8v7\" (UID: \"efbfd001-4602-47b8-8c93-750ee3526e9e\") " pod="openstack/dnsmasq-dns-5bbbc7b58c-8f8v7" Jan 21 11:17:51 crc kubenswrapper[4881]: I0121 11:17:51.675380 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-krc4s\" (UniqueName: \"kubernetes.io/projected/efbfd001-4602-47b8-8c93-750ee3526e9e-kube-api-access-krc4s\") pod \"dnsmasq-dns-5bbbc7b58c-8f8v7\" (UID: \"efbfd001-4602-47b8-8c93-750ee3526e9e\") " pod="openstack/dnsmasq-dns-5bbbc7b58c-8f8v7" Jan 21 11:17:51 crc kubenswrapper[4881]: I0121 11:17:51.675415 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/efbfd001-4602-47b8-8c93-750ee3526e9e-ovsdbserver-sb\") pod \"dnsmasq-dns-5bbbc7b58c-8f8v7\" (UID: \"efbfd001-4602-47b8-8c93-750ee3526e9e\") " 
pod="openstack/dnsmasq-dns-5bbbc7b58c-8f8v7" Jan 21 11:17:51 crc kubenswrapper[4881]: I0121 11:17:51.777397 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/efbfd001-4602-47b8-8c93-750ee3526e9e-ovsdbserver-nb\") pod \"dnsmasq-dns-5bbbc7b58c-8f8v7\" (UID: \"efbfd001-4602-47b8-8c93-750ee3526e9e\") " pod="openstack/dnsmasq-dns-5bbbc7b58c-8f8v7" Jan 21 11:17:51 crc kubenswrapper[4881]: I0121 11:17:51.777524 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/efbfd001-4602-47b8-8c93-750ee3526e9e-config\") pod \"dnsmasq-dns-5bbbc7b58c-8f8v7\" (UID: \"efbfd001-4602-47b8-8c93-750ee3526e9e\") " pod="openstack/dnsmasq-dns-5bbbc7b58c-8f8v7" Jan 21 11:17:51 crc kubenswrapper[4881]: I0121 11:17:51.777562 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/efbfd001-4602-47b8-8c93-750ee3526e9e-dns-svc\") pod \"dnsmasq-dns-5bbbc7b58c-8f8v7\" (UID: \"efbfd001-4602-47b8-8c93-750ee3526e9e\") " pod="openstack/dnsmasq-dns-5bbbc7b58c-8f8v7" Jan 21 11:17:51 crc kubenswrapper[4881]: I0121 11:17:51.777641 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-krc4s\" (UniqueName: \"kubernetes.io/projected/efbfd001-4602-47b8-8c93-750ee3526e9e-kube-api-access-krc4s\") pod \"dnsmasq-dns-5bbbc7b58c-8f8v7\" (UID: \"efbfd001-4602-47b8-8c93-750ee3526e9e\") " pod="openstack/dnsmasq-dns-5bbbc7b58c-8f8v7" Jan 21 11:17:51 crc kubenswrapper[4881]: I0121 11:17:51.777762 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/efbfd001-4602-47b8-8c93-750ee3526e9e-ovsdbserver-sb\") pod \"dnsmasq-dns-5bbbc7b58c-8f8v7\" (UID: \"efbfd001-4602-47b8-8c93-750ee3526e9e\") " pod="openstack/dnsmasq-dns-5bbbc7b58c-8f8v7" Jan 21 11:17:51 crc kubenswrapper[4881]: I0121 11:17:51.778289 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/efbfd001-4602-47b8-8c93-750ee3526e9e-ovsdbserver-nb\") pod \"dnsmasq-dns-5bbbc7b58c-8f8v7\" (UID: \"efbfd001-4602-47b8-8c93-750ee3526e9e\") " pod="openstack/dnsmasq-dns-5bbbc7b58c-8f8v7" Jan 21 11:17:51 crc kubenswrapper[4881]: I0121 11:17:51.778632 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/efbfd001-4602-47b8-8c93-750ee3526e9e-dns-svc\") pod \"dnsmasq-dns-5bbbc7b58c-8f8v7\" (UID: \"efbfd001-4602-47b8-8c93-750ee3526e9e\") " pod="openstack/dnsmasq-dns-5bbbc7b58c-8f8v7" Jan 21 11:17:51 crc kubenswrapper[4881]: I0121 11:17:51.779030 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/efbfd001-4602-47b8-8c93-750ee3526e9e-config\") pod \"dnsmasq-dns-5bbbc7b58c-8f8v7\" (UID: \"efbfd001-4602-47b8-8c93-750ee3526e9e\") " pod="openstack/dnsmasq-dns-5bbbc7b58c-8f8v7" Jan 21 11:17:51 crc kubenswrapper[4881]: I0121 11:17:51.779093 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/efbfd001-4602-47b8-8c93-750ee3526e9e-ovsdbserver-sb\") pod \"dnsmasq-dns-5bbbc7b58c-8f8v7\" (UID: \"efbfd001-4602-47b8-8c93-750ee3526e9e\") " pod="openstack/dnsmasq-dns-5bbbc7b58c-8f8v7" Jan 21 11:17:51 crc kubenswrapper[4881]: I0121 11:17:51.802047 4881 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-krc4s\" (UniqueName: \"kubernetes.io/projected/efbfd001-4602-47b8-8c93-750ee3526e9e-kube-api-access-krc4s\") pod \"dnsmasq-dns-5bbbc7b58c-8f8v7\" (UID: \"efbfd001-4602-47b8-8c93-750ee3526e9e\") " pod="openstack/dnsmasq-dns-5bbbc7b58c-8f8v7" Jan 21 11:17:51 crc kubenswrapper[4881]: I0121 11:17:51.824428 4881 generic.go:334] "Generic (PLEG): container finished" podID="99aba8a6-cc58-43be-9607-8ae1fcb57257" containerID="3b550ef95b5c642befe5d47915b7748fa9b72e7044ab0f6f21d753c37168b189" exitCode=0 Jan 21 11:17:51 crc kubenswrapper[4881]: I0121 11:17:51.824557 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7457897f45-vkp6c" event={"ID":"99aba8a6-cc58-43be-9607-8ae1fcb57257","Type":"ContainerDied","Data":"3b550ef95b5c642befe5d47915b7748fa9b72e7044ab0f6f21d753c37168b189"} Jan 21 11:17:51 crc kubenswrapper[4881]: I0121 11:17:51.835750 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"c3884c64-25d6-42b5-a309-7eafa170719e","Type":"ContainerStarted","Data":"344d4bc77e52408b60bf5a0ceb6757cad2bade731efde2d13b814ff370df019f"} Jan 21 11:17:51 crc kubenswrapper[4881]: I0121 11:17:51.841685 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fd8d879fc-flqh9" event={"ID":"42132c17-6a2d-48d1-a636-3eae7558d55c","Type":"ContainerStarted","Data":"b42c99eace0541ffcc9144f5cc0186de9eb99d26b014e66fda937ea6fd9eb8a8"} Jan 21 11:17:51 crc kubenswrapper[4881]: I0121 11:17:51.891278 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=8.748487912 podStartE2EDuration="1m1.891251976s" podCreationTimestamp="2026-01-21 11:16:50 +0000 UTC" firstStartedPulling="2026-01-21 11:16:58.381790658 +0000 UTC m=+1205.641747127" lastFinishedPulling="2026-01-21 11:17:51.524554722 +0000 UTC m=+1258.784511191" observedRunningTime="2026-01-21 11:17:51.866350093 +0000 UTC m=+1259.126306562" watchObservedRunningTime="2026-01-21 11:17:51.891251976 +0000 UTC m=+1259.151208445" Jan 21 11:17:51 crc kubenswrapper[4881]: I0121 11:17:51.916173 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7457897f45-vkp6c" Jan 21 11:17:51 crc kubenswrapper[4881]: I0121 11:17:51.986544 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5bbbc7b58c-8f8v7" Jan 21 11:17:52 crc kubenswrapper[4881]: I0121 11:17:52.090152 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4gf75\" (UniqueName: \"kubernetes.io/projected/99aba8a6-cc58-43be-9607-8ae1fcb57257-kube-api-access-4gf75\") pod \"99aba8a6-cc58-43be-9607-8ae1fcb57257\" (UID: \"99aba8a6-cc58-43be-9607-8ae1fcb57257\") " Jan 21 11:17:52 crc kubenswrapper[4881]: I0121 11:17:52.090322 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/99aba8a6-cc58-43be-9607-8ae1fcb57257-config\") pod \"99aba8a6-cc58-43be-9607-8ae1fcb57257\" (UID: \"99aba8a6-cc58-43be-9607-8ae1fcb57257\") " Jan 21 11:17:52 crc kubenswrapper[4881]: I0121 11:17:52.090382 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/99aba8a6-cc58-43be-9607-8ae1fcb57257-dns-svc\") pod \"99aba8a6-cc58-43be-9607-8ae1fcb57257\" (UID: \"99aba8a6-cc58-43be-9607-8ae1fcb57257\") " Jan 21 11:17:52 crc kubenswrapper[4881]: I0121 11:17:52.096027 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/99aba8a6-cc58-43be-9607-8ae1fcb57257-kube-api-access-4gf75" (OuterVolumeSpecName: "kube-api-access-4gf75") pod "99aba8a6-cc58-43be-9607-8ae1fcb57257" (UID: "99aba8a6-cc58-43be-9607-8ae1fcb57257"). InnerVolumeSpecName "kube-api-access-4gf75". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:17:52 crc kubenswrapper[4881]: I0121 11:17:52.103567 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Jan 21 11:17:52 crc kubenswrapper[4881]: I0121 11:17:52.103600 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Jan 21 11:17:52 crc kubenswrapper[4881]: I0121 11:17:52.151384 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/99aba8a6-cc58-43be-9607-8ae1fcb57257-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "99aba8a6-cc58-43be-9607-8ae1fcb57257" (UID: "99aba8a6-cc58-43be-9607-8ae1fcb57257"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:17:52 crc kubenswrapper[4881]: I0121 11:17:52.165461 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/99aba8a6-cc58-43be-9607-8ae1fcb57257-config" (OuterVolumeSpecName: "config") pod "99aba8a6-cc58-43be-9607-8ae1fcb57257" (UID: "99aba8a6-cc58-43be-9607-8ae1fcb57257"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:17:52 crc kubenswrapper[4881]: I0121 11:17:52.194078 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4gf75\" (UniqueName: \"kubernetes.io/projected/99aba8a6-cc58-43be-9607-8ae1fcb57257-kube-api-access-4gf75\") on node \"crc\" DevicePath \"\"" Jan 21 11:17:52 crc kubenswrapper[4881]: I0121 11:17:52.194106 4881 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/99aba8a6-cc58-43be-9607-8ae1fcb57257-config\") on node \"crc\" DevicePath \"\"" Jan 21 11:17:52 crc kubenswrapper[4881]: I0121 11:17:52.194117 4881 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/99aba8a6-cc58-43be-9607-8ae1fcb57257-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 21 11:17:52 crc kubenswrapper[4881]: I0121 11:17:52.333395 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Jan 21 11:17:52 crc kubenswrapper[4881]: I0121 11:17:52.333748 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Jan 21 11:17:52 crc kubenswrapper[4881]: I0121 11:17:52.564501 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5bbbc7b58c-8f8v7"] Jan 21 11:17:52 crc kubenswrapper[4881]: W0121 11:17:52.577430 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podefbfd001_4602_47b8_8c93_750ee3526e9e.slice/crio-0d2501cc7f927d66e1b692f30c322a8fe23a8259355cb2568f67f16617966fc3 WatchSource:0}: Error finding container 0d2501cc7f927d66e1b692f30c322a8fe23a8259355cb2568f67f16617966fc3: Status 404 returned error can't find the container with id 0d2501cc7f927d66e1b692f30c322a8fe23a8259355cb2568f67f16617966fc3 Jan 21 11:17:52 crc kubenswrapper[4881]: I0121 11:17:52.863169 4881 generic.go:334] "Generic (PLEG): container finished" podID="efbfd001-4602-47b8-8c93-750ee3526e9e" containerID="cdc12a4dbe29fc14fdd129b9c5c90a6d695123d10dd8715736366c33c786a70d" exitCode=0 Jan 21 11:17:52 crc kubenswrapper[4881]: I0121 11:17:52.863466 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bbbc7b58c-8f8v7" event={"ID":"efbfd001-4602-47b8-8c93-750ee3526e9e","Type":"ContainerDied","Data":"cdc12a4dbe29fc14fdd129b9c5c90a6d695123d10dd8715736366c33c786a70d"} Jan 21 11:17:52 crc kubenswrapper[4881]: I0121 11:17:52.863501 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bbbc7b58c-8f8v7" event={"ID":"efbfd001-4602-47b8-8c93-750ee3526e9e","Type":"ContainerStarted","Data":"0d2501cc7f927d66e1b692f30c322a8fe23a8259355cb2568f67f16617966fc3"} Jan 21 11:17:52 crc kubenswrapper[4881]: I0121 11:17:52.868147 4881 generic.go:334] "Generic (PLEG): container finished" podID="42132c17-6a2d-48d1-a636-3eae7558d55c" containerID="b42c99eace0541ffcc9144f5cc0186de9eb99d26b014e66fda937ea6fd9eb8a8" exitCode=0 Jan 21 11:17:52 crc kubenswrapper[4881]: I0121 11:17:52.868203 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fd8d879fc-flqh9" event={"ID":"42132c17-6a2d-48d1-a636-3eae7558d55c","Type":"ContainerDied","Data":"b42c99eace0541ffcc9144f5cc0186de9eb99d26b014e66fda937ea6fd9eb8a8"} Jan 21 11:17:52 crc kubenswrapper[4881]: I0121 11:17:52.879380 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7457897f45-vkp6c" Jan 21 11:17:52 crc kubenswrapper[4881]: I0121 11:17:52.879660 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7457897f45-vkp6c" event={"ID":"99aba8a6-cc58-43be-9607-8ae1fcb57257","Type":"ContainerDied","Data":"3ca12aa1fc94ac25d568434ebdd78b6fc24b1d504a1ce7b61d9ef849d50cf128"} Jan 21 11:17:52 crc kubenswrapper[4881]: I0121 11:17:52.879721 4881 scope.go:117] "RemoveContainer" containerID="3b550ef95b5c642befe5d47915b7748fa9b72e7044ab0f6f21d753c37168b189" Jan 21 11:17:52 crc kubenswrapper[4881]: I0121 11:17:52.913719 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Jan 21 11:17:52 crc kubenswrapper[4881]: I0121 11:17:52.914132 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Jan 21 11:17:52 crc kubenswrapper[4881]: I0121 11:17:52.954219 4881 scope.go:117] "RemoveContainer" containerID="1e57b157cf3ee5972a66bda532a4febde866d6c3d74c1e97f0eda2d339b8bfd2" Jan 21 11:17:52 crc kubenswrapper[4881]: I0121 11:17:52.955478 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7457897f45-vkp6c"] Jan 21 11:17:52 crc kubenswrapper[4881]: I0121 11:17:52.963317 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7457897f45-vkp6c"] Jan 21 11:17:53 crc kubenswrapper[4881]: I0121 11:17:53.336368 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="99aba8a6-cc58-43be-9607-8ae1fcb57257" path="/var/lib/kubelet/pods/99aba8a6-cc58-43be-9607-8ae1fcb57257/volumes" Jan 21 11:17:54 crc kubenswrapper[4881]: I0121 11:17:54.714290 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-fd8d879fc-flqh9"] Jan 21 11:17:54 crc kubenswrapper[4881]: I0121 11:17:54.750596 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-84cb884cf9-wmwx8"] Jan 21 11:17:54 crc kubenswrapper[4881]: E0121 11:17:54.751109 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="99aba8a6-cc58-43be-9607-8ae1fcb57257" containerName="dnsmasq-dns" Jan 21 11:17:54 crc kubenswrapper[4881]: I0121 11:17:54.751133 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="99aba8a6-cc58-43be-9607-8ae1fcb57257" containerName="dnsmasq-dns" Jan 21 11:17:54 crc kubenswrapper[4881]: E0121 11:17:54.751176 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="99aba8a6-cc58-43be-9607-8ae1fcb57257" containerName="init" Jan 21 11:17:54 crc kubenswrapper[4881]: I0121 11:17:54.751185 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="99aba8a6-cc58-43be-9607-8ae1fcb57257" containerName="init" Jan 21 11:17:54 crc kubenswrapper[4881]: I0121 11:17:54.751389 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="99aba8a6-cc58-43be-9607-8ae1fcb57257" containerName="dnsmasq-dns" Jan 21 11:17:54 crc kubenswrapper[4881]: I0121 11:17:54.752510 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-84cb884cf9-wmwx8" Jan 21 11:17:54 crc kubenswrapper[4881]: I0121 11:17:54.767369 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-84cb884cf9-wmwx8"] Jan 21 11:17:54 crc kubenswrapper[4881]: I0121 11:17:54.882863 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-45wlj\" (UniqueName: \"kubernetes.io/projected/62435f30-e8fc-4fcd-8b96-4a604439965e-kube-api-access-45wlj\") pod \"dnsmasq-dns-84cb884cf9-wmwx8\" (UID: \"62435f30-e8fc-4fcd-8b96-4a604439965e\") " pod="openstack/dnsmasq-dns-84cb884cf9-wmwx8" Jan 21 11:17:54 crc kubenswrapper[4881]: I0121 11:17:54.882947 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/62435f30-e8fc-4fcd-8b96-4a604439965e-dns-svc\") pod \"dnsmasq-dns-84cb884cf9-wmwx8\" (UID: \"62435f30-e8fc-4fcd-8b96-4a604439965e\") " pod="openstack/dnsmasq-dns-84cb884cf9-wmwx8" Jan 21 11:17:54 crc kubenswrapper[4881]: I0121 11:17:54.883007 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/62435f30-e8fc-4fcd-8b96-4a604439965e-ovsdbserver-sb\") pod \"dnsmasq-dns-84cb884cf9-wmwx8\" (UID: \"62435f30-e8fc-4fcd-8b96-4a604439965e\") " pod="openstack/dnsmasq-dns-84cb884cf9-wmwx8" Jan 21 11:17:54 crc kubenswrapper[4881]: I0121 11:17:54.883033 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/62435f30-e8fc-4fcd-8b96-4a604439965e-config\") pod \"dnsmasq-dns-84cb884cf9-wmwx8\" (UID: \"62435f30-e8fc-4fcd-8b96-4a604439965e\") " pod="openstack/dnsmasq-dns-84cb884cf9-wmwx8" Jan 21 11:17:54 crc kubenswrapper[4881]: I0121 11:17:54.883066 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/62435f30-e8fc-4fcd-8b96-4a604439965e-ovsdbserver-nb\") pod \"dnsmasq-dns-84cb884cf9-wmwx8\" (UID: \"62435f30-e8fc-4fcd-8b96-4a604439965e\") " pod="openstack/dnsmasq-dns-84cb884cf9-wmwx8" Jan 21 11:17:54 crc kubenswrapper[4881]: I0121 11:17:54.912754 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fd8d879fc-flqh9" event={"ID":"42132c17-6a2d-48d1-a636-3eae7558d55c","Type":"ContainerStarted","Data":"9f94ea318cabd7d5a85bae60436f7fbbd182561901dd4e1123c0ce68a86bd03b"} Jan 21 11:17:54 crc kubenswrapper[4881]: I0121 11:17:54.913355 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-fd8d879fc-flqh9" Jan 21 11:17:54 crc kubenswrapper[4881]: I0121 11:17:54.913525 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-fd8d879fc-flqh9" podUID="42132c17-6a2d-48d1-a636-3eae7558d55c" containerName="dnsmasq-dns" containerID="cri-o://9f94ea318cabd7d5a85bae60436f7fbbd182561901dd4e1123c0ce68a86bd03b" gracePeriod=10 Jan 21 11:17:54 crc kubenswrapper[4881]: I0121 11:17:54.918954 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"c5b6c25e-e882-4ea4-a284-6f55bfe75093","Type":"ContainerStarted","Data":"af06053084a285bc01330cffd9858a387580ee179dad2789e77044a776e5acf8"} Jan 21 11:17:54 crc kubenswrapper[4881]: I0121 11:17:54.919149 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/kube-state-metrics-0" Jan 21 11:17:54 crc kubenswrapper[4881]: I0121 11:17:54.923074 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-s642n" event={"ID":"256e0b4a-baac-415c-94c6-09f08fa09c7c","Type":"ContainerStarted","Data":"6c88c5d2d2b14c7b78f92f6f0ad1feaa59a553b6ad9d1babd50678927c694980"} Jan 21 11:17:54 crc kubenswrapper[4881]: I0121 11:17:54.923615 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-s642n" Jan 21 11:17:54 crc kubenswrapper[4881]: I0121 11:17:54.932302 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bbbc7b58c-8f8v7" event={"ID":"efbfd001-4602-47b8-8c93-750ee3526e9e","Type":"ContainerStarted","Data":"459e19bc99c44fd2c891c741bcf902ef1564b6013c62bfcf04dec268218723e7"} Jan 21 11:17:54 crc kubenswrapper[4881]: I0121 11:17:54.933524 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5bbbc7b58c-8f8v7" Jan 21 11:17:54 crc kubenswrapper[4881]: I0121 11:17:54.956680 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-fd8d879fc-flqh9" podStartSLOduration=-9223371973.898119 podStartE2EDuration="1m2.956657s" podCreationTimestamp="2026-01-21 11:16:52 +0000 UTC" firstStartedPulling="2026-01-21 11:17:15.958209002 +0000 UTC m=+1223.218165471" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 11:17:54.947653135 +0000 UTC m=+1262.207609604" watchObservedRunningTime="2026-01-21 11:17:54.956657 +0000 UTC m=+1262.216613469" Jan 21 11:17:54 crc kubenswrapper[4881]: I0121 11:17:54.981060 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5bbbc7b58c-8f8v7" podStartSLOduration=3.98103652 podStartE2EDuration="3.98103652s" podCreationTimestamp="2026-01-21 11:17:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 11:17:54.973166643 +0000 UTC m=+1262.233123112" watchObservedRunningTime="2026-01-21 11:17:54.98103652 +0000 UTC m=+1262.240992989" Jan 21 11:17:54 crc kubenswrapper[4881]: I0121 11:17:54.985230 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/62435f30-e8fc-4fcd-8b96-4a604439965e-config\") pod \"dnsmasq-dns-84cb884cf9-wmwx8\" (UID: \"62435f30-e8fc-4fcd-8b96-4a604439965e\") " pod="openstack/dnsmasq-dns-84cb884cf9-wmwx8" Jan 21 11:17:54 crc kubenswrapper[4881]: I0121 11:17:54.985316 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/62435f30-e8fc-4fcd-8b96-4a604439965e-ovsdbserver-nb\") pod \"dnsmasq-dns-84cb884cf9-wmwx8\" (UID: \"62435f30-e8fc-4fcd-8b96-4a604439965e\") " pod="openstack/dnsmasq-dns-84cb884cf9-wmwx8" Jan 21 11:17:54 crc kubenswrapper[4881]: I0121 11:17:54.985446 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-45wlj\" (UniqueName: \"kubernetes.io/projected/62435f30-e8fc-4fcd-8b96-4a604439965e-kube-api-access-45wlj\") pod \"dnsmasq-dns-84cb884cf9-wmwx8\" (UID: \"62435f30-e8fc-4fcd-8b96-4a604439965e\") " pod="openstack/dnsmasq-dns-84cb884cf9-wmwx8" Jan 21 11:17:54 crc kubenswrapper[4881]: I0121 11:17:54.985525 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/62435f30-e8fc-4fcd-8b96-4a604439965e-dns-svc\") pod \"dnsmasq-dns-84cb884cf9-wmwx8\" (UID: \"62435f30-e8fc-4fcd-8b96-4a604439965e\") " pod="openstack/dnsmasq-dns-84cb884cf9-wmwx8" Jan 21 11:17:54 crc kubenswrapper[4881]: I0121 11:17:54.985619 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/62435f30-e8fc-4fcd-8b96-4a604439965e-ovsdbserver-sb\") pod \"dnsmasq-dns-84cb884cf9-wmwx8\" (UID: \"62435f30-e8fc-4fcd-8b96-4a604439965e\") " pod="openstack/dnsmasq-dns-84cb884cf9-wmwx8" Jan 21 11:17:54 crc kubenswrapper[4881]: I0121 11:17:54.986693 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/62435f30-e8fc-4fcd-8b96-4a604439965e-dns-svc\") pod \"dnsmasq-dns-84cb884cf9-wmwx8\" (UID: \"62435f30-e8fc-4fcd-8b96-4a604439965e\") " pod="openstack/dnsmasq-dns-84cb884cf9-wmwx8" Jan 21 11:17:54 crc kubenswrapper[4881]: I0121 11:17:54.986754 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/62435f30-e8fc-4fcd-8b96-4a604439965e-ovsdbserver-sb\") pod \"dnsmasq-dns-84cb884cf9-wmwx8\" (UID: \"62435f30-e8fc-4fcd-8b96-4a604439965e\") " pod="openstack/dnsmasq-dns-84cb884cf9-wmwx8" Jan 21 11:17:54 crc kubenswrapper[4881]: I0121 11:17:54.986760 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/62435f30-e8fc-4fcd-8b96-4a604439965e-ovsdbserver-nb\") pod \"dnsmasq-dns-84cb884cf9-wmwx8\" (UID: \"62435f30-e8fc-4fcd-8b96-4a604439965e\") " pod="openstack/dnsmasq-dns-84cb884cf9-wmwx8" Jan 21 11:17:54 crc kubenswrapper[4881]: I0121 11:17:54.986909 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/62435f30-e8fc-4fcd-8b96-4a604439965e-config\") pod \"dnsmasq-dns-84cb884cf9-wmwx8\" (UID: \"62435f30-e8fc-4fcd-8b96-4a604439965e\") " pod="openstack/dnsmasq-dns-84cb884cf9-wmwx8" Jan 21 11:17:54 crc kubenswrapper[4881]: I0121 11:17:54.995422 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=4.491627491 podStartE2EDuration="1m10.995403659s" podCreationTimestamp="2026-01-21 11:16:44 +0000 UTC" firstStartedPulling="2026-01-21 11:16:46.235993682 +0000 UTC m=+1193.495950151" lastFinishedPulling="2026-01-21 11:17:52.73976985 +0000 UTC m=+1259.999726319" observedRunningTime="2026-01-21 11:17:54.990135787 +0000 UTC m=+1262.250092256" watchObservedRunningTime="2026-01-21 11:17:54.995403659 +0000 UTC m=+1262.255360128" Jan 21 11:17:54 crc kubenswrapper[4881]: I0121 11:17:54.999281 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Jan 21 11:17:55 crc kubenswrapper[4881]: I0121 11:17:55.011424 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-45wlj\" (UniqueName: \"kubernetes.io/projected/62435f30-e8fc-4fcd-8b96-4a604439965e-kube-api-access-45wlj\") pod \"dnsmasq-dns-84cb884cf9-wmwx8\" (UID: \"62435f30-e8fc-4fcd-8b96-4a604439965e\") " pod="openstack/dnsmasq-dns-84cb884cf9-wmwx8" Jan 21 11:17:55 crc kubenswrapper[4881]: I0121 11:17:55.015972 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-s642n" podStartSLOduration=5.307567563 podStartE2EDuration="1m7.015953793s" podCreationTimestamp="2026-01-21 
11:16:48 +0000 UTC" firstStartedPulling="2026-01-21 11:16:50.807472635 +0000 UTC m=+1198.067429114" lastFinishedPulling="2026-01-21 11:17:52.515858875 +0000 UTC m=+1259.775815344" observedRunningTime="2026-01-21 11:17:55.011737037 +0000 UTC m=+1262.271693506" watchObservedRunningTime="2026-01-21 11:17:55.015953793 +0000 UTC m=+1262.275910262" Jan 21 11:17:55 crc kubenswrapper[4881]: I0121 11:17:55.079291 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-84cb884cf9-wmwx8" Jan 21 11:17:55 crc kubenswrapper[4881]: I0121 11:17:55.164134 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Jan 21 11:17:55 crc kubenswrapper[4881]: I0121 11:17:55.405557 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Jan 21 11:17:55 crc kubenswrapper[4881]: I0121 11:17:55.511978 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-fd8d879fc-flqh9" Jan 21 11:17:55 crc kubenswrapper[4881]: I0121 11:17:55.613558 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-84cb884cf9-wmwx8"] Jan 21 11:17:55 crc kubenswrapper[4881]: I0121 11:17:55.628490 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/42132c17-6a2d-48d1-a636-3eae7558d55c-dns-svc\") pod \"42132c17-6a2d-48d1-a636-3eae7558d55c\" (UID: \"42132c17-6a2d-48d1-a636-3eae7558d55c\") " Jan 21 11:17:55 crc kubenswrapper[4881]: I0121 11:17:55.628596 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/42132c17-6a2d-48d1-a636-3eae7558d55c-ovsdbserver-nb\") pod \"42132c17-6a2d-48d1-a636-3eae7558d55c\" (UID: \"42132c17-6a2d-48d1-a636-3eae7558d55c\") " Jan 21 11:17:55 crc kubenswrapper[4881]: I0121 11:17:55.628665 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4lhq\" (UniqueName: \"kubernetes.io/projected/42132c17-6a2d-48d1-a636-3eae7558d55c-kube-api-access-x4lhq\") pod \"42132c17-6a2d-48d1-a636-3eae7558d55c\" (UID: \"42132c17-6a2d-48d1-a636-3eae7558d55c\") " Jan 21 11:17:55 crc kubenswrapper[4881]: I0121 11:17:55.628734 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/42132c17-6a2d-48d1-a636-3eae7558d55c-config\") pod \"42132c17-6a2d-48d1-a636-3eae7558d55c\" (UID: \"42132c17-6a2d-48d1-a636-3eae7558d55c\") " Jan 21 11:17:55 crc kubenswrapper[4881]: I0121 11:17:55.640208 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42132c17-6a2d-48d1-a636-3eae7558d55c-kube-api-access-x4lhq" (OuterVolumeSpecName: "kube-api-access-x4lhq") pod "42132c17-6a2d-48d1-a636-3eae7558d55c" (UID: "42132c17-6a2d-48d1-a636-3eae7558d55c"). InnerVolumeSpecName "kube-api-access-x4lhq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:17:55 crc kubenswrapper[4881]: I0121 11:17:55.696461 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/42132c17-6a2d-48d1-a636-3eae7558d55c-config" (OuterVolumeSpecName: "config") pod "42132c17-6a2d-48d1-a636-3eae7558d55c" (UID: "42132c17-6a2d-48d1-a636-3eae7558d55c"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:17:55 crc kubenswrapper[4881]: I0121 11:17:55.699954 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/42132c17-6a2d-48d1-a636-3eae7558d55c-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "42132c17-6a2d-48d1-a636-3eae7558d55c" (UID: "42132c17-6a2d-48d1-a636-3eae7558d55c"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:17:55 crc kubenswrapper[4881]: I0121 11:17:55.702174 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/42132c17-6a2d-48d1-a636-3eae7558d55c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "42132c17-6a2d-48d1-a636-3eae7558d55c" (UID: "42132c17-6a2d-48d1-a636-3eae7558d55c"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:17:55 crc kubenswrapper[4881]: I0121 11:17:55.730483 4881 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/42132c17-6a2d-48d1-a636-3eae7558d55c-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 21 11:17:55 crc kubenswrapper[4881]: I0121 11:17:55.730521 4881 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/42132c17-6a2d-48d1-a636-3eae7558d55c-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 21 11:17:55 crc kubenswrapper[4881]: I0121 11:17:55.730533 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4lhq\" (UniqueName: \"kubernetes.io/projected/42132c17-6a2d-48d1-a636-3eae7558d55c-kube-api-access-x4lhq\") on node \"crc\" DevicePath \"\"" Jan 21 11:17:55 crc kubenswrapper[4881]: I0121 11:17:55.730543 4881 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/42132c17-6a2d-48d1-a636-3eae7558d55c-config\") on node \"crc\" DevicePath \"\"" Jan 21 11:17:55 crc kubenswrapper[4881]: I0121 11:17:55.879225 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Jan 21 11:17:55 crc kubenswrapper[4881]: E0121 11:17:55.879895 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42132c17-6a2d-48d1-a636-3eae7558d55c" containerName="dnsmasq-dns" Jan 21 11:17:55 crc kubenswrapper[4881]: I0121 11:17:55.880002 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="42132c17-6a2d-48d1-a636-3eae7558d55c" containerName="dnsmasq-dns" Jan 21 11:17:55 crc kubenswrapper[4881]: E0121 11:17:55.880074 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42132c17-6a2d-48d1-a636-3eae7558d55c" containerName="init" Jan 21 11:17:55 crc kubenswrapper[4881]: I0121 11:17:55.880135 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="42132c17-6a2d-48d1-a636-3eae7558d55c" containerName="init" Jan 21 11:17:55 crc kubenswrapper[4881]: I0121 11:17:55.880348 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="42132c17-6a2d-48d1-a636-3eae7558d55c" containerName="dnsmasq-dns" Jan 21 11:17:55 crc kubenswrapper[4881]: I0121 11:17:55.893197 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Jan 21 11:17:55 crc kubenswrapper[4881]: I0121 11:17:55.895696 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-7r2bh" Jan 21 11:17:55 crc kubenswrapper[4881]: I0121 11:17:55.895846 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Jan 21 11:17:55 crc kubenswrapper[4881]: I0121 11:17:55.896722 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Jan 21 11:17:55 crc kubenswrapper[4881]: I0121 11:17:55.896847 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Jan 21 11:17:55 crc kubenswrapper[4881]: I0121 11:17:55.933405 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Jan 21 11:17:55 crc kubenswrapper[4881]: I0121 11:17:55.972092 4881 generic.go:334] "Generic (PLEG): container finished" podID="42132c17-6a2d-48d1-a636-3eae7558d55c" containerID="9f94ea318cabd7d5a85bae60436f7fbbd182561901dd4e1123c0ce68a86bd03b" exitCode=0 Jan 21 11:17:55 crc kubenswrapper[4881]: I0121 11:17:55.972167 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fd8d879fc-flqh9" event={"ID":"42132c17-6a2d-48d1-a636-3eae7558d55c","Type":"ContainerDied","Data":"9f94ea318cabd7d5a85bae60436f7fbbd182561901dd4e1123c0ce68a86bd03b"} Jan 21 11:17:55 crc kubenswrapper[4881]: I0121 11:17:55.972199 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fd8d879fc-flqh9" event={"ID":"42132c17-6a2d-48d1-a636-3eae7558d55c","Type":"ContainerDied","Data":"0a92f372c9af6d73af85424fa74f5bca2b7445ea9a9d2271fd330b7797ed5b0d"} Jan 21 11:17:55 crc kubenswrapper[4881]: I0121 11:17:55.972220 4881 scope.go:117] "RemoveContainer" containerID="9f94ea318cabd7d5a85bae60436f7fbbd182561901dd4e1123c0ce68a86bd03b" Jan 21 11:17:55 crc kubenswrapper[4881]: I0121 11:17:55.973297 4881 util.go:48] "No ready sandbox for pod can be found. 
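
The "Generic (PLEG)" lines mark where the pod lifecycle event generator's relist notices that a container changed state and hands the sync loop a ContainerDied event carrying the exit code; that is what drives the teardown of dnsmasq-dns-fd8d879fc-flqh9 here. A heavily simplified sketch of that event shape and dispatch, using our own types rather than the kubelet's:

package main

import "fmt"

type EventType string

const (
	ContainerStarted EventType = "ContainerStarted"
	ContainerDied    EventType = "ContainerDied"
)

type PodLifecycleEvent struct {
	PodUID      string
	Type        EventType
	ContainerID string
}

// consume drains relist events and triggers a per-pod sync, the way the
// "SyncLoop (PLEG): event for pod" entries above are dispatched.
func consume(events <-chan PodLifecycleEvent) {
	for ev := range events {
		switch ev.Type {
		case ContainerDied:
			fmt.Printf("pod %s: container %s died, syncing\n", ev.PodUID, ev.ContainerID)
		case ContainerStarted:
			fmt.Printf("pod %s: container %s started\n", ev.PodUID, ev.ContainerID)
		}
	}
}

func main() {
	ch := make(chan PodLifecycleEvent, 1)
	ch <- PodLifecycleEvent{PodUID: "42132c17-6a2d-48d1-a636-3eae7558d55c", Type: ContainerDied, ContainerID: "9f94ea318cab"}
	close(ch)
	consume(ch)
}
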
Need to start a new one" pod="openstack/dnsmasq-dns-fd8d879fc-flqh9" Jan 21 11:17:56 crc kubenswrapper[4881]: I0121 11:17:55.995097 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84cb884cf9-wmwx8" event={"ID":"62435f30-e8fc-4fcd-8b96-4a604439965e","Type":"ContainerStarted","Data":"44f80926337efad13c65101fd501f43ed3467cedbf9bc0293c7241abb38a34e2"} Jan 21 11:17:56 crc kubenswrapper[4881]: I0121 11:17:56.039221 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"swift-storage-0\" (UID: \"eafb725b-4d8c-44b6-8966-4c611d4897d8\") " pod="openstack/swift-storage-0" Jan 21 11:17:56 crc kubenswrapper[4881]: I0121 11:17:56.039339 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/eafb725b-4d8c-44b6-8966-4c611d4897d8-cache\") pod \"swift-storage-0\" (UID: \"eafb725b-4d8c-44b6-8966-4c611d4897d8\") " pod="openstack/swift-storage-0" Jan 21 11:17:56 crc kubenswrapper[4881]: I0121 11:17:56.039362 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/eafb725b-4d8c-44b6-8966-4c611d4897d8-etc-swift\") pod \"swift-storage-0\" (UID: \"eafb725b-4d8c-44b6-8966-4c611d4897d8\") " pod="openstack/swift-storage-0" Jan 21 11:17:56 crc kubenswrapper[4881]: I0121 11:17:56.039438 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/eafb725b-4d8c-44b6-8966-4c611d4897d8-lock\") pod \"swift-storage-0\" (UID: \"eafb725b-4d8c-44b6-8966-4c611d4897d8\") " pod="openstack/swift-storage-0" Jan 21 11:17:56 crc kubenswrapper[4881]: I0121 11:17:56.039462 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mgc7f\" (UniqueName: \"kubernetes.io/projected/eafb725b-4d8c-44b6-8966-4c611d4897d8-kube-api-access-mgc7f\") pod \"swift-storage-0\" (UID: \"eafb725b-4d8c-44b6-8966-4c611d4897d8\") " pod="openstack/swift-storage-0" Jan 21 11:17:56 crc kubenswrapper[4881]: I0121 11:17:56.079920 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-fd8d879fc-flqh9"] Jan 21 11:17:56 crc kubenswrapper[4881]: I0121 11:17:56.086688 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-fd8d879fc-flqh9"] Jan 21 11:17:56 crc kubenswrapper[4881]: I0121 11:17:56.141445 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/eafb725b-4d8c-44b6-8966-4c611d4897d8-cache\") pod \"swift-storage-0\" (UID: \"eafb725b-4d8c-44b6-8966-4c611d4897d8\") " pod="openstack/swift-storage-0" Jan 21 11:17:56 crc kubenswrapper[4881]: I0121 11:17:56.141502 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/eafb725b-4d8c-44b6-8966-4c611d4897d8-etc-swift\") pod \"swift-storage-0\" (UID: \"eafb725b-4d8c-44b6-8966-4c611d4897d8\") " pod="openstack/swift-storage-0" Jan 21 11:17:56 crc kubenswrapper[4881]: E0121 11:17:56.141667 4881 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 21 11:17:56 crc kubenswrapper[4881]: E0121 11:17:56.141683 4881 projected.go:194] Error preparing data for 
projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 21 11:17:56 crc kubenswrapper[4881]: I0121 11:17:56.141686 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/eafb725b-4d8c-44b6-8966-4c611d4897d8-lock\") pod \"swift-storage-0\" (UID: \"eafb725b-4d8c-44b6-8966-4c611d4897d8\") " pod="openstack/swift-storage-0" Jan 21 11:17:56 crc kubenswrapper[4881]: I0121 11:17:56.141707 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mgc7f\" (UniqueName: \"kubernetes.io/projected/eafb725b-4d8c-44b6-8966-4c611d4897d8-kube-api-access-mgc7f\") pod \"swift-storage-0\" (UID: \"eafb725b-4d8c-44b6-8966-4c611d4897d8\") " pod="openstack/swift-storage-0" Jan 21 11:17:56 crc kubenswrapper[4881]: E0121 11:17:56.141728 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/eafb725b-4d8c-44b6-8966-4c611d4897d8-etc-swift podName:eafb725b-4d8c-44b6-8966-4c611d4897d8 nodeName:}" failed. No retries permitted until 2026-01-21 11:17:56.641710905 +0000 UTC m=+1263.901667374 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/eafb725b-4d8c-44b6-8966-4c611d4897d8-etc-swift") pod "swift-storage-0" (UID: "eafb725b-4d8c-44b6-8966-4c611d4897d8") : configmap "swift-ring-files" not found Jan 21 11:17:56 crc kubenswrapper[4881]: I0121 11:17:56.141839 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"swift-storage-0\" (UID: \"eafb725b-4d8c-44b6-8966-4c611d4897d8\") " pod="openstack/swift-storage-0" Jan 21 11:17:56 crc kubenswrapper[4881]: I0121 11:17:56.142034 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/eafb725b-4d8c-44b6-8966-4c611d4897d8-cache\") pod \"swift-storage-0\" (UID: \"eafb725b-4d8c-44b6-8966-4c611d4897d8\") " pod="openstack/swift-storage-0" Jan 21 11:17:56 crc kubenswrapper[4881]: I0121 11:17:56.142604 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/eafb725b-4d8c-44b6-8966-4c611d4897d8-lock\") pod \"swift-storage-0\" (UID: \"eafb725b-4d8c-44b6-8966-4c611d4897d8\") " pod="openstack/swift-storage-0" Jan 21 11:17:56 crc kubenswrapper[4881]: I0121 11:17:56.143261 4881 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"swift-storage-0\" (UID: \"eafb725b-4d8c-44b6-8966-4c611d4897d8\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/swift-storage-0" Jan 21 11:17:56 crc kubenswrapper[4881]: I0121 11:17:56.175011 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mgc7f\" (UniqueName: \"kubernetes.io/projected/eafb725b-4d8c-44b6-8966-4c611d4897d8-kube-api-access-mgc7f\") pod \"swift-storage-0\" (UID: \"eafb725b-4d8c-44b6-8966-4c611d4897d8\") " pod="openstack/swift-storage-0" Jan 21 11:17:56 crc kubenswrapper[4881]: I0121 11:17:56.186033 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"swift-storage-0\" (UID: \"eafb725b-4d8c-44b6-8966-4c611d4897d8\") " pod="openstack/swift-storage-0" Jan 21 11:17:56 
crc kubenswrapper[4881]: I0121 11:17:56.370874 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-v4hkf"] Jan 21 11:17:56 crc kubenswrapper[4881]: I0121 11:17:56.372394 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-v4hkf" Jan 21 11:17:56 crc kubenswrapper[4881]: I0121 11:17:56.375195 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Jan 21 11:17:56 crc kubenswrapper[4881]: I0121 11:17:56.375329 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Jan 21 11:17:56 crc kubenswrapper[4881]: I0121 11:17:56.375287 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Jan 21 11:17:56 crc kubenswrapper[4881]: I0121 11:17:56.411235 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-v4hkf"] Jan 21 11:17:56 crc kubenswrapper[4881]: E0121 11:17:56.412369 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[combined-ca-bundle dispersionconf etc-swift kube-api-access-cxpzt ring-data-devices scripts swiftconf], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/swift-ring-rebalance-v4hkf" podUID="7bb59cc6-16e4-4ecf-ab54-d194a079403e" Jan 21 11:17:56 crc kubenswrapper[4881]: I0121 11:17:56.421999 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-j29v8"] Jan 21 11:17:56 crc kubenswrapper[4881]: I0121 11:17:56.424242 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-j29v8" Jan 21 11:17:56 crc kubenswrapper[4881]: I0121 11:17:56.434997 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-j29v8"] Jan 21 11:17:56 crc kubenswrapper[4881]: I0121 11:17:56.448324 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/7bb59cc6-16e4-4ecf-ab54-d194a079403e-swiftconf\") pod \"swift-ring-rebalance-v4hkf\" (UID: \"7bb59cc6-16e4-4ecf-ab54-d194a079403e\") " pod="openstack/swift-ring-rebalance-v4hkf" Jan 21 11:17:56 crc kubenswrapper[4881]: I0121 11:17:56.448375 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7bb59cc6-16e4-4ecf-ab54-d194a079403e-scripts\") pod \"swift-ring-rebalance-v4hkf\" (UID: \"7bb59cc6-16e4-4ecf-ab54-d194a079403e\") " pod="openstack/swift-ring-rebalance-v4hkf" Jan 21 11:17:56 crc kubenswrapper[4881]: I0121 11:17:56.448430 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/7bb59cc6-16e4-4ecf-ab54-d194a079403e-dispersionconf\") pod \"swift-ring-rebalance-v4hkf\" (UID: \"7bb59cc6-16e4-4ecf-ab54-d194a079403e\") " pod="openstack/swift-ring-rebalance-v4hkf" Jan 21 11:17:56 crc kubenswrapper[4881]: I0121 11:17:56.448445 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7bb59cc6-16e4-4ecf-ab54-d194a079403e-combined-ca-bundle\") pod \"swift-ring-rebalance-v4hkf\" (UID: \"7bb59cc6-16e4-4ecf-ab54-d194a079403e\") " pod="openstack/swift-ring-rebalance-v4hkf" Jan 21 11:17:56 crc kubenswrapper[4881]: I0121 11:17:56.448472 
4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/7bb59cc6-16e4-4ecf-ab54-d194a079403e-ring-data-devices\") pod \"swift-ring-rebalance-v4hkf\" (UID: \"7bb59cc6-16e4-4ecf-ab54-d194a079403e\") " pod="openstack/swift-ring-rebalance-v4hkf" Jan 21 11:17:56 crc kubenswrapper[4881]: I0121 11:17:56.448986 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/7bb59cc6-16e4-4ecf-ab54-d194a079403e-etc-swift\") pod \"swift-ring-rebalance-v4hkf\" (UID: \"7bb59cc6-16e4-4ecf-ab54-d194a079403e\") " pod="openstack/swift-ring-rebalance-v4hkf" Jan 21 11:17:56 crc kubenswrapper[4881]: I0121 11:17:56.449048 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cxpzt\" (UniqueName: \"kubernetes.io/projected/7bb59cc6-16e4-4ecf-ab54-d194a079403e-kube-api-access-cxpzt\") pod \"swift-ring-rebalance-v4hkf\" (UID: \"7bb59cc6-16e4-4ecf-ab54-d194a079403e\") " pod="openstack/swift-ring-rebalance-v4hkf" Jan 21 11:17:56 crc kubenswrapper[4881]: I0121 11:17:56.472212 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-v4hkf"] Jan 21 11:17:56 crc kubenswrapper[4881]: I0121 11:17:56.551085 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7bb59cc6-16e4-4ecf-ab54-d194a079403e-scripts\") pod \"swift-ring-rebalance-v4hkf\" (UID: \"7bb59cc6-16e4-4ecf-ab54-d194a079403e\") " pod="openstack/swift-ring-rebalance-v4hkf" Jan 21 11:17:56 crc kubenswrapper[4881]: I0121 11:17:56.551152 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/27451133-57c8-4991-aae0-ec0a82432176-scripts\") pod \"swift-ring-rebalance-j29v8\" (UID: \"27451133-57c8-4991-aae0-ec0a82432176\") " pod="openstack/swift-ring-rebalance-j29v8" Jan 21 11:17:56 crc kubenswrapper[4881]: I0121 11:17:56.551196 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/27451133-57c8-4991-aae0-ec0a82432176-ring-data-devices\") pod \"swift-ring-rebalance-j29v8\" (UID: \"27451133-57c8-4991-aae0-ec0a82432176\") " pod="openstack/swift-ring-rebalance-j29v8" Jan 21 11:17:56 crc kubenswrapper[4881]: I0121 11:17:56.551214 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/7bb59cc6-16e4-4ecf-ab54-d194a079403e-dispersionconf\") pod \"swift-ring-rebalance-v4hkf\" (UID: \"7bb59cc6-16e4-4ecf-ab54-d194a079403e\") " pod="openstack/swift-ring-rebalance-v4hkf" Jan 21 11:17:56 crc kubenswrapper[4881]: I0121 11:17:56.551231 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7bb59cc6-16e4-4ecf-ab54-d194a079403e-combined-ca-bundle\") pod \"swift-ring-rebalance-v4hkf\" (UID: \"7bb59cc6-16e4-4ecf-ab54-d194a079403e\") " pod="openstack/swift-ring-rebalance-v4hkf" Jan 21 11:17:56 crc kubenswrapper[4881]: I0121 11:17:56.551246 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/27451133-57c8-4991-aae0-ec0a82432176-dispersionconf\") pod 
\"swift-ring-rebalance-j29v8\" (UID: \"27451133-57c8-4991-aae0-ec0a82432176\") " pod="openstack/swift-ring-rebalance-j29v8" Jan 21 11:17:56 crc kubenswrapper[4881]: I0121 11:17:56.551266 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/7bb59cc6-16e4-4ecf-ab54-d194a079403e-ring-data-devices\") pod \"swift-ring-rebalance-v4hkf\" (UID: \"7bb59cc6-16e4-4ecf-ab54-d194a079403e\") " pod="openstack/swift-ring-rebalance-v4hkf" Jan 21 11:17:56 crc kubenswrapper[4881]: I0121 11:17:56.551307 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/7bb59cc6-16e4-4ecf-ab54-d194a079403e-etc-swift\") pod \"swift-ring-rebalance-v4hkf\" (UID: \"7bb59cc6-16e4-4ecf-ab54-d194a079403e\") " pod="openstack/swift-ring-rebalance-v4hkf" Jan 21 11:17:56 crc kubenswrapper[4881]: I0121 11:17:56.551343 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/27451133-57c8-4991-aae0-ec0a82432176-etc-swift\") pod \"swift-ring-rebalance-j29v8\" (UID: \"27451133-57c8-4991-aae0-ec0a82432176\") " pod="openstack/swift-ring-rebalance-j29v8" Jan 21 11:17:56 crc kubenswrapper[4881]: I0121 11:17:56.551358 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fp4l2\" (UniqueName: \"kubernetes.io/projected/27451133-57c8-4991-aae0-ec0a82432176-kube-api-access-fp4l2\") pod \"swift-ring-rebalance-j29v8\" (UID: \"27451133-57c8-4991-aae0-ec0a82432176\") " pod="openstack/swift-ring-rebalance-j29v8" Jan 21 11:17:56 crc kubenswrapper[4881]: I0121 11:17:56.551380 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cxpzt\" (UniqueName: \"kubernetes.io/projected/7bb59cc6-16e4-4ecf-ab54-d194a079403e-kube-api-access-cxpzt\") pod \"swift-ring-rebalance-v4hkf\" (UID: \"7bb59cc6-16e4-4ecf-ab54-d194a079403e\") " pod="openstack/swift-ring-rebalance-v4hkf" Jan 21 11:17:56 crc kubenswrapper[4881]: I0121 11:17:56.551459 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/7bb59cc6-16e4-4ecf-ab54-d194a079403e-swiftconf\") pod \"swift-ring-rebalance-v4hkf\" (UID: \"7bb59cc6-16e4-4ecf-ab54-d194a079403e\") " pod="openstack/swift-ring-rebalance-v4hkf" Jan 21 11:17:56 crc kubenswrapper[4881]: I0121 11:17:56.551475 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/27451133-57c8-4991-aae0-ec0a82432176-swiftconf\") pod \"swift-ring-rebalance-j29v8\" (UID: \"27451133-57c8-4991-aae0-ec0a82432176\") " pod="openstack/swift-ring-rebalance-j29v8" Jan 21 11:17:56 crc kubenswrapper[4881]: I0121 11:17:56.551491 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27451133-57c8-4991-aae0-ec0a82432176-combined-ca-bundle\") pod \"swift-ring-rebalance-j29v8\" (UID: \"27451133-57c8-4991-aae0-ec0a82432176\") " pod="openstack/swift-ring-rebalance-j29v8" Jan 21 11:17:56 crc kubenswrapper[4881]: I0121 11:17:56.552174 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7bb59cc6-16e4-4ecf-ab54-d194a079403e-scripts\") pod \"swift-ring-rebalance-v4hkf\" (UID: 
\"7bb59cc6-16e4-4ecf-ab54-d194a079403e\") " pod="openstack/swift-ring-rebalance-v4hkf" Jan 21 11:17:56 crc kubenswrapper[4881]: I0121 11:17:56.553422 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/7bb59cc6-16e4-4ecf-ab54-d194a079403e-etc-swift\") pod \"swift-ring-rebalance-v4hkf\" (UID: \"7bb59cc6-16e4-4ecf-ab54-d194a079403e\") " pod="openstack/swift-ring-rebalance-v4hkf" Jan 21 11:17:56 crc kubenswrapper[4881]: I0121 11:17:56.553710 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/7bb59cc6-16e4-4ecf-ab54-d194a079403e-ring-data-devices\") pod \"swift-ring-rebalance-v4hkf\" (UID: \"7bb59cc6-16e4-4ecf-ab54-d194a079403e\") " pod="openstack/swift-ring-rebalance-v4hkf" Jan 21 11:17:56 crc kubenswrapper[4881]: I0121 11:17:56.557674 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/7bb59cc6-16e4-4ecf-ab54-d194a079403e-swiftconf\") pod \"swift-ring-rebalance-v4hkf\" (UID: \"7bb59cc6-16e4-4ecf-ab54-d194a079403e\") " pod="openstack/swift-ring-rebalance-v4hkf" Jan 21 11:17:56 crc kubenswrapper[4881]: I0121 11:17:56.559308 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7bb59cc6-16e4-4ecf-ab54-d194a079403e-combined-ca-bundle\") pod \"swift-ring-rebalance-v4hkf\" (UID: \"7bb59cc6-16e4-4ecf-ab54-d194a079403e\") " pod="openstack/swift-ring-rebalance-v4hkf" Jan 21 11:17:56 crc kubenswrapper[4881]: I0121 11:17:56.569731 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/7bb59cc6-16e4-4ecf-ab54-d194a079403e-dispersionconf\") pod \"swift-ring-rebalance-v4hkf\" (UID: \"7bb59cc6-16e4-4ecf-ab54-d194a079403e\") " pod="openstack/swift-ring-rebalance-v4hkf" Jan 21 11:17:56 crc kubenswrapper[4881]: I0121 11:17:56.570618 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cxpzt\" (UniqueName: \"kubernetes.io/projected/7bb59cc6-16e4-4ecf-ab54-d194a079403e-kube-api-access-cxpzt\") pod \"swift-ring-rebalance-v4hkf\" (UID: \"7bb59cc6-16e4-4ecf-ab54-d194a079403e\") " pod="openstack/swift-ring-rebalance-v4hkf" Jan 21 11:17:56 crc kubenswrapper[4881]: I0121 11:17:56.653275 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/27451133-57c8-4991-aae0-ec0a82432176-swiftconf\") pod \"swift-ring-rebalance-j29v8\" (UID: \"27451133-57c8-4991-aae0-ec0a82432176\") " pod="openstack/swift-ring-rebalance-j29v8" Jan 21 11:17:56 crc kubenswrapper[4881]: I0121 11:17:56.653324 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27451133-57c8-4991-aae0-ec0a82432176-combined-ca-bundle\") pod \"swift-ring-rebalance-j29v8\" (UID: \"27451133-57c8-4991-aae0-ec0a82432176\") " pod="openstack/swift-ring-rebalance-j29v8" Jan 21 11:17:56 crc kubenswrapper[4881]: I0121 11:17:56.653389 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/27451133-57c8-4991-aae0-ec0a82432176-scripts\") pod \"swift-ring-rebalance-j29v8\" (UID: \"27451133-57c8-4991-aae0-ec0a82432176\") " pod="openstack/swift-ring-rebalance-j29v8" Jan 21 11:17:56 crc kubenswrapper[4881]: I0121 11:17:56.653435 4881 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/27451133-57c8-4991-aae0-ec0a82432176-ring-data-devices\") pod \"swift-ring-rebalance-j29v8\" (UID: \"27451133-57c8-4991-aae0-ec0a82432176\") " pod="openstack/swift-ring-rebalance-j29v8" Jan 21 11:17:56 crc kubenswrapper[4881]: I0121 11:17:56.653457 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/27451133-57c8-4991-aae0-ec0a82432176-dispersionconf\") pod \"swift-ring-rebalance-j29v8\" (UID: \"27451133-57c8-4991-aae0-ec0a82432176\") " pod="openstack/swift-ring-rebalance-j29v8" Jan 21 11:17:56 crc kubenswrapper[4881]: I0121 11:17:56.653555 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/27451133-57c8-4991-aae0-ec0a82432176-etc-swift\") pod \"swift-ring-rebalance-j29v8\" (UID: \"27451133-57c8-4991-aae0-ec0a82432176\") " pod="openstack/swift-ring-rebalance-j29v8" Jan 21 11:17:56 crc kubenswrapper[4881]: I0121 11:17:56.653581 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fp4l2\" (UniqueName: \"kubernetes.io/projected/27451133-57c8-4991-aae0-ec0a82432176-kube-api-access-fp4l2\") pod \"swift-ring-rebalance-j29v8\" (UID: \"27451133-57c8-4991-aae0-ec0a82432176\") " pod="openstack/swift-ring-rebalance-j29v8" Jan 21 11:17:56 crc kubenswrapper[4881]: I0121 11:17:56.653630 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/eafb725b-4d8c-44b6-8966-4c611d4897d8-etc-swift\") pod \"swift-storage-0\" (UID: \"eafb725b-4d8c-44b6-8966-4c611d4897d8\") " pod="openstack/swift-storage-0" Jan 21 11:17:56 crc kubenswrapper[4881]: E0121 11:17:56.653797 4881 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 21 11:17:56 crc kubenswrapper[4881]: E0121 11:17:56.653816 4881 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 21 11:17:56 crc kubenswrapper[4881]: E0121 11:17:56.653873 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/eafb725b-4d8c-44b6-8966-4c611d4897d8-etc-swift podName:eafb725b-4d8c-44b6-8966-4c611d4897d8 nodeName:}" failed. No retries permitted until 2026-01-21 11:17:57.653856564 +0000 UTC m=+1264.913813033 (durationBeforeRetry 1s). 
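
The mount fails because etc-swift is a projected volume sourced from the swift-ring-files ConfigMap, which the swift-ring-rebalance job has not yet published; SetUp cannot succeed until that object exists. Roughly what such a volume looks like in the Kubernetes core/v1 API types (an illustration of the dependency, not the operator's actual manifest):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// A projected volume materializes files from referenced API objects.
	// If a referenced ConfigMap is missing (and not marked optional),
	// MountVolume.SetUp fails and is retried, as in the log above.
	vol := corev1.Volume{
		Name: "etc-swift",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{{
					ConfigMap: &corev1.ConfigMapProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: "swift-ring-files"},
					},
				}},
			},
		},
	}
	fmt.Println("volume", vol.Name, "projects ConfigMap swift-ring-files")
}

Once the rebalance job publishes the ring files into that ConfigMap, a later retry of the mount can succeed and swift-storage-0 can start.
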
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/eafb725b-4d8c-44b6-8966-4c611d4897d8-etc-swift") pod "swift-storage-0" (UID: "eafb725b-4d8c-44b6-8966-4c611d4897d8") : configmap "swift-ring-files" not found Jan 21 11:17:56 crc kubenswrapper[4881]: I0121 11:17:56.654166 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/27451133-57c8-4991-aae0-ec0a82432176-etc-swift\") pod \"swift-ring-rebalance-j29v8\" (UID: \"27451133-57c8-4991-aae0-ec0a82432176\") " pod="openstack/swift-ring-rebalance-j29v8" Jan 21 11:17:56 crc kubenswrapper[4881]: I0121 11:17:56.654509 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/27451133-57c8-4991-aae0-ec0a82432176-ring-data-devices\") pod \"swift-ring-rebalance-j29v8\" (UID: \"27451133-57c8-4991-aae0-ec0a82432176\") " pod="openstack/swift-ring-rebalance-j29v8" Jan 21 11:17:56 crc kubenswrapper[4881]: I0121 11:17:56.655309 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/27451133-57c8-4991-aae0-ec0a82432176-scripts\") pod \"swift-ring-rebalance-j29v8\" (UID: \"27451133-57c8-4991-aae0-ec0a82432176\") " pod="openstack/swift-ring-rebalance-j29v8" Jan 21 11:17:56 crc kubenswrapper[4881]: I0121 11:17:56.657888 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27451133-57c8-4991-aae0-ec0a82432176-combined-ca-bundle\") pod \"swift-ring-rebalance-j29v8\" (UID: \"27451133-57c8-4991-aae0-ec0a82432176\") " pod="openstack/swift-ring-rebalance-j29v8" Jan 21 11:17:56 crc kubenswrapper[4881]: I0121 11:17:56.658257 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/27451133-57c8-4991-aae0-ec0a82432176-swiftconf\") pod \"swift-ring-rebalance-j29v8\" (UID: \"27451133-57c8-4991-aae0-ec0a82432176\") " pod="openstack/swift-ring-rebalance-j29v8" Jan 21 11:17:56 crc kubenswrapper[4881]: I0121 11:17:56.660837 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/27451133-57c8-4991-aae0-ec0a82432176-dispersionconf\") pod \"swift-ring-rebalance-j29v8\" (UID: \"27451133-57c8-4991-aae0-ec0a82432176\") " pod="openstack/swift-ring-rebalance-j29v8" Jan 21 11:17:56 crc kubenswrapper[4881]: I0121 11:17:56.671433 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fp4l2\" (UniqueName: \"kubernetes.io/projected/27451133-57c8-4991-aae0-ec0a82432176-kube-api-access-fp4l2\") pod \"swift-ring-rebalance-j29v8\" (UID: \"27451133-57c8-4991-aae0-ec0a82432176\") " pod="openstack/swift-ring-rebalance-j29v8" Jan 21 11:17:56 crc kubenswrapper[4881]: I0121 11:17:56.752086 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-j29v8" Jan 21 11:17:57 crc kubenswrapper[4881]: I0121 11:17:57.003548 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-v4hkf" Jan 21 11:17:57 crc kubenswrapper[4881]: I0121 11:17:57.051688 4881 util.go:30] "No sandbox for pod can be found. 
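
Note the retry cadence on this volume: the first failure deferred the retry by 500ms, this one by 1s, and the attempts further down in the log wait 2s and then 4s. Failed volume operations are re-queued with a delay that doubles on each consecutive failure. A minimal sketch of that policy (the initial delay matches the log; the cap is our assumption for illustration):

package main

import (
	"fmt"
	"time"
)

const (
	initialDelay = 500 * time.Millisecond
	maxDelay     = 2 * time.Minute // assumed cap, not taken from the log
)

// nextDelay doubles the previous retry delay, starting at initialDelay
// and saturating at maxDelay.
func nextDelay(prev time.Duration) time.Duration {
	if prev == 0 {
		return initialDelay
	}
	if d := prev * 2; d < maxDelay {
		return d
	}
	return maxDelay
}

func main() {
	var d time.Duration
	for i := 0; i < 5; i++ {
		d = nextDelay(d)
		fmt.Println(d) // 500ms 1s 2s 4s 8s, matching durationBeforeRetry
	}
}
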
Need to start a new one" pod="openstack/swift-ring-rebalance-v4hkf" Jan 21 11:17:57 crc kubenswrapper[4881]: I0121 11:17:57.147486 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Jan 21 11:17:57 crc kubenswrapper[4881]: I0121 11:17:57.163995 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7bb59cc6-16e4-4ecf-ab54-d194a079403e-scripts\") pod \"7bb59cc6-16e4-4ecf-ab54-d194a079403e\" (UID: \"7bb59cc6-16e4-4ecf-ab54-d194a079403e\") " Jan 21 11:17:57 crc kubenswrapper[4881]: I0121 11:17:57.164063 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/7bb59cc6-16e4-4ecf-ab54-d194a079403e-swiftconf\") pod \"7bb59cc6-16e4-4ecf-ab54-d194a079403e\" (UID: \"7bb59cc6-16e4-4ecf-ab54-d194a079403e\") " Jan 21 11:17:57 crc kubenswrapper[4881]: I0121 11:17:57.164089 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7bb59cc6-16e4-4ecf-ab54-d194a079403e-combined-ca-bundle\") pod \"7bb59cc6-16e4-4ecf-ab54-d194a079403e\" (UID: \"7bb59cc6-16e4-4ecf-ab54-d194a079403e\") " Jan 21 11:17:57 crc kubenswrapper[4881]: I0121 11:17:57.164175 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/7bb59cc6-16e4-4ecf-ab54-d194a079403e-ring-data-devices\") pod \"7bb59cc6-16e4-4ecf-ab54-d194a079403e\" (UID: \"7bb59cc6-16e4-4ecf-ab54-d194a079403e\") " Jan 21 11:17:57 crc kubenswrapper[4881]: I0121 11:17:57.164243 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cxpzt\" (UniqueName: \"kubernetes.io/projected/7bb59cc6-16e4-4ecf-ab54-d194a079403e-kube-api-access-cxpzt\") pod \"7bb59cc6-16e4-4ecf-ab54-d194a079403e\" (UID: \"7bb59cc6-16e4-4ecf-ab54-d194a079403e\") " Jan 21 11:17:57 crc kubenswrapper[4881]: I0121 11:17:57.164374 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/7bb59cc6-16e4-4ecf-ab54-d194a079403e-dispersionconf\") pod \"7bb59cc6-16e4-4ecf-ab54-d194a079403e\" (UID: \"7bb59cc6-16e4-4ecf-ab54-d194a079403e\") " Jan 21 11:17:57 crc kubenswrapper[4881]: I0121 11:17:57.164433 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/7bb59cc6-16e4-4ecf-ab54-d194a079403e-etc-swift\") pod \"7bb59cc6-16e4-4ecf-ab54-d194a079403e\" (UID: \"7bb59cc6-16e4-4ecf-ab54-d194a079403e\") " Jan 21 11:17:57 crc kubenswrapper[4881]: I0121 11:17:57.165443 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7bb59cc6-16e4-4ecf-ab54-d194a079403e-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "7bb59cc6-16e4-4ecf-ab54-d194a079403e" (UID: "7bb59cc6-16e4-4ecf-ab54-d194a079403e"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 11:17:57 crc kubenswrapper[4881]: I0121 11:17:57.167512 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb59cc6-16e4-4ecf-ab54-d194a079403e-scripts" (OuterVolumeSpecName: "scripts") pod "7bb59cc6-16e4-4ecf-ab54-d194a079403e" (UID: "7bb59cc6-16e4-4ecf-ab54-d194a079403e"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:17:57 crc kubenswrapper[4881]: I0121 11:17:57.173587 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7bb59cc6-16e4-4ecf-ab54-d194a079403e-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "7bb59cc6-16e4-4ecf-ab54-d194a079403e" (UID: "7bb59cc6-16e4-4ecf-ab54-d194a079403e"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:17:57 crc kubenswrapper[4881]: I0121 11:17:57.175079 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb59cc6-16e4-4ecf-ab54-d194a079403e-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "7bb59cc6-16e4-4ecf-ab54-d194a079403e" (UID: "7bb59cc6-16e4-4ecf-ab54-d194a079403e"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:17:57 crc kubenswrapper[4881]: I0121 11:17:57.176241 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7bb59cc6-16e4-4ecf-ab54-d194a079403e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7bb59cc6-16e4-4ecf-ab54-d194a079403e" (UID: "7bb59cc6-16e4-4ecf-ab54-d194a079403e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:17:57 crc kubenswrapper[4881]: I0121 11:17:57.178193 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7bb59cc6-16e4-4ecf-ab54-d194a079403e-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "7bb59cc6-16e4-4ecf-ab54-d194a079403e" (UID: "7bb59cc6-16e4-4ecf-ab54-d194a079403e"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:17:57 crc kubenswrapper[4881]: I0121 11:17:57.180166 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb59cc6-16e4-4ecf-ab54-d194a079403e-kube-api-access-cxpzt" (OuterVolumeSpecName: "kube-api-access-cxpzt") pod "7bb59cc6-16e4-4ecf-ab54-d194a079403e" (UID: "7bb59cc6-16e4-4ecf-ab54-d194a079403e"). InnerVolumeSpecName "kube-api-access-cxpzt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:17:57 crc kubenswrapper[4881]: I0121 11:17:57.267165 4881 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/7bb59cc6-16e4-4ecf-ab54-d194a079403e-ring-data-devices\") on node \"crc\" DevicePath \"\"" Jan 21 11:17:57 crc kubenswrapper[4881]: I0121 11:17:57.267500 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cxpzt\" (UniqueName: \"kubernetes.io/projected/7bb59cc6-16e4-4ecf-ab54-d194a079403e-kube-api-access-cxpzt\") on node \"crc\" DevicePath \"\"" Jan 21 11:17:57 crc kubenswrapper[4881]: I0121 11:17:57.267510 4881 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/7bb59cc6-16e4-4ecf-ab54-d194a079403e-dispersionconf\") on node \"crc\" DevicePath \"\"" Jan 21 11:17:57 crc kubenswrapper[4881]: I0121 11:17:57.267555 4881 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/7bb59cc6-16e4-4ecf-ab54-d194a079403e-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 21 11:17:57 crc kubenswrapper[4881]: I0121 11:17:57.267589 4881 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7bb59cc6-16e4-4ecf-ab54-d194a079403e-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 11:17:57 crc kubenswrapper[4881]: I0121 11:17:57.267601 4881 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/7bb59cc6-16e4-4ecf-ab54-d194a079403e-swiftconf\") on node \"crc\" DevicePath \"\"" Jan 21 11:17:57 crc kubenswrapper[4881]: I0121 11:17:57.267618 4881 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7bb59cc6-16e4-4ecf-ab54-d194a079403e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 11:17:57 crc kubenswrapper[4881]: I0121 11:17:57.307064 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Jan 21 11:17:57 crc kubenswrapper[4881]: I0121 11:17:57.348192 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="42132c17-6a2d-48d1-a636-3eae7558d55c" path="/var/lib/kubelet/pods/42132c17-6a2d-48d1-a636-3eae7558d55c/volumes" Jan 21 11:17:57 crc kubenswrapper[4881]: I0121 11:17:57.400374 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Jan 21 11:17:57 crc kubenswrapper[4881]: I0121 11:17:57.608045 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Jan 21 11:17:57 crc kubenswrapper[4881]: I0121 11:17:57.610083 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Jan 21 11:17:57 crc kubenswrapper[4881]: I0121 11:17:57.612883 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Jan 21 11:17:57 crc kubenswrapper[4881]: I0121 11:17:57.612881 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-675dt" Jan 21 11:17:57 crc kubenswrapper[4881]: I0121 11:17:57.613452 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Jan 21 11:17:57 crc kubenswrapper[4881]: I0121 11:17:57.613508 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Jan 21 11:17:57 crc kubenswrapper[4881]: I0121 11:17:57.639716 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 21 11:17:57 crc kubenswrapper[4881]: I0121 11:17:57.689691 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/b3882b01-10ce-4832-ae71-676a8b65b086-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"b3882b01-10ce-4832-ae71-676a8b65b086\") " pod="openstack/ovn-northd-0" Jan 21 11:17:57 crc kubenswrapper[4881]: I0121 11:17:57.689808 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b3882b01-10ce-4832-ae71-676a8b65b086-scripts\") pod \"ovn-northd-0\" (UID: \"b3882b01-10ce-4832-ae71-676a8b65b086\") " pod="openstack/ovn-northd-0" Jan 21 11:17:57 crc kubenswrapper[4881]: I0121 11:17:57.689845 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/b3882b01-10ce-4832-ae71-676a8b65b086-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"b3882b01-10ce-4832-ae71-676a8b65b086\") " pod="openstack/ovn-northd-0" Jan 21 11:17:57 crc kubenswrapper[4881]: I0121 11:17:57.689903 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b6z87\" (UniqueName: \"kubernetes.io/projected/b3882b01-10ce-4832-ae71-676a8b65b086-kube-api-access-b6z87\") pod \"ovn-northd-0\" (UID: \"b3882b01-10ce-4832-ae71-676a8b65b086\") " pod="openstack/ovn-northd-0" Jan 21 11:17:57 crc kubenswrapper[4881]: I0121 11:17:57.689938 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b3882b01-10ce-4832-ae71-676a8b65b086-config\") pod \"ovn-northd-0\" (UID: \"b3882b01-10ce-4832-ae71-676a8b65b086\") " pod="openstack/ovn-northd-0" Jan 21 11:17:57 crc kubenswrapper[4881]: I0121 11:17:57.690104 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3882b01-10ce-4832-ae71-676a8b65b086-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"b3882b01-10ce-4832-ae71-676a8b65b086\") " pod="openstack/ovn-northd-0" Jan 21 11:17:57 crc kubenswrapper[4881]: I0121 11:17:57.690274 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/eafb725b-4d8c-44b6-8966-4c611d4897d8-etc-swift\") pod \"swift-storage-0\" (UID: \"eafb725b-4d8c-44b6-8966-4c611d4897d8\") " pod="openstack/swift-storage-0" Jan 21 11:17:57 crc kubenswrapper[4881]: I0121 11:17:57.690336 4881 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/b3882b01-10ce-4832-ae71-676a8b65b086-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"b3882b01-10ce-4832-ae71-676a8b65b086\") " pod="openstack/ovn-northd-0" Jan 21 11:17:57 crc kubenswrapper[4881]: E0121 11:17:57.690413 4881 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 21 11:17:57 crc kubenswrapper[4881]: E0121 11:17:57.690438 4881 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 21 11:17:57 crc kubenswrapper[4881]: E0121 11:17:57.690498 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/eafb725b-4d8c-44b6-8966-4c611d4897d8-etc-swift podName:eafb725b-4d8c-44b6-8966-4c611d4897d8 nodeName:}" failed. No retries permitted until 2026-01-21 11:17:59.690475709 +0000 UTC m=+1266.950432178 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/eafb725b-4d8c-44b6-8966-4c611d4897d8-etc-swift") pod "swift-storage-0" (UID: "eafb725b-4d8c-44b6-8966-4c611d4897d8") : configmap "swift-ring-files" not found Jan 21 11:17:57 crc kubenswrapper[4881]: I0121 11:17:57.792416 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b3882b01-10ce-4832-ae71-676a8b65b086-scripts\") pod \"ovn-northd-0\" (UID: \"b3882b01-10ce-4832-ae71-676a8b65b086\") " pod="openstack/ovn-northd-0" Jan 21 11:17:57 crc kubenswrapper[4881]: I0121 11:17:57.792487 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/b3882b01-10ce-4832-ae71-676a8b65b086-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"b3882b01-10ce-4832-ae71-676a8b65b086\") " pod="openstack/ovn-northd-0" Jan 21 11:17:57 crc kubenswrapper[4881]: I0121 11:17:57.792555 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b6z87\" (UniqueName: \"kubernetes.io/projected/b3882b01-10ce-4832-ae71-676a8b65b086-kube-api-access-b6z87\") pod \"ovn-northd-0\" (UID: \"b3882b01-10ce-4832-ae71-676a8b65b086\") " pod="openstack/ovn-northd-0" Jan 21 11:17:57 crc kubenswrapper[4881]: I0121 11:17:57.792594 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b3882b01-10ce-4832-ae71-676a8b65b086-config\") pod \"ovn-northd-0\" (UID: \"b3882b01-10ce-4832-ae71-676a8b65b086\") " pod="openstack/ovn-northd-0" Jan 21 11:17:57 crc kubenswrapper[4881]: I0121 11:17:57.792639 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3882b01-10ce-4832-ae71-676a8b65b086-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"b3882b01-10ce-4832-ae71-676a8b65b086\") " pod="openstack/ovn-northd-0" Jan 21 11:17:57 crc kubenswrapper[4881]: I0121 11:17:57.792700 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/b3882b01-10ce-4832-ae71-676a8b65b086-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"b3882b01-10ce-4832-ae71-676a8b65b086\") " pod="openstack/ovn-northd-0" Jan 21 11:17:57 crc kubenswrapper[4881]: I0121 11:17:57.793373 4881 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/b3882b01-10ce-4832-ae71-676a8b65b086-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"b3882b01-10ce-4832-ae71-676a8b65b086\") " pod="openstack/ovn-northd-0" Jan 21 11:17:57 crc kubenswrapper[4881]: I0121 11:17:57.793640 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b3882b01-10ce-4832-ae71-676a8b65b086-scripts\") pod \"ovn-northd-0\" (UID: \"b3882b01-10ce-4832-ae71-676a8b65b086\") " pod="openstack/ovn-northd-0" Jan 21 11:17:57 crc kubenswrapper[4881]: I0121 11:17:57.793728 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b3882b01-10ce-4832-ae71-676a8b65b086-config\") pod \"ovn-northd-0\" (UID: \"b3882b01-10ce-4832-ae71-676a8b65b086\") " pod="openstack/ovn-northd-0" Jan 21 11:17:57 crc kubenswrapper[4881]: I0121 11:17:57.793823 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/b3882b01-10ce-4832-ae71-676a8b65b086-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"b3882b01-10ce-4832-ae71-676a8b65b086\") " pod="openstack/ovn-northd-0" Jan 21 11:17:57 crc kubenswrapper[4881]: I0121 11:17:57.798267 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/b3882b01-10ce-4832-ae71-676a8b65b086-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"b3882b01-10ce-4832-ae71-676a8b65b086\") " pod="openstack/ovn-northd-0" Jan 21 11:17:57 crc kubenswrapper[4881]: I0121 11:17:57.806100 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3882b01-10ce-4832-ae71-676a8b65b086-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"b3882b01-10ce-4832-ae71-676a8b65b086\") " pod="openstack/ovn-northd-0" Jan 21 11:17:57 crc kubenswrapper[4881]: I0121 11:17:57.811803 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b6z87\" (UniqueName: \"kubernetes.io/projected/b3882b01-10ce-4832-ae71-676a8b65b086-kube-api-access-b6z87\") pod \"ovn-northd-0\" (UID: \"b3882b01-10ce-4832-ae71-676a8b65b086\") " pod="openstack/ovn-northd-0" Jan 21 11:17:57 crc kubenswrapper[4881]: I0121 11:17:57.813034 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/b3882b01-10ce-4832-ae71-676a8b65b086-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"b3882b01-10ce-4832-ae71-676a8b65b086\") " pod="openstack/ovn-northd-0" Jan 21 11:17:57 crc kubenswrapper[4881]: I0121 11:17:57.931461 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Jan 21 11:17:58 crc kubenswrapper[4881]: I0121 11:17:58.010944 4881 util.go:30] "No sandbox for pod can be found. 
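
The ovn-northd-0 mounts above follow the volume manager's standard rhythm: VerifyControllerAttachedVolume registers the volume, operationExecutor.MountVolume starts, and MountVolume.SetUp succeeds. Conceptually the reconciler keeps diffing a desired state of world (volumes the pod specs want) against an actual state of world (volumes currently mounted) and issues operations for the difference. A toy version of that loop, with our own two-set model standing in for the reconciler_common.go structures:

package main

import "fmt"

// reconcile mounts every desired volume not yet in the actual state,
// recording it as mounted on success; failures are simply left for the
// next pass, which is what produces the periodic retries in the log.
func reconcile(desired []string, actual map[string]bool, mount func(string) error) {
	for _, v := range desired {
		if actual[v] {
			continue // already mounted
		}
		fmt.Printf("MountVolume started for volume %q\n", v)
		if err := mount(v); err != nil {
			fmt.Printf("MountVolume.SetUp failed for volume %q: %v\n", v, err)
			continue
		}
		actual[v] = true
		fmt.Printf("MountVolume.SetUp succeeded for volume %q\n", v)
	}
}

func main() {
	actual := map[string]bool{}
	desired := []string{"scripts", "config", "ovn-rundir", "combined-ca-bundle", "ovn-northd-tls-certs"}
	reconcile(desired, actual, func(string) error { return nil })
}
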
Need to start a new one" pod="openstack/swift-ring-rebalance-v4hkf" Jan 21 11:17:58 crc kubenswrapper[4881]: I0121 11:17:58.061163 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-v4hkf"] Jan 21 11:17:58 crc kubenswrapper[4881]: I0121 11:17:58.070745 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-ring-rebalance-v4hkf"] Jan 21 11:17:59 crc kubenswrapper[4881]: I0121 11:17:59.324358 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb59cc6-16e4-4ecf-ab54-d194a079403e" path="/var/lib/kubelet/pods/7bb59cc6-16e4-4ecf-ab54-d194a079403e/volumes" Jan 21 11:17:59 crc kubenswrapper[4881]: I0121 11:17:59.506993 4881 scope.go:117] "RemoveContainer" containerID="b42c99eace0541ffcc9144f5cc0186de9eb99d26b014e66fda937ea6fd9eb8a8" Jan 21 11:17:59 crc kubenswrapper[4881]: I0121 11:17:59.733828 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/eafb725b-4d8c-44b6-8966-4c611d4897d8-etc-swift\") pod \"swift-storage-0\" (UID: \"eafb725b-4d8c-44b6-8966-4c611d4897d8\") " pod="openstack/swift-storage-0" Jan 21 11:17:59 crc kubenswrapper[4881]: E0121 11:17:59.734232 4881 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 21 11:17:59 crc kubenswrapper[4881]: E0121 11:17:59.734259 4881 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 21 11:17:59 crc kubenswrapper[4881]: E0121 11:17:59.734320 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/eafb725b-4d8c-44b6-8966-4c611d4897d8-etc-swift podName:eafb725b-4d8c-44b6-8966-4c611d4897d8 nodeName:}" failed. No retries permitted until 2026-01-21 11:18:03.734299995 +0000 UTC m=+1270.994256464 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/eafb725b-4d8c-44b6-8966-4c611d4897d8-etc-swift") pod "swift-storage-0" (UID: "eafb725b-4d8c-44b6-8966-4c611d4897d8") : configmap "swift-ring-files" not found Jan 21 11:17:59 crc kubenswrapper[4881]: I0121 11:17:59.807507 4881 scope.go:117] "RemoveContainer" containerID="9f94ea318cabd7d5a85bae60436f7fbbd182561901dd4e1123c0ce68a86bd03b" Jan 21 11:17:59 crc kubenswrapper[4881]: E0121 11:17:59.808371 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9f94ea318cabd7d5a85bae60436f7fbbd182561901dd4e1123c0ce68a86bd03b\": container with ID starting with 9f94ea318cabd7d5a85bae60436f7fbbd182561901dd4e1123c0ce68a86bd03b not found: ID does not exist" containerID="9f94ea318cabd7d5a85bae60436f7fbbd182561901dd4e1123c0ce68a86bd03b" Jan 21 11:17:59 crc kubenswrapper[4881]: I0121 11:17:59.808420 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9f94ea318cabd7d5a85bae60436f7fbbd182561901dd4e1123c0ce68a86bd03b"} err="failed to get container status \"9f94ea318cabd7d5a85bae60436f7fbbd182561901dd4e1123c0ce68a86bd03b\": rpc error: code = NotFound desc = could not find container \"9f94ea318cabd7d5a85bae60436f7fbbd182561901dd4e1123c0ce68a86bd03b\": container with ID starting with 9f94ea318cabd7d5a85bae60436f7fbbd182561901dd4e1123c0ce68a86bd03b not found: ID does not exist" Jan 21 11:17:59 crc kubenswrapper[4881]: I0121 11:17:59.808452 4881 scope.go:117] "RemoveContainer" containerID="b42c99eace0541ffcc9144f5cc0186de9eb99d26b014e66fda937ea6fd9eb8a8" Jan 21 11:17:59 crc kubenswrapper[4881]: E0121 11:17:59.808716 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b42c99eace0541ffcc9144f5cc0186de9eb99d26b014e66fda937ea6fd9eb8a8\": container with ID starting with b42c99eace0541ffcc9144f5cc0186de9eb99d26b014e66fda937ea6fd9eb8a8 not found: ID does not exist" containerID="b42c99eace0541ffcc9144f5cc0186de9eb99d26b014e66fda937ea6fd9eb8a8" Jan 21 11:17:59 crc kubenswrapper[4881]: I0121 11:17:59.808739 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b42c99eace0541ffcc9144f5cc0186de9eb99d26b014e66fda937ea6fd9eb8a8"} err="failed to get container status \"b42c99eace0541ffcc9144f5cc0186de9eb99d26b014e66fda937ea6fd9eb8a8\": rpc error: code = NotFound desc = could not find container \"b42c99eace0541ffcc9144f5cc0186de9eb99d26b014e66fda937ea6fd9eb8a8\": container with ID starting with b42c99eace0541ffcc9144f5cc0186de9eb99d26b014e66fda937ea6fd9eb8a8 not found: ID does not exist" Jan 21 11:17:59 crc kubenswrapper[4881]: I0121 11:17:59.850818 4881 patch_prober.go:28] interesting pod/machine-config-daemon-fb4fr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 11:17:59 crc kubenswrapper[4881]: I0121 11:17:59.850878 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 11:17:59 crc kubenswrapper[4881]: I0121 11:17:59.850949 4881 
kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" Jan 21 11:17:59 crc kubenswrapper[4881]: I0121 11:17:59.851762 4881 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d0f3ab6355e31b97e337f7f21fb84796e3dea68bac874475991ce7eb43a93a82"} pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 11:17:59 crc kubenswrapper[4881]: I0121 11:17:59.851906 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" containerName="machine-config-daemon" containerID="cri-o://d0f3ab6355e31b97e337f7f21fb84796e3dea68bac874475991ce7eb43a93a82" gracePeriod=600 Jan 21 11:18:00 crc kubenswrapper[4881]: I0121 11:18:00.033071 4881 generic.go:334] "Generic (PLEG): container finished" podID="3687b313-1df2-4274-80db-8c758b51bf2d" containerID="d0f3ab6355e31b97e337f7f21fb84796e3dea68bac874475991ce7eb43a93a82" exitCode=0 Jan 21 11:18:00 crc kubenswrapper[4881]: I0121 11:18:00.033417 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" event={"ID":"3687b313-1df2-4274-80db-8c758b51bf2d","Type":"ContainerDied","Data":"d0f3ab6355e31b97e337f7f21fb84796e3dea68bac874475991ce7eb43a93a82"} Jan 21 11:18:00 crc kubenswrapper[4881]: I0121 11:18:00.033457 4881 scope.go:117] "RemoveContainer" containerID="abaaf16a1930b4e2e9a1e1d952f2948a8b09bfb0c0f18add47eef44fe07067c5" Jan 21 11:18:00 crc kubenswrapper[4881]: I0121 11:18:00.233670 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 21 11:18:00 crc kubenswrapper[4881]: I0121 11:18:00.353752 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-j29v8"] Jan 21 11:18:00 crc kubenswrapper[4881]: W0121 11:18:00.369551 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod27451133_57c8_4991_aae0_ec0a82432176.slice/crio-a7d4d23aa2fd8ae274e39ac46c3595d9d1bd6e0b97327033852c004b5061046a WatchSource:0}: Error finding container a7d4d23aa2fd8ae274e39ac46c3595d9d1bd6e0b97327033852c004b5061046a: Status 404 returned error can't find the container with id a7d4d23aa2fd8ae274e39ac46c3595d9d1bd6e0b97327033852c004b5061046a Jan 21 11:18:00 crc kubenswrapper[4881]: I0121 11:18:00.396307 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-cp5cl"] Jan 21 11:18:00 crc kubenswrapper[4881]: I0121 11:18:00.398168 4881 util.go:30] "No sandbox for pod can be found. 
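
The machine-config-daemon sequence above is the full liveness cycle: the HTTP probe to http://127.0.0.1:8798/health is refused, the probe worker reports the container unhealthy, the kubelet kills it with its grace period (gracePeriod=600), PLEG reports ContainerDied, and a replacement container starts. A bare-bones HTTP liveness check in the same spirit (the threshold and timeout are assumptions, not this pod's actual probe settings):

package main

import (
	"fmt"
	"net/http"
	"time"
)

// probe returns nil when the endpoint answers with a status below 400,
// the success criterion HTTP liveness probes use.
func probe(url string, timeout time.Duration) error {
	client := &http.Client{Timeout: timeout}
	resp, err := client.Get(url)
	if err != nil {
		return err // e.g. "connect: connection refused", as in the log
	}
	defer resp.Body.Close()
	if resp.StatusCode >= 400 {
		return fmt.Errorf("unhealthy status %d", resp.StatusCode)
	}
	return nil
}

func main() {
	const failureThreshold = 3 // assumed; restart fires after N consecutive failures
	failures := 0
	for i := 0; i < failureThreshold; i++ {
		if err := probe("http://127.0.0.1:8798/health", time.Second); err != nil {
			failures++
		}
	}
	if failures >= failureThreshold {
		fmt.Println("liveness failed; kill container with its grace period, then restart")
	}
}
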
Jan 21 11:18:00 crc kubenswrapper[4881]: I0121 11:18:00.400493 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret"
Jan 21 11:18:00 crc kubenswrapper[4881]: I0121 11:18:00.405378 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-cp5cl"]
Jan 21 11:18:00 crc kubenswrapper[4881]: I0121 11:18:00.450891 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lkx6w\" (UniqueName: \"kubernetes.io/projected/07845bf5-b5f8-4a00-9d0e-b86f5062f1ec-kube-api-access-lkx6w\") pod \"root-account-create-update-cp5cl\" (UID: \"07845bf5-b5f8-4a00-9d0e-b86f5062f1ec\") " pod="openstack/root-account-create-update-cp5cl"
Jan 21 11:18:00 crc kubenswrapper[4881]: I0121 11:18:00.450966 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/07845bf5-b5f8-4a00-9d0e-b86f5062f1ec-operator-scripts\") pod \"root-account-create-update-cp5cl\" (UID: \"07845bf5-b5f8-4a00-9d0e-b86f5062f1ec\") " pod="openstack/root-account-create-update-cp5cl"
Jan 21 11:18:00 crc kubenswrapper[4881]: I0121 11:18:00.553892 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lkx6w\" (UniqueName: \"kubernetes.io/projected/07845bf5-b5f8-4a00-9d0e-b86f5062f1ec-kube-api-access-lkx6w\") pod \"root-account-create-update-cp5cl\" (UID: \"07845bf5-b5f8-4a00-9d0e-b86f5062f1ec\") " pod="openstack/root-account-create-update-cp5cl"
Jan 21 11:18:00 crc kubenswrapper[4881]: I0121 11:18:00.553985 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/07845bf5-b5f8-4a00-9d0e-b86f5062f1ec-operator-scripts\") pod \"root-account-create-update-cp5cl\" (UID: \"07845bf5-b5f8-4a00-9d0e-b86f5062f1ec\") " pod="openstack/root-account-create-update-cp5cl"
Jan 21 11:18:00 crc kubenswrapper[4881]: I0121 11:18:00.554943 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/07845bf5-b5f8-4a00-9d0e-b86f5062f1ec-operator-scripts\") pod \"root-account-create-update-cp5cl\" (UID: \"07845bf5-b5f8-4a00-9d0e-b86f5062f1ec\") " pod="openstack/root-account-create-update-cp5cl"
Jan 21 11:18:00 crc kubenswrapper[4881]: I0121 11:18:00.581859 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lkx6w\" (UniqueName: \"kubernetes.io/projected/07845bf5-b5f8-4a00-9d0e-b86f5062f1ec-kube-api-access-lkx6w\") pod \"root-account-create-update-cp5cl\" (UID: \"07845bf5-b5f8-4a00-9d0e-b86f5062f1ec\") " pod="openstack/root-account-create-update-cp5cl"
Jan 21 11:18:00 crc kubenswrapper[4881]: I0121 11:18:00.729913 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-cp5cl"
Jan 21 11:18:01 crc kubenswrapper[4881]: I0121 11:18:01.094330 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" event={"ID":"3687b313-1df2-4274-80db-8c758b51bf2d","Type":"ContainerStarted","Data":"7331cbf4e5c1ebad90ff508798581f83536e17ac3c1ee9a79afc3f65f6e8ad1a"}
Jan 21 11:18:01 crc kubenswrapper[4881]: I0121 11:18:01.097104 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-j29v8" event={"ID":"27451133-57c8-4991-aae0-ec0a82432176","Type":"ContainerStarted","Data":"a7d4d23aa2fd8ae274e39ac46c3595d9d1bd6e0b97327033852c004b5061046a"}
Jan 21 11:18:01 crc kubenswrapper[4881]: I0121 11:18:01.109043 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"b3882b01-10ce-4832-ae71-676a8b65b086","Type":"ContainerStarted","Data":"dfc252ec226f016dfb22bc3529cd27daf8610c7a980bdeddd00c7007e0a69959"}
Jan 21 11:18:01 crc kubenswrapper[4881]: I0121 11:18:01.116660 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"75733567-f2a6-4331-bdea-147126213437","Type":"ContainerStarted","Data":"a56efe39870006b796c3201c8dc3334fb4d25c094ef7e6facbf2f393bd54653c"}
Jan 21 11:18:01 crc kubenswrapper[4881]: I0121 11:18:01.118834 4881 generic.go:334] "Generic (PLEG): container finished" podID="62435f30-e8fc-4fcd-8b96-4a604439965e" containerID="f24832aadef02f1c7ff84c5f003b7d3cb18bb769662ee1a6581898a328c41e06" exitCode=0
Jan 21 11:18:01 crc kubenswrapper[4881]: I0121 11:18:01.118886 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84cb884cf9-wmwx8" event={"ID":"62435f30-e8fc-4fcd-8b96-4a604439965e","Type":"ContainerDied","Data":"f24832aadef02f1c7ff84c5f003b7d3cb18bb769662ee1a6581898a328c41e06"}
Jan 21 11:18:01 crc kubenswrapper[4881]: I0121 11:18:01.989088 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5bbbc7b58c-8f8v7"
Jan 21 11:18:02 crc kubenswrapper[4881]: I0121 11:18:02.377364 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-b4bf-account-create-update-6p74j"]
Jan 21 11:18:02 crc kubenswrapper[4881]: I0121 11:18:02.379529 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-b4bf-account-create-update-6p74j"
Jan 21 11:18:02 crc kubenswrapper[4881]: I0121 11:18:02.384192 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret"
Jan 21 11:18:02 crc kubenswrapper[4881]: I0121 11:18:02.415357 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-nv8vf"]
Jan 21 11:18:02 crc kubenswrapper[4881]: I0121 11:18:02.416551 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-nv8vf"
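
The generic.go:334 / kubelet.go:2453 pairs above are PLEG (pod lifecycle event generator) output: a relist notices a container state change and emits an event the sync loop then handles. The event={"ID":...,"Type":...,"Data":...} text printed in these entries maps onto a small event struct; a simplified rendering, with field types approximated from the log output rather than copied from kubelet source:

package sketch

// PodLifecycleEvent approximates the event printed by "SyncLoop (PLEG): event
// for pod": ID is the pod UID, Type is e.g. "ContainerStarted" or
// "ContainerDied", and Data carries the container ID the transition applies to.
type PodLifecycleEvent struct {
	ID   string
	Type string
	Data string
}
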
Jan 21 11:18:02 crc kubenswrapper[4881]: I0121 11:18:02.423777 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-b4bf-account-create-update-6p74j"]
Jan 21 11:18:02 crc kubenswrapper[4881]: I0121 11:18:02.439642 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-nv8vf"]
Jan 21 11:18:02 crc kubenswrapper[4881]: I0121 11:18:02.512697 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2l844\" (UniqueName: \"kubernetes.io/projected/331fda3a-4e64-4824-abd7-42eaef7b9b4f-kube-api-access-2l844\") pod \"keystone-b4bf-account-create-update-6p74j\" (UID: \"331fda3a-4e64-4824-abd7-42eaef7b9b4f\") " pod="openstack/keystone-b4bf-account-create-update-6p74j"
Jan 21 11:18:02 crc kubenswrapper[4881]: I0121 11:18:02.512865 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/331fda3a-4e64-4824-abd7-42eaef7b9b4f-operator-scripts\") pod \"keystone-b4bf-account-create-update-6p74j\" (UID: \"331fda3a-4e64-4824-abd7-42eaef7b9b4f\") " pod="openstack/keystone-b4bf-account-create-update-6p74j"
Jan 21 11:18:02 crc kubenswrapper[4881]: I0121 11:18:02.512901 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-64bd2\" (UniqueName: \"kubernetes.io/projected/317bbc59-5154-4c0e-920a-3227d1ec4982-kube-api-access-64bd2\") pod \"keystone-db-create-nv8vf\" (UID: \"317bbc59-5154-4c0e-920a-3227d1ec4982\") " pod="openstack/keystone-db-create-nv8vf"
Jan 21 11:18:02 crc kubenswrapper[4881]: I0121 11:18:02.513142 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/317bbc59-5154-4c0e-920a-3227d1ec4982-operator-scripts\") pod \"keystone-db-create-nv8vf\" (UID: \"317bbc59-5154-4c0e-920a-3227d1ec4982\") " pod="openstack/keystone-db-create-nv8vf"
Jan 21 11:18:02 crc kubenswrapper[4881]: I0121 11:18:02.567172 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-cp5cl"]
Jan 21 11:18:02 crc kubenswrapper[4881]: I0121 11:18:02.614855 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/317bbc59-5154-4c0e-920a-3227d1ec4982-operator-scripts\") pod \"keystone-db-create-nv8vf\" (UID: \"317bbc59-5154-4c0e-920a-3227d1ec4982\") " pod="openstack/keystone-db-create-nv8vf"
Jan 21 11:18:02 crc kubenswrapper[4881]: I0121 11:18:02.614941 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2l844\" (UniqueName: \"kubernetes.io/projected/331fda3a-4e64-4824-abd7-42eaef7b9b4f-kube-api-access-2l844\") pod \"keystone-b4bf-account-create-update-6p74j\" (UID: \"331fda3a-4e64-4824-abd7-42eaef7b9b4f\") " pod="openstack/keystone-b4bf-account-create-update-6p74j"
Jan 21 11:18:02 crc kubenswrapper[4881]: I0121 11:18:02.615004 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/331fda3a-4e64-4824-abd7-42eaef7b9b4f-operator-scripts\") pod \"keystone-b4bf-account-create-update-6p74j\" (UID: \"331fda3a-4e64-4824-abd7-42eaef7b9b4f\") " pod="openstack/keystone-b4bf-account-create-update-6p74j"
Jan 21 11:18:02 crc kubenswrapper[4881]: I0121 11:18:02.615052 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-64bd2\" (UniqueName: \"kubernetes.io/projected/317bbc59-5154-4c0e-920a-3227d1ec4982-kube-api-access-64bd2\") pod \"keystone-db-create-nv8vf\" (UID: \"317bbc59-5154-4c0e-920a-3227d1ec4982\") " pod="openstack/keystone-db-create-nv8vf"
Jan 21 11:18:02 crc kubenswrapper[4881]: I0121 11:18:02.616597 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/317bbc59-5154-4c0e-920a-3227d1ec4982-operator-scripts\") pod \"keystone-db-create-nv8vf\" (UID: \"317bbc59-5154-4c0e-920a-3227d1ec4982\") " pod="openstack/keystone-db-create-nv8vf"
Jan 21 11:18:02 crc kubenswrapper[4881]: I0121 11:18:02.616760 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/331fda3a-4e64-4824-abd7-42eaef7b9b4f-operator-scripts\") pod \"keystone-b4bf-account-create-update-6p74j\" (UID: \"331fda3a-4e64-4824-abd7-42eaef7b9b4f\") " pod="openstack/keystone-b4bf-account-create-update-6p74j"
Jan 21 11:18:02 crc kubenswrapper[4881]: I0121 11:18:02.640335 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-64bd2\" (UniqueName: \"kubernetes.io/projected/317bbc59-5154-4c0e-920a-3227d1ec4982-kube-api-access-64bd2\") pod \"keystone-db-create-nv8vf\" (UID: \"317bbc59-5154-4c0e-920a-3227d1ec4982\") " pod="openstack/keystone-db-create-nv8vf"
Jan 21 11:18:02 crc kubenswrapper[4881]: I0121 11:18:02.644052 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2l844\" (UniqueName: \"kubernetes.io/projected/331fda3a-4e64-4824-abd7-42eaef7b9b4f-kube-api-access-2l844\") pod \"keystone-b4bf-account-create-update-6p74j\" (UID: \"331fda3a-4e64-4824-abd7-42eaef7b9b4f\") " pod="openstack/keystone-b4bf-account-create-update-6p74j"
Jan 21 11:18:02 crc kubenswrapper[4881]: I0121 11:18:02.702307 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-b4bf-account-create-update-6p74j"
Jan 21 11:18:02 crc kubenswrapper[4881]: I0121 11:18:02.713682 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-smj4g"]
Jan 21 11:18:02 crc kubenswrapper[4881]: I0121 11:18:02.720470 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-smj4g"
Jan 21 11:18:02 crc kubenswrapper[4881]: I0121 11:18:02.725710 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-smj4g"]
Jan 21 11:18:02 crc kubenswrapper[4881]: I0121 11:18:02.746386 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-nv8vf"
Jan 21 11:18:02 crc kubenswrapper[4881]: I0121 11:18:02.818430 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r258x\" (UniqueName: \"kubernetes.io/projected/b6a422f0-bb4b-442c-a2d7-96ac90ffde83-kube-api-access-r258x\") pod \"placement-db-create-smj4g\" (UID: \"b6a422f0-bb4b-442c-a2d7-96ac90ffde83\") " pod="openstack/placement-db-create-smj4g"
Jan 21 11:18:02 crc kubenswrapper[4881]: I0121 11:18:02.818898 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b6a422f0-bb4b-442c-a2d7-96ac90ffde83-operator-scripts\") pod \"placement-db-create-smj4g\" (UID: \"b6a422f0-bb4b-442c-a2d7-96ac90ffde83\") " pod="openstack/placement-db-create-smj4g"
Jan 21 11:18:02 crc kubenswrapper[4881]: I0121 11:18:02.921058 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r258x\" (UniqueName: \"kubernetes.io/projected/b6a422f0-bb4b-442c-a2d7-96ac90ffde83-kube-api-access-r258x\") pod \"placement-db-create-smj4g\" (UID: \"b6a422f0-bb4b-442c-a2d7-96ac90ffde83\") " pod="openstack/placement-db-create-smj4g"
Jan 21 11:18:02 crc kubenswrapper[4881]: I0121 11:18:02.921512 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b6a422f0-bb4b-442c-a2d7-96ac90ffde83-operator-scripts\") pod \"placement-db-create-smj4g\" (UID: \"b6a422f0-bb4b-442c-a2d7-96ac90ffde83\") " pod="openstack/placement-db-create-smj4g"
Jan 21 11:18:02 crc kubenswrapper[4881]: I0121 11:18:02.922838 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b6a422f0-bb4b-442c-a2d7-96ac90ffde83-operator-scripts\") pod \"placement-db-create-smj4g\" (UID: \"b6a422f0-bb4b-442c-a2d7-96ac90ffde83\") " pod="openstack/placement-db-create-smj4g"
Jan 21 11:18:02 crc kubenswrapper[4881]: I0121 11:18:02.949467 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-a34b-account-create-update-hm56c"]
Jan 21 11:18:02 crc kubenswrapper[4881]: I0121 11:18:02.951248 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-a34b-account-create-update-hm56c"
Jan 21 11:18:02 crc kubenswrapper[4881]: I0121 11:18:02.956281 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r258x\" (UniqueName: \"kubernetes.io/projected/b6a422f0-bb4b-442c-a2d7-96ac90ffde83-kube-api-access-r258x\") pod \"placement-db-create-smj4g\" (UID: \"b6a422f0-bb4b-442c-a2d7-96ac90ffde83\") " pod="openstack/placement-db-create-smj4g"
Jan 21 11:18:02 crc kubenswrapper[4881]: I0121 11:18:02.956417 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret"
Jan 21 11:18:02 crc kubenswrapper[4881]: I0121 11:18:02.983016 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-a34b-account-create-update-hm56c"]
Jan 21 11:18:03 crc kubenswrapper[4881]: I0121 11:18:03.023490 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1c4be317-c914-45c5-8da4-1fe7d647db7e-operator-scripts\") pod \"placement-a34b-account-create-update-hm56c\" (UID: \"1c4be317-c914-45c5-8da4-1fe7d647db7e\") " pod="openstack/placement-a34b-account-create-update-hm56c"
Jan 21 11:18:03 crc kubenswrapper[4881]: I0121 11:18:03.025215 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h7s25\" (UniqueName: \"kubernetes.io/projected/1c4be317-c914-45c5-8da4-1fe7d647db7e-kube-api-access-h7s25\") pod \"placement-a34b-account-create-update-hm56c\" (UID: \"1c4be317-c914-45c5-8da4-1fe7d647db7e\") " pod="openstack/placement-a34b-account-create-update-hm56c"
Jan 21 11:18:03 crc kubenswrapper[4881]: I0121 11:18:03.053483 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-smj4g"
Jan 21 11:18:03 crc kubenswrapper[4881]: I0121 11:18:03.128926 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h7s25\" (UniqueName: \"kubernetes.io/projected/1c4be317-c914-45c5-8da4-1fe7d647db7e-kube-api-access-h7s25\") pod \"placement-a34b-account-create-update-hm56c\" (UID: \"1c4be317-c914-45c5-8da4-1fe7d647db7e\") " pod="openstack/placement-a34b-account-create-update-hm56c"
Jan 21 11:18:03 crc kubenswrapper[4881]: I0121 11:18:03.129044 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1c4be317-c914-45c5-8da4-1fe7d647db7e-operator-scripts\") pod \"placement-a34b-account-create-update-hm56c\" (UID: \"1c4be317-c914-45c5-8da4-1fe7d647db7e\") " pod="openstack/placement-a34b-account-create-update-hm56c"
Jan 21 11:18:03 crc kubenswrapper[4881]: I0121 11:18:03.130423 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1c4be317-c914-45c5-8da4-1fe7d647db7e-operator-scripts\") pod \"placement-a34b-account-create-update-hm56c\" (UID: \"1c4be317-c914-45c5-8da4-1fe7d647db7e\") " pod="openstack/placement-a34b-account-create-update-hm56c"
Jan 21 11:18:03 crc kubenswrapper[4881]: I0121 11:18:03.148993 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h7s25\" (UniqueName: \"kubernetes.io/projected/1c4be317-c914-45c5-8da4-1fe7d647db7e-kube-api-access-h7s25\") pod \"placement-a34b-account-create-update-hm56c\" (UID: \"1c4be317-c914-45c5-8da4-1fe7d647db7e\") " pod="openstack/placement-a34b-account-create-update-hm56c"
Jan 21 11:18:03 crc kubenswrapper[4881]: I0121 11:18:03.184370 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"75733567-f2a6-4331-bdea-147126213437","Type":"ContainerStarted","Data":"5833adb0117a8d41a669b51e672fa4471dd8e152778ebc0db32735d286328549"}
Jan 21 11:18:03 crc kubenswrapper[4881]: I0121 11:18:03.287312 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-a34b-account-create-update-hm56c"
Jan 21 11:18:03 crc kubenswrapper[4881]: I0121 11:18:03.743423 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/eafb725b-4d8c-44b6-8966-4c611d4897d8-etc-swift\") pod \"swift-storage-0\" (UID: \"eafb725b-4d8c-44b6-8966-4c611d4897d8\") " pod="openstack/swift-storage-0"
Jan 21 11:18:03 crc kubenswrapper[4881]: E0121 11:18:03.743692 4881 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Jan 21 11:18:03 crc kubenswrapper[4881]: E0121 11:18:03.744336 4881 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Jan 21 11:18:03 crc kubenswrapper[4881]: E0121 11:18:03.744522 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/eafb725b-4d8c-44b6-8966-4c611d4897d8-etc-swift podName:eafb725b-4d8c-44b6-8966-4c611d4897d8 nodeName:}" failed. No retries permitted until 2026-01-21 11:18:11.74449658 +0000 UTC m=+1279.004453069 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/eafb725b-4d8c-44b6-8966-4c611d4897d8-etc-swift") pod "swift-storage-0" (UID: "eafb725b-4d8c-44b6-8966-4c611d4897d8") : configmap "swift-ring-files" not found
Jan 21 11:18:04 crc kubenswrapper[4881]: I0121 11:18:04.731897 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-db-create-gc2qj"]
Jan 21 11:18:04 crc kubenswrapper[4881]: I0121 11:18:04.735198 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-db-create-gc2qj"
Jan 21 11:18:04 crc kubenswrapper[4881]: I0121 11:18:04.740270 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-db-create-gc2qj"]
Jan 21 11:18:04 crc kubenswrapper[4881]: I0121 11:18:04.867648 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5ecc1262-3ebf-4a17-bc42-507ce55f6d7e-operator-scripts\") pod \"watcher-db-create-gc2qj\" (UID: \"5ecc1262-3ebf-4a17-bc42-507ce55f6d7e\") " pod="openstack/watcher-db-create-gc2qj"
Jan 21 11:18:04 crc kubenswrapper[4881]: I0121 11:18:04.868093 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zw7gw\" (UniqueName: \"kubernetes.io/projected/5ecc1262-3ebf-4a17-bc42-507ce55f6d7e-kube-api-access-zw7gw\") pod \"watcher-db-create-gc2qj\" (UID: \"5ecc1262-3ebf-4a17-bc42-507ce55f6d7e\") " pod="openstack/watcher-db-create-gc2qj"
Jan 21 11:18:04 crc kubenswrapper[4881]: I0121 11:18:04.942993 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-8d4c-account-create-update-f29tp"]
Jan 21 11:18:04 crc kubenswrapper[4881]: I0121 11:18:04.944523 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-8d4c-account-create-update-f29tp"
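
The repeating etc-swift failure above is a projected volume waiting on a ConfigMap that does not exist yet: "swift-ring-files" is published by the swift-ring-rebalance job running elsewhere in this log, so the mount cannot succeed until that job completes. A sketch of the volume shape the errors imply, showing only the ConfigMap source named in the error (any other sources the real pod projects are omitted):

package sketch

import corev1 "k8s.io/api/core/v1"

// etcSwiftVolume reconstructs the projected volume from the error text. With
// Optional unset (false), a missing ConfigMap fails MountVolume.SetUp instead
// of being skipped, which is exactly the failure logged for swift-storage-0.
func etcSwiftVolume() corev1.Volume {
	return corev1.Volume{
		Name: "etc-swift",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{{
					ConfigMap: &corev1.ConfigMapProjection{
						LocalObjectReference: corev1.LocalObjectReference{
							Name: "swift-ring-files",
						},
					},
				}},
			},
		},
	}
}
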
Jan 21 11:18:04 crc kubenswrapper[4881]: I0121 11:18:04.948366 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-db-secret"
Jan 21 11:18:04 crc kubenswrapper[4881]: I0121 11:18:04.952245 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0"
Jan 21 11:18:04 crc kubenswrapper[4881]: I0121 11:18:04.953525 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-8d4c-account-create-update-f29tp"]
Jan 21 11:18:04 crc kubenswrapper[4881]: I0121 11:18:04.970102 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zw7gw\" (UniqueName: \"kubernetes.io/projected/5ecc1262-3ebf-4a17-bc42-507ce55f6d7e-kube-api-access-zw7gw\") pod \"watcher-db-create-gc2qj\" (UID: \"5ecc1262-3ebf-4a17-bc42-507ce55f6d7e\") " pod="openstack/watcher-db-create-gc2qj"
Jan 21 11:18:04 crc kubenswrapper[4881]: I0121 11:18:04.970216 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5ecc1262-3ebf-4a17-bc42-507ce55f6d7e-operator-scripts\") pod \"watcher-db-create-gc2qj\" (UID: \"5ecc1262-3ebf-4a17-bc42-507ce55f6d7e\") " pod="openstack/watcher-db-create-gc2qj"
Jan 21 11:18:04 crc kubenswrapper[4881]: I0121 11:18:04.970878 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5ecc1262-3ebf-4a17-bc42-507ce55f6d7e-operator-scripts\") pod \"watcher-db-create-gc2qj\" (UID: \"5ecc1262-3ebf-4a17-bc42-507ce55f6d7e\") " pod="openstack/watcher-db-create-gc2qj"
Jan 21 11:18:05 crc kubenswrapper[4881]: I0121 11:18:05.060541 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zw7gw\" (UniqueName: \"kubernetes.io/projected/5ecc1262-3ebf-4a17-bc42-507ce55f6d7e-kube-api-access-zw7gw\") pod \"watcher-db-create-gc2qj\" (UID: \"5ecc1262-3ebf-4a17-bc42-507ce55f6d7e\") " pod="openstack/watcher-db-create-gc2qj"
Jan 21 11:18:05 crc kubenswrapper[4881]: I0121 11:18:05.069750 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-db-create-gc2qj"
Jan 21 11:18:05 crc kubenswrapper[4881]: I0121 11:18:05.072003 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gv8rw\" (UniqueName: \"kubernetes.io/projected/13ea4f5c-fa1d-485c-80b3-a260d8725e81-kube-api-access-gv8rw\") pod \"watcher-8d4c-account-create-update-f29tp\" (UID: \"13ea4f5c-fa1d-485c-80b3-a260d8725e81\") " pod="openstack/watcher-8d4c-account-create-update-f29tp"
Jan 21 11:18:05 crc kubenswrapper[4881]: I0121 11:18:05.072104 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/13ea4f5c-fa1d-485c-80b3-a260d8725e81-operator-scripts\") pod \"watcher-8d4c-account-create-update-f29tp\" (UID: \"13ea4f5c-fa1d-485c-80b3-a260d8725e81\") " pod="openstack/watcher-8d4c-account-create-update-f29tp"
Jan 21 11:18:05 crc kubenswrapper[4881]: I0121 11:18:05.174897 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/13ea4f5c-fa1d-485c-80b3-a260d8725e81-operator-scripts\") pod \"watcher-8d4c-account-create-update-f29tp\" (UID: \"13ea4f5c-fa1d-485c-80b3-a260d8725e81\") " pod="openstack/watcher-8d4c-account-create-update-f29tp"
Jan 21 11:18:05 crc kubenswrapper[4881]: I0121 11:18:05.175080 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gv8rw\" (UniqueName: \"kubernetes.io/projected/13ea4f5c-fa1d-485c-80b3-a260d8725e81-kube-api-access-gv8rw\") pod \"watcher-8d4c-account-create-update-f29tp\" (UID: \"13ea4f5c-fa1d-485c-80b3-a260d8725e81\") " pod="openstack/watcher-8d4c-account-create-update-f29tp"
Jan 21 11:18:05 crc kubenswrapper[4881]: I0121 11:18:05.176551 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/13ea4f5c-fa1d-485c-80b3-a260d8725e81-operator-scripts\") pod \"watcher-8d4c-account-create-update-f29tp\" (UID: \"13ea4f5c-fa1d-485c-80b3-a260d8725e81\") " pod="openstack/watcher-8d4c-account-create-update-f29tp"
Jan 21 11:18:05 crc kubenswrapper[4881]: I0121 11:18:05.205222 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gv8rw\" (UniqueName: \"kubernetes.io/projected/13ea4f5c-fa1d-485c-80b3-a260d8725e81-kube-api-access-gv8rw\") pod \"watcher-8d4c-account-create-update-f29tp\" (UID: \"13ea4f5c-fa1d-485c-80b3-a260d8725e81\") " pod="openstack/watcher-8d4c-account-create-update-f29tp"
Jan 21 11:18:05 crc kubenswrapper[4881]: I0121 11:18:05.268454 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-8d4c-account-create-update-f29tp"
Jan 21 11:18:08 crc kubenswrapper[4881]: W0121 11:18:08.272741 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod07845bf5_b5f8_4a00_9d0e_b86f5062f1ec.slice/crio-9ade4fe84a29987bc9e08c5c3d4f89144fde4ef8c7952c33c4574696f711b01e WatchSource:0}: Error finding container 9ade4fe84a29987bc9e08c5c3d4f89144fde4ef8c7952c33c4574696f711b01e: Status 404 returned error can't find the container with id 9ade4fe84a29987bc9e08c5c3d4f89144fde4ef8c7952c33c4574696f711b01e
Jan 21 11:18:08 crc kubenswrapper[4881]: I0121 11:18:08.884028 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-db-create-gc2qj"]
Jan 21 11:18:08 crc kubenswrapper[4881]: I0121 11:18:08.972397 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-a34b-account-create-update-hm56c"]
Jan 21 11:18:08 crc kubenswrapper[4881]: I0121 11:18:08.983507 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-b4bf-account-create-update-6p74j"]
Jan 21 11:18:08 crc kubenswrapper[4881]: W0121 11:18:08.986361 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod331fda3a_4e64_4824_abd7_42eaef7b9b4f.slice/crio-276b421549bf6d196987a877eafdaddacc3fb3a5a15f164ab2c4ad7c7b40910d WatchSource:0}: Error finding container 276b421549bf6d196987a877eafdaddacc3fb3a5a15f164ab2c4ad7c7b40910d: Status 404 returned error can't find the container with id 276b421549bf6d196987a877eafdaddacc3fb3a5a15f164ab2c4ad7c7b40910d
Jan 21 11:18:09 crc kubenswrapper[4881]: I0121 11:18:09.181507 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-nv8vf"]
Jan 21 11:18:09 crc kubenswrapper[4881]: W0121 11:18:09.182682 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod317bbc59_5154_4c0e_920a_3227d1ec4982.slice/crio-2daa0664d66cd137c24ccb2e8c0b5c88e27c6e03d9118e926f3e7325eeefc498 WatchSource:0}: Error finding container 2daa0664d66cd137c24ccb2e8c0b5c88e27c6e03d9118e926f3e7325eeefc498: Status 404 returned error can't find the container with id 2daa0664d66cd137c24ccb2e8c0b5c88e27c6e03d9118e926f3e7325eeefc498
Jan 21 11:18:09 crc kubenswrapper[4881]: I0121 11:18:09.242992 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-smj4g"]
Jan 21 11:18:09 crc kubenswrapper[4881]: I0121 11:18:09.280708 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-8d4c-account-create-update-f29tp"]
Jan 21 11:18:09 crc kubenswrapper[4881]: I0121 11:18:09.302069 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-cp5cl" event={"ID":"07845bf5-b5f8-4a00-9d0e-b86f5062f1ec","Type":"ContainerStarted","Data":"ce6a2cc0cc6379a9f8ed18cfa5d64954b4b7fdd11d37db77a73b2856418b87db"}
Jan 21 11:18:09 crc kubenswrapper[4881]: I0121 11:18:09.302119 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-cp5cl" event={"ID":"07845bf5-b5f8-4a00-9d0e-b86f5062f1ec","Type":"ContainerStarted","Data":"9ade4fe84a29987bc9e08c5c3d4f89144fde4ef8c7952c33c4574696f711b01e"}
Jan 21 11:18:09 crc kubenswrapper[4881]: I0121 11:18:09.339383 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-a34b-account-create-update-hm56c" event={"ID":"1c4be317-c914-45c5-8da4-1fe7d647db7e","Type":"ContainerStarted","Data":"afe8f7c033a7212026d827f9755a996c22dd8a81009d9ff086f6c7998b052858"}
Jan 21 11:18:09 crc kubenswrapper[4881]: I0121 11:18:09.339440 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-j29v8" event={"ID":"27451133-57c8-4991-aae0-ec0a82432176","Type":"ContainerStarted","Data":"5534ffef8705672a9dc2dcfe0651ff073211f019174a771251276741f854255a"}
Jan 21 11:18:09 crc kubenswrapper[4881]: I0121 11:18:09.342332 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-cp5cl" podStartSLOduration=9.34231614 podStartE2EDuration="9.34231614s" podCreationTimestamp="2026-01-21 11:18:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 11:18:09.324989658 +0000 UTC m=+1276.584946137" watchObservedRunningTime="2026-01-21 11:18:09.34231614 +0000 UTC m=+1276.602272599"
Jan 21 11:18:09 crc kubenswrapper[4881]: I0121 11:18:09.359734 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-nv8vf" event={"ID":"317bbc59-5154-4c0e-920a-3227d1ec4982","Type":"ContainerStarted","Data":"2daa0664d66cd137c24ccb2e8c0b5c88e27c6e03d9118e926f3e7325eeefc498"}
Jan 21 11:18:09 crc kubenswrapper[4881]: I0121 11:18:09.371746 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"b3882b01-10ce-4832-ae71-676a8b65b086","Type":"ContainerStarted","Data":"e650113f6eb63d8248286db4439fd2bedd5a37053b0d0504f1ef297251b2857e"}
Jan 21 11:18:09 crc kubenswrapper[4881]: I0121 11:18:09.371796 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"b3882b01-10ce-4832-ae71-676a8b65b086","Type":"ContainerStarted","Data":"8f149eb598e6f19a2fd3b5a35108a80539fb645cee3285c2ced977b3e69057dc"}
Jan 21 11:18:09 crc kubenswrapper[4881]: I0121 11:18:09.372816 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0"
Jan 21 11:18:09 crc kubenswrapper[4881]: I0121 11:18:09.373924 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-create-gc2qj" event={"ID":"5ecc1262-3ebf-4a17-bc42-507ce55f6d7e","Type":"ContainerStarted","Data":"58c871aeff72223fb977bc5b168401e1ae43b57006b7711f7f615f35566c1421"}
Jan 21 11:18:09 crc kubenswrapper[4881]: I0121 11:18:09.390296 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84cb884cf9-wmwx8" event={"ID":"62435f30-e8fc-4fcd-8b96-4a604439965e","Type":"ContainerStarted","Data":"a28ebc9fc60a5b5f4c6d8022f7888aae1167af104726fcaf924581e71afbdd73"}
Jan 21 11:18:09 crc kubenswrapper[4881]: I0121 11:18:09.390797 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-j29v8" podStartSLOduration=5.350123515 podStartE2EDuration="13.390758421s" podCreationTimestamp="2026-01-21 11:17:56 +0000 UTC" firstStartedPulling="2026-01-21 11:18:00.372689178 +0000 UTC m=+1267.632645647" lastFinishedPulling="2026-01-21 11:18:08.413324084 +0000 UTC m=+1275.673280553" observedRunningTime="2026-01-21 11:18:09.359332065 +0000 UTC m=+1276.619288544" watchObservedRunningTime="2026-01-21 11:18:09.390758421 +0000 UTC m=+1276.650714890"
Jan 21 11:18:09 crc kubenswrapper[4881]: I0121 11:18:09.391163 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-84cb884cf9-wmwx8"
Jan 21 11:18:09 crc kubenswrapper[4881]: I0121 11:18:09.397526 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-b4bf-account-create-update-6p74j" event={"ID":"331fda3a-4e64-4824-abd7-42eaef7b9b4f","Type":"ContainerStarted","Data":"276b421549bf6d196987a877eafdaddacc3fb3a5a15f164ab2c4ad7c7b40910d"}
Jan 21 11:18:09 crc kubenswrapper[4881]: I0121 11:18:09.419422 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=4.299634443 podStartE2EDuration="12.419396917s" podCreationTimestamp="2026-01-21 11:17:57 +0000 UTC" firstStartedPulling="2026-01-21 11:18:00.235396177 +0000 UTC m=+1267.495352646" lastFinishedPulling="2026-01-21 11:18:08.355158651 +0000 UTC m=+1275.615115120" observedRunningTime="2026-01-21 11:18:09.410402562 +0000 UTC m=+1276.670359051" watchObservedRunningTime="2026-01-21 11:18:09.419396917 +0000 UTC m=+1276.679353386"
Jan 21 11:18:09 crc kubenswrapper[4881]: I0121 11:18:09.444456 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-db-create-gc2qj" podStartSLOduration=5.444425452 podStartE2EDuration="5.444425452s" podCreationTimestamp="2026-01-21 11:18:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 11:18:09.433175761 +0000 UTC m=+1276.693132250" watchObservedRunningTime="2026-01-21 11:18:09.444425452 +0000 UTC m=+1276.704381921"
Jan 21 11:18:09 crc kubenswrapper[4881]: I0121 11:18:09.457989 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-84cb884cf9-wmwx8" podStartSLOduration=15.457972431 podStartE2EDuration="15.457972431s" podCreationTimestamp="2026-01-21 11:17:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 11:18:09.451327295 +0000 UTC m=+1276.711283764" watchObservedRunningTime="2026-01-21 11:18:09.457972431 +0000 UTC m=+1276.717928900"
Jan 21 11:18:10 crc kubenswrapper[4881]: I0121 11:18:10.409956 4881 generic.go:334] "Generic (PLEG): container finished" podID="5ecc1262-3ebf-4a17-bc42-507ce55f6d7e" containerID="d8dd72ec74cb8c65a23a4d5b59b35333d8b4f0429542fb48634decd408b21787" exitCode=0
Jan 21 11:18:10 crc kubenswrapper[4881]: I0121 11:18:10.410245 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-create-gc2qj" event={"ID":"5ecc1262-3ebf-4a17-bc42-507ce55f6d7e","Type":"ContainerDied","Data":"d8dd72ec74cb8c65a23a4d5b59b35333d8b4f0429542fb48634decd408b21787"}
Jan 21 11:18:10 crc kubenswrapper[4881]: I0121 11:18:10.416674 4881 generic.go:334] "Generic (PLEG): container finished" podID="331fda3a-4e64-4824-abd7-42eaef7b9b4f" containerID="5dc89d3192dccc5bebeec553b9ca36f3b56735830fa2f8fae09494c5f8979443" exitCode=0
Jan 21 11:18:10 crc kubenswrapper[4881]: I0121 11:18:10.416830 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-b4bf-account-create-update-6p74j" event={"ID":"331fda3a-4e64-4824-abd7-42eaef7b9b4f","Type":"ContainerDied","Data":"5dc89d3192dccc5bebeec553b9ca36f3b56735830fa2f8fae09494c5f8979443"}
Jan 21 11:18:10 crc kubenswrapper[4881]: I0121 11:18:10.418407 4881 generic.go:334] "Generic (PLEG): container finished" podID="07845bf5-b5f8-4a00-9d0e-b86f5062f1ec" containerID="ce6a2cc0cc6379a9f8ed18cfa5d64954b4b7fdd11d37db77a73b2856418b87db" exitCode=0
Jan 21 11:18:10 crc kubenswrapper[4881]: I0121 11:18:10.418496 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-cp5cl" event={"ID":"07845bf5-b5f8-4a00-9d0e-b86f5062f1ec","Type":"ContainerDied","Data":"ce6a2cc0cc6379a9f8ed18cfa5d64954b4b7fdd11d37db77a73b2856418b87db"}
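
The pod_startup_latency_tracker entries above carry two durations: podStartE2EDuration (observed running time minus pod creation) and podStartSLOduration (the same with the image-pull window subtracted). The ovn-northd-0 numbers check out from the monotonic m=+ offsets printed in the entry; a small Go verification:

package main

import "fmt"

// Recomputes podStartSLOduration for ovn-northd-0 from the values in the log:
// end-to-end start time minus the pull window (lastFinishedPulling - firstStartedPulling).
func main() {
	const (
		e2e       = 12.419396917   // podStartE2EDuration, seconds
		firstPull = 1267.495352646 // firstStartedPulling, m=+ offset in seconds
		lastPull  = 1275.615115120 // lastFinishedPulling, m=+ offset in seconds
	)
	pullWindow := lastPull - firstPull
	fmt.Printf("pull window: %.9fs\n", pullWindow)      // ~8.119762474s
	fmt.Printf("SLO duration: %.9fs\n", e2e-pullWindow) // ~4.299634443s, as logged
}

The same arithmetic reproduces the swift-ring-rebalance-j29v8 entry (13.390758421 minus a 8.040634906s pull window gives 5.350123515). For pods whose image was already present, firstStartedPulling and lastFinishedPulling sit at the zero time 0001-01-01 and the two durations are identical, as in the root-account-create-update-cp5cl entry.
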
Jan 21 11:18:10 crc kubenswrapper[4881]: I0121 11:18:10.419621 4881 generic.go:334] "Generic (PLEG): container finished" podID="1c4be317-c914-45c5-8da4-1fe7d647db7e" containerID="08a0b7dafd2179b30f57680020c59d606fe75966918c8bb86686a6dacf5de9ff" exitCode=0
Jan 21 11:18:10 crc kubenswrapper[4881]: I0121 11:18:10.419738 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-a34b-account-create-update-hm56c" event={"ID":"1c4be317-c914-45c5-8da4-1fe7d647db7e","Type":"ContainerDied","Data":"08a0b7dafd2179b30f57680020c59d606fe75966918c8bb86686a6dacf5de9ff"}
Jan 21 11:18:10 crc kubenswrapper[4881]: I0121 11:18:10.422670 4881 generic.go:334] "Generic (PLEG): container finished" podID="317bbc59-5154-4c0e-920a-3227d1ec4982" containerID="8b53d4f0258b883730ea2ab9cbc22ea1275e34223ca52f3ff089755ba0514b17" exitCode=0
Jan 21 11:18:10 crc kubenswrapper[4881]: I0121 11:18:10.422723 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-nv8vf" event={"ID":"317bbc59-5154-4c0e-920a-3227d1ec4982","Type":"ContainerDied","Data":"8b53d4f0258b883730ea2ab9cbc22ea1275e34223ca52f3ff089755ba0514b17"}
Jan 21 11:18:10 crc kubenswrapper[4881]: I0121 11:18:10.428620 4881 generic.go:334] "Generic (PLEG): container finished" podID="b6a422f0-bb4b-442c-a2d7-96ac90ffde83" containerID="8e69c6e6b0d6f76b9304a07ebd26d806a9e9908cc09c50913b96d416ca2b1454" exitCode=0
Jan 21 11:18:10 crc kubenswrapper[4881]: I0121 11:18:10.428716 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-smj4g" event={"ID":"b6a422f0-bb4b-442c-a2d7-96ac90ffde83","Type":"ContainerDied","Data":"8e69c6e6b0d6f76b9304a07ebd26d806a9e9908cc09c50913b96d416ca2b1454"}
Jan 21 11:18:10 crc kubenswrapper[4881]: I0121 11:18:10.428753 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-smj4g" event={"ID":"b6a422f0-bb4b-442c-a2d7-96ac90ffde83","Type":"ContainerStarted","Data":"44bbcef1140bc7525d4deb943d4b8475b95e76e49e944932a4346bc691fe09f4"}
Jan 21 11:18:10 crc kubenswrapper[4881]: I0121 11:18:10.437182 4881 generic.go:334] "Generic (PLEG): container finished" podID="13ea4f5c-fa1d-485c-80b3-a260d8725e81" containerID="9ae9aa24bb02508282163c868da5d6ab7a85e49192dbd35ecea2bbccdab0b150" exitCode=0
Jan 21 11:18:10 crc kubenswrapper[4881]: I0121 11:18:10.438136 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-8d4c-account-create-update-f29tp" event={"ID":"13ea4f5c-fa1d-485c-80b3-a260d8725e81","Type":"ContainerDied","Data":"9ae9aa24bb02508282163c868da5d6ab7a85e49192dbd35ecea2bbccdab0b150"}
Jan 21 11:18:10 crc kubenswrapper[4881]: I0121 11:18:10.438259 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-8d4c-account-create-update-f29tp" event={"ID":"13ea4f5c-fa1d-485c-80b3-a260d8725e81","Type":"ContainerStarted","Data":"3f180f71b7e84f243dc0e8ce19590c31eb5697d4c0625c36de20a7e3a9598f3a"}
Jan 21 11:18:11 crc kubenswrapper[4881]: I0121 11:18:11.760904 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/eafb725b-4d8c-44b6-8966-4c611d4897d8-etc-swift\") pod \"swift-storage-0\" (UID: \"eafb725b-4d8c-44b6-8966-4c611d4897d8\") " pod="openstack/swift-storage-0"
Jan 21 11:18:11 crc kubenswrapper[4881]: E0121 11:18:11.761095 4881 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Jan 21 11:18:11 crc kubenswrapper[4881]: E0121 11:18:11.761436 4881 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Jan 21 11:18:11 crc kubenswrapper[4881]: E0121 11:18:11.761498 4881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/eafb725b-4d8c-44b6-8966-4c611d4897d8-etc-swift podName:eafb725b-4d8c-44b6-8966-4c611d4897d8 nodeName:}" failed. No retries permitted until 2026-01-21 11:18:27.761478906 +0000 UTC m=+1295.021435375 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/eafb725b-4d8c-44b6-8966-4c611d4897d8-etc-swift") pod "swift-storage-0" (UID: "eafb725b-4d8c-44b6-8966-4c611d4897d8") : configmap "swift-ring-files" not found
Jan 21 11:18:14 crc kubenswrapper[4881]: I0121 11:18:14.347263 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-2rtl8"
Jan 21 11:18:14 crc kubenswrapper[4881]: I0121 11:18:14.351994 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-2rtl8"
Jan 21 11:18:14 crc kubenswrapper[4881]: I0121 11:18:14.595640 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-s642n-config-dk4k8"]
Jan 21 11:18:14 crc kubenswrapper[4881]: I0121 11:18:14.602940 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-s642n-config-dk4k8"
Jan 21 11:18:14 crc kubenswrapper[4881]: I0121 11:18:14.607069 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts"
Jan 21 11:18:14 crc kubenswrapper[4881]: I0121 11:18:14.610748 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-s642n-config-dk4k8"]
Jan 21 11:18:14 crc kubenswrapper[4881]: I0121 11:18:14.829879 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bb419db7-7bc4-473f-a1ea-7878c6cc7cee-scripts\") pod \"ovn-controller-s642n-config-dk4k8\" (UID: \"bb419db7-7bc4-473f-a1ea-7878c6cc7cee\") " pod="openstack/ovn-controller-s642n-config-dk4k8"
Jan 21 11:18:14 crc kubenswrapper[4881]: I0121 11:18:14.829960 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2qsnq\" (UniqueName: \"kubernetes.io/projected/bb419db7-7bc4-473f-a1ea-7878c6cc7cee-kube-api-access-2qsnq\") pod \"ovn-controller-s642n-config-dk4k8\" (UID: \"bb419db7-7bc4-473f-a1ea-7878c6cc7cee\") " pod="openstack/ovn-controller-s642n-config-dk4k8"
Jan 21 11:18:14 crc kubenswrapper[4881]: I0121 11:18:14.829992 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/bb419db7-7bc4-473f-a1ea-7878c6cc7cee-var-log-ovn\") pod \"ovn-controller-s642n-config-dk4k8\" (UID: \"bb419db7-7bc4-473f-a1ea-7878c6cc7cee\") " pod="openstack/ovn-controller-s642n-config-dk4k8"
Jan 21 11:18:14 crc kubenswrapper[4881]: I0121 11:18:14.830046 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/bb419db7-7bc4-473f-a1ea-7878c6cc7cee-var-run\") pod \"ovn-controller-s642n-config-dk4k8\" (UID: \"bb419db7-7bc4-473f-a1ea-7878c6cc7cee\") " pod="openstack/ovn-controller-s642n-config-dk4k8"
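
Note the retry interval for the etc-swift operation doubling between failures: durationBeforeRetry 8s at 11:18:03, then 16s here. kubelet keeps a per-operation exponential backoff for failing volume operations; a minimal sketch of that doubling policy (the initial interval and the cap below are assumptions for illustration, not constants read from kubelet source):

package sketch

import "time"

// expBackoff doubles the wait after each consecutive failure, up to a cap,
// consistent with the 8s -> 16s progression seen in the log.
type expBackoff struct{ last time.Duration }

func (b *expBackoff) next() time.Duration {
	const (
		initial = 500 * time.Millisecond        // assumed starting interval
		maxWait = 2*time.Minute + 2*time.Second // assumed upper bound
	)
	switch {
	case b.last == 0:
		b.last = initial
	case b.last*2 > maxWait:
		b.last = maxWait
	default:
		b.last *= 2
	}
	return b.last
}
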
Jan 21 11:18:14 crc kubenswrapper[4881]: I0121 11:18:14.830191 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/bb419db7-7bc4-473f-a1ea-7878c6cc7cee-var-run-ovn\") pod \"ovn-controller-s642n-config-dk4k8\" (UID: \"bb419db7-7bc4-473f-a1ea-7878c6cc7cee\") " pod="openstack/ovn-controller-s642n-config-dk4k8"
Jan 21 11:18:14 crc kubenswrapper[4881]: I0121 11:18:14.830480 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/bb419db7-7bc4-473f-a1ea-7878c6cc7cee-additional-scripts\") pod \"ovn-controller-s642n-config-dk4k8\" (UID: \"bb419db7-7bc4-473f-a1ea-7878c6cc7cee\") " pod="openstack/ovn-controller-s642n-config-dk4k8"
Jan 21 11:18:14 crc kubenswrapper[4881]: I0121 11:18:14.933850 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bb419db7-7bc4-473f-a1ea-7878c6cc7cee-scripts\") pod \"ovn-controller-s642n-config-dk4k8\" (UID: \"bb419db7-7bc4-473f-a1ea-7878c6cc7cee\") " pod="openstack/ovn-controller-s642n-config-dk4k8"
Jan 21 11:18:14 crc kubenswrapper[4881]: I0121 11:18:14.933941 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2qsnq\" (UniqueName: \"kubernetes.io/projected/bb419db7-7bc4-473f-a1ea-7878c6cc7cee-kube-api-access-2qsnq\") pod \"ovn-controller-s642n-config-dk4k8\" (UID: \"bb419db7-7bc4-473f-a1ea-7878c6cc7cee\") " pod="openstack/ovn-controller-s642n-config-dk4k8"
Jan 21 11:18:14 crc kubenswrapper[4881]: I0121 11:18:14.933985 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/bb419db7-7bc4-473f-a1ea-7878c6cc7cee-var-log-ovn\") pod \"ovn-controller-s642n-config-dk4k8\" (UID: \"bb419db7-7bc4-473f-a1ea-7878c6cc7cee\") " pod="openstack/ovn-controller-s642n-config-dk4k8"
Jan 21 11:18:14 crc kubenswrapper[4881]: I0121 11:18:14.934039 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/bb419db7-7bc4-473f-a1ea-7878c6cc7cee-var-run\") pod \"ovn-controller-s642n-config-dk4k8\" (UID: \"bb419db7-7bc4-473f-a1ea-7878c6cc7cee\") " pod="openstack/ovn-controller-s642n-config-dk4k8"
Jan 21 11:18:14 crc kubenswrapper[4881]: I0121 11:18:14.934088 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/bb419db7-7bc4-473f-a1ea-7878c6cc7cee-var-run-ovn\") pod \"ovn-controller-s642n-config-dk4k8\" (UID: \"bb419db7-7bc4-473f-a1ea-7878c6cc7cee\") " pod="openstack/ovn-controller-s642n-config-dk4k8"
Jan 21 11:18:14 crc kubenswrapper[4881]: I0121 11:18:14.934151 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/bb419db7-7bc4-473f-a1ea-7878c6cc7cee-additional-scripts\") pod \"ovn-controller-s642n-config-dk4k8\" (UID: \"bb419db7-7bc4-473f-a1ea-7878c6cc7cee\") " pod="openstack/ovn-controller-s642n-config-dk4k8"
Jan 21 11:18:14 crc kubenswrapper[4881]: I0121 11:18:14.934495 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/bb419db7-7bc4-473f-a1ea-7878c6cc7cee-var-log-ovn\") pod \"ovn-controller-s642n-config-dk4k8\" (UID: \"bb419db7-7bc4-473f-a1ea-7878c6cc7cee\") " pod="openstack/ovn-controller-s642n-config-dk4k8"
Jan 21 11:18:14 crc kubenswrapper[4881]: I0121 11:18:14.934536 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/bb419db7-7bc4-473f-a1ea-7878c6cc7cee-var-run-ovn\") pod \"ovn-controller-s642n-config-dk4k8\" (UID: \"bb419db7-7bc4-473f-a1ea-7878c6cc7cee\") " pod="openstack/ovn-controller-s642n-config-dk4k8"
Jan 21 11:18:14 crc kubenswrapper[4881]: I0121 11:18:14.934510 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/bb419db7-7bc4-473f-a1ea-7878c6cc7cee-var-run\") pod \"ovn-controller-s642n-config-dk4k8\" (UID: \"bb419db7-7bc4-473f-a1ea-7878c6cc7cee\") " pod="openstack/ovn-controller-s642n-config-dk4k8"
Jan 21 11:18:14 crc kubenswrapper[4881]: I0121 11:18:14.935225 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/bb419db7-7bc4-473f-a1ea-7878c6cc7cee-additional-scripts\") pod \"ovn-controller-s642n-config-dk4k8\" (UID: \"bb419db7-7bc4-473f-a1ea-7878c6cc7cee\") " pod="openstack/ovn-controller-s642n-config-dk4k8"
Jan 21 11:18:14 crc kubenswrapper[4881]: I0121 11:18:14.936843 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bb419db7-7bc4-473f-a1ea-7878c6cc7cee-scripts\") pod \"ovn-controller-s642n-config-dk4k8\" (UID: \"bb419db7-7bc4-473f-a1ea-7878c6cc7cee\") " pod="openstack/ovn-controller-s642n-config-dk4k8"
Jan 21 11:18:14 crc kubenswrapper[4881]: I0121 11:18:14.959939 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2qsnq\" (UniqueName: \"kubernetes.io/projected/bb419db7-7bc4-473f-a1ea-7878c6cc7cee-kube-api-access-2qsnq\") pod \"ovn-controller-s642n-config-dk4k8\" (UID: \"bb419db7-7bc4-473f-a1ea-7878c6cc7cee\") " pod="openstack/ovn-controller-s642n-config-dk4k8"
Jan 21 11:18:15 crc kubenswrapper[4881]: I0121 11:18:15.041217 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-s642n-config-dk4k8"
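
Every pod in this log walks its volumes through the same reconciler sequence just completed for ovn-controller-s642n-config-dk4k8: VerifyControllerAttachedVolume, then operationExecutor.MountVolume, then MountVolume.SetUp succeeded; the mirror-image UnmountVolume/TearDown pair appears once a pod is deleted (see the finished create jobs at 11:18:17 below). A schematic of that progression; kubelet actually tracks this through its desired- and actual-state-of-world caches rather than a linear state machine, so the phases below are illustrative only:

package sketch

// volumePhase names the reconciler milestones as they appear in the log, in
// the order they are emitted for each volume of a pod.
type volumePhase int

const (
	attachVerified volumePhase = iota // "VerifyControllerAttachedVolume started"
	mountStarted                      // "operationExecutor.MountVolume started"
	setUpSucceeded                    // "MountVolume.SetUp succeeded"
	unmountStarted                    // "operationExecutor.UnmountVolume started" (after pod deletion)
	tearDownDone                      // "UnmountVolume.TearDown succeeded"
)
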
Jan 21 11:18:15 crc kubenswrapper[4881]: I0121 11:18:15.088117 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-84cb884cf9-wmwx8"
Jan 21 11:18:15 crc kubenswrapper[4881]: I0121 11:18:15.166477 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5bbbc7b58c-8f8v7"]
Jan 21 11:18:15 crc kubenswrapper[4881]: I0121 11:18:15.167264 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5bbbc7b58c-8f8v7" podUID="efbfd001-4602-47b8-8c93-750ee3526e9e" containerName="dnsmasq-dns" containerID="cri-o://459e19bc99c44fd2c891c741bcf902ef1564b6013c62bfcf04dec268218723e7" gracePeriod=10
Jan 21 11:18:15 crc kubenswrapper[4881]: I0121 11:18:15.482171 4881 generic.go:334] "Generic (PLEG): container finished" podID="44bcf219-3358-4596-9d1e-88a51c415266" containerID="49c33a525e9cb9bae99d4cbbbfd17980a01d8ffda81efc8033434da5404beb26" exitCode=0
Jan 21 11:18:15 crc kubenswrapper[4881]: I0121 11:18:15.482241 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-notifications-server-0" event={"ID":"44bcf219-3358-4596-9d1e-88a51c415266","Type":"ContainerDied","Data":"49c33a525e9cb9bae99d4cbbbfd17980a01d8ffda81efc8033434da5404beb26"}
Jan 21 11:18:15 crc kubenswrapper[4881]: I0121 11:18:15.484411 4881 generic.go:334] "Generic (PLEG): container finished" podID="078c2368-b247-49d4-8723-fd93918e99b1" containerID="26f697deade0e9783aed3c09129f2f0589fbb10b53e3501c212b7fcc5f5b5d86" exitCode=0
Jan 21 11:18:15 crc kubenswrapper[4881]: I0121 11:18:15.484480 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"078c2368-b247-49d4-8723-fd93918e99b1","Type":"ContainerDied","Data":"26f697deade0e9783aed3c09129f2f0589fbb10b53e3501c212b7fcc5f5b5d86"}
Jan 21 11:18:15 crc kubenswrapper[4881]: I0121 11:18:15.486000 4881 generic.go:334] "Generic (PLEG): container finished" podID="f7e90972-9be1-4d3e-852e-e7f7df6e6623" containerID="b30e547e2506fcebf2f8ac627808ad3f0382510a160b2079a570164ee838adfc" exitCode=0
Jan 21 11:18:15 crc kubenswrapper[4881]: I0121 11:18:15.486035 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"f7e90972-9be1-4d3e-852e-e7f7df6e6623","Type":"ContainerDied","Data":"b30e547e2506fcebf2f8ac627808ad3f0382510a160b2079a570164ee838adfc"}
Jan 21 11:18:16 crc kubenswrapper[4881]: I0121 11:18:16.987306 4881 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5bbbc7b58c-8f8v7" podUID="efbfd001-4602-47b8-8c93-750ee3526e9e" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.120:5353: connect: connection refused"
Jan 21 11:18:17 crc kubenswrapper[4881]: I0121 11:18:17.397661 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-cp5cl"
Jan 21 11:18:17 crc kubenswrapper[4881]: I0121 11:18:17.440117 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-b4bf-account-create-update-6p74j"
Jan 21 11:18:17 crc kubenswrapper[4881]: I0121 11:18:17.471181 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-smj4g"
Jan 21 11:18:17 crc kubenswrapper[4881]: I0121 11:18:17.485182 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-a34b-account-create-update-hm56c"
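
The dnsmasq-dns-5bbbc7b58c-8f8v7 teardown above is a graceful replacement rather than a crash: the API DELETE arrives once the replacement dnsmasq-dns-84cb884cf9-wmwx8 reports ready, kubelet kills the old container with gracePeriod=10, and the readiness probe starts failing with connection refused while the process exits (the ContainerDied event follows at 11:18:17 below). The probe output reads like a TCP check against the DNS port; a sketch consistent with it (whether the real pod uses a TCPSocket handler or an exec check that dials the port is not visible in the log):

package sketch

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

// dnsReadinessProbe reconstructs a probe that would produce the logged
// "dial tcp 10.217.0.120:5353: connect: connection refused" failure.
func dnsReadinessProbe() *corev1.Probe {
	return &corev1.Probe{
		ProbeHandler: corev1.ProbeHandler{
			TCPSocket: &corev1.TCPSocketAction{
				Port: intstr.FromInt(5353), // dnsmasq port from the probe output
			},
		},
	}
}
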
Need to start a new one" pod="openstack/placement-a34b-account-create-update-hm56c" Jan 21 11:18:17 crc kubenswrapper[4881]: I0121 11:18:17.502612 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-nv8vf" Jan 21 11:18:17 crc kubenswrapper[4881]: I0121 11:18:17.518305 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-8d4c-account-create-update-f29tp" Jan 21 11:18:17 crc kubenswrapper[4881]: I0121 11:18:17.519028 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-a34b-account-create-update-hm56c" event={"ID":"1c4be317-c914-45c5-8da4-1fe7d647db7e","Type":"ContainerDied","Data":"afe8f7c033a7212026d827f9755a996c22dd8a81009d9ff086f6c7998b052858"} Jan 21 11:18:17 crc kubenswrapper[4881]: I0121 11:18:17.519059 4881 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="afe8f7c033a7212026d827f9755a996c22dd8a81009d9ff086f6c7998b052858" Jan 21 11:18:17 crc kubenswrapper[4881]: I0121 11:18:17.519100 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-a34b-account-create-update-hm56c" Jan 21 11:18:17 crc kubenswrapper[4881]: I0121 11:18:17.519756 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-db-create-gc2qj" Jan 21 11:18:17 crc kubenswrapper[4881]: I0121 11:18:17.520701 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-nv8vf" event={"ID":"317bbc59-5154-4c0e-920a-3227d1ec4982","Type":"ContainerDied","Data":"2daa0664d66cd137c24ccb2e8c0b5c88e27c6e03d9118e926f3e7325eeefc498"} Jan 21 11:18:17 crc kubenswrapper[4881]: I0121 11:18:17.520730 4881 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2daa0664d66cd137c24ccb2e8c0b5c88e27c6e03d9118e926f3e7325eeefc498" Jan 21 11:18:17 crc kubenswrapper[4881]: I0121 11:18:17.520772 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-nv8vf" Jan 21 11:18:17 crc kubenswrapper[4881]: I0121 11:18:17.523160 4881 generic.go:334] "Generic (PLEG): container finished" podID="efbfd001-4602-47b8-8c93-750ee3526e9e" containerID="459e19bc99c44fd2c891c741bcf902ef1564b6013c62bfcf04dec268218723e7" exitCode=0 Jan 21 11:18:17 crc kubenswrapper[4881]: I0121 11:18:17.523223 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bbbc7b58c-8f8v7" event={"ID":"efbfd001-4602-47b8-8c93-750ee3526e9e","Type":"ContainerDied","Data":"459e19bc99c44fd2c891c741bcf902ef1564b6013c62bfcf04dec268218723e7"} Jan 21 11:18:17 crc kubenswrapper[4881]: I0121 11:18:17.524647 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-smj4g" event={"ID":"b6a422f0-bb4b-442c-a2d7-96ac90ffde83","Type":"ContainerDied","Data":"44bbcef1140bc7525d4deb943d4b8475b95e76e49e944932a4346bc691fe09f4"} Jan 21 11:18:17 crc kubenswrapper[4881]: I0121 11:18:17.524675 4881 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="44bbcef1140bc7525d4deb943d4b8475b95e76e49e944932a4346bc691fe09f4" Jan 21 11:18:17 crc kubenswrapper[4881]: I0121 11:18:17.524729 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-smj4g" Jan 21 11:18:17 crc kubenswrapper[4881]: I0121 11:18:17.527697 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-8d4c-account-create-update-f29tp" Jan 21 11:18:17 crc kubenswrapper[4881]: I0121 11:18:17.527710 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-8d4c-account-create-update-f29tp" event={"ID":"13ea4f5c-fa1d-485c-80b3-a260d8725e81","Type":"ContainerDied","Data":"3f180f71b7e84f243dc0e8ce19590c31eb5697d4c0625c36de20a7e3a9598f3a"} Jan 21 11:18:17 crc kubenswrapper[4881]: I0121 11:18:17.527834 4881 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3f180f71b7e84f243dc0e8ce19590c31eb5697d4c0625c36de20a7e3a9598f3a" Jan 21 11:18:17 crc kubenswrapper[4881]: I0121 11:18:17.532617 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/07845bf5-b5f8-4a00-9d0e-b86f5062f1ec-operator-scripts\") pod \"07845bf5-b5f8-4a00-9d0e-b86f5062f1ec\" (UID: \"07845bf5-b5f8-4a00-9d0e-b86f5062f1ec\") " Jan 21 11:18:17 crc kubenswrapper[4881]: I0121 11:18:17.533048 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lkx6w\" (UniqueName: \"kubernetes.io/projected/07845bf5-b5f8-4a00-9d0e-b86f5062f1ec-kube-api-access-lkx6w\") pod \"07845bf5-b5f8-4a00-9d0e-b86f5062f1ec\" (UID: \"07845bf5-b5f8-4a00-9d0e-b86f5062f1ec\") " Jan 21 11:18:17 crc kubenswrapper[4881]: I0121 11:18:17.534198 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/07845bf5-b5f8-4a00-9d0e-b86f5062f1ec-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "07845bf5-b5f8-4a00-9d0e-b86f5062f1ec" (UID: "07845bf5-b5f8-4a00-9d0e-b86f5062f1ec"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:18:17 crc kubenswrapper[4881]: I0121 11:18:17.540878 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/07845bf5-b5f8-4a00-9d0e-b86f5062f1ec-kube-api-access-lkx6w" (OuterVolumeSpecName: "kube-api-access-lkx6w") pod "07845bf5-b5f8-4a00-9d0e-b86f5062f1ec" (UID: "07845bf5-b5f8-4a00-9d0e-b86f5062f1ec"). InnerVolumeSpecName "kube-api-access-lkx6w". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:18:17 crc kubenswrapper[4881]: I0121 11:18:17.550915 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-create-gc2qj" event={"ID":"5ecc1262-3ebf-4a17-bc42-507ce55f6d7e","Type":"ContainerDied","Data":"58c871aeff72223fb977bc5b168401e1ae43b57006b7711f7f615f35566c1421"} Jan 21 11:18:17 crc kubenswrapper[4881]: I0121 11:18:17.550965 4881 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="58c871aeff72223fb977bc5b168401e1ae43b57006b7711f7f615f35566c1421" Jan 21 11:18:17 crc kubenswrapper[4881]: I0121 11:18:17.551050 4881 util.go:48] "No ready sandbox for pod can be found. 
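
The "Killing container with a grace period" entry above (kuberuntime_container.go, gracePeriod=10) is the kubelet asking the runtime for the standard two-phase stop: deliver SIGTERM, wait up to the grace period, then escalate to SIGKILL. The dnsmasq-dns readiness failure and ContainerDied events that follow are the visible effects of that shutdown. A minimal Go sketch of the contract, with hypothetical signalFn/waitFn stand-ins for the runtime plumbing (an illustration, not the kubelet's actual code):

    package main

    import (
        "fmt"
        "time"
    )

    // terminate sketches the two-phase stop behind "Killing container with a
    // grace period": SIGTERM first, SIGKILL once the grace period runs out.
    func terminate(id string, grace time.Duration,
        signalFn func(id, sig string) error,
        waitFn func(id string, timeout time.Duration) bool) error {

        if err := signalFn(id, "SIGTERM"); err != nil {
            return err
        }
        if waitFn(id, grace) {
            return nil // the container exited voluntarily within the grace period
        }
        return signalFn(id, "SIGKILL") // hard stop after gracePeriod seconds
    }

    func main() {
        signal := func(id, sig string) error {
            fmt.Printf("sending %s to %s\n", sig, id)
            return nil
        }
        wait := func(id string, timeout time.Duration) bool { return false }
        _ = terminate("cri-o://459e19bc99c4...", 10*time.Second, signal, wait)
    }
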
Need to start a new one" pod="openstack/watcher-db-create-gc2qj" Jan 21 11:18:17 crc kubenswrapper[4881]: I0121 11:18:17.553295 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-b4bf-account-create-update-6p74j" event={"ID":"331fda3a-4e64-4824-abd7-42eaef7b9b4f","Type":"ContainerDied","Data":"276b421549bf6d196987a877eafdaddacc3fb3a5a15f164ab2c4ad7c7b40910d"} Jan 21 11:18:17 crc kubenswrapper[4881]: I0121 11:18:17.553342 4881 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="276b421549bf6d196987a877eafdaddacc3fb3a5a15f164ab2c4ad7c7b40910d" Jan 21 11:18:17 crc kubenswrapper[4881]: I0121 11:18:17.553408 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-b4bf-account-create-update-6p74j" Jan 21 11:18:17 crc kubenswrapper[4881]: I0121 11:18:17.554874 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-cp5cl" event={"ID":"07845bf5-b5f8-4a00-9d0e-b86f5062f1ec","Type":"ContainerDied","Data":"9ade4fe84a29987bc9e08c5c3d4f89144fde4ef8c7952c33c4574696f711b01e"} Jan 21 11:18:17 crc kubenswrapper[4881]: I0121 11:18:17.554903 4881 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9ade4fe84a29987bc9e08c5c3d4f89144fde4ef8c7952c33c4574696f711b01e" Jan 21 11:18:17 crc kubenswrapper[4881]: I0121 11:18:17.554977 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-cp5cl" Jan 21 11:18:17 crc kubenswrapper[4881]: I0121 11:18:17.635207 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/13ea4f5c-fa1d-485c-80b3-a260d8725e81-operator-scripts\") pod \"13ea4f5c-fa1d-485c-80b3-a260d8725e81\" (UID: \"13ea4f5c-fa1d-485c-80b3-a260d8725e81\") " Jan 21 11:18:17 crc kubenswrapper[4881]: I0121 11:18:17.635283 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/331fda3a-4e64-4824-abd7-42eaef7b9b4f-operator-scripts\") pod \"331fda3a-4e64-4824-abd7-42eaef7b9b4f\" (UID: \"331fda3a-4e64-4824-abd7-42eaef7b9b4f\") " Jan 21 11:18:17 crc kubenswrapper[4881]: I0121 11:18:17.635315 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zw7gw\" (UniqueName: \"kubernetes.io/projected/5ecc1262-3ebf-4a17-bc42-507ce55f6d7e-kube-api-access-zw7gw\") pod \"5ecc1262-3ebf-4a17-bc42-507ce55f6d7e\" (UID: \"5ecc1262-3ebf-4a17-bc42-507ce55f6d7e\") " Jan 21 11:18:17 crc kubenswrapper[4881]: I0121 11:18:17.635377 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-64bd2\" (UniqueName: \"kubernetes.io/projected/317bbc59-5154-4c0e-920a-3227d1ec4982-kube-api-access-64bd2\") pod \"317bbc59-5154-4c0e-920a-3227d1ec4982\" (UID: \"317bbc59-5154-4c0e-920a-3227d1ec4982\") " Jan 21 11:18:17 crc kubenswrapper[4881]: I0121 11:18:17.635462 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2l844\" (UniqueName: \"kubernetes.io/projected/331fda3a-4e64-4824-abd7-42eaef7b9b4f-kube-api-access-2l844\") pod \"331fda3a-4e64-4824-abd7-42eaef7b9b4f\" (UID: \"331fda3a-4e64-4824-abd7-42eaef7b9b4f\") " Jan 21 11:18:17 crc kubenswrapper[4881]: I0121 11:18:17.635496 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h7s25\" (UniqueName: 
\"kubernetes.io/projected/1c4be317-c914-45c5-8da4-1fe7d647db7e-kube-api-access-h7s25\") pod \"1c4be317-c914-45c5-8da4-1fe7d647db7e\" (UID: \"1c4be317-c914-45c5-8da4-1fe7d647db7e\") " Jan 21 11:18:17 crc kubenswrapper[4881]: I0121 11:18:17.635532 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5ecc1262-3ebf-4a17-bc42-507ce55f6d7e-operator-scripts\") pod \"5ecc1262-3ebf-4a17-bc42-507ce55f6d7e\" (UID: \"5ecc1262-3ebf-4a17-bc42-507ce55f6d7e\") " Jan 21 11:18:17 crc kubenswrapper[4881]: I0121 11:18:17.635641 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gv8rw\" (UniqueName: \"kubernetes.io/projected/13ea4f5c-fa1d-485c-80b3-a260d8725e81-kube-api-access-gv8rw\") pod \"13ea4f5c-fa1d-485c-80b3-a260d8725e81\" (UID: \"13ea4f5c-fa1d-485c-80b3-a260d8725e81\") " Jan 21 11:18:17 crc kubenswrapper[4881]: I0121 11:18:17.635692 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r258x\" (UniqueName: \"kubernetes.io/projected/b6a422f0-bb4b-442c-a2d7-96ac90ffde83-kube-api-access-r258x\") pod \"b6a422f0-bb4b-442c-a2d7-96ac90ffde83\" (UID: \"b6a422f0-bb4b-442c-a2d7-96ac90ffde83\") " Jan 21 11:18:17 crc kubenswrapper[4881]: I0121 11:18:17.635736 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1c4be317-c914-45c5-8da4-1fe7d647db7e-operator-scripts\") pod \"1c4be317-c914-45c5-8da4-1fe7d647db7e\" (UID: \"1c4be317-c914-45c5-8da4-1fe7d647db7e\") " Jan 21 11:18:17 crc kubenswrapper[4881]: I0121 11:18:17.635773 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/317bbc59-5154-4c0e-920a-3227d1ec4982-operator-scripts\") pod \"317bbc59-5154-4c0e-920a-3227d1ec4982\" (UID: \"317bbc59-5154-4c0e-920a-3227d1ec4982\") " Jan 21 11:18:17 crc kubenswrapper[4881]: I0121 11:18:17.635816 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b6a422f0-bb4b-442c-a2d7-96ac90ffde83-operator-scripts\") pod \"b6a422f0-bb4b-442c-a2d7-96ac90ffde83\" (UID: \"b6a422f0-bb4b-442c-a2d7-96ac90ffde83\") " Jan 21 11:18:17 crc kubenswrapper[4881]: I0121 11:18:17.636121 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/13ea4f5c-fa1d-485c-80b3-a260d8725e81-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "13ea4f5c-fa1d-485c-80b3-a260d8725e81" (UID: "13ea4f5c-fa1d-485c-80b3-a260d8725e81"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:18:17 crc kubenswrapper[4881]: I0121 11:18:17.636600 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lkx6w\" (UniqueName: \"kubernetes.io/projected/07845bf5-b5f8-4a00-9d0e-b86f5062f1ec-kube-api-access-lkx6w\") on node \"crc\" DevicePath \"\"" Jan 21 11:18:17 crc kubenswrapper[4881]: I0121 11:18:17.636617 4881 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/07845bf5-b5f8-4a00-9d0e-b86f5062f1ec-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 11:18:17 crc kubenswrapper[4881]: I0121 11:18:17.636626 4881 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/13ea4f5c-fa1d-485c-80b3-a260d8725e81-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 11:18:17 crc kubenswrapper[4881]: I0121 11:18:17.636655 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1c4be317-c914-45c5-8da4-1fe7d647db7e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1c4be317-c914-45c5-8da4-1fe7d647db7e" (UID: "1c4be317-c914-45c5-8da4-1fe7d647db7e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:18:17 crc kubenswrapper[4881]: I0121 11:18:17.636841 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/331fda3a-4e64-4824-abd7-42eaef7b9b4f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "331fda3a-4e64-4824-abd7-42eaef7b9b4f" (UID: "331fda3a-4e64-4824-abd7-42eaef7b9b4f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:18:17 crc kubenswrapper[4881]: I0121 11:18:17.636962 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/317bbc59-5154-4c0e-920a-3227d1ec4982-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "317bbc59-5154-4c0e-920a-3227d1ec4982" (UID: "317bbc59-5154-4c0e-920a-3227d1ec4982"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:18:17 crc kubenswrapper[4881]: I0121 11:18:17.637299 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6a422f0-bb4b-442c-a2d7-96ac90ffde83-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b6a422f0-bb4b-442c-a2d7-96ac90ffde83" (UID: "b6a422f0-bb4b-442c-a2d7-96ac90ffde83"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:18:17 crc kubenswrapper[4881]: I0121 11:18:17.637550 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ecc1262-3ebf-4a17-bc42-507ce55f6d7e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5ecc1262-3ebf-4a17-bc42-507ce55f6d7e" (UID: "5ecc1262-3ebf-4a17-bc42-507ce55f6d7e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:18:17 crc kubenswrapper[4881]: I0121 11:18:17.644078 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6a422f0-bb4b-442c-a2d7-96ac90ffde83-kube-api-access-r258x" (OuterVolumeSpecName: "kube-api-access-r258x") pod "b6a422f0-bb4b-442c-a2d7-96ac90ffde83" (UID: "b6a422f0-bb4b-442c-a2d7-96ac90ffde83"). InnerVolumeSpecName "kube-api-access-r258x". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:18:17 crc kubenswrapper[4881]: I0121 11:18:17.644197 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ecc1262-3ebf-4a17-bc42-507ce55f6d7e-kube-api-access-zw7gw" (OuterVolumeSpecName: "kube-api-access-zw7gw") pod "5ecc1262-3ebf-4a17-bc42-507ce55f6d7e" (UID: "5ecc1262-3ebf-4a17-bc42-507ce55f6d7e"). InnerVolumeSpecName "kube-api-access-zw7gw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:18:17 crc kubenswrapper[4881]: I0121 11:18:17.644252 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/13ea4f5c-fa1d-485c-80b3-a260d8725e81-kube-api-access-gv8rw" (OuterVolumeSpecName: "kube-api-access-gv8rw") pod "13ea4f5c-fa1d-485c-80b3-a260d8725e81" (UID: "13ea4f5c-fa1d-485c-80b3-a260d8725e81"). InnerVolumeSpecName "kube-api-access-gv8rw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:18:17 crc kubenswrapper[4881]: I0121 11:18:17.644272 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/331fda3a-4e64-4824-abd7-42eaef7b9b4f-kube-api-access-2l844" (OuterVolumeSpecName: "kube-api-access-2l844") pod "331fda3a-4e64-4824-abd7-42eaef7b9b4f" (UID: "331fda3a-4e64-4824-abd7-42eaef7b9b4f"). InnerVolumeSpecName "kube-api-access-2l844". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:18:17 crc kubenswrapper[4881]: I0121 11:18:17.648659 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/317bbc59-5154-4c0e-920a-3227d1ec4982-kube-api-access-64bd2" (OuterVolumeSpecName: "kube-api-access-64bd2") pod "317bbc59-5154-4c0e-920a-3227d1ec4982" (UID: "317bbc59-5154-4c0e-920a-3227d1ec4982"). InnerVolumeSpecName "kube-api-access-64bd2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:18:17 crc kubenswrapper[4881]: I0121 11:18:17.649435 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1c4be317-c914-45c5-8da4-1fe7d647db7e-kube-api-access-h7s25" (OuterVolumeSpecName: "kube-api-access-h7s25") pod "1c4be317-c914-45c5-8da4-1fe7d647db7e" (UID: "1c4be317-c914-45c5-8da4-1fe7d647db7e"). InnerVolumeSpecName "kube-api-access-h7s25". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:18:17 crc kubenswrapper[4881]: I0121 11:18:17.787949 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2l844\" (UniqueName: \"kubernetes.io/projected/331fda3a-4e64-4824-abd7-42eaef7b9b4f-kube-api-access-2l844\") on node \"crc\" DevicePath \"\"" Jan 21 11:18:17 crc kubenswrapper[4881]: I0121 11:18:17.787991 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h7s25\" (UniqueName: \"kubernetes.io/projected/1c4be317-c914-45c5-8da4-1fe7d647db7e-kube-api-access-h7s25\") on node \"crc\" DevicePath \"\"" Jan 21 11:18:17 crc kubenswrapper[4881]: I0121 11:18:17.788007 4881 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5ecc1262-3ebf-4a17-bc42-507ce55f6d7e-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 11:18:17 crc kubenswrapper[4881]: I0121 11:18:17.788020 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gv8rw\" (UniqueName: \"kubernetes.io/projected/13ea4f5c-fa1d-485c-80b3-a260d8725e81-kube-api-access-gv8rw\") on node \"crc\" DevicePath \"\"" Jan 21 11:18:17 crc kubenswrapper[4881]: I0121 11:18:17.788033 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r258x\" (UniqueName: \"kubernetes.io/projected/b6a422f0-bb4b-442c-a2d7-96ac90ffde83-kube-api-access-r258x\") on node \"crc\" DevicePath \"\"" Jan 21 11:18:17 crc kubenswrapper[4881]: I0121 11:18:17.788043 4881 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1c4be317-c914-45c5-8da4-1fe7d647db7e-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 11:18:17 crc kubenswrapper[4881]: I0121 11:18:17.788055 4881 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/317bbc59-5154-4c0e-920a-3227d1ec4982-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 11:18:17 crc kubenswrapper[4881]: I0121 11:18:17.788066 4881 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b6a422f0-bb4b-442c-a2d7-96ac90ffde83-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 11:18:17 crc kubenswrapper[4881]: I0121 11:18:17.788078 4881 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/331fda3a-4e64-4824-abd7-42eaef7b9b4f-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 11:18:17 crc kubenswrapper[4881]: I0121 11:18:17.788091 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zw7gw\" (UniqueName: \"kubernetes.io/projected/5ecc1262-3ebf-4a17-bc42-507ce55f6d7e-kube-api-access-zw7gw\") on node \"crc\" DevicePath \"\"" Jan 21 11:18:17 crc kubenswrapper[4881]: I0121 11:18:17.788101 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-64bd2\" (UniqueName: \"kubernetes.io/projected/317bbc59-5154-4c0e-920a-3227d1ec4982-kube-api-access-64bd2\") on node \"crc\" DevicePath \"\"" Jan 21 11:18:17 crc kubenswrapper[4881]: I0121 11:18:17.880407 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5bbbc7b58c-8f8v7" Jan 21 11:18:17 crc kubenswrapper[4881]: I0121 11:18:17.965496 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-s642n-config-dk4k8"] Jan 21 11:18:17 crc kubenswrapper[4881]: I0121 11:18:17.996124 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/efbfd001-4602-47b8-8c93-750ee3526e9e-ovsdbserver-nb\") pod \"efbfd001-4602-47b8-8c93-750ee3526e9e\" (UID: \"efbfd001-4602-47b8-8c93-750ee3526e9e\") " Jan 21 11:18:17 crc kubenswrapper[4881]: I0121 11:18:17.996191 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/efbfd001-4602-47b8-8c93-750ee3526e9e-config\") pod \"efbfd001-4602-47b8-8c93-750ee3526e9e\" (UID: \"efbfd001-4602-47b8-8c93-750ee3526e9e\") " Jan 21 11:18:18 crc kubenswrapper[4881]: I0121 11:18:17.996277 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/efbfd001-4602-47b8-8c93-750ee3526e9e-dns-svc\") pod \"efbfd001-4602-47b8-8c93-750ee3526e9e\" (UID: \"efbfd001-4602-47b8-8c93-750ee3526e9e\") " Jan 21 11:18:18 crc kubenswrapper[4881]: I0121 11:18:17.996320 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-krc4s\" (UniqueName: \"kubernetes.io/projected/efbfd001-4602-47b8-8c93-750ee3526e9e-kube-api-access-krc4s\") pod \"efbfd001-4602-47b8-8c93-750ee3526e9e\" (UID: \"efbfd001-4602-47b8-8c93-750ee3526e9e\") " Jan 21 11:18:18 crc kubenswrapper[4881]: I0121 11:18:17.996361 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/efbfd001-4602-47b8-8c93-750ee3526e9e-ovsdbserver-sb\") pod \"efbfd001-4602-47b8-8c93-750ee3526e9e\" (UID: \"efbfd001-4602-47b8-8c93-750ee3526e9e\") " Jan 21 11:18:18 crc kubenswrapper[4881]: I0121 11:18:18.001897 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efbfd001-4602-47b8-8c93-750ee3526e9e-kube-api-access-krc4s" (OuterVolumeSpecName: "kube-api-access-krc4s") pod "efbfd001-4602-47b8-8c93-750ee3526e9e" (UID: "efbfd001-4602-47b8-8c93-750ee3526e9e"). InnerVolumeSpecName "kube-api-access-krc4s". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:18:18 crc kubenswrapper[4881]: I0121 11:18:18.043754 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/efbfd001-4602-47b8-8c93-750ee3526e9e-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "efbfd001-4602-47b8-8c93-750ee3526e9e" (UID: "efbfd001-4602-47b8-8c93-750ee3526e9e"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:18:18 crc kubenswrapper[4881]: I0121 11:18:18.049304 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/efbfd001-4602-47b8-8c93-750ee3526e9e-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "efbfd001-4602-47b8-8c93-750ee3526e9e" (UID: "efbfd001-4602-47b8-8c93-750ee3526e9e"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:18:18 crc kubenswrapper[4881]: I0121 11:18:18.052640 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/efbfd001-4602-47b8-8c93-750ee3526e9e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "efbfd001-4602-47b8-8c93-750ee3526e9e" (UID: "efbfd001-4602-47b8-8c93-750ee3526e9e"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:18:18 crc kubenswrapper[4881]: I0121 11:18:18.057327 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/efbfd001-4602-47b8-8c93-750ee3526e9e-config" (OuterVolumeSpecName: "config") pod "efbfd001-4602-47b8-8c93-750ee3526e9e" (UID: "efbfd001-4602-47b8-8c93-750ee3526e9e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:18:18 crc kubenswrapper[4881]: I0121 11:18:18.099053 4881 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/efbfd001-4602-47b8-8c93-750ee3526e9e-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 21 11:18:18 crc kubenswrapper[4881]: I0121 11:18:18.099090 4881 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/efbfd001-4602-47b8-8c93-750ee3526e9e-config\") on node \"crc\" DevicePath \"\"" Jan 21 11:18:18 crc kubenswrapper[4881]: I0121 11:18:18.099102 4881 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/efbfd001-4602-47b8-8c93-750ee3526e9e-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 21 11:18:18 crc kubenswrapper[4881]: I0121 11:18:18.099111 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-krc4s\" (UniqueName: \"kubernetes.io/projected/efbfd001-4602-47b8-8c93-750ee3526e9e-kube-api-access-krc4s\") on node \"crc\" DevicePath \"\"" Jan 21 11:18:18 crc kubenswrapper[4881]: I0121 11:18:18.099124 4881 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/efbfd001-4602-47b8-8c93-750ee3526e9e-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 21 11:18:18 crc kubenswrapper[4881]: I0121 11:18:18.566402 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"f7e90972-9be1-4d3e-852e-e7f7df6e6623","Type":"ContainerStarted","Data":"8a0e4e5a99ef920688a0d7a6463ea9c0a7db6ff987fcbf667df0b4f98b3356bf"} Jan 21 11:18:18 crc kubenswrapper[4881]: I0121 11:18:18.567004 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Jan 21 11:18:18 crc kubenswrapper[4881]: I0121 11:18:18.568447 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-s642n-config-dk4k8" event={"ID":"bb419db7-7bc4-473f-a1ea-7878c6cc7cee","Type":"ContainerStarted","Data":"4b32abc6871e628e297cbe463288501e5adf49f03da08854de77bfb91714eedb"} Jan 21 11:18:18 crc kubenswrapper[4881]: I0121 11:18:18.568516 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-s642n-config-dk4k8" event={"ID":"bb419db7-7bc4-473f-a1ea-7878c6cc7cee","Type":"ContainerStarted","Data":"17b36a4727f2f30052334d778af9941aaf1d732632d15b0eb264bc2a85ccdbb5"} Jan 21 11:18:18 crc kubenswrapper[4881]: I0121 11:18:18.570733 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-notifications-server-0" 
event={"ID":"44bcf219-3358-4596-9d1e-88a51c415266","Type":"ContainerStarted","Data":"c5853aef3fb2571c98cb61a06c87c41306574ddbfbed106da2329564ad9cdd0c"} Jan 21 11:18:18 crc kubenswrapper[4881]: I0121 11:18:18.571007 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-notifications-server-0" Jan 21 11:18:18 crc kubenswrapper[4881]: I0121 11:18:18.573171 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"078c2368-b247-49d4-8723-fd93918e99b1","Type":"ContainerStarted","Data":"023f57aba22657f38c9822a9fcfbabd9eb5513e10f1d131208e251a7df31b2a0"} Jan 21 11:18:18 crc kubenswrapper[4881]: I0121 11:18:18.573610 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Jan 21 11:18:18 crc kubenswrapper[4881]: I0121 11:18:18.575956 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"75733567-f2a6-4331-bdea-147126213437","Type":"ContainerStarted","Data":"2d247ee2c4ae6dcda1bc7bdb88b6f46d738cb9050ce2b5c108235bf069c56986"} Jan 21 11:18:18 crc kubenswrapper[4881]: I0121 11:18:18.579327 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bbbc7b58c-8f8v7" event={"ID":"efbfd001-4602-47b8-8c93-750ee3526e9e","Type":"ContainerDied","Data":"0d2501cc7f927d66e1b692f30c322a8fe23a8259355cb2568f67f16617966fc3"} Jan 21 11:18:18 crc kubenswrapper[4881]: I0121 11:18:18.579375 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bbbc7b58c-8f8v7" Jan 21 11:18:18 crc kubenswrapper[4881]: I0121 11:18:18.579382 4881 scope.go:117] "RemoveContainer" containerID="459e19bc99c44fd2c891c741bcf902ef1564b6013c62bfcf04dec268218723e7" Jan 21 11:18:18 crc kubenswrapper[4881]: I0121 11:18:18.606254 4881 scope.go:117] "RemoveContainer" containerID="cdc12a4dbe29fc14fdd129b9c5c90a6d695123d10dd8715736366c33c786a70d" Jan 21 11:18:18 crc kubenswrapper[4881]: I0121 11:18:18.629535 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=-9223371936.225256 podStartE2EDuration="1m40.62952031s" podCreationTimestamp="2026-01-21 11:16:38 +0000 UTC" firstStartedPulling="2026-01-21 11:16:43.050484476 +0000 UTC m=+1190.310440945" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 11:18:18.62351451 +0000 UTC m=+1285.883470979" watchObservedRunningTime="2026-01-21 11:18:18.62952031 +0000 UTC m=+1285.889476779" Jan 21 11:18:18 crc kubenswrapper[4881]: I0121 11:18:18.706465 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=45.461615508 podStartE2EDuration="1m40.706442242s" podCreationTimestamp="2026-01-21 11:16:38 +0000 UTC" firstStartedPulling="2026-01-21 11:16:41.35837995 +0000 UTC m=+1188.618336419" lastFinishedPulling="2026-01-21 11:17:36.603206684 +0000 UTC m=+1243.863163153" observedRunningTime="2026-01-21 11:18:18.701343494 +0000 UTC m=+1285.961299963" watchObservedRunningTime="2026-01-21 11:18:18.706442242 +0000 UTC m=+1285.966398731" Jan 21 11:18:18 crc kubenswrapper[4881]: I0121 11:18:18.758403 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=5.063499534 podStartE2EDuration="1m34.75837752s" podCreationTimestamp="2026-01-21 11:16:44 +0000 UTC" firstStartedPulling="2026-01-21 
11:16:48.167169064 +0000 UTC m=+1195.427125533" lastFinishedPulling="2026-01-21 11:18:17.86204705 +0000 UTC m=+1285.122003519" observedRunningTime="2026-01-21 11:18:18.750521483 +0000 UTC m=+1286.010477952" watchObservedRunningTime="2026-01-21 11:18:18.75837752 +0000 UTC m=+1286.018333999" Jan 21 11:18:18 crc kubenswrapper[4881]: I0121 11:18:18.818376 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-notifications-server-0" podStartSLOduration=-9223371937.036423 podStartE2EDuration="1m39.818353829s" podCreationTimestamp="2026-01-21 11:16:39 +0000 UTC" firstStartedPulling="2026-01-21 11:16:43.010412195 +0000 UTC m=+1190.270368664" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 11:18:18.790134044 +0000 UTC m=+1286.050090513" watchObservedRunningTime="2026-01-21 11:18:18.818353829 +0000 UTC m=+1286.078310298" Jan 21 11:18:18 crc kubenswrapper[4881]: I0121 11:18:18.839606 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-s642n-config-dk4k8" podStartSLOduration=4.839588069 podStartE2EDuration="4.839588069s" podCreationTimestamp="2026-01-21 11:18:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 11:18:18.823168398 +0000 UTC m=+1286.083124867" watchObservedRunningTime="2026-01-21 11:18:18.839588069 +0000 UTC m=+1286.099544538" Jan 21 11:18:18 crc kubenswrapper[4881]: I0121 11:18:18.844812 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5bbbc7b58c-8f8v7"] Jan 21 11:18:18 crc kubenswrapper[4881]: I0121 11:18:18.854294 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5bbbc7b58c-8f8v7"] Jan 21 11:18:19 crc kubenswrapper[4881]: I0121 11:18:19.326315 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efbfd001-4602-47b8-8c93-750ee3526e9e" path="/var/lib/kubelet/pods/efbfd001-4602-47b8-8c93-750ee3526e9e/volumes" Jan 21 11:18:19 crc kubenswrapper[4881]: I0121 11:18:19.591279 4881 generic.go:334] "Generic (PLEG): container finished" podID="bb419db7-7bc4-473f-a1ea-7878c6cc7cee" containerID="4b32abc6871e628e297cbe463288501e5adf49f03da08854de77bfb91714eedb" exitCode=0 Jan 21 11:18:19 crc kubenswrapper[4881]: I0121 11:18:19.591319 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-s642n-config-dk4k8" event={"ID":"bb419db7-7bc4-473f-a1ea-7878c6cc7cee","Type":"ContainerDied","Data":"4b32abc6871e628e297cbe463288501e5adf49f03da08854de77bfb91714eedb"} Jan 21 11:18:21 crc kubenswrapper[4881]: I0121 11:18:21.072896 4881 util.go:48] "No ready sandbox for pod can be found. 
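
The podStartSLOduration=-9223371936.225256 figures above are an int64 artifact rather than a real measurement: lastFinishedPulling was never recorded (it prints as the zero time 0001-01-01), time.Time.Sub saturates at the minimum time.Duration, and subtracting that sentinel from the roughly 100.6 s end-to-end duration wraps around 2^63. The arithmetic can be reproduced from the rabbitmq-server-0 record; this is a demonstration of the mechanism, not the tracker's actual code:

    package main

    import (
        "fmt"
        "math"
        "time"
    )

    func main() {
        // Timestamps taken from the rabbitmq-server-0 record above.
        created := time.Date(2026, 1, 21, 11, 16, 38, 0, time.UTC)
        watched := time.Date(2026, 1, 21, 11, 18, 18, 629520310, time.UTC)
        firstStartedPulling := time.Date(2026, 1, 21, 11, 16, 43, 50484476, time.UTC)
        var lastFinishedPulling time.Time // never set: 0001-01-01 00:00:00 UTC

        e2e := watched.Sub(created)
        fmt.Println(e2e) // 1m40.62952031s, the logged podStartE2EDuration

        // time.Time.Sub saturates on overflow, so a zero lastFinishedPulling
        // yields the minimum representable Duration instead of a panic.
        pull := lastFinishedPulling.Sub(firstStartedPulling)
        fmt.Println(pull == time.Duration(math.MinInt64)) // true

        // Subtracting that sentinel from the real end-to-end duration is
        // plain int64 arithmetic, which wraps around 2^63:
        slo := e2e - pull
        fmt.Println(slo.Seconds()) // about -9.223371936225256e+09, as logged
    }

The rabbitmq-notifications-server-0 value differs only because its end-to-end duration does; rabbitmq-cell1-server-0 and prometheus-metric-storage-0, whose pulls did finish, get sensible positive values.
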
Jan 21 11:18:21 crc kubenswrapper[4881]: I0121 11:18:21.072896 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-s642n-config-dk4k8"
Jan 21 11:18:21 crc kubenswrapper[4881]: I0121 11:18:21.184987 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2qsnq\" (UniqueName: \"kubernetes.io/projected/bb419db7-7bc4-473f-a1ea-7878c6cc7cee-kube-api-access-2qsnq\") pod \"bb419db7-7bc4-473f-a1ea-7878c6cc7cee\" (UID: \"bb419db7-7bc4-473f-a1ea-7878c6cc7cee\") "
Jan 21 11:18:21 crc kubenswrapper[4881]: I0121 11:18:21.185056 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bb419db7-7bc4-473f-a1ea-7878c6cc7cee-scripts\") pod \"bb419db7-7bc4-473f-a1ea-7878c6cc7cee\" (UID: \"bb419db7-7bc4-473f-a1ea-7878c6cc7cee\") "
Jan 21 11:18:21 crc kubenswrapper[4881]: I0121 11:18:21.185142 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/bb419db7-7bc4-473f-a1ea-7878c6cc7cee-additional-scripts\") pod \"bb419db7-7bc4-473f-a1ea-7878c6cc7cee\" (UID: \"bb419db7-7bc4-473f-a1ea-7878c6cc7cee\") "
Jan 21 11:18:21 crc kubenswrapper[4881]: I0121 11:18:21.185223 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/bb419db7-7bc4-473f-a1ea-7878c6cc7cee-var-log-ovn\") pod \"bb419db7-7bc4-473f-a1ea-7878c6cc7cee\" (UID: \"bb419db7-7bc4-473f-a1ea-7878c6cc7cee\") "
Jan 21 11:18:21 crc kubenswrapper[4881]: I0121 11:18:21.185289 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/bb419db7-7bc4-473f-a1ea-7878c6cc7cee-var-run\") pod \"bb419db7-7bc4-473f-a1ea-7878c6cc7cee\" (UID: \"bb419db7-7bc4-473f-a1ea-7878c6cc7cee\") "
Jan 21 11:18:21 crc kubenswrapper[4881]: I0121 11:18:21.185344 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/bb419db7-7bc4-473f-a1ea-7878c6cc7cee-var-run-ovn\") pod \"bb419db7-7bc4-473f-a1ea-7878c6cc7cee\" (UID: \"bb419db7-7bc4-473f-a1ea-7878c6cc7cee\") "
Jan 21 11:18:21 crc kubenswrapper[4881]: I0121 11:18:21.186053 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bb419db7-7bc4-473f-a1ea-7878c6cc7cee-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "bb419db7-7bc4-473f-a1ea-7878c6cc7cee" (UID: "bb419db7-7bc4-473f-a1ea-7878c6cc7cee"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 21 11:18:21 crc kubenswrapper[4881]: I0121 11:18:21.187253 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bb419db7-7bc4-473f-a1ea-7878c6cc7cee-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "bb419db7-7bc4-473f-a1ea-7878c6cc7cee" (UID: "bb419db7-7bc4-473f-a1ea-7878c6cc7cee"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 21 11:18:21 crc kubenswrapper[4881]: I0121 11:18:21.187349 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bb419db7-7bc4-473f-a1ea-7878c6cc7cee-var-run" (OuterVolumeSpecName: "var-run") pod "bb419db7-7bc4-473f-a1ea-7878c6cc7cee" (UID: "bb419db7-7bc4-473f-a1ea-7878c6cc7cee"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 21 11:18:21 crc kubenswrapper[4881]: I0121 11:18:21.188224 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bb419db7-7bc4-473f-a1ea-7878c6cc7cee-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "bb419db7-7bc4-473f-a1ea-7878c6cc7cee" (UID: "bb419db7-7bc4-473f-a1ea-7878c6cc7cee"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 11:18:21 crc kubenswrapper[4881]: I0121 11:18:21.188523 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bb419db7-7bc4-473f-a1ea-7878c6cc7cee-scripts" (OuterVolumeSpecName: "scripts") pod "bb419db7-7bc4-473f-a1ea-7878c6cc7cee" (UID: "bb419db7-7bc4-473f-a1ea-7878c6cc7cee"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 11:18:21 crc kubenswrapper[4881]: I0121 11:18:21.194081 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb419db7-7bc4-473f-a1ea-7878c6cc7cee-kube-api-access-2qsnq" (OuterVolumeSpecName: "kube-api-access-2qsnq") pod "bb419db7-7bc4-473f-a1ea-7878c6cc7cee" (UID: "bb419db7-7bc4-473f-a1ea-7878c6cc7cee"). InnerVolumeSpecName "kube-api-access-2qsnq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 11:18:21 crc kubenswrapper[4881]: I0121 11:18:21.287825 4881 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/bb419db7-7bc4-473f-a1ea-7878c6cc7cee-additional-scripts\") on node \"crc\" DevicePath \"\""
Jan 21 11:18:21 crc kubenswrapper[4881]: I0121 11:18:21.287875 4881 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/bb419db7-7bc4-473f-a1ea-7878c6cc7cee-var-log-ovn\") on node \"crc\" DevicePath \"\""
Jan 21 11:18:21 crc kubenswrapper[4881]: I0121 11:18:21.287889 4881 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/bb419db7-7bc4-473f-a1ea-7878c6cc7cee-var-run\") on node \"crc\" DevicePath \"\""
Jan 21 11:18:21 crc kubenswrapper[4881]: I0121 11:18:21.287901 4881 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/bb419db7-7bc4-473f-a1ea-7878c6cc7cee-var-run-ovn\") on node \"crc\" DevicePath \"\""
Jan 21 11:18:21 crc kubenswrapper[4881]: I0121 11:18:21.287915 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2qsnq\" (UniqueName: \"kubernetes.io/projected/bb419db7-7bc4-473f-a1ea-7878c6cc7cee-kube-api-access-2qsnq\") on node \"crc\" DevicePath \"\""
Jan 21 11:18:21 crc kubenswrapper[4881]: I0121 11:18:21.287930 4881 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bb419db7-7bc4-473f-a1ea-7878c6cc7cee-scripts\") on node \"crc\" DevicePath \"\""
Jan 21 11:18:21 crc kubenswrapper[4881]: I0121 11:18:21.325378 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-cp5cl"]
Jan 21 11:18:21 crc kubenswrapper[4881]: I0121 11:18:21.330538 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-cp5cl"]
Jan 21 11:18:21 crc kubenswrapper[4881]: I0121 11:18:21.612916 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-s642n-config-dk4k8" event={"ID":"bb419db7-7bc4-473f-a1ea-7878c6cc7cee","Type":"ContainerDied","Data":"17b36a4727f2f30052334d778af9941aaf1d732632d15b0eb264bc2a85ccdbb5"}
Jan 21 11:18:21 crc kubenswrapper[4881]: I0121 11:18:21.612978 4881 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="17b36a4727f2f30052334d778af9941aaf1d732632d15b0eb264bc2a85ccdbb5"
Jan 21 11:18:21 crc kubenswrapper[4881]: I0121 11:18:21.613064 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-s642n-config-dk4k8"
Jan 21 11:18:21 crc kubenswrapper[4881]: I0121 11:18:21.624712 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0"
Jan 21 11:18:22 crc kubenswrapper[4881]: I0121 11:18:22.368007 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-s642n-config-dk4k8"]
Jan 21 11:18:22 crc kubenswrapper[4881]: I0121 11:18:22.386843 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-s642n-config-dk4k8"]
Jan 21 11:18:23 crc kubenswrapper[4881]: I0121 11:18:23.019286 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0"
Jan 21 11:18:23 crc kubenswrapper[4881]: I0121 11:18:23.323035 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="07845bf5-b5f8-4a00-9d0e-b86f5062f1ec" path="/var/lib/kubelet/pods/07845bf5-b5f8-4a00-9d0e-b86f5062f1ec/volumes"
Jan 21 11:18:23 crc kubenswrapper[4881]: I0121 11:18:23.323752 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bb419db7-7bc4-473f-a1ea-7878c6cc7cee" path="/var/lib/kubelet/pods/bb419db7-7bc4-473f-a1ea-7878c6cc7cee/volumes"
Jan 21 11:18:24 crc kubenswrapper[4881]: I0121 11:18:24.362356 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-s642n"
Jan 21 11:18:25 crc kubenswrapper[4881]: I0121 11:18:25.647375 4881 generic.go:334] "Generic (PLEG): container finished" podID="27451133-57c8-4991-aae0-ec0a82432176" containerID="5534ffef8705672a9dc2dcfe0651ff073211f019174a771251276741f854255a" exitCode=0
Jan 21 11:18:25 crc kubenswrapper[4881]: I0121 11:18:25.647413 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-j29v8" event={"ID":"27451133-57c8-4991-aae0-ec0a82432176","Type":"ContainerDied","Data":"5534ffef8705672a9dc2dcfe0651ff073211f019174a771251276741f854255a"}
Jan 21 11:18:26 crc kubenswrapper[4881]: I0121 11:18:26.325603 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-n9992"]
Jan 21 11:18:26 crc kubenswrapper[4881]: E0121 11:18:26.326098 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="317bbc59-5154-4c0e-920a-3227d1ec4982" containerName="mariadb-database-create"
Jan 21 11:18:26 crc kubenswrapper[4881]: I0121 11:18:26.326125 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="317bbc59-5154-4c0e-920a-3227d1ec4982" containerName="mariadb-database-create"
Jan 21 11:18:26 crc kubenswrapper[4881]: E0121 11:18:26.326141 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ecc1262-3ebf-4a17-bc42-507ce55f6d7e" containerName="mariadb-database-create"
Jan 21 11:18:26 crc kubenswrapper[4881]: I0121 11:18:26.326150 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ecc1262-3ebf-4a17-bc42-507ce55f6d7e" containerName="mariadb-database-create"
Jan 21 11:18:26 crc kubenswrapper[4881]: E0121 11:18:26.326169 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb419db7-7bc4-473f-a1ea-7878c6cc7cee" containerName="ovn-config"
Jan 21 11:18:26 crc kubenswrapper[4881]: I0121 11:18:26.326177 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb419db7-7bc4-473f-a1ea-7878c6cc7cee" containerName="ovn-config"
Jan 21 11:18:26 crc kubenswrapper[4881]: E0121 11:18:26.326200 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="efbfd001-4602-47b8-8c93-750ee3526e9e" containerName="init"
Jan 21 11:18:26 crc kubenswrapper[4881]: I0121 11:18:26.326208 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="efbfd001-4602-47b8-8c93-750ee3526e9e" containerName="init"
Jan 21 11:18:26 crc kubenswrapper[4881]: E0121 11:18:26.326224 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="13ea4f5c-fa1d-485c-80b3-a260d8725e81" containerName="mariadb-account-create-update"
Jan 21 11:18:26 crc kubenswrapper[4881]: I0121 11:18:26.326231 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="13ea4f5c-fa1d-485c-80b3-a260d8725e81" containerName="mariadb-account-create-update"
Jan 21 11:18:26 crc kubenswrapper[4881]: E0121 11:18:26.326241 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07845bf5-b5f8-4a00-9d0e-b86f5062f1ec" containerName="mariadb-account-create-update"
Jan 21 11:18:26 crc kubenswrapper[4881]: I0121 11:18:26.326248 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="07845bf5-b5f8-4a00-9d0e-b86f5062f1ec" containerName="mariadb-account-create-update"
Jan 21 11:18:26 crc kubenswrapper[4881]: E0121 11:18:26.326265 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="331fda3a-4e64-4824-abd7-42eaef7b9b4f" containerName="mariadb-account-create-update"
Jan 21 11:18:26 crc kubenswrapper[4881]: I0121 11:18:26.326273 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="331fda3a-4e64-4824-abd7-42eaef7b9b4f" containerName="mariadb-account-create-update"
Jan 21 11:18:26 crc kubenswrapper[4881]: E0121 11:18:26.326285 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b6a422f0-bb4b-442c-a2d7-96ac90ffde83" containerName="mariadb-database-create"
Jan 21 11:18:26 crc kubenswrapper[4881]: I0121 11:18:26.326292 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6a422f0-bb4b-442c-a2d7-96ac90ffde83" containerName="mariadb-database-create"
Jan 21 11:18:26 crc kubenswrapper[4881]: E0121 11:18:26.326301 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="efbfd001-4602-47b8-8c93-750ee3526e9e" containerName="dnsmasq-dns"
Jan 21 11:18:26 crc kubenswrapper[4881]: I0121 11:18:26.326309 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="efbfd001-4602-47b8-8c93-750ee3526e9e" containerName="dnsmasq-dns"
Jan 21 11:18:26 crc kubenswrapper[4881]: E0121 11:18:26.326321 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c4be317-c914-45c5-8da4-1fe7d647db7e" containerName="mariadb-account-create-update"
Jan 21 11:18:26 crc kubenswrapper[4881]: I0121 11:18:26.326328 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c4be317-c914-45c5-8da4-1fe7d647db7e" containerName="mariadb-account-create-update"
Jan 21 11:18:26 crc kubenswrapper[4881]: I0121 11:18:26.326546 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="331fda3a-4e64-4824-abd7-42eaef7b9b4f" containerName="mariadb-account-create-update"
Jan 21 11:18:26 crc kubenswrapper[4881]: I0121 11:18:26.326561 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="317bbc59-5154-4c0e-920a-3227d1ec4982" containerName="mariadb-database-create"
Jan 21 11:18:26 crc kubenswrapper[4881]: I0121 11:18:26.326577 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="07845bf5-b5f8-4a00-9d0e-b86f5062f1ec" containerName="mariadb-account-create-update"
Jan 21 11:18:26 crc kubenswrapper[4881]: I0121 11:18:26.326593 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c4be317-c914-45c5-8da4-1fe7d647db7e" containerName="mariadb-account-create-update"
Jan 21 11:18:26 crc kubenswrapper[4881]: I0121 11:18:26.326609 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb419db7-7bc4-473f-a1ea-7878c6cc7cee" containerName="ovn-config"
Jan 21 11:18:26 crc kubenswrapper[4881]: I0121 11:18:26.326621 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="5ecc1262-3ebf-4a17-bc42-507ce55f6d7e" containerName="mariadb-database-create"
Jan 21 11:18:26 crc kubenswrapper[4881]: I0121 11:18:26.326632 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="13ea4f5c-fa1d-485c-80b3-a260d8725e81" containerName="mariadb-account-create-update"
Jan 21 11:18:26 crc kubenswrapper[4881]: I0121 11:18:26.326644 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="b6a422f0-bb4b-442c-a2d7-96ac90ffde83" containerName="mariadb-database-create"
Jan 21 11:18:26 crc kubenswrapper[4881]: I0121 11:18:26.326653 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="efbfd001-4602-47b8-8c93-750ee3526e9e" containerName="dnsmasq-dns"
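
The cpu_manager/state_mem/memory_manager burst above is triggered by admission of the new pod root-account-create-update-n9992: before admitting it, the resource managers drop checkpointed assignments for (podUID, containerName) pairs that no longer correspond to a live pod. In essence it is a set difference over those keys, roughly as in this sketch (illustrative types, not the kubelet's):

    package main

    import "fmt"

    type key struct{ podUID, container string }

    // removeStaleState drops assignments whose pod is no longer active,
    // mirroring the "RemoveStaleState: removing container" entries above.
    func removeStaleState(assignments map[key][]int, activePods map[string]bool) {
        for k := range assignments {
            if !activePods[k.podUID] {
                fmt.Printf("RemoveStaleState: removing container podUID=%q containerName=%q\n",
                    k.podUID, k.container)
                delete(assignments, k) // deleting while ranging is safe in Go
            }
        }
    }

    func main() {
        state := map[key][]int{
            {"317bbc59-5154-4c0e-920a-3227d1ec4982", "mariadb-database-create"}: {0, 1},
        }
        removeStaleState(state, map[string]bool{}) // no active pods: everything goes
    }
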
podUID="317bbc59-5154-4c0e-920a-3227d1ec4982" containerName="mariadb-database-create" Jan 21 11:18:26 crc kubenswrapper[4881]: I0121 11:18:26.326577 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="07845bf5-b5f8-4a00-9d0e-b86f5062f1ec" containerName="mariadb-account-create-update" Jan 21 11:18:26 crc kubenswrapper[4881]: I0121 11:18:26.326593 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c4be317-c914-45c5-8da4-1fe7d647db7e" containerName="mariadb-account-create-update" Jan 21 11:18:26 crc kubenswrapper[4881]: I0121 11:18:26.326609 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb419db7-7bc4-473f-a1ea-7878c6cc7cee" containerName="ovn-config" Jan 21 11:18:26 crc kubenswrapper[4881]: I0121 11:18:26.326621 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="5ecc1262-3ebf-4a17-bc42-507ce55f6d7e" containerName="mariadb-database-create" Jan 21 11:18:26 crc kubenswrapper[4881]: I0121 11:18:26.326632 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="13ea4f5c-fa1d-485c-80b3-a260d8725e81" containerName="mariadb-account-create-update" Jan 21 11:18:26 crc kubenswrapper[4881]: I0121 11:18:26.326644 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="b6a422f0-bb4b-442c-a2d7-96ac90ffde83" containerName="mariadb-database-create" Jan 21 11:18:26 crc kubenswrapper[4881]: I0121 11:18:26.326653 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="efbfd001-4602-47b8-8c93-750ee3526e9e" containerName="dnsmasq-dns" Jan 21 11:18:26 crc kubenswrapper[4881]: I0121 11:18:26.327370 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-n9992" Jan 21 11:18:26 crc kubenswrapper[4881]: I0121 11:18:26.330802 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Jan 21 11:18:26 crc kubenswrapper[4881]: I0121 11:18:26.408141 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-n9992"] Jan 21 11:18:26 crc kubenswrapper[4881]: I0121 11:18:26.446713 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/70a2b37a-049a-45a1-aeb5-6b7d5515dd69-operator-scripts\") pod \"root-account-create-update-n9992\" (UID: \"70a2b37a-049a-45a1-aeb5-6b7d5515dd69\") " pod="openstack/root-account-create-update-n9992" Jan 21 11:18:26 crc kubenswrapper[4881]: I0121 11:18:26.446878 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k6l98\" (UniqueName: \"kubernetes.io/projected/70a2b37a-049a-45a1-aeb5-6b7d5515dd69-kube-api-access-k6l98\") pod \"root-account-create-update-n9992\" (UID: \"70a2b37a-049a-45a1-aeb5-6b7d5515dd69\") " pod="openstack/root-account-create-update-n9992" Jan 21 11:18:26 crc kubenswrapper[4881]: I0121 11:18:26.548570 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/70a2b37a-049a-45a1-aeb5-6b7d5515dd69-operator-scripts\") pod \"root-account-create-update-n9992\" (UID: \"70a2b37a-049a-45a1-aeb5-6b7d5515dd69\") " pod="openstack/root-account-create-update-n9992" Jan 21 11:18:26 crc kubenswrapper[4881]: I0121 11:18:26.548696 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k6l98\" (UniqueName: 
\"kubernetes.io/projected/70a2b37a-049a-45a1-aeb5-6b7d5515dd69-kube-api-access-k6l98\") pod \"root-account-create-update-n9992\" (UID: \"70a2b37a-049a-45a1-aeb5-6b7d5515dd69\") " pod="openstack/root-account-create-update-n9992" Jan 21 11:18:26 crc kubenswrapper[4881]: I0121 11:18:26.549616 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/70a2b37a-049a-45a1-aeb5-6b7d5515dd69-operator-scripts\") pod \"root-account-create-update-n9992\" (UID: \"70a2b37a-049a-45a1-aeb5-6b7d5515dd69\") " pod="openstack/root-account-create-update-n9992" Jan 21 11:18:26 crc kubenswrapper[4881]: I0121 11:18:26.573054 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k6l98\" (UniqueName: \"kubernetes.io/projected/70a2b37a-049a-45a1-aeb5-6b7d5515dd69-kube-api-access-k6l98\") pod \"root-account-create-update-n9992\" (UID: \"70a2b37a-049a-45a1-aeb5-6b7d5515dd69\") " pod="openstack/root-account-create-update-n9992" Jan 21 11:18:26 crc kubenswrapper[4881]: I0121 11:18:26.726527 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-n9992" Jan 21 11:18:27 crc kubenswrapper[4881]: I0121 11:18:27.155029 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-j29v8" Jan 21 11:18:27 crc kubenswrapper[4881]: I0121 11:18:27.203442 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/27451133-57c8-4991-aae0-ec0a82432176-scripts\") pod \"27451133-57c8-4991-aae0-ec0a82432176\" (UID: \"27451133-57c8-4991-aae0-ec0a82432176\") " Jan 21 11:18:27 crc kubenswrapper[4881]: I0121 11:18:27.203564 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/27451133-57c8-4991-aae0-ec0a82432176-swiftconf\") pod \"27451133-57c8-4991-aae0-ec0a82432176\" (UID: \"27451133-57c8-4991-aae0-ec0a82432176\") " Jan 21 11:18:27 crc kubenswrapper[4881]: I0121 11:18:27.203610 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/27451133-57c8-4991-aae0-ec0a82432176-etc-swift\") pod \"27451133-57c8-4991-aae0-ec0a82432176\" (UID: \"27451133-57c8-4991-aae0-ec0a82432176\") " Jan 21 11:18:27 crc kubenswrapper[4881]: I0121 11:18:27.203672 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/27451133-57c8-4991-aae0-ec0a82432176-dispersionconf\") pod \"27451133-57c8-4991-aae0-ec0a82432176\" (UID: \"27451133-57c8-4991-aae0-ec0a82432176\") " Jan 21 11:18:27 crc kubenswrapper[4881]: I0121 11:18:27.203699 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fp4l2\" (UniqueName: \"kubernetes.io/projected/27451133-57c8-4991-aae0-ec0a82432176-kube-api-access-fp4l2\") pod \"27451133-57c8-4991-aae0-ec0a82432176\" (UID: \"27451133-57c8-4991-aae0-ec0a82432176\") " Jan 21 11:18:27 crc kubenswrapper[4881]: I0121 11:18:27.203774 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27451133-57c8-4991-aae0-ec0a82432176-combined-ca-bundle\") pod \"27451133-57c8-4991-aae0-ec0a82432176\" (UID: \"27451133-57c8-4991-aae0-ec0a82432176\") " Jan 21 11:18:27 crc kubenswrapper[4881]: I0121 
Jan 21 11:18:27 crc kubenswrapper[4881]: I0121 11:18:27.203829 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/27451133-57c8-4991-aae0-ec0a82432176-ring-data-devices\") pod \"27451133-57c8-4991-aae0-ec0a82432176\" (UID: \"27451133-57c8-4991-aae0-ec0a82432176\") "
Jan 21 11:18:27 crc kubenswrapper[4881]: I0121 11:18:27.204677 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/27451133-57c8-4991-aae0-ec0a82432176-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "27451133-57c8-4991-aae0-ec0a82432176" (UID: "27451133-57c8-4991-aae0-ec0a82432176"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 11:18:27 crc kubenswrapper[4881]: I0121 11:18:27.204827 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/27451133-57c8-4991-aae0-ec0a82432176-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "27451133-57c8-4991-aae0-ec0a82432176" (UID: "27451133-57c8-4991-aae0-ec0a82432176"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 11:18:27 crc kubenswrapper[4881]: I0121 11:18:27.209578 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/27451133-57c8-4991-aae0-ec0a82432176-kube-api-access-fp4l2" (OuterVolumeSpecName: "kube-api-access-fp4l2") pod "27451133-57c8-4991-aae0-ec0a82432176" (UID: "27451133-57c8-4991-aae0-ec0a82432176"). InnerVolumeSpecName "kube-api-access-fp4l2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 11:18:27 crc kubenswrapper[4881]: I0121 11:18:27.220816 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27451133-57c8-4991-aae0-ec0a82432176-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "27451133-57c8-4991-aae0-ec0a82432176" (UID: "27451133-57c8-4991-aae0-ec0a82432176"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 11:18:27 crc kubenswrapper[4881]: I0121 11:18:27.233540 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27451133-57c8-4991-aae0-ec0a82432176-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "27451133-57c8-4991-aae0-ec0a82432176" (UID: "27451133-57c8-4991-aae0-ec0a82432176"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 11:18:27 crc kubenswrapper[4881]: I0121 11:18:27.234114 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27451133-57c8-4991-aae0-ec0a82432176-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "27451133-57c8-4991-aae0-ec0a82432176" (UID: "27451133-57c8-4991-aae0-ec0a82432176"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 11:18:27 crc kubenswrapper[4881]: I0121 11:18:27.235186 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/27451133-57c8-4991-aae0-ec0a82432176-scripts" (OuterVolumeSpecName: "scripts") pod "27451133-57c8-4991-aae0-ec0a82432176" (UID: "27451133-57c8-4991-aae0-ec0a82432176"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 11:18:27 crc kubenswrapper[4881]: I0121 11:18:27.305450 4881 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/27451133-57c8-4991-aae0-ec0a82432176-swiftconf\") on node \"crc\" DevicePath \"\""
Jan 21 11:18:27 crc kubenswrapper[4881]: I0121 11:18:27.305707 4881 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/27451133-57c8-4991-aae0-ec0a82432176-etc-swift\") on node \"crc\" DevicePath \"\""
Jan 21 11:18:27 crc kubenswrapper[4881]: I0121 11:18:27.305716 4881 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/27451133-57c8-4991-aae0-ec0a82432176-dispersionconf\") on node \"crc\" DevicePath \"\""
Jan 21 11:18:27 crc kubenswrapper[4881]: I0121 11:18:27.305727 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fp4l2\" (UniqueName: \"kubernetes.io/projected/27451133-57c8-4991-aae0-ec0a82432176-kube-api-access-fp4l2\") on node \"crc\" DevicePath \"\""
Jan 21 11:18:27 crc kubenswrapper[4881]: I0121 11:18:27.305737 4881 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27451133-57c8-4991-aae0-ec0a82432176-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 21 11:18:27 crc kubenswrapper[4881]: I0121 11:18:27.305747 4881 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/27451133-57c8-4991-aae0-ec0a82432176-ring-data-devices\") on node \"crc\" DevicePath \"\""
Jan 21 11:18:27 crc kubenswrapper[4881]: I0121 11:18:27.305767 4881 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/27451133-57c8-4991-aae0-ec0a82432176-scripts\") on node \"crc\" DevicePath \"\""
Jan 21 11:18:27 crc kubenswrapper[4881]: I0121 11:18:27.322223 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-n9992"]
Jan 21 11:18:27 crc kubenswrapper[4881]: I0121 11:18:27.724902 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-j29v8"
Jan 21 11:18:27 crc kubenswrapper[4881]: I0121 11:18:27.725411 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-j29v8" event={"ID":"27451133-57c8-4991-aae0-ec0a82432176","Type":"ContainerDied","Data":"a7d4d23aa2fd8ae274e39ac46c3595d9d1bd6e0b97327033852c004b5061046a"}
Jan 21 11:18:27 crc kubenswrapper[4881]: I0121 11:18:27.725582 4881 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a7d4d23aa2fd8ae274e39ac46c3595d9d1bd6e0b97327033852c004b5061046a"
Jan 21 11:18:27 crc kubenswrapper[4881]: I0121 11:18:27.727294 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-n9992" event={"ID":"70a2b37a-049a-45a1-aeb5-6b7d5515dd69","Type":"ContainerStarted","Data":"6e182625c740cd9b27db99777efb40afa19b03bf59089a6dcf471f48c90169e9"}
Jan 21 11:18:27 crc kubenswrapper[4881]: I0121 11:18:27.815206 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/eafb725b-4d8c-44b6-8966-4c611d4897d8-etc-swift\") pod \"swift-storage-0\" (UID: \"eafb725b-4d8c-44b6-8966-4c611d4897d8\") " pod="openstack/swift-storage-0"
Jan 21 11:18:27 crc kubenswrapper[4881]: I0121 11:18:27.825532 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/eafb725b-4d8c-44b6-8966-4c611d4897d8-etc-swift\") pod \"swift-storage-0\" (UID: \"eafb725b-4d8c-44b6-8966-4c611d4897d8\") " pod="openstack/swift-storage-0"
Jan 21 11:18:28 crc kubenswrapper[4881]: I0121 11:18:28.016640 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0"
Jan 21 11:18:29 crc kubenswrapper[4881]: W0121 11:18:29.380568 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podeafb725b_4d8c_44b6_8966_4c611d4897d8.slice/crio-fa371e25057562bc0967926609e1375457c656d723443fd8c191eb196655406f WatchSource:0}: Error finding container fa371e25057562bc0967926609e1375457c656d723443fd8c191eb196655406f: Status 404 returned error can't find the container with id fa371e25057562bc0967926609e1375457c656d723443fd8c191eb196655406f
Jan 21 11:18:29 crc kubenswrapper[4881]: I0121 11:18:29.397536 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"]
Jan 21 11:18:30 crc kubenswrapper[4881]: I0121 11:18:30.157303 4881 generic.go:334] "Generic (PLEG): container finished" podID="70a2b37a-049a-45a1-aeb5-6b7d5515dd69" containerID="0287622c020081ba9c95095872909db810663fe9347d92c3e84d5f5ddca8090f" exitCode=0
Jan 21 11:18:30 crc kubenswrapper[4881]: I0121 11:18:30.157403 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-n9992" event={"ID":"70a2b37a-049a-45a1-aeb5-6b7d5515dd69","Type":"ContainerDied","Data":"0287622c020081ba9c95095872909db810663fe9347d92c3e84d5f5ddca8090f"}
Jan 21 11:18:30 crc kubenswrapper[4881]: I0121 11:18:30.159323 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"eafb725b-4d8c-44b6-8966-4c611d4897d8","Type":"ContainerStarted","Data":"fa371e25057562bc0967926609e1375457c656d723443fd8c191eb196655406f"}
Jan 21 11:18:30 crc kubenswrapper[4881]: I0121 11:18:30.192330 4881 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="078c2368-b247-49d4-8723-fd93918e99b1" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.107:5671: connect: connection refused"
Jan 21 11:18:30 crc kubenswrapper[4881]: I0121 11:18:30.480508 4881 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="f7e90972-9be1-4d3e-852e-e7f7df6e6623" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.106:5671: connect: connection refused"
Jan 21 11:18:30 crc kubenswrapper[4881]: I0121 11:18:30.588757 4881 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-notifications-server-0" podUID="44bcf219-3358-4596-9d1e-88a51c415266" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.108:5671: connect: connection refused"
podUID="078c2368-b247-49d4-8723-fd93918e99b1" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.107:5671: connect: connection refused" Jan 21 11:18:30 crc kubenswrapper[4881]: I0121 11:18:30.480508 4881 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="f7e90972-9be1-4d3e-852e-e7f7df6e6623" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.106:5671: connect: connection refused" Jan 21 11:18:30 crc kubenswrapper[4881]: I0121 11:18:30.588757 4881 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-notifications-server-0" podUID="44bcf219-3358-4596-9d1e-88a51c415266" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.108:5671: connect: connection refused" Jan 21 11:18:31 crc kubenswrapper[4881]: I0121 11:18:31.258386 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"eafb725b-4d8c-44b6-8966-4c611d4897d8","Type":"ContainerStarted","Data":"4fd185e130e69b2415f699558b7acc78898a4578573fcf0ee5fd93c9eb52f9a9"} Jan 21 11:18:31 crc kubenswrapper[4881]: I0121 11:18:31.258443 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"eafb725b-4d8c-44b6-8966-4c611d4897d8","Type":"ContainerStarted","Data":"16fbd2cda89c78dca24b01be6b4a2ae3db901547bd215d6b3e425bcb0a7650ed"} Jan 21 11:18:31 crc kubenswrapper[4881]: I0121 11:18:31.258456 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"eafb725b-4d8c-44b6-8966-4c611d4897d8","Type":"ContainerStarted","Data":"8a7eba6c367beaedd5f9a7ebe117fff18116464f09dfc1c8fe21415f39dc26bf"} Jan 21 11:18:31 crc kubenswrapper[4881]: I0121 11:18:31.625218 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Jan 21 11:18:31 crc kubenswrapper[4881]: I0121 11:18:31.628886 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Jan 21 11:18:31 crc kubenswrapper[4881]: I0121 11:18:31.701595 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-n9992" Jan 21 11:18:31 crc kubenswrapper[4881]: I0121 11:18:31.857385 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k6l98\" (UniqueName: \"kubernetes.io/projected/70a2b37a-049a-45a1-aeb5-6b7d5515dd69-kube-api-access-k6l98\") pod \"70a2b37a-049a-45a1-aeb5-6b7d5515dd69\" (UID: \"70a2b37a-049a-45a1-aeb5-6b7d5515dd69\") " Jan 21 11:18:31 crc kubenswrapper[4881]: I0121 11:18:31.857547 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/70a2b37a-049a-45a1-aeb5-6b7d5515dd69-operator-scripts\") pod \"70a2b37a-049a-45a1-aeb5-6b7d5515dd69\" (UID: \"70a2b37a-049a-45a1-aeb5-6b7d5515dd69\") " Jan 21 11:18:31 crc kubenswrapper[4881]: I0121 11:18:31.858168 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/70a2b37a-049a-45a1-aeb5-6b7d5515dd69-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "70a2b37a-049a-45a1-aeb5-6b7d5515dd69" (UID: "70a2b37a-049a-45a1-aeb5-6b7d5515dd69"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:18:31 crc kubenswrapper[4881]: I0121 11:18:31.869946 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/70a2b37a-049a-45a1-aeb5-6b7d5515dd69-kube-api-access-k6l98" (OuterVolumeSpecName: "kube-api-access-k6l98") pod "70a2b37a-049a-45a1-aeb5-6b7d5515dd69" (UID: "70a2b37a-049a-45a1-aeb5-6b7d5515dd69"). InnerVolumeSpecName "kube-api-access-k6l98". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:18:31 crc kubenswrapper[4881]: I0121 11:18:31.960154 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k6l98\" (UniqueName: \"kubernetes.io/projected/70a2b37a-049a-45a1-aeb5-6b7d5515dd69-kube-api-access-k6l98\") on node \"crc\" DevicePath \"\"" Jan 21 11:18:31 crc kubenswrapper[4881]: I0121 11:18:31.960200 4881 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/70a2b37a-049a-45a1-aeb5-6b7d5515dd69-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 11:18:32 crc kubenswrapper[4881]: I0121 11:18:32.269298 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-n9992" event={"ID":"70a2b37a-049a-45a1-aeb5-6b7d5515dd69","Type":"ContainerDied","Data":"6e182625c740cd9b27db99777efb40afa19b03bf59089a6dcf471f48c90169e9"} Jan 21 11:18:32 crc kubenswrapper[4881]: I0121 11:18:32.269641 4881 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6e182625c740cd9b27db99777efb40afa19b03bf59089a6dcf471f48c90169e9" Jan 21 11:18:32 crc kubenswrapper[4881]: I0121 11:18:32.269355 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-n9992" Jan 21 11:18:32 crc kubenswrapper[4881]: I0121 11:18:32.272021 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"eafb725b-4d8c-44b6-8966-4c611d4897d8","Type":"ContainerStarted","Data":"30ef45750c7a8247839a15ff79716a9275f85fec09fa57057b4125239f19114b"} Jan 21 11:18:32 crc kubenswrapper[4881]: I0121 11:18:32.278908 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Jan 21 11:18:33 crc kubenswrapper[4881]: I0121 11:18:33.301893 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"eafb725b-4d8c-44b6-8966-4c611d4897d8","Type":"ContainerStarted","Data":"d3c4e2bdeaf341c15b75402994fe952ffbca5d0b9516cc44904770c1c4df18e7"} Jan 21 11:18:33 crc kubenswrapper[4881]: I0121 11:18:33.302264 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"eafb725b-4d8c-44b6-8966-4c611d4897d8","Type":"ContainerStarted","Data":"a2b0caec7793e742605110a061597cc5066635faf6282964b3a3687b1511e3bd"} Jan 21 11:18:33 crc kubenswrapper[4881]: I0121 11:18:33.302279 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"eafb725b-4d8c-44b6-8966-4c611d4897d8","Type":"ContainerStarted","Data":"9a8f8fb1f0e137ee1f5de2fb461ffc3df0553ae1fb3bbcc4b17b9b6c66fa13e8"} Jan 21 11:18:33 crc kubenswrapper[4881]: I0121 11:18:33.302290 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"eafb725b-4d8c-44b6-8966-4c611d4897d8","Type":"ContainerStarted","Data":"3c3eda95e085d4311b0544201ae61db17d63fb863fdd6190b85822634f42ecd9"} Jan 21 11:18:35 crc kubenswrapper[4881]: I0121 11:18:35.469614 
4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"eafb725b-4d8c-44b6-8966-4c611d4897d8","Type":"ContainerStarted","Data":"2f05a62c3bd278bf78c1161a5e27081796317c3b2794a6ecf5faa3095cf831c5"} Jan 21 11:18:35 crc kubenswrapper[4881]: I0121 11:18:35.533322 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 21 11:18:35 crc kubenswrapper[4881]: I0121 11:18:35.533923 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="75733567-f2a6-4331-bdea-147126213437" containerName="thanos-sidecar" containerID="cri-o://2d247ee2c4ae6dcda1bc7bdb88b6f46d738cb9050ce2b5c108235bf069c56986" gracePeriod=600 Jan 21 11:18:35 crc kubenswrapper[4881]: I0121 11:18:35.534112 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="75733567-f2a6-4331-bdea-147126213437" containerName="config-reloader" containerID="cri-o://5833adb0117a8d41a669b51e672fa4471dd8e152778ebc0db32735d286328549" gracePeriod=600 Jan 21 11:18:35 crc kubenswrapper[4881]: I0121 11:18:35.534184 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="75733567-f2a6-4331-bdea-147126213437" containerName="prometheus" containerID="cri-o://a56efe39870006b796c3201c8dc3334fb4d25c094ef7e6facbf2f393bd54653c" gracePeriod=600 Jan 21 11:18:35 crc kubenswrapper[4881]: E0121 11:18:35.725567 4881 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod75733567_f2a6_4331_bdea_147126213437.slice/crio-conmon-2d247ee2c4ae6dcda1bc7bdb88b6f46d738cb9050ce2b5c108235bf069c56986.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod75733567_f2a6_4331_bdea_147126213437.slice/crio-2d247ee2c4ae6dcda1bc7bdb88b6f46d738cb9050ce2b5c108235bf069c56986.scope\": RecentStats: unable to find data in memory cache]" Jan 21 11:18:36 crc kubenswrapper[4881]: I0121 11:18:36.498124 4881 generic.go:334] "Generic (PLEG): container finished" podID="75733567-f2a6-4331-bdea-147126213437" containerID="2d247ee2c4ae6dcda1bc7bdb88b6f46d738cb9050ce2b5c108235bf069c56986" exitCode=0 Jan 21 11:18:36 crc kubenswrapper[4881]: I0121 11:18:36.498516 4881 generic.go:334] "Generic (PLEG): container finished" podID="75733567-f2a6-4331-bdea-147126213437" containerID="5833adb0117a8d41a669b51e672fa4471dd8e152778ebc0db32735d286328549" exitCode=0 Jan 21 11:18:36 crc kubenswrapper[4881]: I0121 11:18:36.498524 4881 generic.go:334] "Generic (PLEG): container finished" podID="75733567-f2a6-4331-bdea-147126213437" containerID="a56efe39870006b796c3201c8dc3334fb4d25c094ef7e6facbf2f393bd54653c" exitCode=0 Jan 21 11:18:36 crc kubenswrapper[4881]: I0121 11:18:36.498584 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"75733567-f2a6-4331-bdea-147126213437","Type":"ContainerDied","Data":"2d247ee2c4ae6dcda1bc7bdb88b6f46d738cb9050ce2b5c108235bf069c56986"} Jan 21 11:18:36 crc kubenswrapper[4881]: I0121 11:18:36.498612 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"75733567-f2a6-4331-bdea-147126213437","Type":"ContainerDied","Data":"5833adb0117a8d41a669b51e672fa4471dd8e152778ebc0db32735d286328549"} Jan 21 11:18:36 crc 
kubenswrapper[4881]: I0121 11:18:36.498623 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"75733567-f2a6-4331-bdea-147126213437","Type":"ContainerDied","Data":"a56efe39870006b796c3201c8dc3334fb4d25c094ef7e6facbf2f393bd54653c"} Jan 21 11:18:36 crc kubenswrapper[4881]: I0121 11:18:36.514008 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"eafb725b-4d8c-44b6-8966-4c611d4897d8","Type":"ContainerStarted","Data":"b2b3898e8cf9e67719df1cbd7d9730c00502e2beb2d6aabf2368adaabab0bde5"} Jan 21 11:18:36 crc kubenswrapper[4881]: I0121 11:18:36.514057 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"eafb725b-4d8c-44b6-8966-4c611d4897d8","Type":"ContainerStarted","Data":"a455f293414cf9854db8ac764207fddf18e9d7fdd01199943100a6d3d797481d"} Jan 21 11:18:36 crc kubenswrapper[4881]: I0121 11:18:36.514066 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"eafb725b-4d8c-44b6-8966-4c611d4897d8","Type":"ContainerStarted","Data":"30ebbc3da58097752ab30be268597fcf58310323ce02fb38797ea939848af428"} Jan 21 11:18:36 crc kubenswrapper[4881]: I0121 11:18:36.725089 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 21 11:18:36 crc kubenswrapper[4881]: I0121 11:18:36.733489 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n2vkg\" (UniqueName: \"kubernetes.io/projected/75733567-f2a6-4331-bdea-147126213437-kube-api-access-n2vkg\") pod \"75733567-f2a6-4331-bdea-147126213437\" (UID: \"75733567-f2a6-4331-bdea-147126213437\") " Jan 21 11:18:36 crc kubenswrapper[4881]: I0121 11:18:36.736692 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/75733567-f2a6-4331-bdea-147126213437-web-config\") pod \"75733567-f2a6-4331-bdea-147126213437\" (UID: \"75733567-f2a6-4331-bdea-147126213437\") " Jan 21 11:18:36 crc kubenswrapper[4881]: I0121 11:18:36.737747 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-db\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c8add5c8-5d24-439f-b2da-5ddabecb671a\") pod \"75733567-f2a6-4331-bdea-147126213437\" (UID: \"75733567-f2a6-4331-bdea-147126213437\") " Jan 21 11:18:36 crc kubenswrapper[4881]: I0121 11:18:36.738365 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/75733567-f2a6-4331-bdea-147126213437-prometheus-metric-storage-rulefiles-0" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-0") pod "75733567-f2a6-4331-bdea-147126213437" (UID: "75733567-f2a6-4331-bdea-147126213437"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:18:36 crc kubenswrapper[4881]: I0121 11:18:36.737778 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/75733567-f2a6-4331-bdea-147126213437-prometheus-metric-storage-rulefiles-0\") pod \"75733567-f2a6-4331-bdea-147126213437\" (UID: \"75733567-f2a6-4331-bdea-147126213437\") " Jan 21 11:18:36 crc kubenswrapper[4881]: I0121 11:18:36.738962 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/75733567-f2a6-4331-bdea-147126213437-config\") pod \"75733567-f2a6-4331-bdea-147126213437\" (UID: \"75733567-f2a6-4331-bdea-147126213437\") " Jan 21 11:18:36 crc kubenswrapper[4881]: I0121 11:18:36.738999 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/75733567-f2a6-4331-bdea-147126213437-thanos-prometheus-http-client-file\") pod \"75733567-f2a6-4331-bdea-147126213437\" (UID: \"75733567-f2a6-4331-bdea-147126213437\") " Jan 21 11:18:36 crc kubenswrapper[4881]: I0121 11:18:36.739548 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/75733567-f2a6-4331-bdea-147126213437-prometheus-metric-storage-rulefiles-1" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-1") pod "75733567-f2a6-4331-bdea-147126213437" (UID: "75733567-f2a6-4331-bdea-147126213437"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-1". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:18:36 crc kubenswrapper[4881]: I0121 11:18:36.739016 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/75733567-f2a6-4331-bdea-147126213437-prometheus-metric-storage-rulefiles-1\") pod \"75733567-f2a6-4331-bdea-147126213437\" (UID: \"75733567-f2a6-4331-bdea-147126213437\") " Jan 21 11:18:36 crc kubenswrapper[4881]: I0121 11:18:36.744735 4881 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/75733567-f2a6-4331-bdea-147126213437-prometheus-metric-storage-rulefiles-0\") on node \"crc\" DevicePath \"\"" Jan 21 11:18:36 crc kubenswrapper[4881]: I0121 11:18:36.744759 4881 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/75733567-f2a6-4331-bdea-147126213437-prometheus-metric-storage-rulefiles-1\") on node \"crc\" DevicePath \"\"" Jan 21 11:18:36 crc kubenswrapper[4881]: I0121 11:18:36.747763 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/75733567-f2a6-4331-bdea-147126213437-kube-api-access-n2vkg" (OuterVolumeSpecName: "kube-api-access-n2vkg") pod "75733567-f2a6-4331-bdea-147126213437" (UID: "75733567-f2a6-4331-bdea-147126213437"). InnerVolumeSpecName "kube-api-access-n2vkg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:18:36 crc kubenswrapper[4881]: I0121 11:18:36.756548 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75733567-f2a6-4331-bdea-147126213437-config" (OuterVolumeSpecName: "config") pod "75733567-f2a6-4331-bdea-147126213437" (UID: "75733567-f2a6-4331-bdea-147126213437"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:18:36 crc kubenswrapper[4881]: I0121 11:18:36.849767 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/75733567-f2a6-4331-bdea-147126213437-prometheus-metric-storage-rulefiles-2\") pod \"75733567-f2a6-4331-bdea-147126213437\" (UID: \"75733567-f2a6-4331-bdea-147126213437\") " Jan 21 11:18:36 crc kubenswrapper[4881]: I0121 11:18:36.849908 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/75733567-f2a6-4331-bdea-147126213437-config-out\") pod \"75733567-f2a6-4331-bdea-147126213437\" (UID: \"75733567-f2a6-4331-bdea-147126213437\") " Jan 21 11:18:36 crc kubenswrapper[4881]: I0121 11:18:36.849943 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/75733567-f2a6-4331-bdea-147126213437-tls-assets\") pod \"75733567-f2a6-4331-bdea-147126213437\" (UID: \"75733567-f2a6-4331-bdea-147126213437\") " Jan 21 11:18:36 crc kubenswrapper[4881]: I0121 11:18:36.850698 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n2vkg\" (UniqueName: \"kubernetes.io/projected/75733567-f2a6-4331-bdea-147126213437-kube-api-access-n2vkg\") on node \"crc\" DevicePath \"\"" Jan 21 11:18:36 crc kubenswrapper[4881]: I0121 11:18:36.850725 4881 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/75733567-f2a6-4331-bdea-147126213437-config\") on node \"crc\" DevicePath \"\"" Jan 21 11:18:36 crc kubenswrapper[4881]: I0121 11:18:36.851610 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/75733567-f2a6-4331-bdea-147126213437-prometheus-metric-storage-rulefiles-2" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-2") pod "75733567-f2a6-4331-bdea-147126213437" (UID: "75733567-f2a6-4331-bdea-147126213437"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-2". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:18:36 crc kubenswrapper[4881]: I0121 11:18:36.860426 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/75733567-f2a6-4331-bdea-147126213437-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "75733567-f2a6-4331-bdea-147126213437" (UID: "75733567-f2a6-4331-bdea-147126213437"). InnerVolumeSpecName "tls-assets". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:18:36 crc kubenswrapper[4881]: I0121 11:18:36.861229 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75733567-f2a6-4331-bdea-147126213437-thanos-prometheus-http-client-file" (OuterVolumeSpecName: "thanos-prometheus-http-client-file") pod "75733567-f2a6-4331-bdea-147126213437" (UID: "75733567-f2a6-4331-bdea-147126213437"). InnerVolumeSpecName "thanos-prometheus-http-client-file". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:18:36 crc kubenswrapper[4881]: I0121 11:18:36.863924 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75733567-f2a6-4331-bdea-147126213437-web-config" (OuterVolumeSpecName: "web-config") pod "75733567-f2a6-4331-bdea-147126213437" (UID: "75733567-f2a6-4331-bdea-147126213437"). InnerVolumeSpecName "web-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:18:36 crc kubenswrapper[4881]: I0121 11:18:36.871682 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/75733567-f2a6-4331-bdea-147126213437-config-out" (OuterVolumeSpecName: "config-out") pod "75733567-f2a6-4331-bdea-147126213437" (UID: "75733567-f2a6-4331-bdea-147126213437"). InnerVolumeSpecName "config-out". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 11:18:36 crc kubenswrapper[4881]: I0121 11:18:36.884270 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c8add5c8-5d24-439f-b2da-5ddabecb671a" (OuterVolumeSpecName: "prometheus-metric-storage-db") pod "75733567-f2a6-4331-bdea-147126213437" (UID: "75733567-f2a6-4331-bdea-147126213437"). InnerVolumeSpecName "pvc-c8add5c8-5d24-439f-b2da-5ddabecb671a". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 21 11:18:36 crc kubenswrapper[4881]: I0121 11:18:36.952931 4881 reconciler_common.go:293] "Volume detached for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/75733567-f2a6-4331-bdea-147126213437-web-config\") on node \"crc\" DevicePath \"\"" Jan 21 11:18:36 crc kubenswrapper[4881]: I0121 11:18:36.953012 4881 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-c8add5c8-5d24-439f-b2da-5ddabecb671a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c8add5c8-5d24-439f-b2da-5ddabecb671a\") on node \"crc\" " Jan 21 11:18:36 crc kubenswrapper[4881]: I0121 11:18:36.953028 4881 reconciler_common.go:293] "Volume detached for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/75733567-f2a6-4331-bdea-147126213437-thanos-prometheus-http-client-file\") on node \"crc\" DevicePath \"\"" Jan 21 11:18:36 crc kubenswrapper[4881]: I0121 11:18:36.953040 4881 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/75733567-f2a6-4331-bdea-147126213437-prometheus-metric-storage-rulefiles-2\") on node \"crc\" DevicePath \"\"" Jan 21 11:18:36 crc kubenswrapper[4881]: I0121 11:18:36.953051 4881 reconciler_common.go:293] "Volume detached for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/75733567-f2a6-4331-bdea-147126213437-config-out\") on node \"crc\" DevicePath \"\"" Jan 21 11:18:36 crc kubenswrapper[4881]: I0121 11:18:36.953060 4881 reconciler_common.go:293] "Volume detached for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/75733567-f2a6-4331-bdea-147126213437-tls-assets\") on node \"crc\" DevicePath \"\"" Jan 21 11:18:36 crc kubenswrapper[4881]: I0121 11:18:36.984212 4881 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Jan 21 11:18:36 crc kubenswrapper[4881]: I0121 11:18:36.984464 4881 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-c8add5c8-5d24-439f-b2da-5ddabecb671a" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c8add5c8-5d24-439f-b2da-5ddabecb671a") on node "crc" Jan 21 11:18:37 crc kubenswrapper[4881]: I0121 11:18:37.054456 4881 reconciler_common.go:293] "Volume detached for volume \"pvc-c8add5c8-5d24-439f-b2da-5ddabecb671a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c8add5c8-5d24-439f-b2da-5ddabecb671a\") on node \"crc\" DevicePath \"\"" Jan 21 11:18:37 crc kubenswrapper[4881]: I0121 11:18:37.649908 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"eafb725b-4d8c-44b6-8966-4c611d4897d8","Type":"ContainerStarted","Data":"289fc976662972c742902d4838622aa28afe05c468c3ba1562bd132609c2c02d"} Jan 21 11:18:37 crc kubenswrapper[4881]: I0121 11:18:37.671651 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"75733567-f2a6-4331-bdea-147126213437","Type":"ContainerDied","Data":"648f9884533415a5c2309f4dd9efc2ccd6cbaeb098dca1475cdb0221de466d52"} Jan 21 11:18:37 crc kubenswrapper[4881]: I0121 11:18:37.671753 4881 scope.go:117] "RemoveContainer" containerID="2d247ee2c4ae6dcda1bc7bdb88b6f46d738cb9050ce2b5c108235bf069c56986" Jan 21 11:18:37 crc kubenswrapper[4881]: I0121 11:18:37.672100 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 21 11:18:37 crc kubenswrapper[4881]: I0121 11:18:37.703528 4881 scope.go:117] "RemoveContainer" containerID="5833adb0117a8d41a669b51e672fa4471dd8e152778ebc0db32735d286328549" Jan 21 11:18:37 crc kubenswrapper[4881]: I0121 11:18:37.724224 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 21 11:18:37 crc kubenswrapper[4881]: I0121 11:18:37.733001 4881 scope.go:117] "RemoveContainer" containerID="a56efe39870006b796c3201c8dc3334fb4d25c094ef7e6facbf2f393bd54653c" Jan 21 11:18:37 crc kubenswrapper[4881]: I0121 11:18:37.739188 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 21 11:18:37 crc kubenswrapper[4881]: I0121 11:18:37.769625 4881 scope.go:117] "RemoveContainer" containerID="3d2c36495c41eb6152a1fc9a05412fce52a5f353e0b59004227d5efed6039fb6" Jan 21 11:18:37 crc kubenswrapper[4881]: I0121 11:18:37.777951 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 21 11:18:37 crc kubenswrapper[4881]: E0121 11:18:37.778336 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75733567-f2a6-4331-bdea-147126213437" containerName="config-reloader" Jan 21 11:18:37 crc kubenswrapper[4881]: I0121 11:18:37.778357 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="75733567-f2a6-4331-bdea-147126213437" containerName="config-reloader" Jan 21 11:18:37 crc kubenswrapper[4881]: E0121 11:18:37.778372 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75733567-f2a6-4331-bdea-147126213437" containerName="thanos-sidecar" Jan 21 11:18:37 crc kubenswrapper[4881]: I0121 11:18:37.778379 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="75733567-f2a6-4331-bdea-147126213437" containerName="thanos-sidecar" Jan 21 11:18:37 crc kubenswrapper[4881]: E0121 11:18:37.778395 4881 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="70a2b37a-049a-45a1-aeb5-6b7d5515dd69" containerName="mariadb-account-create-update" Jan 21 11:18:37 crc kubenswrapper[4881]: I0121 11:18:37.778401 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="70a2b37a-049a-45a1-aeb5-6b7d5515dd69" containerName="mariadb-account-create-update" Jan 21 11:18:37 crc kubenswrapper[4881]: E0121 11:18:37.778418 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75733567-f2a6-4331-bdea-147126213437" containerName="prometheus" Jan 21 11:18:37 crc kubenswrapper[4881]: I0121 11:18:37.778424 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="75733567-f2a6-4331-bdea-147126213437" containerName="prometheus" Jan 21 11:18:37 crc kubenswrapper[4881]: E0121 11:18:37.778435 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75733567-f2a6-4331-bdea-147126213437" containerName="init-config-reloader" Jan 21 11:18:37 crc kubenswrapper[4881]: I0121 11:18:37.778441 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="75733567-f2a6-4331-bdea-147126213437" containerName="init-config-reloader" Jan 21 11:18:37 crc kubenswrapper[4881]: E0121 11:18:37.778450 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27451133-57c8-4991-aae0-ec0a82432176" containerName="swift-ring-rebalance" Jan 21 11:18:37 crc kubenswrapper[4881]: I0121 11:18:37.778457 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="27451133-57c8-4991-aae0-ec0a82432176" containerName="swift-ring-rebalance" Jan 21 11:18:37 crc kubenswrapper[4881]: I0121 11:18:37.785277 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="70a2b37a-049a-45a1-aeb5-6b7d5515dd69" containerName="mariadb-account-create-update" Jan 21 11:18:37 crc kubenswrapper[4881]: I0121 11:18:37.785335 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="75733567-f2a6-4331-bdea-147126213437" containerName="prometheus" Jan 21 11:18:37 crc kubenswrapper[4881]: I0121 11:18:37.785349 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="75733567-f2a6-4331-bdea-147126213437" containerName="config-reloader" Jan 21 11:18:37 crc kubenswrapper[4881]: I0121 11:18:37.785365 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="27451133-57c8-4991-aae0-ec0a82432176" containerName="swift-ring-rebalance" Jan 21 11:18:37 crc kubenswrapper[4881]: I0121 11:18:37.785373 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="75733567-f2a6-4331-bdea-147126213437" containerName="thanos-sidecar" Jan 21 11:18:37 crc kubenswrapper[4881]: I0121 11:18:37.805297 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 21 11:18:37 crc kubenswrapper[4881]: I0121 11:18:37.811187 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Jan 21 11:18:37 crc kubenswrapper[4881]: I0121 11:18:37.812551 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Jan 21 11:18:37 crc kubenswrapper[4881]: I0121 11:18:37.813168 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1" Jan 21 11:18:37 crc kubenswrapper[4881]: I0121 11:18:37.814060 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Jan 21 11:18:37 crc kubenswrapper[4881]: I0121 11:18:37.813655 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2" Jan 21 11:18:37 crc kubenswrapper[4881]: I0121 11:18:37.815015 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Jan 21 11:18:37 crc kubenswrapper[4881]: I0121 11:18:37.824976 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-jwvdx" Jan 21 11:18:37 crc kubenswrapper[4881]: I0121 11:18:37.839163 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-metric-storage-prometheus-svc" Jan 21 11:18:37 crc kubenswrapper[4881]: I0121 11:18:37.863538 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 21 11:18:37 crc kubenswrapper[4881]: I0121 11:18:37.902603 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Jan 21 11:18:38 crc kubenswrapper[4881]: I0121 11:18:38.004231 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-c8add5c8-5d24-439f-b2da-5ddabecb671a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c8add5c8-5d24-439f-b2da-5ddabecb671a\") pod \"prometheus-metric-storage-0\" (UID: \"c5ae3126-d6d3-4268-8e35-e216eabcc6f4\") " pod="openstack/prometheus-metric-storage-0" Jan 21 11:18:38 crc kubenswrapper[4881]: I0121 11:18:38.004343 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/c5ae3126-d6d3-4268-8e35-e216eabcc6f4-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"c5ae3126-d6d3-4268-8e35-e216eabcc6f4\") " pod="openstack/prometheus-metric-storage-0" Jan 21 11:18:38 crc kubenswrapper[4881]: I0121 11:18:38.004373 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/c5ae3126-d6d3-4268-8e35-e216eabcc6f4-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"c5ae3126-d6d3-4268-8e35-e216eabcc6f4\") " pod="openstack/prometheus-metric-storage-0" Jan 21 11:18:38 crc kubenswrapper[4881]: I0121 11:18:38.004389 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: 
\"kubernetes.io/secret/c5ae3126-d6d3-4268-8e35-e216eabcc6f4-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"c5ae3126-d6d3-4268-8e35-e216eabcc6f4\") " pod="openstack/prometheus-metric-storage-0" Jan 21 11:18:38 crc kubenswrapper[4881]: I0121 11:18:38.004414 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d9ng7\" (UniqueName: \"kubernetes.io/projected/c5ae3126-d6d3-4268-8e35-e216eabcc6f4-kube-api-access-d9ng7\") pod \"prometheus-metric-storage-0\" (UID: \"c5ae3126-d6d3-4268-8e35-e216eabcc6f4\") " pod="openstack/prometheus-metric-storage-0" Jan 21 11:18:38 crc kubenswrapper[4881]: I0121 11:18:38.004436 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/c5ae3126-d6d3-4268-8e35-e216eabcc6f4-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"c5ae3126-d6d3-4268-8e35-e216eabcc6f4\") " pod="openstack/prometheus-metric-storage-0" Jan 21 11:18:38 crc kubenswrapper[4881]: I0121 11:18:38.004471 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5ae3126-d6d3-4268-8e35-e216eabcc6f4-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"c5ae3126-d6d3-4268-8e35-e216eabcc6f4\") " pod="openstack/prometheus-metric-storage-0" Jan 21 11:18:38 crc kubenswrapper[4881]: I0121 11:18:38.004519 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/c5ae3126-d6d3-4268-8e35-e216eabcc6f4-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"c5ae3126-d6d3-4268-8e35-e216eabcc6f4\") " pod="openstack/prometheus-metric-storage-0" Jan 21 11:18:38 crc kubenswrapper[4881]: I0121 11:18:38.004549 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/c5ae3126-d6d3-4268-8e35-e216eabcc6f4-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"c5ae3126-d6d3-4268-8e35-e216eabcc6f4\") " pod="openstack/prometheus-metric-storage-0" Jan 21 11:18:38 crc kubenswrapper[4881]: I0121 11:18:38.004584 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/c5ae3126-d6d3-4268-8e35-e216eabcc6f4-config\") pod \"prometheus-metric-storage-0\" (UID: \"c5ae3126-d6d3-4268-8e35-e216eabcc6f4\") " pod="openstack/prometheus-metric-storage-0" Jan 21 11:18:38 crc kubenswrapper[4881]: I0121 11:18:38.004622 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/c5ae3126-d6d3-4268-8e35-e216eabcc6f4-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"c5ae3126-d6d3-4268-8e35-e216eabcc6f4\") " pod="openstack/prometheus-metric-storage-0" Jan 21 11:18:38 crc kubenswrapper[4881]: I0121 11:18:38.004641 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/c5ae3126-d6d3-4268-8e35-e216eabcc6f4-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod 
\"prometheus-metric-storage-0\" (UID: \"c5ae3126-d6d3-4268-8e35-e216eabcc6f4\") " pod="openstack/prometheus-metric-storage-0" Jan 21 11:18:38 crc kubenswrapper[4881]: I0121 11:18:38.004659 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/c5ae3126-d6d3-4268-8e35-e216eabcc6f4-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"c5ae3126-d6d3-4268-8e35-e216eabcc6f4\") " pod="openstack/prometheus-metric-storage-0" Jan 21 11:18:38 crc kubenswrapper[4881]: I0121 11:18:38.106797 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/c5ae3126-d6d3-4268-8e35-e216eabcc6f4-config\") pod \"prometheus-metric-storage-0\" (UID: \"c5ae3126-d6d3-4268-8e35-e216eabcc6f4\") " pod="openstack/prometheus-metric-storage-0" Jan 21 11:18:38 crc kubenswrapper[4881]: I0121 11:18:38.107241 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/c5ae3126-d6d3-4268-8e35-e216eabcc6f4-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"c5ae3126-d6d3-4268-8e35-e216eabcc6f4\") " pod="openstack/prometheus-metric-storage-0" Jan 21 11:18:38 crc kubenswrapper[4881]: I0121 11:18:38.107286 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/c5ae3126-d6d3-4268-8e35-e216eabcc6f4-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"c5ae3126-d6d3-4268-8e35-e216eabcc6f4\") " pod="openstack/prometheus-metric-storage-0" Jan 21 11:18:38 crc kubenswrapper[4881]: I0121 11:18:38.107327 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/c5ae3126-d6d3-4268-8e35-e216eabcc6f4-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"c5ae3126-d6d3-4268-8e35-e216eabcc6f4\") " pod="openstack/prometheus-metric-storage-0" Jan 21 11:18:38 crc kubenswrapper[4881]: I0121 11:18:38.107376 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-c8add5c8-5d24-439f-b2da-5ddabecb671a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c8add5c8-5d24-439f-b2da-5ddabecb671a\") pod \"prometheus-metric-storage-0\" (UID: \"c5ae3126-d6d3-4268-8e35-e216eabcc6f4\") " pod="openstack/prometheus-metric-storage-0" Jan 21 11:18:38 crc kubenswrapper[4881]: I0121 11:18:38.107457 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/c5ae3126-d6d3-4268-8e35-e216eabcc6f4-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"c5ae3126-d6d3-4268-8e35-e216eabcc6f4\") " pod="openstack/prometheus-metric-storage-0" Jan 21 11:18:38 crc kubenswrapper[4881]: I0121 11:18:38.107479 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/c5ae3126-d6d3-4268-8e35-e216eabcc6f4-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"c5ae3126-d6d3-4268-8e35-e216eabcc6f4\") " pod="openstack/prometheus-metric-storage-0" Jan 21 11:18:38 crc 
kubenswrapper[4881]: I0121 11:18:38.107507 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/c5ae3126-d6d3-4268-8e35-e216eabcc6f4-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"c5ae3126-d6d3-4268-8e35-e216eabcc6f4\") " pod="openstack/prometheus-metric-storage-0" Jan 21 11:18:38 crc kubenswrapper[4881]: I0121 11:18:38.107557 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d9ng7\" (UniqueName: \"kubernetes.io/projected/c5ae3126-d6d3-4268-8e35-e216eabcc6f4-kube-api-access-d9ng7\") pod \"prometheus-metric-storage-0\" (UID: \"c5ae3126-d6d3-4268-8e35-e216eabcc6f4\") " pod="openstack/prometheus-metric-storage-0" Jan 21 11:18:38 crc kubenswrapper[4881]: I0121 11:18:38.107583 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/c5ae3126-d6d3-4268-8e35-e216eabcc6f4-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"c5ae3126-d6d3-4268-8e35-e216eabcc6f4\") " pod="openstack/prometheus-metric-storage-0" Jan 21 11:18:38 crc kubenswrapper[4881]: I0121 11:18:38.107708 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5ae3126-d6d3-4268-8e35-e216eabcc6f4-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"c5ae3126-d6d3-4268-8e35-e216eabcc6f4\") " pod="openstack/prometheus-metric-storage-0" Jan 21 11:18:38 crc kubenswrapper[4881]: I0121 11:18:38.107815 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/c5ae3126-d6d3-4268-8e35-e216eabcc6f4-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"c5ae3126-d6d3-4268-8e35-e216eabcc6f4\") " pod="openstack/prometheus-metric-storage-0" Jan 21 11:18:38 crc kubenswrapper[4881]: I0121 11:18:38.107849 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/c5ae3126-d6d3-4268-8e35-e216eabcc6f4-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"c5ae3126-d6d3-4268-8e35-e216eabcc6f4\") " pod="openstack/prometheus-metric-storage-0" Jan 21 11:18:38 crc kubenswrapper[4881]: I0121 11:18:38.109367 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/c5ae3126-d6d3-4268-8e35-e216eabcc6f4-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"c5ae3126-d6d3-4268-8e35-e216eabcc6f4\") " pod="openstack/prometheus-metric-storage-0" Jan 21 11:18:38 crc kubenswrapper[4881]: I0121 11:18:38.110016 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/c5ae3126-d6d3-4268-8e35-e216eabcc6f4-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"c5ae3126-d6d3-4268-8e35-e216eabcc6f4\") " pod="openstack/prometheus-metric-storage-0" Jan 21 11:18:38 crc kubenswrapper[4881]: I0121 11:18:38.110127 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: 
\"kubernetes.io/configmap/c5ae3126-d6d3-4268-8e35-e216eabcc6f4-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"c5ae3126-d6d3-4268-8e35-e216eabcc6f4\") " pod="openstack/prometheus-metric-storage-0" Jan 21 11:18:38 crc kubenswrapper[4881]: I0121 11:18:38.115683 4881 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 21 11:18:38 crc kubenswrapper[4881]: I0121 11:18:38.115731 4881 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-c8add5c8-5d24-439f-b2da-5ddabecb671a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c8add5c8-5d24-439f-b2da-5ddabecb671a\") pod \"prometheus-metric-storage-0\" (UID: \"c5ae3126-d6d3-4268-8e35-e216eabcc6f4\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/3c91253029fdcc57c7bcc13c4ee1dc503079fe71761fa62e5d04837e0b8b075e/globalmount\"" pod="openstack/prometheus-metric-storage-0" Jan 21 11:18:38 crc kubenswrapper[4881]: I0121 11:18:38.119407 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/c5ae3126-d6d3-4268-8e35-e216eabcc6f4-config\") pod \"prometheus-metric-storage-0\" (UID: \"c5ae3126-d6d3-4268-8e35-e216eabcc6f4\") " pod="openstack/prometheus-metric-storage-0" Jan 21 11:18:38 crc kubenswrapper[4881]: I0121 11:18:38.129169 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5ae3126-d6d3-4268-8e35-e216eabcc6f4-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"c5ae3126-d6d3-4268-8e35-e216eabcc6f4\") " pod="openstack/prometheus-metric-storage-0" Jan 21 11:18:38 crc kubenswrapper[4881]: I0121 11:18:38.131655 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/c5ae3126-d6d3-4268-8e35-e216eabcc6f4-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"c5ae3126-d6d3-4268-8e35-e216eabcc6f4\") " pod="openstack/prometheus-metric-storage-0" Jan 21 11:18:38 crc kubenswrapper[4881]: I0121 11:18:38.132922 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/c5ae3126-d6d3-4268-8e35-e216eabcc6f4-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"c5ae3126-d6d3-4268-8e35-e216eabcc6f4\") " pod="openstack/prometheus-metric-storage-0" Jan 21 11:18:38 crc kubenswrapper[4881]: I0121 11:18:38.135703 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/c5ae3126-d6d3-4268-8e35-e216eabcc6f4-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"c5ae3126-d6d3-4268-8e35-e216eabcc6f4\") " pod="openstack/prometheus-metric-storage-0" Jan 21 11:18:38 crc kubenswrapper[4881]: I0121 11:18:38.136770 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/c5ae3126-d6d3-4268-8e35-e216eabcc6f4-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"c5ae3126-d6d3-4268-8e35-e216eabcc6f4\") " pod="openstack/prometheus-metric-storage-0" Jan 21 11:18:38 crc kubenswrapper[4881]: I0121 11:18:38.137647 4881 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/c5ae3126-d6d3-4268-8e35-e216eabcc6f4-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"c5ae3126-d6d3-4268-8e35-e216eabcc6f4\") " pod="openstack/prometheus-metric-storage-0"
Jan 21 11:18:38 crc kubenswrapper[4881]: I0121 11:18:38.142093 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/c5ae3126-d6d3-4268-8e35-e216eabcc6f4-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"c5ae3126-d6d3-4268-8e35-e216eabcc6f4\") " pod="openstack/prometheus-metric-storage-0"
Jan 21 11:18:38 crc kubenswrapper[4881]: I0121 11:18:38.144126 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d9ng7\" (UniqueName: \"kubernetes.io/projected/c5ae3126-d6d3-4268-8e35-e216eabcc6f4-kube-api-access-d9ng7\") pod \"prometheus-metric-storage-0\" (UID: \"c5ae3126-d6d3-4268-8e35-e216eabcc6f4\") " pod="openstack/prometheus-metric-storage-0"
Jan 21 11:18:38 crc kubenswrapper[4881]: I0121 11:18:38.233454 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-c8add5c8-5d24-439f-b2da-5ddabecb671a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c8add5c8-5d24-439f-b2da-5ddabecb671a\") pod \"prometheus-metric-storage-0\" (UID: \"c5ae3126-d6d3-4268-8e35-e216eabcc6f4\") " pod="openstack/prometheus-metric-storage-0"
Jan 21 11:18:38 crc kubenswrapper[4881]: I0121 11:18:38.510585 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0"
Jan 21 11:18:38 crc kubenswrapper[4881]: I0121 11:18:38.814126 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"eafb725b-4d8c-44b6-8966-4c611d4897d8","Type":"ContainerStarted","Data":"7fea739fffe156a19d69d7b51628d39a5e4c2419e42dcdc81465b1fd6fd1e3e1"}
Jan 21 11:18:39 crc kubenswrapper[4881]: I0121 11:18:39.162305 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"]
Jan 21 11:18:39 crc kubenswrapper[4881]: W0121 11:18:39.179069 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc5ae3126_d6d3_4268_8e35_e216eabcc6f4.slice/crio-044ed91f90f2699cb0b2df7171e316d9c18fb8084140392d8cb4307802d39a3c WatchSource:0}: Error finding container 044ed91f90f2699cb0b2df7171e316d9c18fb8084140392d8cb4307802d39a3c: Status 404 returned error can't find the container with id 044ed91f90f2699cb0b2df7171e316d9c18fb8084140392d8cb4307802d39a3c
Jan 21 11:18:39 crc kubenswrapper[4881]: I0121 11:18:39.327301 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="75733567-f2a6-4331-bdea-147126213437" path="/var/lib/kubelet/pods/75733567-f2a6-4331-bdea-147126213437/volumes"
Jan 21 11:18:39 crc kubenswrapper[4881]: I0121 11:18:39.626644 4881 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="75733567-f2a6-4331-bdea-147126213437" containerName="prometheus" probeResult="failure" output="Get \"http://10.217.0.113:9090/-/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 21 11:18:39 crc kubenswrapper[4881]: I0121 11:18:39.831484 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"eafb725b-4d8c-44b6-8966-4c611d4897d8","Type":"ContainerStarted","Data":"3ddd9b68c26af9e4e85ec9549e5f6dce7d1eb4439d142a49985d4929d3f28693"}
Jan 21 11:18:39 crc kubenswrapper[4881]: I0121 11:18:39.832707 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"c5ae3126-d6d3-4268-8e35-e216eabcc6f4","Type":"ContainerStarted","Data":"044ed91f90f2699cb0b2df7171e316d9c18fb8084140392d8cb4307802d39a3c"}
Jan 21 11:18:39 crc kubenswrapper[4881]: I0121 11:18:39.952609 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=40.502813543 podStartE2EDuration="45.952588942s" podCreationTimestamp="2026-01-21 11:17:54 +0000 UTC" firstStartedPulling="2026-01-21 11:18:29.384698183 +0000 UTC m=+1296.644654642" lastFinishedPulling="2026-01-21 11:18:34.834473572 +0000 UTC m=+1302.094430041" observedRunningTime="2026-01-21 11:18:39.949393733 +0000 UTC m=+1307.209350212" watchObservedRunningTime="2026-01-21 11:18:39.952588942 +0000 UTC m=+1307.212545411"
Jan 21 11:18:40 crc kubenswrapper[4881]: I0121 11:18:40.191327 4881 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="078c2368-b247-49d4-8723-fd93918e99b1" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.107:5671: connect: connection refused"
Jan 21 11:18:40 crc kubenswrapper[4881]: I0121 11:18:40.275838 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7c88945fd5-tqqvj"]
Jan 21 11:18:40 crc kubenswrapper[4881]: I0121 11:18:40.277685 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7c88945fd5-tqqvj"
Jan 21 11:18:40 crc kubenswrapper[4881]: I0121 11:18:40.279954 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0"
Jan 21 11:18:40 crc kubenswrapper[4881]: I0121 11:18:40.296058 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7c88945fd5-tqqvj"]
Jan 21 11:18:40 crc kubenswrapper[4881]: I0121 11:18:40.366321 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e51b074c-ae44-4db9-9ce6-b656a961dfaf-ovsdbserver-nb\") pod \"dnsmasq-dns-7c88945fd5-tqqvj\" (UID: \"e51b074c-ae44-4db9-9ce6-b656a961dfaf\") " pod="openstack/dnsmasq-dns-7c88945fd5-tqqvj"
Jan 21 11:18:40 crc kubenswrapper[4881]: I0121 11:18:40.366541 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e51b074c-ae44-4db9-9ce6-b656a961dfaf-ovsdbserver-sb\") pod \"dnsmasq-dns-7c88945fd5-tqqvj\" (UID: \"e51b074c-ae44-4db9-9ce6-b656a961dfaf\") " pod="openstack/dnsmasq-dns-7c88945fd5-tqqvj"
Jan 21 11:18:40 crc kubenswrapper[4881]: I0121 11:18:40.366604 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e51b074c-ae44-4db9-9ce6-b656a961dfaf-dns-svc\") pod \"dnsmasq-dns-7c88945fd5-tqqvj\" (UID: \"e51b074c-ae44-4db9-9ce6-b656a961dfaf\") " pod="openstack/dnsmasq-dns-7c88945fd5-tqqvj"
Jan 21 11:18:40 crc kubenswrapper[4881]: I0121 11:18:40.366648 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m4gqq\" (UniqueName: \"kubernetes.io/projected/e51b074c-ae44-4db9-9ce6-b656a961dfaf-kube-api-access-m4gqq\") pod \"dnsmasq-dns-7c88945fd5-tqqvj\" (UID: \"e51b074c-ae44-4db9-9ce6-b656a961dfaf\") " pod="openstack/dnsmasq-dns-7c88945fd5-tqqvj"
Jan 21 11:18:40 crc kubenswrapper[4881]: I0121 11:18:40.366688 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e51b074c-ae44-4db9-9ce6-b656a961dfaf-dns-swift-storage-0\") pod \"dnsmasq-dns-7c88945fd5-tqqvj\" (UID: \"e51b074c-ae44-4db9-9ce6-b656a961dfaf\") " pod="openstack/dnsmasq-dns-7c88945fd5-tqqvj"
Jan 21 11:18:40 crc kubenswrapper[4881]: I0121 11:18:40.366717 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e51b074c-ae44-4db9-9ce6-b656a961dfaf-config\") pod \"dnsmasq-dns-7c88945fd5-tqqvj\" (UID: \"e51b074c-ae44-4db9-9ce6-b656a961dfaf\") " pod="openstack/dnsmasq-dns-7c88945fd5-tqqvj"
Jan 21 11:18:40 crc kubenswrapper[4881]: I0121 11:18:40.468356 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e51b074c-ae44-4db9-9ce6-b656a961dfaf-ovsdbserver-nb\") pod \"dnsmasq-dns-7c88945fd5-tqqvj\" (UID: \"e51b074c-ae44-4db9-9ce6-b656a961dfaf\") " pod="openstack/dnsmasq-dns-7c88945fd5-tqqvj"
Jan 21 11:18:40 crc kubenswrapper[4881]: I0121 11:18:40.468498 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e51b074c-ae44-4db9-9ce6-b656a961dfaf-ovsdbserver-sb\") pod \"dnsmasq-dns-7c88945fd5-tqqvj\" (UID: \"e51b074c-ae44-4db9-9ce6-b656a961dfaf\") " pod="openstack/dnsmasq-dns-7c88945fd5-tqqvj"
Jan 21 11:18:40 crc kubenswrapper[4881]: I0121 11:18:40.468571 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e51b074c-ae44-4db9-9ce6-b656a961dfaf-dns-svc\") pod \"dnsmasq-dns-7c88945fd5-tqqvj\" (UID: \"e51b074c-ae44-4db9-9ce6-b656a961dfaf\") " pod="openstack/dnsmasq-dns-7c88945fd5-tqqvj"
Jan 21 11:18:40 crc kubenswrapper[4881]: I0121 11:18:40.468628 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m4gqq\" (UniqueName: \"kubernetes.io/projected/e51b074c-ae44-4db9-9ce6-b656a961dfaf-kube-api-access-m4gqq\") pod \"dnsmasq-dns-7c88945fd5-tqqvj\" (UID: \"e51b074c-ae44-4db9-9ce6-b656a961dfaf\") " pod="openstack/dnsmasq-dns-7c88945fd5-tqqvj"
Jan 21 11:18:40 crc kubenswrapper[4881]: I0121 11:18:40.468752 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e51b074c-ae44-4db9-9ce6-b656a961dfaf-dns-swift-storage-0\") pod \"dnsmasq-dns-7c88945fd5-tqqvj\" (UID: \"e51b074c-ae44-4db9-9ce6-b656a961dfaf\") " pod="openstack/dnsmasq-dns-7c88945fd5-tqqvj"
Jan 21 11:18:40 crc kubenswrapper[4881]: I0121 11:18:40.468799 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e51b074c-ae44-4db9-9ce6-b656a961dfaf-config\") pod \"dnsmasq-dns-7c88945fd5-tqqvj\" (UID: \"e51b074c-ae44-4db9-9ce6-b656a961dfaf\") " pod="openstack/dnsmasq-dns-7c88945fd5-tqqvj"
Jan 21 11:18:40 crc kubenswrapper[4881]: I0121 11:18:40.469558 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e51b074c-ae44-4db9-9ce6-b656a961dfaf-ovsdbserver-sb\") pod \"dnsmasq-dns-7c88945fd5-tqqvj\" (UID: \"e51b074c-ae44-4db9-9ce6-b656a961dfaf\") " pod="openstack/dnsmasq-dns-7c88945fd5-tqqvj"
Jan 21 11:18:40 crc kubenswrapper[4881]: I0121 11:18:40.469573 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e51b074c-ae44-4db9-9ce6-b656a961dfaf-ovsdbserver-nb\") pod \"dnsmasq-dns-7c88945fd5-tqqvj\" (UID: \"e51b074c-ae44-4db9-9ce6-b656a961dfaf\") " pod="openstack/dnsmasq-dns-7c88945fd5-tqqvj"
Jan 21 11:18:40 crc kubenswrapper[4881]: I0121 11:18:40.469682 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e51b074c-ae44-4db9-9ce6-b656a961dfaf-dns-swift-storage-0\") pod \"dnsmasq-dns-7c88945fd5-tqqvj\" (UID: \"e51b074c-ae44-4db9-9ce6-b656a961dfaf\") " pod="openstack/dnsmasq-dns-7c88945fd5-tqqvj"
Jan 21 11:18:40 crc kubenswrapper[4881]: I0121 11:18:40.469688 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e51b074c-ae44-4db9-9ce6-b656a961dfaf-dns-svc\") pod \"dnsmasq-dns-7c88945fd5-tqqvj\" (UID: \"e51b074c-ae44-4db9-9ce6-b656a961dfaf\") " pod="openstack/dnsmasq-dns-7c88945fd5-tqqvj"
Jan 21 11:18:40 crc kubenswrapper[4881]: I0121 11:18:40.469923 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e51b074c-ae44-4db9-9ce6-b656a961dfaf-config\") pod \"dnsmasq-dns-7c88945fd5-tqqvj\" (UID: \"e51b074c-ae44-4db9-9ce6-b656a961dfaf\") " pod="openstack/dnsmasq-dns-7c88945fd5-tqqvj"
Jan 21 11:18:40 crc kubenswrapper[4881]: I0121 11:18:40.477628 4881 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="f7e90972-9be1-4d3e-852e-e7f7df6e6623" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.106:5671: connect: connection refused"
Jan 21 11:18:40 crc kubenswrapper[4881]: I0121 11:18:40.498615 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m4gqq\" (UniqueName: \"kubernetes.io/projected/e51b074c-ae44-4db9-9ce6-b656a961dfaf-kube-api-access-m4gqq\") pod \"dnsmasq-dns-7c88945fd5-tqqvj\" (UID: \"e51b074c-ae44-4db9-9ce6-b656a961dfaf\") " pod="openstack/dnsmasq-dns-7c88945fd5-tqqvj"
Jan 21 11:18:40 crc kubenswrapper[4881]: I0121 11:18:40.586947 4881 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-notifications-server-0" podUID="44bcf219-3358-4596-9d1e-88a51c415266" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.108:5671: connect: connection refused"
Jan 21 11:18:40 crc kubenswrapper[4881]: I0121 11:18:40.603329 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7c88945fd5-tqqvj"
Jan 21 11:18:41 crc kubenswrapper[4881]: I0121 11:18:41.130423 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7c88945fd5-tqqvj"]
Jan 21 11:18:41 crc kubenswrapper[4881]: W0121 11:18:41.138995 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode51b074c_ae44_4db9_9ce6_b656a961dfaf.slice/crio-485dc8c96eb7030a8e95c465abb23eb90b718f53333b55d575fff9445925584c WatchSource:0}: Error finding container 485dc8c96eb7030a8e95c465abb23eb90b718f53333b55d575fff9445925584c: Status 404 returned error can't find the container with id 485dc8c96eb7030a8e95c465abb23eb90b718f53333b55d575fff9445925584c
Jan 21 11:18:42 crc kubenswrapper[4881]: I0121 11:18:42.118082 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c88945fd5-tqqvj" event={"ID":"e51b074c-ae44-4db9-9ce6-b656a961dfaf","Type":"ContainerStarted","Data":"596eab5e695f6c4af1ee0501f1a922c8b4ac8e567cedab5865035324bb33f0cb"}
Jan 21 11:18:42 crc kubenswrapper[4881]: I0121 11:18:42.118542 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c88945fd5-tqqvj" event={"ID":"e51b074c-ae44-4db9-9ce6-b656a961dfaf","Type":"ContainerStarted","Data":"485dc8c96eb7030a8e95c465abb23eb90b718f53333b55d575fff9445925584c"}
Jan 21 11:18:43 crc kubenswrapper[4881]: I0121 11:18:43.130681 4881 generic.go:334] "Generic (PLEG): container finished" podID="e51b074c-ae44-4db9-9ce6-b656a961dfaf" containerID="596eab5e695f6c4af1ee0501f1a922c8b4ac8e567cedab5865035324bb33f0cb" exitCode=0
Jan 21 11:18:43 crc kubenswrapper[4881]: I0121 11:18:43.130882 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c88945fd5-tqqvj" event={"ID":"e51b074c-ae44-4db9-9ce6-b656a961dfaf","Type":"ContainerDied","Data":"596eab5e695f6c4af1ee0501f1a922c8b4ac8e567cedab5865035324bb33f0cb"}
Jan 21 11:18:44 crc kubenswrapper[4881]: I0121 11:18:44.179971 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"c5ae3126-d6d3-4268-8e35-e216eabcc6f4","Type":"ContainerStarted","Data":"a35359d5b5faf07c0a8496b05737dc67dd3207c714c5cd8b7b98eda3d6b21eb4"}
Jan 21 11:18:44 crc kubenswrapper[4881]: I0121 11:18:44.184770 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c88945fd5-tqqvj" event={"ID":"e51b074c-ae44-4db9-9ce6-b656a961dfaf","Type":"ContainerStarted","Data":"942d5c3de6fa62e5024b8e526fb126bf73a64902207ddcb2a51d04aa20661a8c"}
Jan 21 11:18:44 crc kubenswrapper[4881]: I0121 11:18:44.184900 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7c88945fd5-tqqvj"
Jan 21 11:18:50 crc kubenswrapper[4881]: I0121 11:18:50.192026 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0"
Jan 21 11:18:50 crc kubenswrapper[4881]: I0121 11:18:50.229599 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7c88945fd5-tqqvj" podStartSLOduration=10.229574974 podStartE2EDuration="10.229574974s" podCreationTimestamp="2026-01-21 11:18:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 11:18:44.253899689 +0000 UTC m=+1311.513856178" watchObservedRunningTime="2026-01-21 11:18:50.229574974 +0000 UTC m=+1317.489531443"
Jan 21 11:18:50 crc kubenswrapper[4881]: I0121 11:18:50.481052 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0"
Jan 21 11:18:50 crc kubenswrapper[4881]: I0121 11:18:50.591992 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-notifications-server-0"
Jan 21 11:18:50 crc kubenswrapper[4881]: I0121 11:18:50.605013 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7c88945fd5-tqqvj"
Jan 21 11:18:51 crc kubenswrapper[4881]: I0121 11:18:51.158578 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-84cb884cf9-wmwx8"]
Jan 21 11:18:51 crc kubenswrapper[4881]: I0121 11:18:51.158818 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-84cb884cf9-wmwx8" podUID="62435f30-e8fc-4fcd-8b96-4a604439965e" containerName="dnsmasq-dns" containerID="cri-o://a28ebc9fc60a5b5f4c6d8022f7888aae1167af104726fcaf924581e71afbdd73" gracePeriod=10
Jan 21 11:18:52 crc kubenswrapper[4881]: I0121 11:18:52.311257 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-84cb884cf9-wmwx8"
Jan 21 11:18:52 crc kubenswrapper[4881]: I0121 11:18:52.491897 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/62435f30-e8fc-4fcd-8b96-4a604439965e-dns-svc\") pod \"62435f30-e8fc-4fcd-8b96-4a604439965e\" (UID: \"62435f30-e8fc-4fcd-8b96-4a604439965e\") "
Jan 21 11:18:52 crc kubenswrapper[4881]: I0121 11:18:52.492101 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/62435f30-e8fc-4fcd-8b96-4a604439965e-ovsdbserver-nb\") pod \"62435f30-e8fc-4fcd-8b96-4a604439965e\" (UID: \"62435f30-e8fc-4fcd-8b96-4a604439965e\") "
Jan 21 11:18:52 crc kubenswrapper[4881]: I0121 11:18:52.492158 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/62435f30-e8fc-4fcd-8b96-4a604439965e-config\") pod \"62435f30-e8fc-4fcd-8b96-4a604439965e\" (UID: \"62435f30-e8fc-4fcd-8b96-4a604439965e\") "
Jan 21 11:18:52 crc kubenswrapper[4881]: I0121 11:18:52.492338 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-45wlj\" (UniqueName: \"kubernetes.io/projected/62435f30-e8fc-4fcd-8b96-4a604439965e-kube-api-access-45wlj\") pod \"62435f30-e8fc-4fcd-8b96-4a604439965e\" (UID: \"62435f30-e8fc-4fcd-8b96-4a604439965e\") "
Jan 21 11:18:52 crc kubenswrapper[4881]: I0121 11:18:52.493318 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/62435f30-e8fc-4fcd-8b96-4a604439965e-ovsdbserver-sb\") pod \"62435f30-e8fc-4fcd-8b96-4a604439965e\" (UID: \"62435f30-e8fc-4fcd-8b96-4a604439965e\") "
Jan 21 11:18:52 crc kubenswrapper[4881]: I0121 11:18:52.494075 4881 generic.go:334] "Generic (PLEG): container finished" podID="62435f30-e8fc-4fcd-8b96-4a604439965e" containerID="a28ebc9fc60a5b5f4c6d8022f7888aae1167af104726fcaf924581e71afbdd73" exitCode=0
Jan 21 11:18:52 crc kubenswrapper[4881]: I0121 11:18:52.494189 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84cb884cf9-wmwx8" event={"ID":"62435f30-e8fc-4fcd-8b96-4a604439965e","Type":"ContainerDied","Data":"a28ebc9fc60a5b5f4c6d8022f7888aae1167af104726fcaf924581e71afbdd73"}
Jan 21 11:18:52 crc kubenswrapper[4881]: I0121 11:18:52.494241 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84cb884cf9-wmwx8" event={"ID":"62435f30-e8fc-4fcd-8b96-4a604439965e","Type":"ContainerDied","Data":"44f80926337efad13c65101fd501f43ed3467cedbf9bc0293c7241abb38a34e2"}
Jan 21 11:18:52 crc kubenswrapper[4881]: I0121 11:18:52.494262 4881 scope.go:117] "RemoveContainer" containerID="a28ebc9fc60a5b5f4c6d8022f7888aae1167af104726fcaf924581e71afbdd73"
Jan 21 11:18:52 crc kubenswrapper[4881]: I0121 11:18:52.494453 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-84cb884cf9-wmwx8"
Jan 21 11:18:52 crc kubenswrapper[4881]: I0121 11:18:52.515859 4881 generic.go:334] "Generic (PLEG): container finished" podID="c5ae3126-d6d3-4268-8e35-e216eabcc6f4" containerID="a35359d5b5faf07c0a8496b05737dc67dd3207c714c5cd8b7b98eda3d6b21eb4" exitCode=0
Jan 21 11:18:52 crc kubenswrapper[4881]: I0121 11:18:52.515901 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"c5ae3126-d6d3-4268-8e35-e216eabcc6f4","Type":"ContainerDied","Data":"a35359d5b5faf07c0a8496b05737dc67dd3207c714c5cd8b7b98eda3d6b21eb4"}
Jan 21 11:18:52 crc kubenswrapper[4881]: I0121 11:18:52.519872 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/62435f30-e8fc-4fcd-8b96-4a604439965e-kube-api-access-45wlj" (OuterVolumeSpecName: "kube-api-access-45wlj") pod "62435f30-e8fc-4fcd-8b96-4a604439965e" (UID: "62435f30-e8fc-4fcd-8b96-4a604439965e"). InnerVolumeSpecName "kube-api-access-45wlj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 11:18:52 crc kubenswrapper[4881]: I0121 11:18:52.599681 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-45wlj\" (UniqueName: \"kubernetes.io/projected/62435f30-e8fc-4fcd-8b96-4a604439965e-kube-api-access-45wlj\") on node \"crc\" DevicePath \"\""
Jan 21 11:18:52 crc kubenswrapper[4881]: I0121 11:18:52.641039 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/62435f30-e8fc-4fcd-8b96-4a604439965e-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "62435f30-e8fc-4fcd-8b96-4a604439965e" (UID: "62435f30-e8fc-4fcd-8b96-4a604439965e"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 11:18:52 crc kubenswrapper[4881]: I0121 11:18:52.680660 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/62435f30-e8fc-4fcd-8b96-4a604439965e-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "62435f30-e8fc-4fcd-8b96-4a604439965e" (UID: "62435f30-e8fc-4fcd-8b96-4a604439965e"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 11:18:52 crc kubenswrapper[4881]: I0121 11:18:52.700260 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/62435f30-e8fc-4fcd-8b96-4a604439965e-config" (OuterVolumeSpecName: "config") pod "62435f30-e8fc-4fcd-8b96-4a604439965e" (UID: "62435f30-e8fc-4fcd-8b96-4a604439965e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 11:18:52 crc kubenswrapper[4881]: I0121 11:18:52.708571 4881 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/62435f30-e8fc-4fcd-8b96-4a604439965e-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Jan 21 11:18:52 crc kubenswrapper[4881]: I0121 11:18:52.712445 4881 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/62435f30-e8fc-4fcd-8b96-4a604439965e-config\") on node \"crc\" DevicePath \"\""
Jan 21 11:18:52 crc kubenswrapper[4881]: I0121 11:18:52.713130 4881 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/62435f30-e8fc-4fcd-8b96-4a604439965e-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Jan 21 11:18:52 crc kubenswrapper[4881]: I0121 11:18:52.718475 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/62435f30-e8fc-4fcd-8b96-4a604439965e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "62435f30-e8fc-4fcd-8b96-4a604439965e" (UID: "62435f30-e8fc-4fcd-8b96-4a604439965e"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 11:18:52 crc kubenswrapper[4881]: I0121 11:18:52.788434 4881 scope.go:117] "RemoveContainer" containerID="f24832aadef02f1c7ff84c5f003b7d3cb18bb769662ee1a6581898a328c41e06"
Jan 21 11:18:52 crc kubenswrapper[4881]: I0121 11:18:52.815349 4881 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/62435f30-e8fc-4fcd-8b96-4a604439965e-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 21 11:18:52 crc kubenswrapper[4881]: I0121 11:18:52.872189 4881 scope.go:117] "RemoveContainer" containerID="a28ebc9fc60a5b5f4c6d8022f7888aae1167af104726fcaf924581e71afbdd73"
Jan 21 11:18:52 crc kubenswrapper[4881]: E0121 11:18:52.885105 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a28ebc9fc60a5b5f4c6d8022f7888aae1167af104726fcaf924581e71afbdd73\": container with ID starting with a28ebc9fc60a5b5f4c6d8022f7888aae1167af104726fcaf924581e71afbdd73 not found: ID does not exist" containerID="a28ebc9fc60a5b5f4c6d8022f7888aae1167af104726fcaf924581e71afbdd73"
Jan 21 11:18:52 crc kubenswrapper[4881]: I0121 11:18:52.885215 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a28ebc9fc60a5b5f4c6d8022f7888aae1167af104726fcaf924581e71afbdd73"} err="failed to get container status \"a28ebc9fc60a5b5f4c6d8022f7888aae1167af104726fcaf924581e71afbdd73\": rpc error: code = NotFound desc = could not find container \"a28ebc9fc60a5b5f4c6d8022f7888aae1167af104726fcaf924581e71afbdd73\": container with ID starting with a28ebc9fc60a5b5f4c6d8022f7888aae1167af104726fcaf924581e71afbdd73 not found: ID does not exist"
Jan 21 11:18:52 crc kubenswrapper[4881]: I0121 11:18:52.885267 4881 scope.go:117] "RemoveContainer" containerID="f24832aadef02f1c7ff84c5f003b7d3cb18bb769662ee1a6581898a328c41e06"
Jan 21 11:18:52 crc kubenswrapper[4881]: E0121 11:18:52.892045 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f24832aadef02f1c7ff84c5f003b7d3cb18bb769662ee1a6581898a328c41e06\": container with ID starting with f24832aadef02f1c7ff84c5f003b7d3cb18bb769662ee1a6581898a328c41e06 not found: ID does not exist" containerID="f24832aadef02f1c7ff84c5f003b7d3cb18bb769662ee1a6581898a328c41e06"
Jan 21 11:18:52 crc kubenswrapper[4881]: I0121 11:18:52.892137 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f24832aadef02f1c7ff84c5f003b7d3cb18bb769662ee1a6581898a328c41e06"} err="failed to get container status \"f24832aadef02f1c7ff84c5f003b7d3cb18bb769662ee1a6581898a328c41e06\": rpc error: code = NotFound desc = could not find container \"f24832aadef02f1c7ff84c5f003b7d3cb18bb769662ee1a6581898a328c41e06\": container with ID starting with f24832aadef02f1c7ff84c5f003b7d3cb18bb769662ee1a6581898a328c41e06 not found: ID does not exist"
Jan 21 11:18:52 crc kubenswrapper[4881]: I0121 11:18:52.893798 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-84cb884cf9-wmwx8"]
Jan 21 11:18:52 crc kubenswrapper[4881]: I0121 11:18:52.915693 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-84cb884cf9-wmwx8"]
Jan 21 11:18:53 crc kubenswrapper[4881]: I0121 11:18:53.064471 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-db-sync-t4mx7"]
Jan 21 11:18:53 crc kubenswrapper[4881]: E0121 11:18:53.065363 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62435f30-e8fc-4fcd-8b96-4a604439965e" containerName="dnsmasq-dns"
Jan 21 11:18:53 crc kubenswrapper[4881]: I0121 11:18:53.065386 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="62435f30-e8fc-4fcd-8b96-4a604439965e" containerName="dnsmasq-dns"
Jan 21 11:18:55 crc kubenswrapper[4881]: E0121 11:18:53.066137 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62435f30-e8fc-4fcd-8b96-4a604439965e" containerName="init"
Jan 21 11:18:55 crc kubenswrapper[4881]: I0121 11:18:53.066168 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="62435f30-e8fc-4fcd-8b96-4a604439965e" containerName="init"
Jan 21 11:18:55 crc kubenswrapper[4881]: I0121 11:18:53.066627 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="62435f30-e8fc-4fcd-8b96-4a604439965e" containerName="dnsmasq-dns"
Jan 21 11:18:55 crc kubenswrapper[4881]: I0121 11:18:53.068763 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-db-sync-t4mx7"
Jan 21 11:18:55 crc kubenswrapper[4881]: I0121 11:18:53.081086 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-watcher-dockercfg-vlkhp"
Jan 21 11:18:55 crc kubenswrapper[4881]: I0121 11:18:53.081405 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-config-data"
Jan 21 11:18:55 crc kubenswrapper[4881]: I0121 11:18:53.105448 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-db-sync-t4mx7"]
Jan 21 11:18:55 crc kubenswrapper[4881]: I0121 11:18:53.186484 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-ktp2w"]
Jan 21 11:18:55 crc kubenswrapper[4881]: I0121 11:18:53.188181 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-ktp2w"
Jan 21 11:18:55 crc kubenswrapper[4881]: I0121 11:18:53.202254 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-ktp2w"]
Jan 21 11:18:55 crc kubenswrapper[4881]: I0121 11:18:53.252976 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc7e598c-b449-4e8c-9214-44e27cb45e53-config-data\") pod \"watcher-db-sync-t4mx7\" (UID: \"bc7e598c-b449-4e8c-9214-44e27cb45e53\") " pod="openstack/watcher-db-sync-t4mx7"
Jan 21 11:18:55 crc kubenswrapper[4881]: I0121 11:18:53.253028 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/bc7e598c-b449-4e8c-9214-44e27cb45e53-db-sync-config-data\") pod \"watcher-db-sync-t4mx7\" (UID: \"bc7e598c-b449-4e8c-9214-44e27cb45e53\") " pod="openstack/watcher-db-sync-t4mx7"
Jan 21 11:18:55 crc kubenswrapper[4881]: I0121 11:18:53.253076 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gd8cs\" (UniqueName: \"kubernetes.io/projected/bc7e598c-b449-4e8c-9214-44e27cb45e53-kube-api-access-gd8cs\") pod \"watcher-db-sync-t4mx7\" (UID: \"bc7e598c-b449-4e8c-9214-44e27cb45e53\") " pod="openstack/watcher-db-sync-t4mx7"
Jan 21 11:18:55 crc kubenswrapper[4881]: I0121 11:18:53.253141 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc7e598c-b449-4e8c-9214-44e27cb45e53-combined-ca-bundle\") pod \"watcher-db-sync-t4mx7\" (UID: \"bc7e598c-b449-4e8c-9214-44e27cb45e53\") " pod="openstack/watcher-db-sync-t4mx7"
Jan 21 11:18:55 crc kubenswrapper[4881]: I0121 11:18:53.298546 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-r9r4z"]
Jan 21 11:18:55 crc kubenswrapper[4881]: I0121 11:18:53.303529 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-r9r4z"
Jan 21 11:18:55 crc kubenswrapper[4881]: I0121 11:18:53.331024 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="62435f30-e8fc-4fcd-8b96-4a604439965e" path="/var/lib/kubelet/pods/62435f30-e8fc-4fcd-8b96-4a604439965e/volumes"
Jan 21 11:18:55 crc kubenswrapper[4881]: I0121 11:18:53.355812 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc7e598c-b449-4e8c-9214-44e27cb45e53-config-data\") pod \"watcher-db-sync-t4mx7\" (UID: \"bc7e598c-b449-4e8c-9214-44e27cb45e53\") " pod="openstack/watcher-db-sync-t4mx7"
Jan 21 11:18:55 crc kubenswrapper[4881]: I0121 11:18:53.355872 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z5g6p\" (UniqueName: \"kubernetes.io/projected/5d72ab14-b1c2-4382-847a-00eb254ac958-kube-api-access-z5g6p\") pod \"cinder-db-create-ktp2w\" (UID: \"5d72ab14-b1c2-4382-847a-00eb254ac958\") " pod="openstack/cinder-db-create-ktp2w"
Jan 21 11:18:55 crc kubenswrapper[4881]: I0121 11:18:53.355905 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/bc7e598c-b449-4e8c-9214-44e27cb45e53-db-sync-config-data\") pod \"watcher-db-sync-t4mx7\" (UID: \"bc7e598c-b449-4e8c-9214-44e27cb45e53\") " pod="openstack/watcher-db-sync-t4mx7"
Jan 21 11:18:55 crc kubenswrapper[4881]: I0121 11:18:53.355940 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gd8cs\" (UniqueName: \"kubernetes.io/projected/bc7e598c-b449-4e8c-9214-44e27cb45e53-kube-api-access-gd8cs\") pod \"watcher-db-sync-t4mx7\" (UID: \"bc7e598c-b449-4e8c-9214-44e27cb45e53\") " pod="openstack/watcher-db-sync-t4mx7"
Jan 21 11:18:55 crc kubenswrapper[4881]: I0121 11:18:53.355977 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5d72ab14-b1c2-4382-847a-00eb254ac958-operator-scripts\") pod \"cinder-db-create-ktp2w\" (UID: \"5d72ab14-b1c2-4382-847a-00eb254ac958\") " pod="openstack/cinder-db-create-ktp2w"
Jan 21 11:18:55 crc kubenswrapper[4881]: I0121 11:18:53.356014 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc7e598c-b449-4e8c-9214-44e27cb45e53-combined-ca-bundle\") pod \"watcher-db-sync-t4mx7\" (UID: \"bc7e598c-b449-4e8c-9214-44e27cb45e53\") " pod="openstack/watcher-db-sync-t4mx7"
Jan 21 11:18:55 crc kubenswrapper[4881]: I0121 11:18:53.359507 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-r9r4z"]
Jan 21 11:18:55 crc kubenswrapper[4881]: I0121 11:18:53.366806 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc7e598c-b449-4e8c-9214-44e27cb45e53-combined-ca-bundle\") pod \"watcher-db-sync-t4mx7\" (UID: \"bc7e598c-b449-4e8c-9214-44e27cb45e53\") " pod="openstack/watcher-db-sync-t4mx7"
Jan 21 11:18:55 crc kubenswrapper[4881]: I0121 11:18:53.367535 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc7e598c-b449-4e8c-9214-44e27cb45e53-config-data\") pod \"watcher-db-sync-t4mx7\" (UID: \"bc7e598c-b449-4e8c-9214-44e27cb45e53\") " pod="openstack/watcher-db-sync-t4mx7"
Jan 21 11:18:55 crc kubenswrapper[4881]: I0121 11:18:53.368232 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/bc7e598c-b449-4e8c-9214-44e27cb45e53-db-sync-config-data\") pod \"watcher-db-sync-t4mx7\" (UID: \"bc7e598c-b449-4e8c-9214-44e27cb45e53\") " pod="openstack/watcher-db-sync-t4mx7"
Jan 21 11:18:55 crc kubenswrapper[4881]: I0121 11:18:53.377474 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-a5aa-account-create-update-j2nc8"]
Jan 21 11:18:55 crc kubenswrapper[4881]: I0121 11:18:53.379657 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-a5aa-account-create-update-j2nc8"
Jan 21 11:18:55 crc kubenswrapper[4881]: I0121 11:18:53.389300 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret"
Jan 21 11:18:55 crc kubenswrapper[4881]: I0121 11:18:53.398290 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-a5aa-account-create-update-j2nc8"]
Jan 21 11:18:55 crc kubenswrapper[4881]: I0121 11:18:53.409589 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gd8cs\" (UniqueName: \"kubernetes.io/projected/bc7e598c-b449-4e8c-9214-44e27cb45e53-kube-api-access-gd8cs\") pod \"watcher-db-sync-t4mx7\" (UID: \"bc7e598c-b449-4e8c-9214-44e27cb45e53\") " pod="openstack/watcher-db-sync-t4mx7"
Jan 21 11:18:55 crc kubenswrapper[4881]: I0121 11:18:53.417868 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-db-sync-t4mx7"
Jan 21 11:18:55 crc kubenswrapper[4881]: I0121 11:18:53.458775 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z5g6p\" (UniqueName: \"kubernetes.io/projected/5d72ab14-b1c2-4382-847a-00eb254ac958-kube-api-access-z5g6p\") pod \"cinder-db-create-ktp2w\" (UID: \"5d72ab14-b1c2-4382-847a-00eb254ac958\") " pod="openstack/cinder-db-create-ktp2w"
Jan 21 11:18:55 crc kubenswrapper[4881]: I0121 11:18:53.458873 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c8cfe009-eba2-4713-b50f-cc334b4ca691-operator-scripts\") pod \"barbican-db-create-r9r4z\" (UID: \"c8cfe009-eba2-4713-b50f-cc334b4ca691\") " pod="openstack/barbican-db-create-r9r4z"
Jan 21 11:18:55 crc kubenswrapper[4881]: I0121 11:18:53.458944 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5d72ab14-b1c2-4382-847a-00eb254ac958-operator-scripts\") pod \"cinder-db-create-ktp2w\" (UID: \"5d72ab14-b1c2-4382-847a-00eb254ac958\") " pod="openstack/cinder-db-create-ktp2w"
Jan 21 11:18:55 crc kubenswrapper[4881]: I0121 11:18:53.459017 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qfm9d\" (UniqueName: \"kubernetes.io/projected/c8cfe009-eba2-4713-b50f-cc334b4ca691-kube-api-access-qfm9d\") pod \"barbican-db-create-r9r4z\" (UID: \"c8cfe009-eba2-4713-b50f-cc334b4ca691\") " pod="openstack/barbican-db-create-r9r4z"
Jan 21 11:18:55 crc kubenswrapper[4881]: I0121 11:18:53.460913 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5d72ab14-b1c2-4382-847a-00eb254ac958-operator-scripts\") pod \"cinder-db-create-ktp2w\" (UID: \"5d72ab14-b1c2-4382-847a-00eb254ac958\") " pod="openstack/cinder-db-create-ktp2w"
Jan 21 11:18:55 crc kubenswrapper[4881]: I0121 11:18:53.513657 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z5g6p\" (UniqueName: \"kubernetes.io/projected/5d72ab14-b1c2-4382-847a-00eb254ac958-kube-api-access-z5g6p\") pod \"cinder-db-create-ktp2w\" (UID: \"5d72ab14-b1c2-4382-847a-00eb254ac958\") " pod="openstack/cinder-db-create-ktp2w"
Jan 21 11:18:55 crc kubenswrapper[4881]: I0121 11:18:53.560260 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-44pdb"]
Jan 21 11:18:55 crc kubenswrapper[4881]: I0121 11:18:53.562156 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-44pdb"
Jan 21 11:18:55 crc kubenswrapper[4881]: I0121 11:18:53.567531 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c8cfe009-eba2-4713-b50f-cc334b4ca691-operator-scripts\") pod \"barbican-db-create-r9r4z\" (UID: \"c8cfe009-eba2-4713-b50f-cc334b4ca691\") " pod="openstack/barbican-db-create-r9r4z"
Jan 21 11:18:55 crc kubenswrapper[4881]: I0121 11:18:53.567647 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nm6lx\" (UniqueName: \"kubernetes.io/projected/ec3ba10e-2cbd-4350-9014-27a92932849f-kube-api-access-nm6lx\") pod \"barbican-a5aa-account-create-update-j2nc8\" (UID: \"ec3ba10e-2cbd-4350-9014-27a92932849f\") " pod="openstack/barbican-a5aa-account-create-update-j2nc8"
Jan 21 11:18:55 crc kubenswrapper[4881]: I0121 11:18:53.567755 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ec3ba10e-2cbd-4350-9014-27a92932849f-operator-scripts\") pod \"barbican-a5aa-account-create-update-j2nc8\" (UID: \"ec3ba10e-2cbd-4350-9014-27a92932849f\") " pod="openstack/barbican-a5aa-account-create-update-j2nc8"
Jan 21 11:18:55 crc kubenswrapper[4881]: I0121 11:18:53.567843 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qfm9d\" (UniqueName: \"kubernetes.io/projected/c8cfe009-eba2-4713-b50f-cc334b4ca691-kube-api-access-qfm9d\") pod \"barbican-db-create-r9r4z\" (UID: \"c8cfe009-eba2-4713-b50f-cc334b4ca691\") " pod="openstack/barbican-db-create-r9r4z"
Jan 21 11:18:55 crc kubenswrapper[4881]: I0121 11:18:53.569297 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c8cfe009-eba2-4713-b50f-cc334b4ca691-operator-scripts\") pod \"barbican-db-create-r9r4z\" (UID: \"c8cfe009-eba2-4713-b50f-cc334b4ca691\") " pod="openstack/barbican-db-create-r9r4z"
Jan 21 11:18:55 crc kubenswrapper[4881]: I0121 11:18:53.580756 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-44pdb"]
Jan 21 11:18:55 crc kubenswrapper[4881]: I0121 11:18:53.584031 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data"
Jan 21 11:18:55 crc kubenswrapper[4881]: I0121 11:18:53.584344 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone"
Jan 21 11:18:55 crc kubenswrapper[4881]: I0121 11:18:53.584640 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts"
Jan 21 11:18:55 crc kubenswrapper[4881]: I0121 11:18:53.591315 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-j54nk"
Jan 21 11:18:55 crc kubenswrapper[4881]: I0121 11:18:53.629589 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qfm9d\" (UniqueName: \"kubernetes.io/projected/c8cfe009-eba2-4713-b50f-cc334b4ca691-kube-api-access-qfm9d\") pod \"barbican-db-create-r9r4z\" (UID: \"c8cfe009-eba2-4713-b50f-cc334b4ca691\") " pod="openstack/barbican-db-create-r9r4z"
Jan 21 11:18:55 crc kubenswrapper[4881]: I0121 11:18:53.669560 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34efcb76-01fb-490b-88c0-a4ee1363a01e-config-data\") pod \"keystone-db-sync-44pdb\" (UID: \"34efcb76-01fb-490b-88c0-a4ee1363a01e\") " pod="openstack/keystone-db-sync-44pdb"
Jan 21 11:18:55 crc kubenswrapper[4881]: I0121 11:18:53.669621 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nm6lx\" (UniqueName: \"kubernetes.io/projected/ec3ba10e-2cbd-4350-9014-27a92932849f-kube-api-access-nm6lx\") pod \"barbican-a5aa-account-create-update-j2nc8\" (UID: \"ec3ba10e-2cbd-4350-9014-27a92932849f\") " pod="openstack/barbican-a5aa-account-create-update-j2nc8"
Jan 21 11:18:55 crc kubenswrapper[4881]: I0121 11:18:53.669668 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ec3ba10e-2cbd-4350-9014-27a92932849f-operator-scripts\") pod \"barbican-a5aa-account-create-update-j2nc8\" (UID: \"ec3ba10e-2cbd-4350-9014-27a92932849f\") " pod="openstack/barbican-a5aa-account-create-update-j2nc8"
Jan 21 11:18:55 crc kubenswrapper[4881]: I0121 11:18:53.669695 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34efcb76-01fb-490b-88c0-a4ee1363a01e-combined-ca-bundle\") pod \"keystone-db-sync-44pdb\" (UID: \"34efcb76-01fb-490b-88c0-a4ee1363a01e\") " pod="openstack/keystone-db-sync-44pdb"
Jan 21 11:18:55 crc kubenswrapper[4881]: I0121 11:18:53.669821 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r5fnp\" (UniqueName: \"kubernetes.io/projected/34efcb76-01fb-490b-88c0-a4ee1363a01e-kube-api-access-r5fnp\") pod \"keystone-db-sync-44pdb\" (UID: \"34efcb76-01fb-490b-88c0-a4ee1363a01e\") " pod="openstack/keystone-db-sync-44pdb"
Jan 21 11:18:55 crc kubenswrapper[4881]: I0121 11:18:53.674198 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-r9r4z"
Jan 21 11:18:55 crc kubenswrapper[4881]: I0121 11:18:53.675043 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ec3ba10e-2cbd-4350-9014-27a92932849f-operator-scripts\") pod \"barbican-a5aa-account-create-update-j2nc8\" (UID: \"ec3ba10e-2cbd-4350-9014-27a92932849f\") " pod="openstack/barbican-a5aa-account-create-update-j2nc8"
Jan 21 11:18:55 crc kubenswrapper[4881]: I0121 11:18:53.718748 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nm6lx\" (UniqueName: \"kubernetes.io/projected/ec3ba10e-2cbd-4350-9014-27a92932849f-kube-api-access-nm6lx\") pod \"barbican-a5aa-account-create-update-j2nc8\" (UID: \"ec3ba10e-2cbd-4350-9014-27a92932849f\") " pod="openstack/barbican-a5aa-account-create-update-j2nc8"
Jan 21 11:18:55 crc kubenswrapper[4881]: I0121 11:18:53.786106 4881 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="cd1973a5-773b-438b-aab7-709fb828324d" containerName="galera" probeResult="failure" output="command timed out"
Jan 21 11:18:55 crc kubenswrapper[4881]: I0121 11:18:53.790330 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34efcb76-01fb-490b-88c0-a4ee1363a01e-config-data\") pod \"keystone-db-sync-44pdb\" (UID: \"34efcb76-01fb-490b-88c0-a4ee1363a01e\") " pod="openstack/keystone-db-sync-44pdb"
Jan 21 11:18:55 crc kubenswrapper[4881]: I0121 11:18:53.790430 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34efcb76-01fb-490b-88c0-a4ee1363a01e-combined-ca-bundle\") pod \"keystone-db-sync-44pdb\" (UID: \"34efcb76-01fb-490b-88c0-a4ee1363a01e\") " pod="openstack/keystone-db-sync-44pdb"
Jan 21 11:18:55 crc kubenswrapper[4881]: I0121 11:18:53.790566 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r5fnp\" (UniqueName: \"kubernetes.io/projected/34efcb76-01fb-490b-88c0-a4ee1363a01e-kube-api-access-r5fnp\") pod \"keystone-db-sync-44pdb\" (UID: \"34efcb76-01fb-490b-88c0-a4ee1363a01e\") " pod="openstack/keystone-db-sync-44pdb"
Jan 21 11:18:55 crc kubenswrapper[4881]: I0121 11:18:53.798890 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-cell1-galera-0" podUID="cd1973a5-773b-438b-aab7-709fb828324d" containerName="galera" probeResult="failure" output="command timed out"
Jan 21 11:18:55 crc kubenswrapper[4881]: I0121 11:18:53.803802 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34efcb76-01fb-490b-88c0-a4ee1363a01e-combined-ca-bundle\") pod \"keystone-db-sync-44pdb\" (UID: \"34efcb76-01fb-490b-88c0-a4ee1363a01e\") " pod="openstack/keystone-db-sync-44pdb"
Jan 21 11:18:55 crc kubenswrapper[4881]: I0121 11:18:53.808206 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data"
Jan 21 11:18:55 crc kubenswrapper[4881]: I0121 11:18:53.816948 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-ktp2w"
Jan 21 11:18:55 crc kubenswrapper[4881]: I0121 11:18:53.821875 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34efcb76-01fb-490b-88c0-a4ee1363a01e-config-data\") pod \"keystone-db-sync-44pdb\" (UID: \"34efcb76-01fb-490b-88c0-a4ee1363a01e\") " pod="openstack/keystone-db-sync-44pdb"
Jan 21 11:18:55 crc kubenswrapper[4881]: I0121 11:18:54.173687 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-a5aa-account-create-update-j2nc8"
Jan 21 11:18:55 crc kubenswrapper[4881]: I0121 11:18:54.230648 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r5fnp\" (UniqueName: \"kubernetes.io/projected/34efcb76-01fb-490b-88c0-a4ee1363a01e-kube-api-access-r5fnp\") pod \"keystone-db-sync-44pdb\" (UID: \"34efcb76-01fb-490b-88c0-a4ee1363a01e\") " pod="openstack/keystone-db-sync-44pdb"
Jan 21 11:18:55 crc kubenswrapper[4881]: I0121 11:18:54.520256 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-j54nk"
Jan 21 11:18:55 crc kubenswrapper[4881]: I0121 11:18:54.528531 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-44pdb"
Jan 21 11:18:55 crc kubenswrapper[4881]: I0121 11:18:54.562860 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-c7b7-account-create-update-dcz9r"]
Jan 21 11:18:55 crc kubenswrapper[4881]: I0121 11:18:54.564353 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-c7b7-account-create-update-dcz9r"
Jan 21 11:18:55 crc kubenswrapper[4881]: I0121 11:18:54.576288 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret"
Jan 21 11:18:55 crc kubenswrapper[4881]: I0121 11:18:54.689357 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-c7b7-account-create-update-dcz9r"]
Jan 21 11:18:55 crc kubenswrapper[4881]: I0121 11:18:54.713482 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0145b8f9-5452-4f0e-819c-61fbb8badffb-operator-scripts\") pod \"cinder-c7b7-account-create-update-dcz9r\" (UID: \"0145b8f9-5452-4f0e-819c-61fbb8badffb\") " pod="openstack/cinder-c7b7-account-create-update-dcz9r"
Jan 21 11:18:55 crc kubenswrapper[4881]: I0121 11:18:54.713573 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wd8bn\" (UniqueName: \"kubernetes.io/projected/0145b8f9-5452-4f0e-819c-61fbb8badffb-kube-api-access-wd8bn\") pod \"cinder-c7b7-account-create-update-dcz9r\" (UID: \"0145b8f9-5452-4f0e-819c-61fbb8badffb\") " pod="openstack/cinder-c7b7-account-create-update-dcz9r"
Jan 21 11:18:55 crc kubenswrapper[4881]: I0121 11:18:54.779610 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"c5ae3126-d6d3-4268-8e35-e216eabcc6f4","Type":"ContainerStarted","Data":"8325ef681bcdbc9f213b1b50d5070cda09f322843e0e7d334a000739ac240fa4"}
Jan 21 11:18:55 crc kubenswrapper[4881]: I0121 11:18:54.814996 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wd8bn\" (UniqueName: \"kubernetes.io/projected/0145b8f9-5452-4f0e-819c-61fbb8badffb-kube-api-access-wd8bn\") pod \"cinder-c7b7-account-create-update-dcz9r\" (UID: \"0145b8f9-5452-4f0e-819c-61fbb8badffb\") " pod="openstack/cinder-c7b7-account-create-update-dcz9r"
Jan 21 11:18:55 crc kubenswrapper[4881]: I0121 11:18:54.815206 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0145b8f9-5452-4f0e-819c-61fbb8badffb-operator-scripts\") pod \"cinder-c7b7-account-create-update-dcz9r\" (UID: \"0145b8f9-5452-4f0e-819c-61fbb8badffb\") " pod="openstack/cinder-c7b7-account-create-update-dcz9r"
Jan 21 11:18:55 crc kubenswrapper[4881]: I0121 11:18:54.819086 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0145b8f9-5452-4f0e-819c-61fbb8badffb-operator-scripts\") pod \"cinder-c7b7-account-create-update-dcz9r\" (UID: \"0145b8f9-5452-4f0e-819c-61fbb8badffb\") " pod="openstack/cinder-c7b7-account-create-update-dcz9r"
Jan 21 11:18:55 crc kubenswrapper[4881]: I0121 11:18:54.882983 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wd8bn\" (UniqueName: \"kubernetes.io/projected/0145b8f9-5452-4f0e-819c-61fbb8badffb-kube-api-access-wd8bn\") pod \"cinder-c7b7-account-create-update-dcz9r\" (UID: \"0145b8f9-5452-4f0e-819c-61fbb8badffb\") " pod="openstack/cinder-c7b7-account-create-update-dcz9r"
Jan 21 11:18:55 crc kubenswrapper[4881]: I0121 11:18:55.133832 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-c7b7-account-create-update-dcz9r"
Jan 21 11:18:55 crc kubenswrapper[4881]: I0121 11:18:55.871028 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-82x9l"]
Jan 21 11:18:55 crc kubenswrapper[4881]: I0121 11:18:55.874132 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-82x9l"
Jan 21 11:18:55 crc kubenswrapper[4881]: I0121 11:18:55.894223 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-82x9l"]
Jan 21 11:18:55 crc kubenswrapper[4881]: I0121 11:18:55.942601 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7j4f4\" (UniqueName: \"kubernetes.io/projected/b4b2b4e9-304c-47ae-939a-9d938d012b90-kube-api-access-7j4f4\") pod \"glance-db-create-82x9l\" (UID: \"b4b2b4e9-304c-47ae-939a-9d938d012b90\") " pod="openstack/glance-db-create-82x9l"
Jan 21 11:18:55 crc kubenswrapper[4881]: I0121 11:18:55.942745 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b4b2b4e9-304c-47ae-939a-9d938d012b90-operator-scripts\") pod \"glance-db-create-82x9l\" (UID: \"b4b2b4e9-304c-47ae-939a-9d938d012b90\") " pod="openstack/glance-db-create-82x9l"
Jan 21 11:18:55 crc kubenswrapper[4881]: I0121 11:18:55.972604 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-3649-account-create-update-pqj5m"]
Jan 21 11:18:55 crc kubenswrapper[4881]: I0121 11:18:55.974486 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-3649-account-create-update-pqj5m"
Jan 21 11:18:55 crc kubenswrapper[4881]: I0121 11:18:55.978520 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret"
Jan 21 11:18:55 crc kubenswrapper[4881]: I0121 11:18:55.995200 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-3649-account-create-update-pqj5m"]
Jan 21 11:18:56 crc kubenswrapper[4881]: I0121 11:18:56.047625 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7j4f4\" (UniqueName: \"kubernetes.io/projected/b4b2b4e9-304c-47ae-939a-9d938d012b90-kube-api-access-7j4f4\") pod \"glance-db-create-82x9l\" (UID: \"b4b2b4e9-304c-47ae-939a-9d938d012b90\") " pod="openstack/glance-db-create-82x9l"
Jan 21 11:18:56 crc kubenswrapper[4881]: I0121 11:18:56.049234 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6f6f337c-95ec-448f-ab58-e7e7fe7abfd4-operator-scripts\") pod \"glance-3649-account-create-update-pqj5m\" (UID: \"6f6f337c-95ec-448f-ab58-e7e7fe7abfd4\") " pod="openstack/glance-3649-account-create-update-pqj5m"
Jan 21 11:18:56 crc kubenswrapper[4881]: I0121 11:18:56.049380 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fxqzm\" (UniqueName: \"kubernetes.io/projected/6f6f337c-95ec-448f-ab58-e7e7fe7abfd4-kube-api-access-fxqzm\") pod \"glance-3649-account-create-update-pqj5m\" (UID: \"6f6f337c-95ec-448f-ab58-e7e7fe7abfd4\") " pod="openstack/glance-3649-account-create-update-pqj5m"
Jan 21 11:18:56 crc kubenswrapper[4881]: I0121 11:18:56.049590 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b4b2b4e9-304c-47ae-939a-9d938d012b90-operator-scripts\") pod \"glance-db-create-82x9l\" (UID: \"b4b2b4e9-304c-47ae-939a-9d938d012b90\") " pod="openstack/glance-db-create-82x9l"
Jan 21 11:18:56 crc kubenswrapper[4881]: I0121 11:18:56.051260 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b4b2b4e9-304c-47ae-939a-9d938d012b90-operator-scripts\") pod \"glance-db-create-82x9l\" (UID: \"b4b2b4e9-304c-47ae-939a-9d938d012b90\") " pod="openstack/glance-db-create-82x9l"
Jan 21 11:18:56 crc kubenswrapper[4881]: I0121 11:18:56.097473 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7j4f4\" (UniqueName: \"kubernetes.io/projected/b4b2b4e9-304c-47ae-939a-9d938d012b90-kube-api-access-7j4f4\") pod \"glance-db-create-82x9l\" (UID: \"b4b2b4e9-304c-47ae-939a-9d938d012b90\") " pod="openstack/glance-db-create-82x9l"
Jan 21 11:18:56 crc kubenswrapper[4881]: I0121 11:18:56.151419 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6f6f337c-95ec-448f-ab58-e7e7fe7abfd4-operator-scripts\") pod \"glance-3649-account-create-update-pqj5m\" (UID: \"6f6f337c-95ec-448f-ab58-e7e7fe7abfd4\") " pod="openstack/glance-3649-account-create-update-pqj5m"
Jan 21 11:18:56 crc kubenswrapper[4881]: I0121 11:18:56.151468 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fxqzm\" (UniqueName: \"kubernetes.io/projected/6f6f337c-95ec-448f-ab58-e7e7fe7abfd4-kube-api-access-fxqzm\") pod \"glance-3649-account-create-update-pqj5m\" (UID: \"6f6f337c-95ec-448f-ab58-e7e7fe7abfd4\") " pod="openstack/glance-3649-account-create-update-pqj5m"
Jan 21 11:18:56 crc kubenswrapper[4881]: I0121 11:18:56.152679 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6f6f337c-95ec-448f-ab58-e7e7fe7abfd4-operator-scripts\") pod \"glance-3649-account-create-update-pqj5m\" (UID: \"6f6f337c-95ec-448f-ab58-e7e7fe7abfd4\") " pod="openstack/glance-3649-account-create-update-pqj5m"
Jan 21 11:18:56 crc kubenswrapper[4881]: I0121 11:18:56.203464 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-82x9l"
Jan 21 11:18:56 crc kubenswrapper[4881]: I0121 11:18:56.219441 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-b544m"]
Jan 21 11:18:56 crc kubenswrapper[4881]: I0121 11:18:56.221399 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-b544m"
Jan 21 11:18:56 crc kubenswrapper[4881]: I0121 11:18:56.246037 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-170f-account-create-update-8bt4l"]
Jan 21 11:18:56 crc kubenswrapper[4881]: I0121 11:18:56.247889 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-170f-account-create-update-8bt4l"
Jan 21 11:18:56 crc kubenswrapper[4881]: I0121 11:18:56.250420 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fxqzm\" (UniqueName: \"kubernetes.io/projected/6f6f337c-95ec-448f-ab58-e7e7fe7abfd4-kube-api-access-fxqzm\") pod \"glance-3649-account-create-update-pqj5m\" (UID: \"6f6f337c-95ec-448f-ab58-e7e7fe7abfd4\") " pod="openstack/glance-3649-account-create-update-pqj5m"
Jan 21 11:18:56 crc kubenswrapper[4881]: I0121 11:18:56.259634 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret"
Jan 21 11:18:56 crc kubenswrapper[4881]: I0121 11:18:56.270289 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-b544m"]
Jan 21 11:18:56 crc kubenswrapper[4881]: I0121 11:18:56.285379 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-170f-account-create-update-8bt4l"]
Jan 21 11:18:56 crc kubenswrapper[4881]: I0121 11:18:56.307483 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-3649-account-create-update-pqj5m"
Jan 21 11:18:56 crc kubenswrapper[4881]: I0121 11:18:56.658217 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gmx6h\" (UniqueName: \"kubernetes.io/projected/c837cab9-43a5-4b84-a0bd-d915bca31600-kube-api-access-gmx6h\") pod \"neutron-170f-account-create-update-8bt4l\" (UID: \"c837cab9-43a5-4b84-a0bd-d915bca31600\") " pod="openstack/neutron-170f-account-create-update-8bt4l"
Jan 21 11:18:56 crc kubenswrapper[4881]: I0121 11:18:56.658293 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/760e8dbf-d827-42ef-969c-1c7409f7ac20-operator-scripts\") pod \"neutron-db-create-b544m\" (UID: \"760e8dbf-d827-42ef-969c-1c7409f7ac20\") " pod="openstack/neutron-db-create-b544m"
Jan 21 11:18:56 crc kubenswrapper[4881]: I0121 11:18:56.658336 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c837cab9-43a5-4b84-a0bd-d915bca31600-operator-scripts\") pod \"neutron-170f-account-create-update-8bt4l\" (UID: \"c837cab9-43a5-4b84-a0bd-d915bca31600\") " pod="openstack/neutron-170f-account-create-update-8bt4l"
Jan 21 11:18:56 crc kubenswrapper[4881]: I0121 11:18:56.658428 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ft5l8\" (UniqueName: \"kubernetes.io/projected/760e8dbf-d827-42ef-969c-1c7409f7ac20-kube-api-access-ft5l8\") pod \"neutron-db-create-b544m\" (UID: \"760e8dbf-d827-42ef-969c-1c7409f7ac20\") " pod="openstack/neutron-db-create-b544m"
Jan 21 11:18:56 crc kubenswrapper[4881]: I0121 11:18:56.759852 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ft5l8\" (UniqueName: \"kubernetes.io/projected/760e8dbf-d827-42ef-969c-1c7409f7ac20-kube-api-access-ft5l8\") pod \"neutron-db-create-b544m\" (UID: \"760e8dbf-d827-42ef-969c-1c7409f7ac20\") " pod="openstack/neutron-db-create-b544m"
Jan 21 11:18:56 crc kubenswrapper[4881]: I0121 11:18:56.760337 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gmx6h\" (UniqueName: \"kubernetes.io/projected/c837cab9-43a5-4b84-a0bd-d915bca31600-kube-api-access-gmx6h\") pod \"neutron-170f-account-create-update-8bt4l\" (UID: \"c837cab9-43a5-4b84-a0bd-d915bca31600\") " pod="openstack/neutron-170f-account-create-update-8bt4l"
Jan 21 11:18:56 crc kubenswrapper[4881]: I0121 11:18:56.760376 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/760e8dbf-d827-42ef-969c-1c7409f7ac20-operator-scripts\") pod \"neutron-db-create-b544m\" (UID: \"760e8dbf-d827-42ef-969c-1c7409f7ac20\") " pod="openstack/neutron-db-create-b544m"
Jan 21 11:18:56 crc kubenswrapper[4881]: I0121 11:18:56.760421 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c837cab9-43a5-4b84-a0bd-d915bca31600-operator-scripts\") pod \"neutron-170f-account-create-update-8bt4l\" (UID: \"c837cab9-43a5-4b84-a0bd-d915bca31600\") " pod="openstack/neutron-170f-account-create-update-8bt4l"
Jan 21 11:18:56 crc kubenswrapper[4881]: I0121 11:18:56.761309 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c837cab9-43a5-4b84-a0bd-d915bca31600-operator-scripts\") pod \"neutron-170f-account-create-update-8bt4l\" (UID: \"c837cab9-43a5-4b84-a0bd-d915bca31600\") " pod="openstack/neutron-170f-account-create-update-8bt4l"
Jan 21 11:18:56 crc kubenswrapper[4881]: I0121 11:18:56.762008 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/760e8dbf-d827-42ef-969c-1c7409f7ac20-operator-scripts\") pod \"neutron-db-create-b544m\" (UID: \"760e8dbf-d827-42ef-969c-1c7409f7ac20\") " pod="openstack/neutron-db-create-b544m"
Jan 21 11:18:56 crc kubenswrapper[4881]: I0121 11:18:56.799672 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ft5l8\" (UniqueName: \"kubernetes.io/projected/760e8dbf-d827-42ef-969c-1c7409f7ac20-kube-api-access-ft5l8\") pod \"neutron-db-create-b544m\" (UID: \"760e8dbf-d827-42ef-969c-1c7409f7ac20\") " pod="openstack/neutron-db-create-b544m"
Jan 21 11:18:56 crc kubenswrapper[4881]: I0121 11:18:56.807668 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gmx6h\" (UniqueName: \"kubernetes.io/projected/c837cab9-43a5-4b84-a0bd-d915bca31600-kube-api-access-gmx6h\") pod \"neutron-170f-account-create-update-8bt4l\" (UID: \"c837cab9-43a5-4b84-a0bd-d915bca31600\") " pod="openstack/neutron-170f-account-create-update-8bt4l"
Jan 21 11:18:56 crc kubenswrapper[4881]: I0121 11:18:56.948623 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-b544m"
Jan 21 11:18:56 crc kubenswrapper[4881]: I0121 11:18:56.998093 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-170f-account-create-update-8bt4l"
Jan 21 11:18:57 crc kubenswrapper[4881]: I0121 11:18:57.097899 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-44pdb"]
Jan 21 11:18:57 crc kubenswrapper[4881]: I0121 11:18:57.155814 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret"
Jan 21 11:18:57 crc kubenswrapper[4881]: I0121 11:18:57.157389 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-r9r4z"]
Jan 21 11:18:57 crc kubenswrapper[4881]: I0121 11:18:57.181080 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-a5aa-account-create-update-j2nc8"]
Jan 21 11:18:57 crc kubenswrapper[4881]: I0121 11:18:57.197866 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-c7b7-account-create-update-dcz9r"]
Jan 21 11:18:57 crc kubenswrapper[4881]: I0121 11:18:57.287105 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-ktp2w"]
Jan 21 11:18:57 crc kubenswrapper[4881]: W0121 11:18:57.318382 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbc7e598c_b449_4e8c_9214_44e27cb45e53.slice/crio-7f0bea9e9dc943e576802d8c9a13363afa658fe4236f457e4490a5dbcd4320bd WatchSource:0}: Error finding container 7f0bea9e9dc943e576802d8c9a13363afa658fe4236f457e4490a5dbcd4320bd: Status 404 returned error can't find the container with id 7f0bea9e9dc943e576802d8c9a13363afa658fe4236f457e4490a5dbcd4320bd
Jan 21 11:18:57 crc kubenswrapper[4881]: I0121 11:18:57.346568 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-db-sync-t4mx7"]
Jan 21 11:18:57 crc kubenswrapper[4881]: I0121 11:18:57.347543 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-82x9l"]
Jan 21 11:18:57 crc kubenswrapper[4881]: I0121 11:18:57.734551 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-3649-account-create-update-pqj5m"]
Jan 21 11:18:57 crc kubenswrapper[4881]: I0121 11:18:57.829193 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-44pdb" event={"ID":"34efcb76-01fb-490b-88c0-a4ee1363a01e","Type":"ContainerStarted","Data":"6dc4d522c502820b83234d2fee061b7bda412d486d52242e7e816991b3acbb57"}
Jan 21 11:18:57 crc kubenswrapper[4881]: I0121 11:18:57.837703 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-82x9l" event={"ID":"b4b2b4e9-304c-47ae-939a-9d938d012b90","Type":"ContainerStarted","Data":"2a3219b4170b52910ee3ec4f3e718c26c9394c8de6c94a328647a77455eecee7"}
Jan 21 11:18:57 crc kubenswrapper[4881]: I0121 11:18:57.847280 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-r9r4z" event={"ID":"c8cfe009-eba2-4713-b50f-cc334b4ca691","Type":"ContainerStarted","Data":"f5cc4525f4f901e33752ba6e7b8772cae9da70d02d9ba133272b4a6ad13119ce"}
Jan 21 11:18:57 crc kubenswrapper[4881]: I0121 11:18:57.849236 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-sync-t4mx7" event={"ID":"bc7e598c-b449-4e8c-9214-44e27cb45e53","Type":"ContainerStarted","Data":"7f0bea9e9dc943e576802d8c9a13363afa658fe4236f457e4490a5dbcd4320bd"}
Jan 21 11:18:57 crc kubenswrapper[4881]: I0121 11:18:57.851309 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-c7b7-account-create-update-dcz9r" event={"ID":"0145b8f9-5452-4f0e-819c-61fbb8badffb","Type":"ContainerStarted","Data":"bcc90bc5bb0ac66c01f3db31717a3508d38e85e90ddae059cb25369a981558ec"}
Jan 21 11:18:57 crc kubenswrapper[4881]: I0121 11:18:57.853906 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-170f-account-create-update-8bt4l"]
Jan 21 11:18:57 crc kubenswrapper[4881]: I0121 11:18:57.856528 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-a5aa-account-create-update-j2nc8" event={"ID":"ec3ba10e-2cbd-4350-9014-27a92932849f","Type":"ContainerStarted","Data":"b23cd46acdcd43f425c2a5437146050ee4518de5ebe4b06308893c922580bb1d"}
Jan 21 11:18:57 crc kubenswrapper[4881]: I0121 11:18:57.859776 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-ktp2w" event={"ID":"5d72ab14-b1c2-4382-847a-00eb254ac958","Type":"ContainerStarted","Data":"72a097cf59195f6eec304ff661d8ae56f590c3e6389aa564783cb080dd6a3c8c"}
Jan 21 11:18:57 crc kubenswrapper[4881]: I0121 11:18:57.913663 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-b544m"]
Jan 21 11:18:57 crc kubenswrapper[4881]: W0121 11:18:57.990966 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc837cab9_43a5_4b84_a0bd_d915bca31600.slice/crio-7d95b7d1b61a8e9a37711f63ef2a8a7295172bf5dcd8dec5e260dde19f296088 WatchSource:0}: Error finding container 7d95b7d1b61a8e9a37711f63ef2a8a7295172bf5dcd8dec5e260dde19f296088: Status 404 returned error can't find the container with id 7d95b7d1b61a8e9a37711f63ef2a8a7295172bf5dcd8dec5e260dde19f296088
Jan 21 11:18:58 crc kubenswrapper[4881]: W0121 11:18:58.024312 4881 manager.go:1169] Failed to process watch event {EventType:0
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod760e8dbf_d827_42ef_969c_1c7409f7ac20.slice/crio-c42dc028c1bafc0d8598c90d3604d93606d97482b49abd8a9779624f869edd2d WatchSource:0}: Error finding container c42dc028c1bafc0d8598c90d3604d93606d97482b49abd8a9779624f869edd2d: Status 404 returned error can't find the container with id c42dc028c1bafc0d8598c90d3604d93606d97482b49abd8a9779624f869edd2d Jan 21 11:18:59 crc kubenswrapper[4881]: I0121 11:18:59.116577 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-170f-account-create-update-8bt4l" event={"ID":"c837cab9-43a5-4b84-a0bd-d915bca31600","Type":"ContainerStarted","Data":"7d95b7d1b61a8e9a37711f63ef2a8a7295172bf5dcd8dec5e260dde19f296088"} Jan 21 11:18:59 crc kubenswrapper[4881]: I0121 11:18:59.125183 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-3649-account-create-update-pqj5m" event={"ID":"6f6f337c-95ec-448f-ab58-e7e7fe7abfd4","Type":"ContainerStarted","Data":"0d6a8467ce12e79fc1ea582199a39bbf54288de22059797959d06afa76924361"} Jan 21 11:18:59 crc kubenswrapper[4881]: I0121 11:18:59.128397 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-b544m" event={"ID":"760e8dbf-d827-42ef-969c-1c7409f7ac20","Type":"ContainerStarted","Data":"c42dc028c1bafc0d8598c90d3604d93606d97482b49abd8a9779624f869edd2d"} Jan 21 11:19:00 crc kubenswrapper[4881]: I0121 11:19:00.158839 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-ktp2w" event={"ID":"5d72ab14-b1c2-4382-847a-00eb254ac958","Type":"ContainerStarted","Data":"9183c1ea9a3472251b9a9872ac196a0371d8a3a960cf0876e3244bf2dc5fc313"} Jan 21 11:19:00 crc kubenswrapper[4881]: I0121 11:19:00.164489 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-82x9l" event={"ID":"b4b2b4e9-304c-47ae-939a-9d938d012b90","Type":"ContainerStarted","Data":"842c407700548966028d06c2f685224af9199aeb260a3fcbe49b13c5d2308449"} Jan 21 11:19:00 crc kubenswrapper[4881]: I0121 11:19:00.167586 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-r9r4z" event={"ID":"c8cfe009-eba2-4713-b50f-cc334b4ca691","Type":"ContainerStarted","Data":"8fede96a0f0891ea2a0beeea55c81b92d1d136a372295efbbbb9fb60c32a400b"} Jan 21 11:19:00 crc kubenswrapper[4881]: I0121 11:19:00.169847 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-170f-account-create-update-8bt4l" event={"ID":"c837cab9-43a5-4b84-a0bd-d915bca31600","Type":"ContainerStarted","Data":"475d11a1d0ffe3143569c01c096587097abd1f5b648c8d0d1064b5b35157b3c4"} Jan 21 11:19:00 crc kubenswrapper[4881]: I0121 11:19:00.173157 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-3649-account-create-update-pqj5m" event={"ID":"6f6f337c-95ec-448f-ab58-e7e7fe7abfd4","Type":"ContainerStarted","Data":"23d18cc60c7d47249b61d06b5e22cae5297e1e798a824f42c26b13569f6185c2"} Jan 21 11:19:00 crc kubenswrapper[4881]: I0121 11:19:00.174884 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-b544m" event={"ID":"760e8dbf-d827-42ef-969c-1c7409f7ac20","Type":"ContainerStarted","Data":"4830c420695532fe361ac3eb65ee53d659da36dd7a4d7c07a18532e51115b820"} Jan 21 11:19:00 crc kubenswrapper[4881]: I0121 11:19:00.177551 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-c7b7-account-create-update-dcz9r" 
event={"ID":"0145b8f9-5452-4f0e-819c-61fbb8badffb","Type":"ContainerStarted","Data":"19837216e672b1d70dcee3db6a9cc2dfe6a6a6ac2f0ef6c6a1c9729e5d023d0f"} Jan 21 11:19:00 crc kubenswrapper[4881]: I0121 11:19:00.184993 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-a5aa-account-create-update-j2nc8" event={"ID":"ec3ba10e-2cbd-4350-9014-27a92932849f","Type":"ContainerStarted","Data":"68b28d1f90d946399d23686118aca2c39b038f12760a90f94c3980be0fdb6b45"} Jan 21 11:19:00 crc kubenswrapper[4881]: I0121 11:19:00.192656 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"c5ae3126-d6d3-4268-8e35-e216eabcc6f4","Type":"ContainerStarted","Data":"ef9d78c9c5e22c01f5e8274cad9637d465377b5339dc20fcbf444a1190841bcb"} Jan 21 11:19:00 crc kubenswrapper[4881]: I0121 11:19:00.201548 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-c7b7-account-create-update-dcz9r" podStartSLOduration=7.201514433 podStartE2EDuration="7.201514433s" podCreationTimestamp="2026-01-21 11:18:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 11:19:00.197344909 +0000 UTC m=+1327.457301378" watchObservedRunningTime="2026-01-21 11:19:00.201514433 +0000 UTC m=+1327.461470912" Jan 21 11:19:02 crc kubenswrapper[4881]: I0121 11:19:02.437616 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-3649-account-create-update-pqj5m" podStartSLOduration=7.437586694 podStartE2EDuration="7.437586694s" podCreationTimestamp="2026-01-21 11:18:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 11:19:02.425687926 +0000 UTC m=+1329.685644405" watchObservedRunningTime="2026-01-21 11:19:02.437586694 +0000 UTC m=+1329.697543163" Jan 21 11:19:02 crc kubenswrapper[4881]: I0121 11:19:02.454325 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-a5aa-account-create-update-j2nc8" podStartSLOduration=9.454291902 podStartE2EDuration="9.454291902s" podCreationTimestamp="2026-01-21 11:18:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 11:19:02.449973234 +0000 UTC m=+1329.709929703" watchObservedRunningTime="2026-01-21 11:19:02.454291902 +0000 UTC m=+1329.714248371" Jan 21 11:19:02 crc kubenswrapper[4881]: I0121 11:19:02.489232 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-create-ktp2w" podStartSLOduration=9.489199275 podStartE2EDuration="9.489199275s" podCreationTimestamp="2026-01-21 11:18:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 11:19:02.48099973 +0000 UTC m=+1329.740956209" watchObservedRunningTime="2026-01-21 11:19:02.489199275 +0000 UTC m=+1329.749155744" Jan 21 11:19:02 crc kubenswrapper[4881]: I0121 11:19:02.507363 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-create-b544m" podStartSLOduration=6.507328899 podStartE2EDuration="6.507328899s" podCreationTimestamp="2026-01-21 11:18:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 11:19:02.503012891 
+0000 UTC m=+1329.762969380" watchObservedRunningTime="2026-01-21 11:19:02.507328899 +0000 UTC m=+1329.767285368" Jan 21 11:19:02 crc kubenswrapper[4881]: I0121 11:19:02.527104 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-170f-account-create-update-8bt4l" podStartSLOduration=6.527067822 podStartE2EDuration="6.527067822s" podCreationTimestamp="2026-01-21 11:18:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 11:19:02.525875932 +0000 UTC m=+1329.785832401" watchObservedRunningTime="2026-01-21 11:19:02.527067822 +0000 UTC m=+1329.787024291" Jan 21 11:19:02 crc kubenswrapper[4881]: I0121 11:19:02.559924 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-create-82x9l" podStartSLOduration=7.559901394 podStartE2EDuration="7.559901394s" podCreationTimestamp="2026-01-21 11:18:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 11:19:02.548562771 +0000 UTC m=+1329.808519230" watchObservedRunningTime="2026-01-21 11:19:02.559901394 +0000 UTC m=+1329.819857863" Jan 21 11:19:02 crc kubenswrapper[4881]: I0121 11:19:02.569509 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-create-r9r4z" podStartSLOduration=9.569490553 podStartE2EDuration="9.569490553s" podCreationTimestamp="2026-01-21 11:18:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 11:19:02.567515494 +0000 UTC m=+1329.827471963" watchObservedRunningTime="2026-01-21 11:19:02.569490553 +0000 UTC m=+1329.829447032" Jan 21 11:19:03 crc kubenswrapper[4881]: I0121 11:19:03.401773 4881 generic.go:334] "Generic (PLEG): container finished" podID="c837cab9-43a5-4b84-a0bd-d915bca31600" containerID="475d11a1d0ffe3143569c01c096587097abd1f5b648c8d0d1064b5b35157b3c4" exitCode=0 Jan 21 11:19:03 crc kubenswrapper[4881]: I0121 11:19:03.401866 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-170f-account-create-update-8bt4l" event={"ID":"c837cab9-43a5-4b84-a0bd-d915bca31600","Type":"ContainerDied","Data":"475d11a1d0ffe3143569c01c096587097abd1f5b648c8d0d1064b5b35157b3c4"} Jan 21 11:19:03 crc kubenswrapper[4881]: I0121 11:19:03.406551 4881 generic.go:334] "Generic (PLEG): container finished" podID="5d72ab14-b1c2-4382-847a-00eb254ac958" containerID="9183c1ea9a3472251b9a9872ac196a0371d8a3a960cf0876e3244bf2dc5fc313" exitCode=0 Jan 21 11:19:03 crc kubenswrapper[4881]: I0121 11:19:03.406595 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-ktp2w" event={"ID":"5d72ab14-b1c2-4382-847a-00eb254ac958","Type":"ContainerDied","Data":"9183c1ea9a3472251b9a9872ac196a0371d8a3a960cf0876e3244bf2dc5fc313"} Jan 21 11:19:03 crc kubenswrapper[4881]: I0121 11:19:03.409565 4881 generic.go:334] "Generic (PLEG): container finished" podID="760e8dbf-d827-42ef-969c-1c7409f7ac20" containerID="4830c420695532fe361ac3eb65ee53d659da36dd7a4d7c07a18532e51115b820" exitCode=0 Jan 21 11:19:03 crc kubenswrapper[4881]: I0121 11:19:03.409601 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-b544m" event={"ID":"760e8dbf-d827-42ef-969c-1c7409f7ac20","Type":"ContainerDied","Data":"4830c420695532fe361ac3eb65ee53d659da36dd7a4d7c07a18532e51115b820"} Jan 21 11:19:03 crc 
kubenswrapper[4881]: I0121 11:19:03.426296 4881 generic.go:334] "Generic (PLEG): container finished" podID="b4b2b4e9-304c-47ae-939a-9d938d012b90" containerID="842c407700548966028d06c2f685224af9199aeb260a3fcbe49b13c5d2308449" exitCode=0 Jan 21 11:19:03 crc kubenswrapper[4881]: I0121 11:19:03.426425 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-82x9l" event={"ID":"b4b2b4e9-304c-47ae-939a-9d938d012b90","Type":"ContainerDied","Data":"842c407700548966028d06c2f685224af9199aeb260a3fcbe49b13c5d2308449"} Jan 21 11:19:03 crc kubenswrapper[4881]: I0121 11:19:03.435468 4881 generic.go:334] "Generic (PLEG): container finished" podID="c8cfe009-eba2-4713-b50f-cc334b4ca691" containerID="8fede96a0f0891ea2a0beeea55c81b92d1d136a372295efbbbb9fb60c32a400b" exitCode=0 Jan 21 11:19:03 crc kubenswrapper[4881]: I0121 11:19:03.435645 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-r9r4z" event={"ID":"c8cfe009-eba2-4713-b50f-cc334b4ca691","Type":"ContainerDied","Data":"8fede96a0f0891ea2a0beeea55c81b92d1d136a372295efbbbb9fb60c32a400b"} Jan 21 11:19:03 crc kubenswrapper[4881]: I0121 11:19:03.451150 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"c5ae3126-d6d3-4268-8e35-e216eabcc6f4","Type":"ContainerStarted","Data":"c140acf6f14058c82c2022005acd28d679f35f983dc5582ed33c0dd219896e01"} Jan 21 11:19:03 crc kubenswrapper[4881]: I0121 11:19:03.511333 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Jan 21 11:19:03 crc kubenswrapper[4881]: I0121 11:19:03.535093 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=26.53506426 podStartE2EDuration="26.53506426s" podCreationTimestamp="2026-01-21 11:18:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 11:19:03.534003293 +0000 UTC m=+1330.793959772" watchObservedRunningTime="2026-01-21 11:19:03.53506426 +0000 UTC m=+1330.795020729" Jan 21 11:19:05 crc kubenswrapper[4881]: I0121 11:19:05.966470 4881 generic.go:334] "Generic (PLEG): container finished" podID="6f6f337c-95ec-448f-ab58-e7e7fe7abfd4" containerID="23d18cc60c7d47249b61d06b5e22cae5297e1e798a824f42c26b13569f6185c2" exitCode=0 Jan 21 11:19:05 crc kubenswrapper[4881]: I0121 11:19:05.966689 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-3649-account-create-update-pqj5m" event={"ID":"6f6f337c-95ec-448f-ab58-e7e7fe7abfd4","Type":"ContainerDied","Data":"23d18cc60c7d47249b61d06b5e22cae5297e1e798a824f42c26b13569f6185c2"} Jan 21 11:19:05 crc kubenswrapper[4881]: I0121 11:19:05.972982 4881 generic.go:334] "Generic (PLEG): container finished" podID="0145b8f9-5452-4f0e-819c-61fbb8badffb" containerID="19837216e672b1d70dcee3db6a9cc2dfe6a6a6ac2f0ef6c6a1c9729e5d023d0f" exitCode=0 Jan 21 11:19:05 crc kubenswrapper[4881]: I0121 11:19:05.973069 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-c7b7-account-create-update-dcz9r" event={"ID":"0145b8f9-5452-4f0e-819c-61fbb8badffb","Type":"ContainerDied","Data":"19837216e672b1d70dcee3db6a9cc2dfe6a6a6ac2f0ef6c6a1c9729e5d023d0f"} Jan 21 11:19:05 crc kubenswrapper[4881]: I0121 11:19:05.976842 4881 generic.go:334] "Generic (PLEG): container finished" podID="ec3ba10e-2cbd-4350-9014-27a92932849f" 
containerID="68b28d1f90d946399d23686118aca2c39b038f12760a90f94c3980be0fdb6b45" exitCode=0 Jan 21 11:19:05 crc kubenswrapper[4881]: I0121 11:19:05.977035 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-a5aa-account-create-update-j2nc8" event={"ID":"ec3ba10e-2cbd-4350-9014-27a92932849f","Type":"ContainerDied","Data":"68b28d1f90d946399d23686118aca2c39b038f12760a90f94c3980be0fdb6b45"} Jan 21 11:19:08 crc kubenswrapper[4881]: I0121 11:19:08.511986 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Jan 21 11:19:08 crc kubenswrapper[4881]: I0121 11:19:08.523440 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Jan 21 11:19:09 crc kubenswrapper[4881]: I0121 11:19:09.021300 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Jan 21 11:19:09 crc kubenswrapper[4881]: I0121 11:19:09.224840 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-b544m" Jan 21 11:19:09 crc kubenswrapper[4881]: I0121 11:19:09.231857 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-170f-account-create-update-8bt4l" Jan 21 11:19:09 crc kubenswrapper[4881]: I0121 11:19:09.238217 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-r9r4z" Jan 21 11:19:09 crc kubenswrapper[4881]: I0121 11:19:09.319595 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c837cab9-43a5-4b84-a0bd-d915bca31600-operator-scripts\") pod \"c837cab9-43a5-4b84-a0bd-d915bca31600\" (UID: \"c837cab9-43a5-4b84-a0bd-d915bca31600\") " Jan 21 11:19:09 crc kubenswrapper[4881]: I0121 11:19:09.319685 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c8cfe009-eba2-4713-b50f-cc334b4ca691-operator-scripts\") pod \"c8cfe009-eba2-4713-b50f-cc334b4ca691\" (UID: \"c8cfe009-eba2-4713-b50f-cc334b4ca691\") " Jan 21 11:19:09 crc kubenswrapper[4881]: I0121 11:19:09.319952 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/760e8dbf-d827-42ef-969c-1c7409f7ac20-operator-scripts\") pod \"760e8dbf-d827-42ef-969c-1c7409f7ac20\" (UID: \"760e8dbf-d827-42ef-969c-1c7409f7ac20\") " Jan 21 11:19:09 crc kubenswrapper[4881]: I0121 11:19:09.320022 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gmx6h\" (UniqueName: \"kubernetes.io/projected/c837cab9-43a5-4b84-a0bd-d915bca31600-kube-api-access-gmx6h\") pod \"c837cab9-43a5-4b84-a0bd-d915bca31600\" (UID: \"c837cab9-43a5-4b84-a0bd-d915bca31600\") " Jan 21 11:19:09 crc kubenswrapper[4881]: I0121 11:19:09.320072 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ft5l8\" (UniqueName: \"kubernetes.io/projected/760e8dbf-d827-42ef-969c-1c7409f7ac20-kube-api-access-ft5l8\") pod \"760e8dbf-d827-42ef-969c-1c7409f7ac20\" (UID: \"760e8dbf-d827-42ef-969c-1c7409f7ac20\") " Jan 21 11:19:09 crc kubenswrapper[4881]: I0121 11:19:09.320103 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qfm9d\" (UniqueName: 
\"kubernetes.io/projected/c8cfe009-eba2-4713-b50f-cc334b4ca691-kube-api-access-qfm9d\") pod \"c8cfe009-eba2-4713-b50f-cc334b4ca691\" (UID: \"c8cfe009-eba2-4713-b50f-cc334b4ca691\") " Jan 21 11:19:09 crc kubenswrapper[4881]: I0121 11:19:09.320867 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/760e8dbf-d827-42ef-969c-1c7409f7ac20-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "760e8dbf-d827-42ef-969c-1c7409f7ac20" (UID: "760e8dbf-d827-42ef-969c-1c7409f7ac20"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:19:09 crc kubenswrapper[4881]: I0121 11:19:09.321321 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c837cab9-43a5-4b84-a0bd-d915bca31600-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c837cab9-43a5-4b84-a0bd-d915bca31600" (UID: "c837cab9-43a5-4b84-a0bd-d915bca31600"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:19:09 crc kubenswrapper[4881]: I0121 11:19:09.321670 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c8cfe009-eba2-4713-b50f-cc334b4ca691-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c8cfe009-eba2-4713-b50f-cc334b4ca691" (UID: "c8cfe009-eba2-4713-b50f-cc334b4ca691"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:19:09 crc kubenswrapper[4881]: I0121 11:19:09.336175 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/760e8dbf-d827-42ef-969c-1c7409f7ac20-kube-api-access-ft5l8" (OuterVolumeSpecName: "kube-api-access-ft5l8") pod "760e8dbf-d827-42ef-969c-1c7409f7ac20" (UID: "760e8dbf-d827-42ef-969c-1c7409f7ac20"). InnerVolumeSpecName "kube-api-access-ft5l8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:19:09 crc kubenswrapper[4881]: I0121 11:19:09.346429 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c837cab9-43a5-4b84-a0bd-d915bca31600-kube-api-access-gmx6h" (OuterVolumeSpecName: "kube-api-access-gmx6h") pod "c837cab9-43a5-4b84-a0bd-d915bca31600" (UID: "c837cab9-43a5-4b84-a0bd-d915bca31600"). InnerVolumeSpecName "kube-api-access-gmx6h". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:19:09 crc kubenswrapper[4881]: I0121 11:19:09.347107 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c8cfe009-eba2-4713-b50f-cc334b4ca691-kube-api-access-qfm9d" (OuterVolumeSpecName: "kube-api-access-qfm9d") pod "c8cfe009-eba2-4713-b50f-cc334b4ca691" (UID: "c8cfe009-eba2-4713-b50f-cc334b4ca691"). InnerVolumeSpecName "kube-api-access-qfm9d". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:19:09 crc kubenswrapper[4881]: I0121 11:19:09.423560 4881 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c8cfe009-eba2-4713-b50f-cc334b4ca691-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 11:19:09 crc kubenswrapper[4881]: I0121 11:19:09.423645 4881 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/760e8dbf-d827-42ef-969c-1c7409f7ac20-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 11:19:09 crc kubenswrapper[4881]: I0121 11:19:09.423659 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gmx6h\" (UniqueName: \"kubernetes.io/projected/c837cab9-43a5-4b84-a0bd-d915bca31600-kube-api-access-gmx6h\") on node \"crc\" DevicePath \"\"" Jan 21 11:19:09 crc kubenswrapper[4881]: I0121 11:19:09.423672 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ft5l8\" (UniqueName: \"kubernetes.io/projected/760e8dbf-d827-42ef-969c-1c7409f7ac20-kube-api-access-ft5l8\") on node \"crc\" DevicePath \"\"" Jan 21 11:19:09 crc kubenswrapper[4881]: I0121 11:19:09.423686 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qfm9d\" (UniqueName: \"kubernetes.io/projected/c8cfe009-eba2-4713-b50f-cc334b4ca691-kube-api-access-qfm9d\") on node \"crc\" DevicePath \"\"" Jan 21 11:19:09 crc kubenswrapper[4881]: I0121 11:19:09.423700 4881 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c837cab9-43a5-4b84-a0bd-d915bca31600-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 11:19:10 crc kubenswrapper[4881]: I0121 11:19:10.029699 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-170f-account-create-update-8bt4l" event={"ID":"c837cab9-43a5-4b84-a0bd-d915bca31600","Type":"ContainerDied","Data":"7d95b7d1b61a8e9a37711f63ef2a8a7295172bf5dcd8dec5e260dde19f296088"} Jan 21 11:19:10 crc kubenswrapper[4881]: I0121 11:19:10.029812 4881 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7d95b7d1b61a8e9a37711f63ef2a8a7295172bf5dcd8dec5e260dde19f296088" Jan 21 11:19:10 crc kubenswrapper[4881]: I0121 11:19:10.029911 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-170f-account-create-update-8bt4l" Jan 21 11:19:10 crc kubenswrapper[4881]: I0121 11:19:10.033167 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-b544m" event={"ID":"760e8dbf-d827-42ef-969c-1c7409f7ac20","Type":"ContainerDied","Data":"c42dc028c1bafc0d8598c90d3604d93606d97482b49abd8a9779624f869edd2d"} Jan 21 11:19:10 crc kubenswrapper[4881]: I0121 11:19:10.033219 4881 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c42dc028c1bafc0d8598c90d3604d93606d97482b49abd8a9779624f869edd2d" Jan 21 11:19:10 crc kubenswrapper[4881]: I0121 11:19:10.033312 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-b544m" Jan 21 11:19:10 crc kubenswrapper[4881]: I0121 11:19:10.038843 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-r9r4z" Jan 21 11:19:10 crc kubenswrapper[4881]: I0121 11:19:10.039180 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-r9r4z" event={"ID":"c8cfe009-eba2-4713-b50f-cc334b4ca691","Type":"ContainerDied","Data":"f5cc4525f4f901e33752ba6e7b8772cae9da70d02d9ba133272b4a6ad13119ce"} Jan 21 11:19:10 crc kubenswrapper[4881]: I0121 11:19:10.039251 4881 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f5cc4525f4f901e33752ba6e7b8772cae9da70d02d9ba133272b4a6ad13119ce" Jan 21 11:19:15 crc kubenswrapper[4881]: I0121 11:19:15.073462 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-82x9l" Jan 21 11:19:15 crc kubenswrapper[4881]: I0121 11:19:15.081911 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-c7b7-account-create-update-dcz9r" Jan 21 11:19:15 crc kubenswrapper[4881]: I0121 11:19:15.124206 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-ktp2w" Jan 21 11:19:15 crc kubenswrapper[4881]: I0121 11:19:15.130772 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-a5aa-account-create-update-j2nc8" event={"ID":"ec3ba10e-2cbd-4350-9014-27a92932849f","Type":"ContainerDied","Data":"b23cd46acdcd43f425c2a5437146050ee4518de5ebe4b06308893c922580bb1d"} Jan 21 11:19:15 crc kubenswrapper[4881]: I0121 11:19:15.130844 4881 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b23cd46acdcd43f425c2a5437146050ee4518de5ebe4b06308893c922580bb1d" Jan 21 11:19:15 crc kubenswrapper[4881]: I0121 11:19:15.140294 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-3649-account-create-update-pqj5m" event={"ID":"6f6f337c-95ec-448f-ab58-e7e7fe7abfd4","Type":"ContainerDied","Data":"0d6a8467ce12e79fc1ea582199a39bbf54288de22059797959d06afa76924361"} Jan 21 11:19:15 crc kubenswrapper[4881]: I0121 11:19:15.140340 4881 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0d6a8467ce12e79fc1ea582199a39bbf54288de22059797959d06afa76924361" Jan 21 11:19:15 crc kubenswrapper[4881]: I0121 11:19:15.144636 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-ktp2w" event={"ID":"5d72ab14-b1c2-4382-847a-00eb254ac958","Type":"ContainerDied","Data":"72a097cf59195f6eec304ff661d8ae56f590c3e6389aa564783cb080dd6a3c8c"} Jan 21 11:19:15 crc kubenswrapper[4881]: I0121 11:19:15.144674 4881 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="72a097cf59195f6eec304ff661d8ae56f590c3e6389aa564783cb080dd6a3c8c" Jan 21 11:19:15 crc kubenswrapper[4881]: I0121 11:19:15.144728 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-ktp2w" Jan 21 11:19:15 crc kubenswrapper[4881]: I0121 11:19:15.145900 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-3649-account-create-update-pqj5m" Jan 21 11:19:15 crc kubenswrapper[4881]: I0121 11:19:15.148697 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-a5aa-account-create-update-j2nc8" Jan 21 11:19:15 crc kubenswrapper[4881]: I0121 11:19:15.155184 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-82x9l" event={"ID":"b4b2b4e9-304c-47ae-939a-9d938d012b90","Type":"ContainerDied","Data":"2a3219b4170b52910ee3ec4f3e718c26c9394c8de6c94a328647a77455eecee7"} Jan 21 11:19:15 crc kubenswrapper[4881]: I0121 11:19:15.155249 4881 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2a3219b4170b52910ee3ec4f3e718c26c9394c8de6c94a328647a77455eecee7" Jan 21 11:19:15 crc kubenswrapper[4881]: I0121 11:19:15.155372 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-82x9l" Jan 21 11:19:15 crc kubenswrapper[4881]: I0121 11:19:15.160857 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-c7b7-account-create-update-dcz9r" event={"ID":"0145b8f9-5452-4f0e-819c-61fbb8badffb","Type":"ContainerDied","Data":"bcc90bc5bb0ac66c01f3db31717a3508d38e85e90ddae059cb25369a981558ec"} Jan 21 11:19:15 crc kubenswrapper[4881]: I0121 11:19:15.160902 4881 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bcc90bc5bb0ac66c01f3db31717a3508d38e85e90ddae059cb25369a981558ec" Jan 21 11:19:15 crc kubenswrapper[4881]: I0121 11:19:15.161038 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-c7b7-account-create-update-dcz9r" Jan 21 11:19:15 crc kubenswrapper[4881]: I0121 11:19:15.176145 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0145b8f9-5452-4f0e-819c-61fbb8badffb-operator-scripts\") pod \"0145b8f9-5452-4f0e-819c-61fbb8badffb\" (UID: \"0145b8f9-5452-4f0e-819c-61fbb8badffb\") " Jan 21 11:19:15 crc kubenswrapper[4881]: I0121 11:19:15.176347 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7j4f4\" (UniqueName: \"kubernetes.io/projected/b4b2b4e9-304c-47ae-939a-9d938d012b90-kube-api-access-7j4f4\") pod \"b4b2b4e9-304c-47ae-939a-9d938d012b90\" (UID: \"b4b2b4e9-304c-47ae-939a-9d938d012b90\") " Jan 21 11:19:15 crc kubenswrapper[4881]: I0121 11:19:15.176487 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wd8bn\" (UniqueName: \"kubernetes.io/projected/0145b8f9-5452-4f0e-819c-61fbb8badffb-kube-api-access-wd8bn\") pod \"0145b8f9-5452-4f0e-819c-61fbb8badffb\" (UID: \"0145b8f9-5452-4f0e-819c-61fbb8badffb\") " Jan 21 11:19:15 crc kubenswrapper[4881]: I0121 11:19:15.176540 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b4b2b4e9-304c-47ae-939a-9d938d012b90-operator-scripts\") pod \"b4b2b4e9-304c-47ae-939a-9d938d012b90\" (UID: \"b4b2b4e9-304c-47ae-939a-9d938d012b90\") " Jan 21 11:19:15 crc kubenswrapper[4881]: I0121 11:19:15.177959 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b4b2b4e9-304c-47ae-939a-9d938d012b90-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b4b2b4e9-304c-47ae-939a-9d938d012b90" (UID: "b4b2b4e9-304c-47ae-939a-9d938d012b90"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:19:15 crc kubenswrapper[4881]: I0121 11:19:15.178429 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0145b8f9-5452-4f0e-819c-61fbb8badffb-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0145b8f9-5452-4f0e-819c-61fbb8badffb" (UID: "0145b8f9-5452-4f0e-819c-61fbb8badffb"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:19:15 crc kubenswrapper[4881]: I0121 11:19:15.190068 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4b2b4e9-304c-47ae-939a-9d938d012b90-kube-api-access-7j4f4" (OuterVolumeSpecName: "kube-api-access-7j4f4") pod "b4b2b4e9-304c-47ae-939a-9d938d012b90" (UID: "b4b2b4e9-304c-47ae-939a-9d938d012b90"). InnerVolumeSpecName "kube-api-access-7j4f4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:19:15 crc kubenswrapper[4881]: I0121 11:19:15.190229 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0145b8f9-5452-4f0e-819c-61fbb8badffb-kube-api-access-wd8bn" (OuterVolumeSpecName: "kube-api-access-wd8bn") pod "0145b8f9-5452-4f0e-819c-61fbb8badffb" (UID: "0145b8f9-5452-4f0e-819c-61fbb8badffb"). InnerVolumeSpecName "kube-api-access-wd8bn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:19:15 crc kubenswrapper[4881]: I0121 11:19:15.278944 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5d72ab14-b1c2-4382-847a-00eb254ac958-operator-scripts\") pod \"5d72ab14-b1c2-4382-847a-00eb254ac958\" (UID: \"5d72ab14-b1c2-4382-847a-00eb254ac958\") " Jan 21 11:19:15 crc kubenswrapper[4881]: I0121 11:19:15.279010 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nm6lx\" (UniqueName: \"kubernetes.io/projected/ec3ba10e-2cbd-4350-9014-27a92932849f-kube-api-access-nm6lx\") pod \"ec3ba10e-2cbd-4350-9014-27a92932849f\" (UID: \"ec3ba10e-2cbd-4350-9014-27a92932849f\") " Jan 21 11:19:15 crc kubenswrapper[4881]: I0121 11:19:15.279194 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ec3ba10e-2cbd-4350-9014-27a92932849f-operator-scripts\") pod \"ec3ba10e-2cbd-4350-9014-27a92932849f\" (UID: \"ec3ba10e-2cbd-4350-9014-27a92932849f\") " Jan 21 11:19:15 crc kubenswrapper[4881]: I0121 11:19:15.279251 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z5g6p\" (UniqueName: \"kubernetes.io/projected/5d72ab14-b1c2-4382-847a-00eb254ac958-kube-api-access-z5g6p\") pod \"5d72ab14-b1c2-4382-847a-00eb254ac958\" (UID: \"5d72ab14-b1c2-4382-847a-00eb254ac958\") " Jan 21 11:19:15 crc kubenswrapper[4881]: I0121 11:19:15.279339 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fxqzm\" (UniqueName: \"kubernetes.io/projected/6f6f337c-95ec-448f-ab58-e7e7fe7abfd4-kube-api-access-fxqzm\") pod \"6f6f337c-95ec-448f-ab58-e7e7fe7abfd4\" (UID: \"6f6f337c-95ec-448f-ab58-e7e7fe7abfd4\") " Jan 21 11:19:15 crc kubenswrapper[4881]: I0121 11:19:15.279374 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6f6f337c-95ec-448f-ab58-e7e7fe7abfd4-operator-scripts\") pod 
\"6f6f337c-95ec-448f-ab58-e7e7fe7abfd4\" (UID: \"6f6f337c-95ec-448f-ab58-e7e7fe7abfd4\") " Jan 21 11:19:15 crc kubenswrapper[4881]: I0121 11:19:15.279663 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5d72ab14-b1c2-4382-847a-00eb254ac958-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5d72ab14-b1c2-4382-847a-00eb254ac958" (UID: "5d72ab14-b1c2-4382-847a-00eb254ac958"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:19:15 crc kubenswrapper[4881]: I0121 11:19:15.280055 4881 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0145b8f9-5452-4f0e-819c-61fbb8badffb-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 11:19:15 crc kubenswrapper[4881]: I0121 11:19:15.280085 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7j4f4\" (UniqueName: \"kubernetes.io/projected/b4b2b4e9-304c-47ae-939a-9d938d012b90-kube-api-access-7j4f4\") on node \"crc\" DevicePath \"\"" Jan 21 11:19:15 crc kubenswrapper[4881]: I0121 11:19:15.280097 4881 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5d72ab14-b1c2-4382-847a-00eb254ac958-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 11:19:15 crc kubenswrapper[4881]: I0121 11:19:15.280107 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wd8bn\" (UniqueName: \"kubernetes.io/projected/0145b8f9-5452-4f0e-819c-61fbb8badffb-kube-api-access-wd8bn\") on node \"crc\" DevicePath \"\"" Jan 21 11:19:15 crc kubenswrapper[4881]: I0121 11:19:15.280117 4881 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b4b2b4e9-304c-47ae-939a-9d938d012b90-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 11:19:15 crc kubenswrapper[4881]: I0121 11:19:15.280244 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6f6f337c-95ec-448f-ab58-e7e7fe7abfd4-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6f6f337c-95ec-448f-ab58-e7e7fe7abfd4" (UID: "6f6f337c-95ec-448f-ab58-e7e7fe7abfd4"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:19:15 crc kubenswrapper[4881]: I0121 11:19:15.280266 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ec3ba10e-2cbd-4350-9014-27a92932849f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ec3ba10e-2cbd-4350-9014-27a92932849f" (UID: "ec3ba10e-2cbd-4350-9014-27a92932849f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:19:15 crc kubenswrapper[4881]: I0121 11:19:15.282702 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ec3ba10e-2cbd-4350-9014-27a92932849f-kube-api-access-nm6lx" (OuterVolumeSpecName: "kube-api-access-nm6lx") pod "ec3ba10e-2cbd-4350-9014-27a92932849f" (UID: "ec3ba10e-2cbd-4350-9014-27a92932849f"). InnerVolumeSpecName "kube-api-access-nm6lx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:19:15 crc kubenswrapper[4881]: I0121 11:19:15.282745 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5d72ab14-b1c2-4382-847a-00eb254ac958-kube-api-access-z5g6p" (OuterVolumeSpecName: "kube-api-access-z5g6p") pod "5d72ab14-b1c2-4382-847a-00eb254ac958" (UID: "5d72ab14-b1c2-4382-847a-00eb254ac958"). InnerVolumeSpecName "kube-api-access-z5g6p". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:19:15 crc kubenswrapper[4881]: I0121 11:19:15.284406 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6f6f337c-95ec-448f-ab58-e7e7fe7abfd4-kube-api-access-fxqzm" (OuterVolumeSpecName: "kube-api-access-fxqzm") pod "6f6f337c-95ec-448f-ab58-e7e7fe7abfd4" (UID: "6f6f337c-95ec-448f-ab58-e7e7fe7abfd4"). InnerVolumeSpecName "kube-api-access-fxqzm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:19:15 crc kubenswrapper[4881]: I0121 11:19:15.382002 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fxqzm\" (UniqueName: \"kubernetes.io/projected/6f6f337c-95ec-448f-ab58-e7e7fe7abfd4-kube-api-access-fxqzm\") on node \"crc\" DevicePath \"\"" Jan 21 11:19:15 crc kubenswrapper[4881]: I0121 11:19:15.382045 4881 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6f6f337c-95ec-448f-ab58-e7e7fe7abfd4-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 11:19:15 crc kubenswrapper[4881]: I0121 11:19:15.382057 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nm6lx\" (UniqueName: \"kubernetes.io/projected/ec3ba10e-2cbd-4350-9014-27a92932849f-kube-api-access-nm6lx\") on node \"crc\" DevicePath \"\"" Jan 21 11:19:15 crc kubenswrapper[4881]: I0121 11:19:15.382069 4881 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ec3ba10e-2cbd-4350-9014-27a92932849f-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 11:19:15 crc kubenswrapper[4881]: I0121 11:19:15.382078 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z5g6p\" (UniqueName: \"kubernetes.io/projected/5d72ab14-b1c2-4382-847a-00eb254ac958-kube-api-access-z5g6p\") on node \"crc\" DevicePath \"\"" Jan 21 11:19:15 crc kubenswrapper[4881]: E0121 11:19:15.750837 4881 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.182:5001/podified-master-centos10/openstack-watcher-api:watcher_latest" Jan 21 11:19:15 crc kubenswrapper[4881]: E0121 11:19:15.750968 4881 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.182:5001/podified-master-centos10/openstack-watcher-api:watcher_latest" Jan 21 11:19:15 crc kubenswrapper[4881]: E0121 11:19:15.751212 4881 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:watcher-db-sync,Image:38.102.83.182:5001/podified-master-centos10/openstack-watcher-api:watcher_latest,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/watcher/watcher.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:watcher-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gd8cs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-db-sync-t4mx7_openstack(bc7e598c-b449-4e8c-9214-44e27cb45e53): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 11:19:15 crc kubenswrapper[4881]: E0121 11:19:15.752575 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/watcher-db-sync-t4mx7" podUID="bc7e598c-b449-4e8c-9214-44e27cb45e53" Jan 21 11:19:16 crc kubenswrapper[4881]: I0121 11:19:16.173834 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-3649-account-create-update-pqj5m" Jan 21 11:19:16 crc kubenswrapper[4881]: I0121 11:19:16.174973 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-a5aa-account-create-update-j2nc8" Jan 21 11:19:16 crc kubenswrapper[4881]: I0121 11:19:16.173988 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-44pdb" event={"ID":"34efcb76-01fb-490b-88c0-a4ee1363a01e","Type":"ContainerStarted","Data":"498906e9fbb3b564603759f2238f54ad3d7c8a3ccff8535f1f6031fd2e192fd4"} Jan 21 11:19:16 crc kubenswrapper[4881]: E0121 11:19:16.176710 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.182:5001/podified-master-centos10/openstack-watcher-api:watcher_latest\\\"\"" pod="openstack/watcher-db-sync-t4mx7" podUID="bc7e598c-b449-4e8c-9214-44e27cb45e53" Jan 21 11:19:16 crc kubenswrapper[4881]: I0121 11:19:16.525615 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-44pdb" podStartSLOduration=4.929812152 podStartE2EDuration="23.525596156s" podCreationTimestamp="2026-01-21 11:18:53 +0000 UTC" firstStartedPulling="2026-01-21 11:18:57.148883475 +0000 UTC m=+1324.408839944" lastFinishedPulling="2026-01-21 11:19:15.744667479 +0000 UTC m=+1343.004623948" observedRunningTime="2026-01-21 11:19:16.494647612 +0000 UTC m=+1343.754604101" watchObservedRunningTime="2026-01-21 11:19:16.525596156 +0000 UTC m=+1343.785552625" Jan 21 11:19:21 crc kubenswrapper[4881]: I0121 11:19:21.510137 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-mxb97"] Jan 21 11:19:21 crc kubenswrapper[4881]: E0121 11:19:21.512670 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f6f337c-95ec-448f-ab58-e7e7fe7abfd4" containerName="mariadb-account-create-update" Jan 21 11:19:21 crc kubenswrapper[4881]: I0121 11:19:21.512696 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f6f337c-95ec-448f-ab58-e7e7fe7abfd4" containerName="mariadb-account-create-update" Jan 21 11:19:21 crc kubenswrapper[4881]: E0121 11:19:21.512710 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d72ab14-b1c2-4382-847a-00eb254ac958" containerName="mariadb-database-create" Jan 21 11:19:21 crc kubenswrapper[4881]: I0121 11:19:21.512717 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d72ab14-b1c2-4382-847a-00eb254ac958" containerName="mariadb-database-create" Jan 21 11:19:21 crc kubenswrapper[4881]: E0121 11:19:21.512727 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8cfe009-eba2-4713-b50f-cc334b4ca691" containerName="mariadb-database-create" Jan 21 11:19:21 crc kubenswrapper[4881]: I0121 11:19:21.512736 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8cfe009-eba2-4713-b50f-cc334b4ca691" containerName="mariadb-database-create" Jan 21 11:19:21 crc kubenswrapper[4881]: E0121 11:19:21.512749 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec3ba10e-2cbd-4350-9014-27a92932849f" containerName="mariadb-account-create-update" Jan 21 11:19:21 crc kubenswrapper[4881]: I0121 11:19:21.512766 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec3ba10e-2cbd-4350-9014-27a92932849f" containerName="mariadb-account-create-update" Jan 21 11:19:21 crc kubenswrapper[4881]: E0121 11:19:21.512834 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4b2b4e9-304c-47ae-939a-9d938d012b90" containerName="mariadb-database-create" Jan 21 11:19:21 crc kubenswrapper[4881]: I0121 11:19:21.512842 4881 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="b4b2b4e9-304c-47ae-939a-9d938d012b90" containerName="mariadb-database-create" Jan 21 11:19:21 crc kubenswrapper[4881]: E0121 11:19:21.512873 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0145b8f9-5452-4f0e-819c-61fbb8badffb" containerName="mariadb-account-create-update" Jan 21 11:19:21 crc kubenswrapper[4881]: I0121 11:19:21.512885 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="0145b8f9-5452-4f0e-819c-61fbb8badffb" containerName="mariadb-account-create-update" Jan 21 11:19:21 crc kubenswrapper[4881]: E0121 11:19:21.512902 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c837cab9-43a5-4b84-a0bd-d915bca31600" containerName="mariadb-account-create-update" Jan 21 11:19:21 crc kubenswrapper[4881]: I0121 11:19:21.512911 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="c837cab9-43a5-4b84-a0bd-d915bca31600" containerName="mariadb-account-create-update" Jan 21 11:19:21 crc kubenswrapper[4881]: E0121 11:19:21.512922 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="760e8dbf-d827-42ef-969c-1c7409f7ac20" containerName="mariadb-database-create" Jan 21 11:19:21 crc kubenswrapper[4881]: I0121 11:19:21.512928 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="760e8dbf-d827-42ef-969c-1c7409f7ac20" containerName="mariadb-database-create" Jan 21 11:19:21 crc kubenswrapper[4881]: I0121 11:19:21.513173 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="0145b8f9-5452-4f0e-819c-61fbb8badffb" containerName="mariadb-account-create-update" Jan 21 11:19:21 crc kubenswrapper[4881]: I0121 11:19:21.513187 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="6f6f337c-95ec-448f-ab58-e7e7fe7abfd4" containerName="mariadb-account-create-update" Jan 21 11:19:21 crc kubenswrapper[4881]: I0121 11:19:21.513209 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="b4b2b4e9-304c-47ae-939a-9d938d012b90" containerName="mariadb-database-create" Jan 21 11:19:21 crc kubenswrapper[4881]: I0121 11:19:21.513228 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="5d72ab14-b1c2-4382-847a-00eb254ac958" containerName="mariadb-database-create" Jan 21 11:19:21 crc kubenswrapper[4881]: I0121 11:19:21.513237 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="760e8dbf-d827-42ef-969c-1c7409f7ac20" containerName="mariadb-database-create" Jan 21 11:19:21 crc kubenswrapper[4881]: I0121 11:19:21.513253 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="c837cab9-43a5-4b84-a0bd-d915bca31600" containerName="mariadb-account-create-update" Jan 21 11:19:21 crc kubenswrapper[4881]: I0121 11:19:21.513268 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec3ba10e-2cbd-4350-9014-27a92932849f" containerName="mariadb-account-create-update" Jan 21 11:19:21 crc kubenswrapper[4881]: I0121 11:19:21.513288 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="c8cfe009-eba2-4713-b50f-cc334b4ca691" containerName="mariadb-database-create" Jan 21 11:19:21 crc kubenswrapper[4881]: I0121 11:19:21.514005 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-mxb97" Jan 21 11:19:21 crc kubenswrapper[4881]: I0121 11:19:21.516404 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-f8snw" Jan 21 11:19:21 crc kubenswrapper[4881]: I0121 11:19:21.520139 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Jan 21 11:19:21 crc kubenswrapper[4881]: I0121 11:19:21.539428 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-mxb97"] Jan 21 11:19:21 crc kubenswrapper[4881]: I0121 11:19:21.692881 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/349e8898-8b7c-414a-8357-d431c8b81bf4-config-data\") pod \"glance-db-sync-mxb97\" (UID: \"349e8898-8b7c-414a-8357-d431c8b81bf4\") " pod="openstack/glance-db-sync-mxb97" Jan 21 11:19:21 crc kubenswrapper[4881]: I0121 11:19:21.693470 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/349e8898-8b7c-414a-8357-d431c8b81bf4-db-sync-config-data\") pod \"glance-db-sync-mxb97\" (UID: \"349e8898-8b7c-414a-8357-d431c8b81bf4\") " pod="openstack/glance-db-sync-mxb97" Jan 21 11:19:21 crc kubenswrapper[4881]: I0121 11:19:21.693563 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gvn9r\" (UniqueName: \"kubernetes.io/projected/349e8898-8b7c-414a-8357-d431c8b81bf4-kube-api-access-gvn9r\") pod \"glance-db-sync-mxb97\" (UID: \"349e8898-8b7c-414a-8357-d431c8b81bf4\") " pod="openstack/glance-db-sync-mxb97" Jan 21 11:19:21 crc kubenswrapper[4881]: I0121 11:19:21.693658 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/349e8898-8b7c-414a-8357-d431c8b81bf4-combined-ca-bundle\") pod \"glance-db-sync-mxb97\" (UID: \"349e8898-8b7c-414a-8357-d431c8b81bf4\") " pod="openstack/glance-db-sync-mxb97" Jan 21 11:19:21 crc kubenswrapper[4881]: I0121 11:19:21.795593 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gvn9r\" (UniqueName: \"kubernetes.io/projected/349e8898-8b7c-414a-8357-d431c8b81bf4-kube-api-access-gvn9r\") pod \"glance-db-sync-mxb97\" (UID: \"349e8898-8b7c-414a-8357-d431c8b81bf4\") " pod="openstack/glance-db-sync-mxb97" Jan 21 11:19:21 crc kubenswrapper[4881]: I0121 11:19:21.795726 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/349e8898-8b7c-414a-8357-d431c8b81bf4-combined-ca-bundle\") pod \"glance-db-sync-mxb97\" (UID: \"349e8898-8b7c-414a-8357-d431c8b81bf4\") " pod="openstack/glance-db-sync-mxb97" Jan 21 11:19:21 crc kubenswrapper[4881]: I0121 11:19:21.795869 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/349e8898-8b7c-414a-8357-d431c8b81bf4-config-data\") pod \"glance-db-sync-mxb97\" (UID: \"349e8898-8b7c-414a-8357-d431c8b81bf4\") " pod="openstack/glance-db-sync-mxb97" Jan 21 11:19:21 crc kubenswrapper[4881]: I0121 11:19:21.795913 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/349e8898-8b7c-414a-8357-d431c8b81bf4-db-sync-config-data\") pod 
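
The VerifyControllerAttachedVolume, MountVolume-started, and MountVolume.SetUp triplets here are the kubelet volume reconciler walking each volume of glance-db-sync-mxb97 through its states. Read from the UniqueName plugin prefixes, the pod's volume stanza plausibly looks like the Go sketch below (k8s.io/api types; the volume names and pod UID come from the log, while the SecretName values are assumptions inferred from the reflector entries):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// Reconstructed from the reconciler entries: three kubernetes.io/secret
	// volumes plus one kubernetes.io/projected service-account token volume.
	volumes := []corev1.Volume{
		{Name: "config-data", VolumeSource: corev1.VolumeSource{
			Secret: &corev1.SecretVolumeSource{SecretName: "glance-config-data"}}}, // SecretName assumed
		{Name: "db-sync-config-data", VolumeSource: corev1.VolumeSource{
			Secret: &corev1.SecretVolumeSource{SecretName: "glance-config-data"}}}, // SecretName assumed
		{Name: "combined-ca-bundle", VolumeSource: corev1.VolumeSource{
			Secret: &corev1.SecretVolumeSource{SecretName: "combined-ca-bundle"}}}, // SecretName assumed
		// kube-api-access-gvn9r is a kubernetes.io/projected volume carrying the
		// service-account token, CA bundle, and namespace; its spec is elided here.
	}
	for _, v := range volumes {
		fmt.Println(v.Name)
	}
}
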
\"glance-db-sync-mxb97\" (UID: \"349e8898-8b7c-414a-8357-d431c8b81bf4\") " pod="openstack/glance-db-sync-mxb97" Jan 21 11:19:21 crc kubenswrapper[4881]: I0121 11:19:21.810645 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/349e8898-8b7c-414a-8357-d431c8b81bf4-config-data\") pod \"glance-db-sync-mxb97\" (UID: \"349e8898-8b7c-414a-8357-d431c8b81bf4\") " pod="openstack/glance-db-sync-mxb97" Jan 21 11:19:21 crc kubenswrapper[4881]: I0121 11:19:21.810687 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/349e8898-8b7c-414a-8357-d431c8b81bf4-combined-ca-bundle\") pod \"glance-db-sync-mxb97\" (UID: \"349e8898-8b7c-414a-8357-d431c8b81bf4\") " pod="openstack/glance-db-sync-mxb97" Jan 21 11:19:21 crc kubenswrapper[4881]: I0121 11:19:21.810773 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/349e8898-8b7c-414a-8357-d431c8b81bf4-db-sync-config-data\") pod \"glance-db-sync-mxb97\" (UID: \"349e8898-8b7c-414a-8357-d431c8b81bf4\") " pod="openstack/glance-db-sync-mxb97" Jan 21 11:19:21 crc kubenswrapper[4881]: I0121 11:19:21.814248 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gvn9r\" (UniqueName: \"kubernetes.io/projected/349e8898-8b7c-414a-8357-d431c8b81bf4-kube-api-access-gvn9r\") pod \"glance-db-sync-mxb97\" (UID: \"349e8898-8b7c-414a-8357-d431c8b81bf4\") " pod="openstack/glance-db-sync-mxb97" Jan 21 11:19:21 crc kubenswrapper[4881]: I0121 11:19:21.872943 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-mxb97" Jan 21 11:19:22 crc kubenswrapper[4881]: I0121 11:19:22.569845 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-mxb97"] Jan 21 11:19:23 crc kubenswrapper[4881]: I0121 11:19:23.748026 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-mxb97" event={"ID":"349e8898-8b7c-414a-8357-d431c8b81bf4","Type":"ContainerStarted","Data":"cd824796b06380fe0748d0a1334aa26a3fd0a19fab70225e560d35cfb754e2b4"} Jan 21 11:19:30 crc kubenswrapper[4881]: I0121 11:19:30.830775 4881 generic.go:334] "Generic (PLEG): container finished" podID="34efcb76-01fb-490b-88c0-a4ee1363a01e" containerID="498906e9fbb3b564603759f2238f54ad3d7c8a3ccff8535f1f6031fd2e192fd4" exitCode=0 Jan 21 11:19:30 crc kubenswrapper[4881]: I0121 11:19:30.830819 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-44pdb" event={"ID":"34efcb76-01fb-490b-88c0-a4ee1363a01e","Type":"ContainerDied","Data":"498906e9fbb3b564603759f2238f54ad3d7c8a3ccff8535f1f6031fd2e192fd4"} Jan 21 11:19:43 crc kubenswrapper[4881]: E0121 11:19:43.010387 4881 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.182:5001/podified-master-centos10/openstack-glance-api:watcher_latest" Jan 21 11:19:43 crc kubenswrapper[4881]: E0121 11:19:43.010769 4881 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.182:5001/podified-master-centos10/openstack-glance-api:watcher_latest" Jan 21 11:19:43 crc kubenswrapper[4881]: E0121 11:19:43.010921 4881 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
Jan 21 11:19:43 crc kubenswrapper[4881]: E0121 11:19:43.010921 4881 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:glance-db-sync,Image:38.102.83.182:5001/podified-master-centos10/openstack-glance-api:watcher_latest,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/glance/glance.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gvn9r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42415,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42415,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-db-sync-mxb97_openstack(349e8898-8b7c-414a-8357-d431c8b81bf4): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 21 11:19:43 crc kubenswrapper[4881]: E0121 11:19:43.012278 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/glance-db-sync-mxb97" podUID="349e8898-8b7c-414a-8357-d431c8b81bf4"
Jan 21 11:19:43 crc kubenswrapper[4881]: I0121 11:19:43.174286 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-44pdb" event={"ID":"34efcb76-01fb-490b-88c0-a4ee1363a01e","Type":"ContainerDied","Data":"6dc4d522c502820b83234d2fee061b7bda412d486d52242e7e816991b3acbb57"}
Jan 21 11:19:43 crc kubenswrapper[4881]: I0121 11:19:43.174606 4881 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6dc4d522c502820b83234d2fee061b7bda412d486d52242e7e816991b3acbb57"
Jan 21 11:19:43 crc kubenswrapper[4881]: E0121 11:19:43.176414 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.182:5001/podified-master-centos10/openstack-glance-api:watcher_latest\\\"\"" pod="openstack/glance-db-sync-mxb97" podUID="349e8898-8b7c-414a-8357-d431c8b81bf4"
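
After the ErrImagePull at 11:19:43, subsequent syncs fail fast with ImagePullBackOff until the per-image back-off clock expires. Kubelet tracks this with client-go's flowcontrol.Backoff; the sketch below assumes the upstream defaults of a 10s initial step capped at 300s (an assumption about this deployment, not read from its config):

package main

import (
	"fmt"
	"time"

	"k8s.io/client-go/util/flowcontrol"
)

func main() {
	// Exponential per-key back-off, keyed here by the failing image reference.
	backoff := flowcontrol.NewBackOff(10*time.Second, 300*time.Second)
	key := "38.102.83.182:5001/podified-master-centos10/openstack-glance-api:watcher_latest"

	now := time.Now()
	for i := 0; i < 6; i++ {
		backoff.Next(key, now)        // record a failed pull attempt
		fmt.Println(backoff.Get(key)) // 10s, 20s, 40s, 80s, 160s, 300s
		now = now.Add(backoff.Get(key))
	}
}
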
Jan 21 11:19:43 crc kubenswrapper[4881]: I0121 11:19:43.207591 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-44pdb"
Jan 21 11:19:43 crc kubenswrapper[4881]: I0121 11:19:43.233272 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34efcb76-01fb-490b-88c0-a4ee1363a01e-combined-ca-bundle\") pod \"34efcb76-01fb-490b-88c0-a4ee1363a01e\" (UID: \"34efcb76-01fb-490b-88c0-a4ee1363a01e\") "
Jan 21 11:19:43 crc kubenswrapper[4881]: I0121 11:19:43.233431 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34efcb76-01fb-490b-88c0-a4ee1363a01e-config-data\") pod \"34efcb76-01fb-490b-88c0-a4ee1363a01e\" (UID: \"34efcb76-01fb-490b-88c0-a4ee1363a01e\") "
Jan 21 11:19:43 crc kubenswrapper[4881]: I0121 11:19:43.234138 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r5fnp\" (UniqueName: \"kubernetes.io/projected/34efcb76-01fb-490b-88c0-a4ee1363a01e-kube-api-access-r5fnp\") pod \"34efcb76-01fb-490b-88c0-a4ee1363a01e\" (UID: \"34efcb76-01fb-490b-88c0-a4ee1363a01e\") "
Jan 21 11:19:43 crc kubenswrapper[4881]: I0121 11:19:43.239512 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/34efcb76-01fb-490b-88c0-a4ee1363a01e-kube-api-access-r5fnp" (OuterVolumeSpecName: "kube-api-access-r5fnp") pod "34efcb76-01fb-490b-88c0-a4ee1363a01e" (UID: "34efcb76-01fb-490b-88c0-a4ee1363a01e"). InnerVolumeSpecName "kube-api-access-r5fnp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 11:19:43 crc kubenswrapper[4881]: I0121 11:19:43.273714 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34efcb76-01fb-490b-88c0-a4ee1363a01e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "34efcb76-01fb-490b-88c0-a4ee1363a01e" (UID: "34efcb76-01fb-490b-88c0-a4ee1363a01e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 11:19:43 crc kubenswrapper[4881]: I0121 11:19:43.300577 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34efcb76-01fb-490b-88c0-a4ee1363a01e-config-data" (OuterVolumeSpecName: "config-data") pod "34efcb76-01fb-490b-88c0-a4ee1363a01e" (UID: "34efcb76-01fb-490b-88c0-a4ee1363a01e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 11:19:43 crc kubenswrapper[4881]: I0121 11:19:43.340222 4881 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34efcb76-01fb-490b-88c0-a4ee1363a01e-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 21 11:19:43 crc kubenswrapper[4881]: I0121 11:19:43.340448 4881 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34efcb76-01fb-490b-88c0-a4ee1363a01e-config-data\") on node \"crc\" DevicePath \"\""
Jan 21 11:19:43 crc kubenswrapper[4881]: I0121 11:19:43.340527 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r5fnp\" (UniqueName: \"kubernetes.io/projected/34efcb76-01fb-490b-88c0-a4ee1363a01e-kube-api-access-r5fnp\") on node \"crc\" DevicePath \"\""
Jan 21 11:19:44 crc kubenswrapper[4881]: I0121 11:19:44.184970 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-44pdb"
Jan 21 11:19:44 crc kubenswrapper[4881]: I0121 11:19:44.185146 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-sync-t4mx7" event={"ID":"bc7e598c-b449-4e8c-9214-44e27cb45e53","Type":"ContainerStarted","Data":"b4ed75bebc3e4f7b35b331a2f216bede613a9086f548aa45e96cbef5724a690a"}
Jan 21 11:19:44 crc kubenswrapper[4881]: I0121 11:19:44.217421 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-db-sync-t4mx7" podStartSLOduration=6.524329766 podStartE2EDuration="52.217391877s" podCreationTimestamp="2026-01-21 11:18:52 +0000 UTC" firstStartedPulling="2026-01-21 11:18:57.365940286 +0000 UTC m=+1324.625896755" lastFinishedPulling="2026-01-21 11:19:43.059002397 +0000 UTC m=+1370.318958866" observedRunningTime="2026-01-21 11:19:44.21514665 +0000 UTC m=+1371.475103139" watchObservedRunningTime="2026-01-21 11:19:44.217391877 +0000 UTC m=+1371.477348386"
Jan 21 11:19:44 crc kubenswrapper[4881]: I0121 11:19:44.668761 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7c487768dc-xjcjd"]
Jan 21 11:19:44 crc kubenswrapper[4881]: E0121 11:19:44.669504 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34efcb76-01fb-490b-88c0-a4ee1363a01e" containerName="keystone-db-sync"
Jan 21 11:19:44 crc kubenswrapper[4881]: I0121 11:19:44.669558 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="34efcb76-01fb-490b-88c0-a4ee1363a01e" containerName="keystone-db-sync"
Jan 21 11:19:44 crc kubenswrapper[4881]: I0121 11:19:44.670031 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="34efcb76-01fb-490b-88c0-a4ee1363a01e" containerName="keystone-db-sync"
Jan 21 11:19:44 crc kubenswrapper[4881]: I0121 11:19:44.671879 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7c487768dc-xjcjd"
Jan 21 11:19:44 crc kubenswrapper[4881]: I0121 11:19:44.704284 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7c487768dc-xjcjd"]
Jan 21 11:19:44 crc kubenswrapper[4881]: I0121 11:19:44.716664 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/386c2ea0-a9e4-490b-b83d-9106af06cd60-ovsdbserver-sb\") pod \"dnsmasq-dns-7c487768dc-xjcjd\" (UID: \"386c2ea0-a9e4-490b-b83d-9106af06cd60\") " pod="openstack/dnsmasq-dns-7c487768dc-xjcjd"
Jan 21 11:19:44 crc kubenswrapper[4881]: I0121 11:19:44.716730 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tks72\" (UniqueName: \"kubernetes.io/projected/386c2ea0-a9e4-490b-b83d-9106af06cd60-kube-api-access-tks72\") pod \"dnsmasq-dns-7c487768dc-xjcjd\" (UID: \"386c2ea0-a9e4-490b-b83d-9106af06cd60\") " pod="openstack/dnsmasq-dns-7c487768dc-xjcjd"
Jan 21 11:19:44 crc kubenswrapper[4881]: I0121 11:19:44.716871 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/386c2ea0-a9e4-490b-b83d-9106af06cd60-ovsdbserver-nb\") pod \"dnsmasq-dns-7c487768dc-xjcjd\" (UID: \"386c2ea0-a9e4-490b-b83d-9106af06cd60\") " pod="openstack/dnsmasq-dns-7c487768dc-xjcjd"
Jan 21 11:19:44 crc kubenswrapper[4881]: I0121 11:19:44.716903 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/386c2ea0-a9e4-490b-b83d-9106af06cd60-config\") pod \"dnsmasq-dns-7c487768dc-xjcjd\" (UID: \"386c2ea0-a9e4-490b-b83d-9106af06cd60\") " pod="openstack/dnsmasq-dns-7c487768dc-xjcjd"
Jan 21 11:19:44 crc kubenswrapper[4881]: I0121 11:19:44.716938 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/386c2ea0-a9e4-490b-b83d-9106af06cd60-dns-swift-storage-0\") pod \"dnsmasq-dns-7c487768dc-xjcjd\" (UID: \"386c2ea0-a9e4-490b-b83d-9106af06cd60\") " pod="openstack/dnsmasq-dns-7c487768dc-xjcjd"
Jan 21 11:19:44 crc kubenswrapper[4881]: I0121 11:19:44.716990 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/386c2ea0-a9e4-490b-b83d-9106af06cd60-dns-svc\") pod \"dnsmasq-dns-7c487768dc-xjcjd\" (UID: \"386c2ea0-a9e4-490b-b83d-9106af06cd60\") " pod="openstack/dnsmasq-dns-7c487768dc-xjcjd"
Jan 21 11:19:44 crc kubenswrapper[4881]: I0121 11:19:44.729268 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-wg7xs"]
Jan 21 11:19:44 crc kubenswrapper[4881]: I0121 11:19:44.743288 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-wg7xs"
Jan 21 11:19:44 crc kubenswrapper[4881]: I0121 11:19:44.751994 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret"
Jan 21 11:19:44 crc kubenswrapper[4881]: I0121 11:19:44.752375 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone"
Jan 21 11:19:44 crc kubenswrapper[4881]: I0121 11:19:44.752694 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data"
Jan 21 11:19:44 crc kubenswrapper[4881]: I0121 11:19:44.752911 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-j54nk"
Jan 21 11:19:44 crc kubenswrapper[4881]: I0121 11:19:44.753079 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts"
Jan 21 11:19:44 crc kubenswrapper[4881]: I0121 11:19:44.774435 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-wg7xs"]
Jan 21 11:19:44 crc kubenswrapper[4881]: I0121 11:19:44.825501 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc3f2556-7427-4715-a56d-bbd3d7f8422f-combined-ca-bundle\") pod \"keystone-bootstrap-wg7xs\" (UID: \"cc3f2556-7427-4715-a56d-bbd3d7f8422f\") " pod="openstack/keystone-bootstrap-wg7xs"
Jan 21 11:19:44 crc kubenswrapper[4881]: I0121 11:19:44.825567 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cc3f2556-7427-4715-a56d-bbd3d7f8422f-config-data\") pod \"keystone-bootstrap-wg7xs\" (UID: \"cc3f2556-7427-4715-a56d-bbd3d7f8422f\") " pod="openstack/keystone-bootstrap-wg7xs"
Jan 21 11:19:44 crc kubenswrapper[4881]: I0121 11:19:44.825691 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/386c2ea0-a9e4-490b-b83d-9106af06cd60-ovsdbserver-sb\") pod \"dnsmasq-dns-7c487768dc-xjcjd\" (UID: \"386c2ea0-a9e4-490b-b83d-9106af06cd60\") " pod="openstack/dnsmasq-dns-7c487768dc-xjcjd"
Jan 21 11:19:44 crc kubenswrapper[4881]: I0121 11:19:44.825750 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tks72\" (UniqueName: \"kubernetes.io/projected/386c2ea0-a9e4-490b-b83d-9106af06cd60-kube-api-access-tks72\") pod \"dnsmasq-dns-7c487768dc-xjcjd\" (UID: \"386c2ea0-a9e4-490b-b83d-9106af06cd60\") " pod="openstack/dnsmasq-dns-7c487768dc-xjcjd"
Jan 21 11:19:44 crc kubenswrapper[4881]: I0121 11:19:44.825852 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/cc3f2556-7427-4715-a56d-bbd3d7f8422f-fernet-keys\") pod \"keystone-bootstrap-wg7xs\" (UID: \"cc3f2556-7427-4715-a56d-bbd3d7f8422f\") " pod="openstack/keystone-bootstrap-wg7xs"
Jan 21 11:19:44 crc kubenswrapper[4881]: I0121 11:19:44.825939 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6vbj2\" (UniqueName: \"kubernetes.io/projected/cc3f2556-7427-4715-a56d-bbd3d7f8422f-kube-api-access-6vbj2\") pod \"keystone-bootstrap-wg7xs\" (UID: \"cc3f2556-7427-4715-a56d-bbd3d7f8422f\") " pod="openstack/keystone-bootstrap-wg7xs"
Jan 21 11:19:44 crc kubenswrapper[4881]: I0121 11:19:44.826004 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cc3f2556-7427-4715-a56d-bbd3d7f8422f-scripts\") pod \"keystone-bootstrap-wg7xs\" (UID: \"cc3f2556-7427-4715-a56d-bbd3d7f8422f\") " pod="openstack/keystone-bootstrap-wg7xs"
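
The reflector.go:368 "Caches populated" entries record kubelet's watch-based secret/configmap manager completing its initial list for each object a pod references. The sketch below illustrates the same list-then-watch pattern with a client-go informer (illustrative only: kubelet watches individual objects rather than a whole namespace, and the kubeconfig path is hypothetical):

package main

import (
	"context"
	"fmt"
	"time"

	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // hypothetical path
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// List Secrets in "openstack" once, populate a local cache, then keep it
	// current with a watch -- the behavior the reflector entries describe.
	factory := informers.NewSharedInformerFactoryWithOptions(cs, 10*time.Minute,
		informers.WithNamespace("openstack"))
	inf := factory.Core().V1().Secrets().Informer()

	ctx, cancel := context.WithCancel(context.Background())
	defer cancel()
	factory.Start(ctx.Done())
	if !cache.WaitForCacheSync(ctx.Done(), inf.HasSynced) {
		panic("cache never synced")
	}
	fmt.Println("caches populated for *v1.Secret")
}
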
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cc3f2556-7427-4715-a56d-bbd3d7f8422f-scripts\") pod \"keystone-bootstrap-wg7xs\" (UID: \"cc3f2556-7427-4715-a56d-bbd3d7f8422f\") " pod="openstack/keystone-bootstrap-wg7xs" Jan 21 11:19:44 crc kubenswrapper[4881]: I0121 11:19:44.826102 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/386c2ea0-a9e4-490b-b83d-9106af06cd60-ovsdbserver-nb\") pod \"dnsmasq-dns-7c487768dc-xjcjd\" (UID: \"386c2ea0-a9e4-490b-b83d-9106af06cd60\") " pod="openstack/dnsmasq-dns-7c487768dc-xjcjd" Jan 21 11:19:44 crc kubenswrapper[4881]: I0121 11:19:44.826139 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/386c2ea0-a9e4-490b-b83d-9106af06cd60-config\") pod \"dnsmasq-dns-7c487768dc-xjcjd\" (UID: \"386c2ea0-a9e4-490b-b83d-9106af06cd60\") " pod="openstack/dnsmasq-dns-7c487768dc-xjcjd" Jan 21 11:19:44 crc kubenswrapper[4881]: I0121 11:19:44.826178 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/cc3f2556-7427-4715-a56d-bbd3d7f8422f-credential-keys\") pod \"keystone-bootstrap-wg7xs\" (UID: \"cc3f2556-7427-4715-a56d-bbd3d7f8422f\") " pod="openstack/keystone-bootstrap-wg7xs" Jan 21 11:19:44 crc kubenswrapper[4881]: I0121 11:19:44.826217 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/386c2ea0-a9e4-490b-b83d-9106af06cd60-dns-swift-storage-0\") pod \"dnsmasq-dns-7c487768dc-xjcjd\" (UID: \"386c2ea0-a9e4-490b-b83d-9106af06cd60\") " pod="openstack/dnsmasq-dns-7c487768dc-xjcjd" Jan 21 11:19:44 crc kubenswrapper[4881]: I0121 11:19:44.826432 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/386c2ea0-a9e4-490b-b83d-9106af06cd60-dns-svc\") pod \"dnsmasq-dns-7c487768dc-xjcjd\" (UID: \"386c2ea0-a9e4-490b-b83d-9106af06cd60\") " pod="openstack/dnsmasq-dns-7c487768dc-xjcjd" Jan 21 11:19:44 crc kubenswrapper[4881]: I0121 11:19:44.827877 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/386c2ea0-a9e4-490b-b83d-9106af06cd60-ovsdbserver-nb\") pod \"dnsmasq-dns-7c487768dc-xjcjd\" (UID: \"386c2ea0-a9e4-490b-b83d-9106af06cd60\") " pod="openstack/dnsmasq-dns-7c487768dc-xjcjd" Jan 21 11:19:44 crc kubenswrapper[4881]: I0121 11:19:44.832707 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/386c2ea0-a9e4-490b-b83d-9106af06cd60-ovsdbserver-sb\") pod \"dnsmasq-dns-7c487768dc-xjcjd\" (UID: \"386c2ea0-a9e4-490b-b83d-9106af06cd60\") " pod="openstack/dnsmasq-dns-7c487768dc-xjcjd" Jan 21 11:19:44 crc kubenswrapper[4881]: I0121 11:19:44.836579 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/386c2ea0-a9e4-490b-b83d-9106af06cd60-dns-svc\") pod \"dnsmasq-dns-7c487768dc-xjcjd\" (UID: \"386c2ea0-a9e4-490b-b83d-9106af06cd60\") " pod="openstack/dnsmasq-dns-7c487768dc-xjcjd" Jan 21 11:19:44 crc kubenswrapper[4881]: I0121 11:19:44.838183 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/386c2ea0-a9e4-490b-b83d-9106af06cd60-dns-swift-storage-0\") pod \"dnsmasq-dns-7c487768dc-xjcjd\" (UID: \"386c2ea0-a9e4-490b-b83d-9106af06cd60\") " pod="openstack/dnsmasq-dns-7c487768dc-xjcjd" Jan 21 11:19:44 crc kubenswrapper[4881]: I0121 11:19:44.839041 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/386c2ea0-a9e4-490b-b83d-9106af06cd60-config\") pod \"dnsmasq-dns-7c487768dc-xjcjd\" (UID: \"386c2ea0-a9e4-490b-b83d-9106af06cd60\") " pod="openstack/dnsmasq-dns-7c487768dc-xjcjd" Jan 21 11:19:44 crc kubenswrapper[4881]: I0121 11:19:44.915010 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tks72\" (UniqueName: \"kubernetes.io/projected/386c2ea0-a9e4-490b-b83d-9106af06cd60-kube-api-access-tks72\") pod \"dnsmasq-dns-7c487768dc-xjcjd\" (UID: \"386c2ea0-a9e4-490b-b83d-9106af06cd60\") " pod="openstack/dnsmasq-dns-7c487768dc-xjcjd" Jan 21 11:19:44 crc kubenswrapper[4881]: I0121 11:19:44.928980 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc3f2556-7427-4715-a56d-bbd3d7f8422f-combined-ca-bundle\") pod \"keystone-bootstrap-wg7xs\" (UID: \"cc3f2556-7427-4715-a56d-bbd3d7f8422f\") " pod="openstack/keystone-bootstrap-wg7xs" Jan 21 11:19:44 crc kubenswrapper[4881]: I0121 11:19:44.929028 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cc3f2556-7427-4715-a56d-bbd3d7f8422f-config-data\") pod \"keystone-bootstrap-wg7xs\" (UID: \"cc3f2556-7427-4715-a56d-bbd3d7f8422f\") " pod="openstack/keystone-bootstrap-wg7xs" Jan 21 11:19:44 crc kubenswrapper[4881]: I0121 11:19:44.929114 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/cc3f2556-7427-4715-a56d-bbd3d7f8422f-fernet-keys\") pod \"keystone-bootstrap-wg7xs\" (UID: \"cc3f2556-7427-4715-a56d-bbd3d7f8422f\") " pod="openstack/keystone-bootstrap-wg7xs" Jan 21 11:19:44 crc kubenswrapper[4881]: I0121 11:19:44.929163 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6vbj2\" (UniqueName: \"kubernetes.io/projected/cc3f2556-7427-4715-a56d-bbd3d7f8422f-kube-api-access-6vbj2\") pod \"keystone-bootstrap-wg7xs\" (UID: \"cc3f2556-7427-4715-a56d-bbd3d7f8422f\") " pod="openstack/keystone-bootstrap-wg7xs" Jan 21 11:19:44 crc kubenswrapper[4881]: I0121 11:19:44.929200 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cc3f2556-7427-4715-a56d-bbd3d7f8422f-scripts\") pod \"keystone-bootstrap-wg7xs\" (UID: \"cc3f2556-7427-4715-a56d-bbd3d7f8422f\") " pod="openstack/keystone-bootstrap-wg7xs" Jan 21 11:19:44 crc kubenswrapper[4881]: I0121 11:19:44.929259 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/cc3f2556-7427-4715-a56d-bbd3d7f8422f-credential-keys\") pod \"keystone-bootstrap-wg7xs\" (UID: \"cc3f2556-7427-4715-a56d-bbd3d7f8422f\") " pod="openstack/keystone-bootstrap-wg7xs" Jan 21 11:19:44 crc kubenswrapper[4881]: I0121 11:19:44.936007 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/cc3f2556-7427-4715-a56d-bbd3d7f8422f-fernet-keys\") pod \"keystone-bootstrap-wg7xs\" (UID: 
\"cc3f2556-7427-4715-a56d-bbd3d7f8422f\") " pod="openstack/keystone-bootstrap-wg7xs" Jan 21 11:19:44 crc kubenswrapper[4881]: I0121 11:19:44.939653 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cc3f2556-7427-4715-a56d-bbd3d7f8422f-config-data\") pod \"keystone-bootstrap-wg7xs\" (UID: \"cc3f2556-7427-4715-a56d-bbd3d7f8422f\") " pod="openstack/keystone-bootstrap-wg7xs" Jan 21 11:19:44 crc kubenswrapper[4881]: I0121 11:19:44.942942 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/cc3f2556-7427-4715-a56d-bbd3d7f8422f-credential-keys\") pod \"keystone-bootstrap-wg7xs\" (UID: \"cc3f2556-7427-4715-a56d-bbd3d7f8422f\") " pod="openstack/keystone-bootstrap-wg7xs" Jan 21 11:19:44 crc kubenswrapper[4881]: I0121 11:19:44.943117 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cc3f2556-7427-4715-a56d-bbd3d7f8422f-scripts\") pod \"keystone-bootstrap-wg7xs\" (UID: \"cc3f2556-7427-4715-a56d-bbd3d7f8422f\") " pod="openstack/keystone-bootstrap-wg7xs" Jan 21 11:19:44 crc kubenswrapper[4881]: I0121 11:19:44.947116 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc3f2556-7427-4715-a56d-bbd3d7f8422f-combined-ca-bundle\") pod \"keystone-bootstrap-wg7xs\" (UID: \"cc3f2556-7427-4715-a56d-bbd3d7f8422f\") " pod="openstack/keystone-bootstrap-wg7xs" Jan 21 11:19:44 crc kubenswrapper[4881]: I0121 11:19:44.974247 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6vbj2\" (UniqueName: \"kubernetes.io/projected/cc3f2556-7427-4715-a56d-bbd3d7f8422f-kube-api-access-6vbj2\") pod \"keystone-bootstrap-wg7xs\" (UID: \"cc3f2556-7427-4715-a56d-bbd3d7f8422f\") " pod="openstack/keystone-bootstrap-wg7xs" Jan 21 11:19:45 crc kubenswrapper[4881]: I0121 11:19:45.012152 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7c487768dc-xjcjd" Jan 21 11:19:45 crc kubenswrapper[4881]: I0121 11:19:45.058636 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-77fb486557-zjtxw"] Jan 21 11:19:45 crc kubenswrapper[4881]: I0121 11:19:45.072490 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-77fb486557-zjtxw" Jan 21 11:19:45 crc kubenswrapper[4881]: I0121 11:19:45.099985 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-wg7xs" Jan 21 11:19:45 crc kubenswrapper[4881]: I0121 11:19:45.101620 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-77fb486557-zjtxw"] Jan 21 11:19:45 crc kubenswrapper[4881]: I0121 11:19:45.111767 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon" Jan 21 11:19:45 crc kubenswrapper[4881]: I0121 11:19:45.112331 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-scripts" Jan 21 11:19:45 crc kubenswrapper[4881]: I0121 11:19:45.120967 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon-horizon-dockercfg-2zrv4" Jan 21 11:19:45 crc kubenswrapper[4881]: I0121 11:19:45.122067 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-config-data" Jan 21 11:19:45 crc kubenswrapper[4881]: I0121 11:19:45.160003 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 21 11:19:45 crc kubenswrapper[4881]: I0121 11:19:45.163397 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 21 11:19:45 crc kubenswrapper[4881]: I0121 11:19:45.176885 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 21 11:19:45 crc kubenswrapper[4881]: I0121 11:19:45.177163 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 21 11:19:45 crc kubenswrapper[4881]: I0121 11:19:45.224815 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/d96c79b7-58c4-4bcc-9e56-02f2a8860764-horizon-secret-key\") pod \"horizon-77fb486557-zjtxw\" (UID: \"d96c79b7-58c4-4bcc-9e56-02f2a8860764\") " pod="openstack/horizon-77fb486557-zjtxw" Jan 21 11:19:45 crc kubenswrapper[4881]: I0121 11:19:45.224926 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t6lrj\" (UniqueName: \"kubernetes.io/projected/d96c79b7-58c4-4bcc-9e56-02f2a8860764-kube-api-access-t6lrj\") pod \"horizon-77fb486557-zjtxw\" (UID: \"d96c79b7-58c4-4bcc-9e56-02f2a8860764\") " pod="openstack/horizon-77fb486557-zjtxw" Jan 21 11:19:45 crc kubenswrapper[4881]: I0121 11:19:45.225011 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bcec3c24-87bd-4c22-a800-d3835455a38b-config-data\") pod \"ceilometer-0\" (UID: \"bcec3c24-87bd-4c22-a800-d3835455a38b\") " pod="openstack/ceilometer-0" Jan 21 11:19:45 crc kubenswrapper[4881]: I0121 11:19:45.225058 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d96c79b7-58c4-4bcc-9e56-02f2a8860764-logs\") pod \"horizon-77fb486557-zjtxw\" (UID: \"d96c79b7-58c4-4bcc-9e56-02f2a8860764\") " pod="openstack/horizon-77fb486557-zjtxw" Jan 21 11:19:45 crc kubenswrapper[4881]: I0121 11:19:45.225086 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bcec3c24-87bd-4c22-a800-d3835455a38b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"bcec3c24-87bd-4c22-a800-d3835455a38b\") " pod="openstack/ceilometer-0" Jan 21 11:19:45 crc kubenswrapper[4881]: I0121 11:19:45.225108 4881 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bcec3c24-87bd-4c22-a800-d3835455a38b-log-httpd\") pod \"ceilometer-0\" (UID: \"bcec3c24-87bd-4c22-a800-d3835455a38b\") " pod="openstack/ceilometer-0" Jan 21 11:19:45 crc kubenswrapper[4881]: I0121 11:19:45.225162 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d96c79b7-58c4-4bcc-9e56-02f2a8860764-config-data\") pod \"horizon-77fb486557-zjtxw\" (UID: \"d96c79b7-58c4-4bcc-9e56-02f2a8860764\") " pod="openstack/horizon-77fb486557-zjtxw" Jan 21 11:19:45 crc kubenswrapper[4881]: I0121 11:19:45.225192 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d96c79b7-58c4-4bcc-9e56-02f2a8860764-scripts\") pod \"horizon-77fb486557-zjtxw\" (UID: \"d96c79b7-58c4-4bcc-9e56-02f2a8860764\") " pod="openstack/horizon-77fb486557-zjtxw" Jan 21 11:19:45 crc kubenswrapper[4881]: I0121 11:19:45.225230 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bcec3c24-87bd-4c22-a800-d3835455a38b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"bcec3c24-87bd-4c22-a800-d3835455a38b\") " pod="openstack/ceilometer-0" Jan 21 11:19:45 crc kubenswrapper[4881]: I0121 11:19:45.225251 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bcec3c24-87bd-4c22-a800-d3835455a38b-scripts\") pod \"ceilometer-0\" (UID: \"bcec3c24-87bd-4c22-a800-d3835455a38b\") " pod="openstack/ceilometer-0" Jan 21 11:19:45 crc kubenswrapper[4881]: I0121 11:19:45.225278 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bcec3c24-87bd-4c22-a800-d3835455a38b-run-httpd\") pod \"ceilometer-0\" (UID: \"bcec3c24-87bd-4c22-a800-d3835455a38b\") " pod="openstack/ceilometer-0" Jan 21 11:19:45 crc kubenswrapper[4881]: I0121 11:19:45.225316 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bj6cp\" (UniqueName: \"kubernetes.io/projected/bcec3c24-87bd-4c22-a800-d3835455a38b-kube-api-access-bj6cp\") pod \"ceilometer-0\" (UID: \"bcec3c24-87bd-4c22-a800-d3835455a38b\") " pod="openstack/ceilometer-0" Jan 21 11:19:45 crc kubenswrapper[4881]: I0121 11:19:45.330841 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d96c79b7-58c4-4bcc-9e56-02f2a8860764-config-data\") pod \"horizon-77fb486557-zjtxw\" (UID: \"d96c79b7-58c4-4bcc-9e56-02f2a8860764\") " pod="openstack/horizon-77fb486557-zjtxw" Jan 21 11:19:45 crc kubenswrapper[4881]: I0121 11:19:45.337734 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d96c79b7-58c4-4bcc-9e56-02f2a8860764-scripts\") pod \"horizon-77fb486557-zjtxw\" (UID: \"d96c79b7-58c4-4bcc-9e56-02f2a8860764\") " pod="openstack/horizon-77fb486557-zjtxw" Jan 21 11:19:45 crc kubenswrapper[4881]: I0121 11:19:45.337902 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/bcec3c24-87bd-4c22-a800-d3835455a38b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"bcec3c24-87bd-4c22-a800-d3835455a38b\") " pod="openstack/ceilometer-0" Jan 21 11:19:45 crc kubenswrapper[4881]: I0121 11:19:45.337960 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bcec3c24-87bd-4c22-a800-d3835455a38b-scripts\") pod \"ceilometer-0\" (UID: \"bcec3c24-87bd-4c22-a800-d3835455a38b\") " pod="openstack/ceilometer-0" Jan 21 11:19:45 crc kubenswrapper[4881]: I0121 11:19:45.338006 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bcec3c24-87bd-4c22-a800-d3835455a38b-run-httpd\") pod \"ceilometer-0\" (UID: \"bcec3c24-87bd-4c22-a800-d3835455a38b\") " pod="openstack/ceilometer-0" Jan 21 11:19:45 crc kubenswrapper[4881]: I0121 11:19:45.338062 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bj6cp\" (UniqueName: \"kubernetes.io/projected/bcec3c24-87bd-4c22-a800-d3835455a38b-kube-api-access-bj6cp\") pod \"ceilometer-0\" (UID: \"bcec3c24-87bd-4c22-a800-d3835455a38b\") " pod="openstack/ceilometer-0" Jan 21 11:19:45 crc kubenswrapper[4881]: I0121 11:19:45.338217 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/d96c79b7-58c4-4bcc-9e56-02f2a8860764-horizon-secret-key\") pod \"horizon-77fb486557-zjtxw\" (UID: \"d96c79b7-58c4-4bcc-9e56-02f2a8860764\") " pod="openstack/horizon-77fb486557-zjtxw" Jan 21 11:19:45 crc kubenswrapper[4881]: I0121 11:19:45.338245 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t6lrj\" (UniqueName: \"kubernetes.io/projected/d96c79b7-58c4-4bcc-9e56-02f2a8860764-kube-api-access-t6lrj\") pod \"horizon-77fb486557-zjtxw\" (UID: \"d96c79b7-58c4-4bcc-9e56-02f2a8860764\") " pod="openstack/horizon-77fb486557-zjtxw" Jan 21 11:19:45 crc kubenswrapper[4881]: I0121 11:19:45.338357 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bcec3c24-87bd-4c22-a800-d3835455a38b-config-data\") pod \"ceilometer-0\" (UID: \"bcec3c24-87bd-4c22-a800-d3835455a38b\") " pod="openstack/ceilometer-0" Jan 21 11:19:45 crc kubenswrapper[4881]: I0121 11:19:45.338439 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d96c79b7-58c4-4bcc-9e56-02f2a8860764-logs\") pod \"horizon-77fb486557-zjtxw\" (UID: \"d96c79b7-58c4-4bcc-9e56-02f2a8860764\") " pod="openstack/horizon-77fb486557-zjtxw" Jan 21 11:19:45 crc kubenswrapper[4881]: I0121 11:19:45.338459 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bcec3c24-87bd-4c22-a800-d3835455a38b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"bcec3c24-87bd-4c22-a800-d3835455a38b\") " pod="openstack/ceilometer-0" Jan 21 11:19:45 crc kubenswrapper[4881]: I0121 11:19:45.338485 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bcec3c24-87bd-4c22-a800-d3835455a38b-log-httpd\") pod \"ceilometer-0\" (UID: \"bcec3c24-87bd-4c22-a800-d3835455a38b\") " pod="openstack/ceilometer-0" Jan 21 11:19:45 crc kubenswrapper[4881]: I0121 11:19:45.344656 4881 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bcec3c24-87bd-4c22-a800-d3835455a38b-log-httpd\") pod \"ceilometer-0\" (UID: \"bcec3c24-87bd-4c22-a800-d3835455a38b\") " pod="openstack/ceilometer-0" Jan 21 11:19:45 crc kubenswrapper[4881]: I0121 11:19:45.348373 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d96c79b7-58c4-4bcc-9e56-02f2a8860764-logs\") pod \"horizon-77fb486557-zjtxw\" (UID: \"d96c79b7-58c4-4bcc-9e56-02f2a8860764\") " pod="openstack/horizon-77fb486557-zjtxw" Jan 21 11:19:45 crc kubenswrapper[4881]: I0121 11:19:45.348742 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bcec3c24-87bd-4c22-a800-d3835455a38b-run-httpd\") pod \"ceilometer-0\" (UID: \"bcec3c24-87bd-4c22-a800-d3835455a38b\") " pod="openstack/ceilometer-0" Jan 21 11:19:45 crc kubenswrapper[4881]: I0121 11:19:45.352618 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/d96c79b7-58c4-4bcc-9e56-02f2a8860764-horizon-secret-key\") pod \"horizon-77fb486557-zjtxw\" (UID: \"d96c79b7-58c4-4bcc-9e56-02f2a8860764\") " pod="openstack/horizon-77fb486557-zjtxw" Jan 21 11:19:45 crc kubenswrapper[4881]: I0121 11:19:45.355005 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d96c79b7-58c4-4bcc-9e56-02f2a8860764-scripts\") pod \"horizon-77fb486557-zjtxw\" (UID: \"d96c79b7-58c4-4bcc-9e56-02f2a8860764\") " pod="openstack/horizon-77fb486557-zjtxw" Jan 21 11:19:45 crc kubenswrapper[4881]: I0121 11:19:45.356178 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d96c79b7-58c4-4bcc-9e56-02f2a8860764-config-data\") pod \"horizon-77fb486557-zjtxw\" (UID: \"d96c79b7-58c4-4bcc-9e56-02f2a8860764\") " pod="openstack/horizon-77fb486557-zjtxw" Jan 21 11:19:45 crc kubenswrapper[4881]: I0121 11:19:45.379996 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bcec3c24-87bd-4c22-a800-d3835455a38b-scripts\") pod \"ceilometer-0\" (UID: \"bcec3c24-87bd-4c22-a800-d3835455a38b\") " pod="openstack/ceilometer-0" Jan 21 11:19:45 crc kubenswrapper[4881]: I0121 11:19:45.390672 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bcec3c24-87bd-4c22-a800-d3835455a38b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"bcec3c24-87bd-4c22-a800-d3835455a38b\") " pod="openstack/ceilometer-0" Jan 21 11:19:45 crc kubenswrapper[4881]: I0121 11:19:45.396713 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bcec3c24-87bd-4c22-a800-d3835455a38b-config-data\") pod \"ceilometer-0\" (UID: \"bcec3c24-87bd-4c22-a800-d3835455a38b\") " pod="openstack/ceilometer-0" Jan 21 11:19:45 crc kubenswrapper[4881]: I0121 11:19:45.398333 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bcec3c24-87bd-4c22-a800-d3835455a38b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"bcec3c24-87bd-4c22-a800-d3835455a38b\") " pod="openstack/ceilometer-0" Jan 21 11:19:45 crc kubenswrapper[4881]: I0121 11:19:45.407197 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 21 
11:19:45 crc kubenswrapper[4881]: I0121 11:19:45.434439 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bj6cp\" (UniqueName: \"kubernetes.io/projected/bcec3c24-87bd-4c22-a800-d3835455a38b-kube-api-access-bj6cp\") pod \"ceilometer-0\" (UID: \"bcec3c24-87bd-4c22-a800-d3835455a38b\") " pod="openstack/ceilometer-0" Jan 21 11:19:45 crc kubenswrapper[4881]: I0121 11:19:45.439461 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-slhtz"] Jan 21 11:19:45 crc kubenswrapper[4881]: I0121 11:19:45.440991 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-slhtz" Jan 21 11:19:45 crc kubenswrapper[4881]: I0121 11:19:45.454081 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t6lrj\" (UniqueName: \"kubernetes.io/projected/d96c79b7-58c4-4bcc-9e56-02f2a8860764-kube-api-access-t6lrj\") pod \"horizon-77fb486557-zjtxw\" (UID: \"d96c79b7-58c4-4bcc-9e56-02f2a8860764\") " pod="openstack/horizon-77fb486557-zjtxw" Jan 21 11:19:45 crc kubenswrapper[4881]: I0121 11:19:45.455281 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-cl6xz" Jan 21 11:19:45 crc kubenswrapper[4881]: I0121 11:19:45.455595 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 21 11:19:45 crc kubenswrapper[4881]: I0121 11:19:45.472578 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-slhtz"] Jan 21 11:19:45 crc kubenswrapper[4881]: I0121 11:19:45.497843 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-t6mz2"] Jan 21 11:19:45 crc kubenswrapper[4881]: I0121 11:19:45.499625 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-t6mz2" Jan 21 11:19:45 crc kubenswrapper[4881]: I0121 11:19:45.535053 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Jan 21 11:19:45 crc kubenswrapper[4881]: I0121 11:19:45.535662 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Jan 21 11:19:45 crc kubenswrapper[4881]: I0121 11:19:45.535950 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-kj7bj" Jan 21 11:19:45 crc kubenswrapper[4881]: I0121 11:19:45.549744 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/869a596b-159c-4185-a4ab-0e36c5d130fc-config\") pod \"neutron-db-sync-t6mz2\" (UID: \"869a596b-159c-4185-a4ab-0e36c5d130fc\") " pod="openstack/neutron-db-sync-t6mz2" Jan 21 11:19:45 crc kubenswrapper[4881]: I0121 11:19:45.549869 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dscc6\" (UniqueName: \"kubernetes.io/projected/869a596b-159c-4185-a4ab-0e36c5d130fc-kube-api-access-dscc6\") pod \"neutron-db-sync-t6mz2\" (UID: \"869a596b-159c-4185-a4ab-0e36c5d130fc\") " pod="openstack/neutron-db-sync-t6mz2" Jan 21 11:19:45 crc kubenswrapper[4881]: I0121 11:19:45.549960 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4bf52889-d5f3-44f8-b657-8ff3790962d1-combined-ca-bundle\") pod \"barbican-db-sync-slhtz\" (UID: \"4bf52889-d5f3-44f8-b657-8ff3790962d1\") " pod="openstack/barbican-db-sync-slhtz" Jan 21 11:19:45 crc kubenswrapper[4881]: I0121 11:19:45.549988 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/869a596b-159c-4185-a4ab-0e36c5d130fc-combined-ca-bundle\") pod \"neutron-db-sync-t6mz2\" (UID: \"869a596b-159c-4185-a4ab-0e36c5d130fc\") " pod="openstack/neutron-db-sync-t6mz2" Jan 21 11:19:45 crc kubenswrapper[4881]: I0121 11:19:45.550021 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/4bf52889-d5f3-44f8-b657-8ff3790962d1-db-sync-config-data\") pod \"barbican-db-sync-slhtz\" (UID: \"4bf52889-d5f3-44f8-b657-8ff3790962d1\") " pod="openstack/barbican-db-sync-slhtz" Jan 21 11:19:45 crc kubenswrapper[4881]: I0121 11:19:45.550106 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j7pcb\" (UniqueName: \"kubernetes.io/projected/4bf52889-d5f3-44f8-b657-8ff3790962d1-kube-api-access-j7pcb\") pod \"barbican-db-sync-slhtz\" (UID: \"4bf52889-d5f3-44f8-b657-8ff3790962d1\") " pod="openstack/barbican-db-sync-slhtz" Jan 21 11:19:45 crc kubenswrapper[4881]: I0121 11:19:45.594374 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 21 11:19:45 crc kubenswrapper[4881]: I0121 11:19:45.596260 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-t6mz2"] Jan 21 11:19:45 crc kubenswrapper[4881]: I0121 11:19:45.654038 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4bf52889-d5f3-44f8-b657-8ff3790962d1-combined-ca-bundle\") pod \"barbican-db-sync-slhtz\" (UID: \"4bf52889-d5f3-44f8-b657-8ff3790962d1\") " pod="openstack/barbican-db-sync-slhtz" Jan 21 11:19:45 crc kubenswrapper[4881]: I0121 11:19:45.654109 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/869a596b-159c-4185-a4ab-0e36c5d130fc-combined-ca-bundle\") pod \"neutron-db-sync-t6mz2\" (UID: \"869a596b-159c-4185-a4ab-0e36c5d130fc\") " pod="openstack/neutron-db-sync-t6mz2" Jan 21 11:19:45 crc kubenswrapper[4881]: I0121 11:19:45.654145 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/4bf52889-d5f3-44f8-b657-8ff3790962d1-db-sync-config-data\") pod \"barbican-db-sync-slhtz\" (UID: \"4bf52889-d5f3-44f8-b657-8ff3790962d1\") " pod="openstack/barbican-db-sync-slhtz" Jan 21 11:19:45 crc kubenswrapper[4881]: I0121 11:19:45.654234 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j7pcb\" (UniqueName: \"kubernetes.io/projected/4bf52889-d5f3-44f8-b657-8ff3790962d1-kube-api-access-j7pcb\") pod \"barbican-db-sync-slhtz\" (UID: \"4bf52889-d5f3-44f8-b657-8ff3790962d1\") " pod="openstack/barbican-db-sync-slhtz" Jan 21 11:19:45 crc kubenswrapper[4881]: I0121 11:19:45.654280 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/869a596b-159c-4185-a4ab-0e36c5d130fc-config\") pod \"neutron-db-sync-t6mz2\" (UID: \"869a596b-159c-4185-a4ab-0e36c5d130fc\") " pod="openstack/neutron-db-sync-t6mz2" Jan 21 11:19:45 crc kubenswrapper[4881]: I0121 11:19:45.654348 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dscc6\" (UniqueName: \"kubernetes.io/projected/869a596b-159c-4185-a4ab-0e36c5d130fc-kube-api-access-dscc6\") pod \"neutron-db-sync-t6mz2\" (UID: \"869a596b-159c-4185-a4ab-0e36c5d130fc\") " pod="openstack/neutron-db-sync-t6mz2" Jan 21 11:19:45 crc kubenswrapper[4881]: I0121 11:19:45.678477 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/4bf52889-d5f3-44f8-b657-8ff3790962d1-db-sync-config-data\") pod \"barbican-db-sync-slhtz\" (UID: \"4bf52889-d5f3-44f8-b657-8ff3790962d1\") " pod="openstack/barbican-db-sync-slhtz" Jan 21 11:19:45 crc kubenswrapper[4881]: I0121 11:19:45.682503 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/869a596b-159c-4185-a4ab-0e36c5d130fc-combined-ca-bundle\") pod \"neutron-db-sync-t6mz2\" (UID: \"869a596b-159c-4185-a4ab-0e36c5d130fc\") " pod="openstack/neutron-db-sync-t6mz2" Jan 21 11:19:45 crc kubenswrapper[4881]: I0121 11:19:45.683132 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4bf52889-d5f3-44f8-b657-8ff3790962d1-combined-ca-bundle\") pod \"barbican-db-sync-slhtz\" (UID: 
\"4bf52889-d5f3-44f8-b657-8ff3790962d1\") " pod="openstack/barbican-db-sync-slhtz" Jan 21 11:19:45 crc kubenswrapper[4881]: I0121 11:19:45.688058 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-4wxvl"] Jan 21 11:19:45 crc kubenswrapper[4881]: I0121 11:19:45.689920 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-4wxvl" Jan 21 11:19:45 crc kubenswrapper[4881]: I0121 11:19:45.691625 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/869a596b-159c-4185-a4ab-0e36c5d130fc-config\") pod \"neutron-db-sync-t6mz2\" (UID: \"869a596b-159c-4185-a4ab-0e36c5d130fc\") " pod="openstack/neutron-db-sync-t6mz2" Jan 21 11:19:45 crc kubenswrapper[4881]: I0121 11:19:45.700004 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-9r4q7" Jan 21 11:19:45 crc kubenswrapper[4881]: I0121 11:19:45.713718 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-77fb486557-zjtxw" Jan 21 11:19:45 crc kubenswrapper[4881]: I0121 11:19:45.735593 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-4wxvl"] Jan 21 11:19:45 crc kubenswrapper[4881]: I0121 11:19:45.743055 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Jan 21 11:19:45 crc kubenswrapper[4881]: I0121 11:19:45.747234 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Jan 21 11:19:45 crc kubenswrapper[4881]: I0121 11:19:45.756401 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dscc6\" (UniqueName: \"kubernetes.io/projected/869a596b-159c-4185-a4ab-0e36c5d130fc-kube-api-access-dscc6\") pod \"neutron-db-sync-t6mz2\" (UID: \"869a596b-159c-4185-a4ab-0e36c5d130fc\") " pod="openstack/neutron-db-sync-t6mz2" Jan 21 11:19:45 crc kubenswrapper[4881]: I0121 11:19:45.771904 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j7pcb\" (UniqueName: \"kubernetes.io/projected/4bf52889-d5f3-44f8-b657-8ff3790962d1-kube-api-access-j7pcb\") pod \"barbican-db-sync-slhtz\" (UID: \"4bf52889-d5f3-44f8-b657-8ff3790962d1\") " pod="openstack/barbican-db-sync-slhtz" Jan 21 11:19:45 crc kubenswrapper[4881]: I0121 11:19:45.778834 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65250dcf-0f0f-4fa6-8d57-e07d3d29f290-combined-ca-bundle\") pod \"cinder-db-sync-4wxvl\" (UID: \"65250dcf-0f0f-4fa6-8d57-e07d3d29f290\") " pod="openstack/cinder-db-sync-4wxvl" Jan 21 11:19:45 crc kubenswrapper[4881]: I0121 11:19:45.787411 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/65250dcf-0f0f-4fa6-8d57-e07d3d29f290-scripts\") pod \"cinder-db-sync-4wxvl\" (UID: \"65250dcf-0f0f-4fa6-8d57-e07d3d29f290\") " pod="openstack/cinder-db-sync-4wxvl" Jan 21 11:19:45 crc kubenswrapper[4881]: I0121 11:19:45.787855 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ltkw6\" (UniqueName: \"kubernetes.io/projected/65250dcf-0f0f-4fa6-8d57-e07d3d29f290-kube-api-access-ltkw6\") pod \"cinder-db-sync-4wxvl\" (UID: \"65250dcf-0f0f-4fa6-8d57-e07d3d29f290\") " pod="openstack/cinder-db-sync-4wxvl" Jan 21 11:19:45 crc 
kubenswrapper[4881]: I0121 11:19:45.787973 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/65250dcf-0f0f-4fa6-8d57-e07d3d29f290-config-data\") pod \"cinder-db-sync-4wxvl\" (UID: \"65250dcf-0f0f-4fa6-8d57-e07d3d29f290\") " pod="openstack/cinder-db-sync-4wxvl" Jan 21 11:19:45 crc kubenswrapper[4881]: I0121 11:19:45.788148 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/65250dcf-0f0f-4fa6-8d57-e07d3d29f290-db-sync-config-data\") pod \"cinder-db-sync-4wxvl\" (UID: \"65250dcf-0f0f-4fa6-8d57-e07d3d29f290\") " pod="openstack/cinder-db-sync-4wxvl" Jan 21 11:19:45 crc kubenswrapper[4881]: I0121 11:19:45.790876 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/65250dcf-0f0f-4fa6-8d57-e07d3d29f290-etc-machine-id\") pod \"cinder-db-sync-4wxvl\" (UID: \"65250dcf-0f0f-4fa6-8d57-e07d3d29f290\") " pod="openstack/cinder-db-sync-4wxvl" Jan 21 11:19:45 crc kubenswrapper[4881]: I0121 11:19:45.801908 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7c487768dc-xjcjd"] Jan 21 11:19:45 crc kubenswrapper[4881]: I0121 11:19:45.845223 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-slhtz" Jan 21 11:19:45 crc kubenswrapper[4881]: I0121 11:19:45.849047 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-kc9jz"] Jan 21 11:19:45 crc kubenswrapper[4881]: I0121 11:19:45.850697 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-kc9jz" Jan 21 11:19:46 crc kubenswrapper[4881]: I0121 11:19:45.900495 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65250dcf-0f0f-4fa6-8d57-e07d3d29f290-combined-ca-bundle\") pod \"cinder-db-sync-4wxvl\" (UID: \"65250dcf-0f0f-4fa6-8d57-e07d3d29f290\") " pod="openstack/cinder-db-sync-4wxvl" Jan 21 11:19:46 crc kubenswrapper[4881]: I0121 11:19:45.900591 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/65250dcf-0f0f-4fa6-8d57-e07d3d29f290-scripts\") pod \"cinder-db-sync-4wxvl\" (UID: \"65250dcf-0f0f-4fa6-8d57-e07d3d29f290\") " pod="openstack/cinder-db-sync-4wxvl" Jan 21 11:19:46 crc kubenswrapper[4881]: I0121 11:19:45.900654 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ltkw6\" (UniqueName: \"kubernetes.io/projected/65250dcf-0f0f-4fa6-8d57-e07d3d29f290-kube-api-access-ltkw6\") pod \"cinder-db-sync-4wxvl\" (UID: \"65250dcf-0f0f-4fa6-8d57-e07d3d29f290\") " pod="openstack/cinder-db-sync-4wxvl" Jan 21 11:19:46 crc kubenswrapper[4881]: I0121 11:19:45.900684 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/65250dcf-0f0f-4fa6-8d57-e07d3d29f290-config-data\") pod \"cinder-db-sync-4wxvl\" (UID: \"65250dcf-0f0f-4fa6-8d57-e07d3d29f290\") " pod="openstack/cinder-db-sync-4wxvl" Jan 21 11:19:46 crc kubenswrapper[4881]: I0121 11:19:45.900738 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: 
\"kubernetes.io/secret/65250dcf-0f0f-4fa6-8d57-e07d3d29f290-db-sync-config-data\") pod \"cinder-db-sync-4wxvl\" (UID: \"65250dcf-0f0f-4fa6-8d57-e07d3d29f290\") " pod="openstack/cinder-db-sync-4wxvl" Jan 21 11:19:46 crc kubenswrapper[4881]: I0121 11:19:45.900768 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/65250dcf-0f0f-4fa6-8d57-e07d3d29f290-etc-machine-id\") pod \"cinder-db-sync-4wxvl\" (UID: \"65250dcf-0f0f-4fa6-8d57-e07d3d29f290\") " pod="openstack/cinder-db-sync-4wxvl" Jan 21 11:19:46 crc kubenswrapper[4881]: I0121 11:19:46.349122 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Jan 21 11:19:46 crc kubenswrapper[4881]: I0121 11:19:46.368295 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Jan 21 11:19:46 crc kubenswrapper[4881]: I0121 11:19:46.368527 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-dndng" Jan 21 11:19:46 crc kubenswrapper[4881]: I0121 11:19:46.390751 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/65250dcf-0f0f-4fa6-8d57-e07d3d29f290-etc-machine-id\") pod \"cinder-db-sync-4wxvl\" (UID: \"65250dcf-0f0f-4fa6-8d57-e07d3d29f290\") " pod="openstack/cinder-db-sync-4wxvl" Jan 21 11:19:46 crc kubenswrapper[4881]: I0121 11:19:46.391473 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-t6mz2" Jan 21 11:19:46 crc kubenswrapper[4881]: I0121 11:19:46.422113 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/65250dcf-0f0f-4fa6-8d57-e07d3d29f290-db-sync-config-data\") pod \"cinder-db-sync-4wxvl\" (UID: \"65250dcf-0f0f-4fa6-8d57-e07d3d29f290\") " pod="openstack/cinder-db-sync-4wxvl" Jan 21 11:19:46 crc kubenswrapper[4881]: I0121 11:19:46.427106 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/65250dcf-0f0f-4fa6-8d57-e07d3d29f290-scripts\") pod \"cinder-db-sync-4wxvl\" (UID: \"65250dcf-0f0f-4fa6-8d57-e07d3d29f290\") " pod="openstack/cinder-db-sync-4wxvl" Jan 21 11:19:46 crc kubenswrapper[4881]: I0121 11:19:46.428811 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/65250dcf-0f0f-4fa6-8d57-e07d3d29f290-config-data\") pod \"cinder-db-sync-4wxvl\" (UID: \"65250dcf-0f0f-4fa6-8d57-e07d3d29f290\") " pod="openstack/cinder-db-sync-4wxvl" Jan 21 11:19:46 crc kubenswrapper[4881]: I0121 11:19:46.431506 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65250dcf-0f0f-4fa6-8d57-e07d3d29f290-combined-ca-bundle\") pod \"cinder-db-sync-4wxvl\" (UID: \"65250dcf-0f0f-4fa6-8d57-e07d3d29f290\") " pod="openstack/cinder-db-sync-4wxvl" Jan 21 11:19:46 crc kubenswrapper[4881]: I0121 11:19:46.485085 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ltkw6\" (UniqueName: \"kubernetes.io/projected/65250dcf-0f0f-4fa6-8d57-e07d3d29f290-kube-api-access-ltkw6\") pod \"cinder-db-sync-4wxvl\" (UID: \"65250dcf-0f0f-4fa6-8d57-e07d3d29f290\") " pod="openstack/cinder-db-sync-4wxvl" Jan 21 11:19:46 crc kubenswrapper[4881]: I0121 11:19:46.556284 4881 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f568ffda-82a9-4f47-89d3-13b89a35c9b4-logs\") pod \"placement-db-sync-kc9jz\" (UID: \"f568ffda-82a9-4f47-89d3-13b89a35c9b4\") " pod="openstack/placement-db-sync-kc9jz" Jan 21 11:19:46 crc kubenswrapper[4881]: I0121 11:19:46.557031 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gv7qz\" (UniqueName: \"kubernetes.io/projected/f568ffda-82a9-4f47-89d3-13b89a35c9b4-kube-api-access-gv7qz\") pod \"placement-db-sync-kc9jz\" (UID: \"f568ffda-82a9-4f47-89d3-13b89a35c9b4\") " pod="openstack/placement-db-sync-kc9jz" Jan 21 11:19:46 crc kubenswrapper[4881]: I0121 11:19:46.557238 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f568ffda-82a9-4f47-89d3-13b89a35c9b4-config-data\") pod \"placement-db-sync-kc9jz\" (UID: \"f568ffda-82a9-4f47-89d3-13b89a35c9b4\") " pod="openstack/placement-db-sync-kc9jz" Jan 21 11:19:46 crc kubenswrapper[4881]: I0121 11:19:46.557347 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f568ffda-82a9-4f47-89d3-13b89a35c9b4-scripts\") pod \"placement-db-sync-kc9jz\" (UID: \"f568ffda-82a9-4f47-89d3-13b89a35c9b4\") " pod="openstack/placement-db-sync-kc9jz" Jan 21 11:19:46 crc kubenswrapper[4881]: I0121 11:19:46.557441 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f568ffda-82a9-4f47-89d3-13b89a35c9b4-combined-ca-bundle\") pod \"placement-db-sync-kc9jz\" (UID: \"f568ffda-82a9-4f47-89d3-13b89a35c9b4\") " pod="openstack/placement-db-sync-kc9jz" Jan 21 11:19:46 crc kubenswrapper[4881]: I0121 11:19:46.568904 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-kc9jz"] Jan 21 11:19:46 crc kubenswrapper[4881]: I0121 11:19:46.604303 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-bb8f8b9c9-cwqc2"] Jan 21 11:19:46 crc kubenswrapper[4881]: I0121 11:19:46.606666 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bb8f8b9c9-cwqc2" Jan 21 11:19:46 crc kubenswrapper[4881]: I0121 11:19:46.613914 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-bb8f8b9c9-cwqc2"] Jan 21 11:19:46 crc kubenswrapper[4881]: I0121 11:19:46.641214 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-f67997f9f-4cvfc"] Jan 21 11:19:46 crc kubenswrapper[4881]: I0121 11:19:46.644026 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-f67997f9f-4cvfc" Jan 21 11:19:46 crc kubenswrapper[4881]: I0121 11:19:46.649025 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-4wxvl" Jan 21 11:19:46 crc kubenswrapper[4881]: I0121 11:19:46.654396 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7c487768dc-xjcjd"] Jan 21 11:19:46 crc kubenswrapper[4881]: I0121 11:19:46.667747 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gv7qz\" (UniqueName: \"kubernetes.io/projected/f568ffda-82a9-4f47-89d3-13b89a35c9b4-kube-api-access-gv7qz\") pod \"placement-db-sync-kc9jz\" (UID: \"f568ffda-82a9-4f47-89d3-13b89a35c9b4\") " pod="openstack/placement-db-sync-kc9jz" Jan 21 11:19:46 crc kubenswrapper[4881]: I0121 11:19:46.668019 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a93d8b8c-58bf-47f4-b880-2fa5fb8fdf6f-dns-swift-storage-0\") pod \"dnsmasq-dns-bb8f8b9c9-cwqc2\" (UID: \"a93d8b8c-58bf-47f4-b880-2fa5fb8fdf6f\") " pod="openstack/dnsmasq-dns-bb8f8b9c9-cwqc2" Jan 21 11:19:46 crc kubenswrapper[4881]: I0121 11:19:46.668125 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a93d8b8c-58bf-47f4-b880-2fa5fb8fdf6f-config\") pod \"dnsmasq-dns-bb8f8b9c9-cwqc2\" (UID: \"a93d8b8c-58bf-47f4-b880-2fa5fb8fdf6f\") " pod="openstack/dnsmasq-dns-bb8f8b9c9-cwqc2" Jan 21 11:19:46 crc kubenswrapper[4881]: I0121 11:19:46.668698 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a93d8b8c-58bf-47f4-b880-2fa5fb8fdf6f-ovsdbserver-nb\") pod \"dnsmasq-dns-bb8f8b9c9-cwqc2\" (UID: \"a93d8b8c-58bf-47f4-b880-2fa5fb8fdf6f\") " pod="openstack/dnsmasq-dns-bb8f8b9c9-cwqc2" Jan 21 11:19:46 crc kubenswrapper[4881]: I0121 11:19:46.668833 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a93d8b8c-58bf-47f4-b880-2fa5fb8fdf6f-dns-svc\") pod \"dnsmasq-dns-bb8f8b9c9-cwqc2\" (UID: \"a93d8b8c-58bf-47f4-b880-2fa5fb8fdf6f\") " pod="openstack/dnsmasq-dns-bb8f8b9c9-cwqc2" Jan 21 11:19:46 crc kubenswrapper[4881]: I0121 11:19:46.668948 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9gpsz\" (UniqueName: \"kubernetes.io/projected/a93d8b8c-58bf-47f4-b880-2fa5fb8fdf6f-kube-api-access-9gpsz\") pod \"dnsmasq-dns-bb8f8b9c9-cwqc2\" (UID: \"a93d8b8c-58bf-47f4-b880-2fa5fb8fdf6f\") " pod="openstack/dnsmasq-dns-bb8f8b9c9-cwqc2" Jan 21 11:19:46 crc kubenswrapper[4881]: I0121 11:19:46.669016 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f568ffda-82a9-4f47-89d3-13b89a35c9b4-config-data\") pod \"placement-db-sync-kc9jz\" (UID: \"f568ffda-82a9-4f47-89d3-13b89a35c9b4\") " pod="openstack/placement-db-sync-kc9jz" Jan 21 11:19:46 crc kubenswrapper[4881]: I0121 11:19:46.669097 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f568ffda-82a9-4f47-89d3-13b89a35c9b4-scripts\") pod \"placement-db-sync-kc9jz\" (UID: \"f568ffda-82a9-4f47-89d3-13b89a35c9b4\") " pod="openstack/placement-db-sync-kc9jz" Jan 21 11:19:46 crc kubenswrapper[4881]: I0121 11:19:46.669162 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/f568ffda-82a9-4f47-89d3-13b89a35c9b4-combined-ca-bundle\") pod \"placement-db-sync-kc9jz\" (UID: \"f568ffda-82a9-4f47-89d3-13b89a35c9b4\") " pod="openstack/placement-db-sync-kc9jz" Jan 21 11:19:46 crc kubenswrapper[4881]: I0121 11:19:46.669237 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a93d8b8c-58bf-47f4-b880-2fa5fb8fdf6f-ovsdbserver-sb\") pod \"dnsmasq-dns-bb8f8b9c9-cwqc2\" (UID: \"a93d8b8c-58bf-47f4-b880-2fa5fb8fdf6f\") " pod="openstack/dnsmasq-dns-bb8f8b9c9-cwqc2" Jan 21 11:19:46 crc kubenswrapper[4881]: I0121 11:19:46.669364 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f568ffda-82a9-4f47-89d3-13b89a35c9b4-logs\") pod \"placement-db-sync-kc9jz\" (UID: \"f568ffda-82a9-4f47-89d3-13b89a35c9b4\") " pod="openstack/placement-db-sync-kc9jz" Jan 21 11:19:46 crc kubenswrapper[4881]: I0121 11:19:46.670511 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f568ffda-82a9-4f47-89d3-13b89a35c9b4-logs\") pod \"placement-db-sync-kc9jz\" (UID: \"f568ffda-82a9-4f47-89d3-13b89a35c9b4\") " pod="openstack/placement-db-sync-kc9jz" Jan 21 11:19:46 crc kubenswrapper[4881]: I0121 11:19:46.682096 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-f67997f9f-4cvfc"] Jan 21 11:19:46 crc kubenswrapper[4881]: I0121 11:19:46.682728 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f568ffda-82a9-4f47-89d3-13b89a35c9b4-scripts\") pod \"placement-db-sync-kc9jz\" (UID: \"f568ffda-82a9-4f47-89d3-13b89a35c9b4\") " pod="openstack/placement-db-sync-kc9jz" Jan 21 11:19:46 crc kubenswrapper[4881]: I0121 11:19:46.683871 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f568ffda-82a9-4f47-89d3-13b89a35c9b4-combined-ca-bundle\") pod \"placement-db-sync-kc9jz\" (UID: \"f568ffda-82a9-4f47-89d3-13b89a35c9b4\") " pod="openstack/placement-db-sync-kc9jz" Jan 21 11:19:46 crc kubenswrapper[4881]: I0121 11:19:46.687806 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f568ffda-82a9-4f47-89d3-13b89a35c9b4-config-data\") pod \"placement-db-sync-kc9jz\" (UID: \"f568ffda-82a9-4f47-89d3-13b89a35c9b4\") " pod="openstack/placement-db-sync-kc9jz" Jan 21 11:19:46 crc kubenswrapper[4881]: I0121 11:19:46.704889 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gv7qz\" (UniqueName: \"kubernetes.io/projected/f568ffda-82a9-4f47-89d3-13b89a35c9b4-kube-api-access-gv7qz\") pod \"placement-db-sync-kc9jz\" (UID: \"f568ffda-82a9-4f47-89d3-13b89a35c9b4\") " pod="openstack/placement-db-sync-kc9jz" Jan 21 11:19:46 crc kubenswrapper[4881]: I0121 11:19:46.820658 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9gpsz\" (UniqueName: \"kubernetes.io/projected/a93d8b8c-58bf-47f4-b880-2fa5fb8fdf6f-kube-api-access-9gpsz\") pod \"dnsmasq-dns-bb8f8b9c9-cwqc2\" (UID: \"a93d8b8c-58bf-47f4-b880-2fa5fb8fdf6f\") " pod="openstack/dnsmasq-dns-bb8f8b9c9-cwqc2" Jan 21 11:19:46 crc kubenswrapper[4881]: I0121 11:19:46.824985 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/a93d8b8c-58bf-47f4-b880-2fa5fb8fdf6f-ovsdbserver-sb\") pod \"dnsmasq-dns-bb8f8b9c9-cwqc2\" (UID: \"a93d8b8c-58bf-47f4-b880-2fa5fb8fdf6f\") " pod="openstack/dnsmasq-dns-bb8f8b9c9-cwqc2" Jan 21 11:19:46 crc kubenswrapper[4881]: I0121 11:19:46.825151 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/71dc95ca-296b-4989-8b57-db806091feea-scripts\") pod \"horizon-f67997f9f-4cvfc\" (UID: \"71dc95ca-296b-4989-8b57-db806091feea\") " pod="openstack/horizon-f67997f9f-4cvfc" Jan 21 11:19:46 crc kubenswrapper[4881]: I0121 11:19:46.825213 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/71dc95ca-296b-4989-8b57-db806091feea-config-data\") pod \"horizon-f67997f9f-4cvfc\" (UID: \"71dc95ca-296b-4989-8b57-db806091feea\") " pod="openstack/horizon-f67997f9f-4cvfc" Jan 21 11:19:46 crc kubenswrapper[4881]: I0121 11:19:46.825436 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a93d8b8c-58bf-47f4-b880-2fa5fb8fdf6f-dns-swift-storage-0\") pod \"dnsmasq-dns-bb8f8b9c9-cwqc2\" (UID: \"a93d8b8c-58bf-47f4-b880-2fa5fb8fdf6f\") " pod="openstack/dnsmasq-dns-bb8f8b9c9-cwqc2" Jan 21 11:19:46 crc kubenswrapper[4881]: I0121 11:19:46.825494 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a93d8b8c-58bf-47f4-b880-2fa5fb8fdf6f-config\") pod \"dnsmasq-dns-bb8f8b9c9-cwqc2\" (UID: \"a93d8b8c-58bf-47f4-b880-2fa5fb8fdf6f\") " pod="openstack/dnsmasq-dns-bb8f8b9c9-cwqc2" Jan 21 11:19:46 crc kubenswrapper[4881]: I0121 11:19:46.825539 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/71dc95ca-296b-4989-8b57-db806091feea-logs\") pod \"horizon-f67997f9f-4cvfc\" (UID: \"71dc95ca-296b-4989-8b57-db806091feea\") " pod="openstack/horizon-f67997f9f-4cvfc" Jan 21 11:19:46 crc kubenswrapper[4881]: I0121 11:19:46.825574 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a93d8b8c-58bf-47f4-b880-2fa5fb8fdf6f-ovsdbserver-nb\") pod \"dnsmasq-dns-bb8f8b9c9-cwqc2\" (UID: \"a93d8b8c-58bf-47f4-b880-2fa5fb8fdf6f\") " pod="openstack/dnsmasq-dns-bb8f8b9c9-cwqc2" Jan 21 11:19:46 crc kubenswrapper[4881]: I0121 11:19:46.825614 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/71dc95ca-296b-4989-8b57-db806091feea-horizon-secret-key\") pod \"horizon-f67997f9f-4cvfc\" (UID: \"71dc95ca-296b-4989-8b57-db806091feea\") " pod="openstack/horizon-f67997f9f-4cvfc" Jan 21 11:19:46 crc kubenswrapper[4881]: I0121 11:19:46.825668 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a93d8b8c-58bf-47f4-b880-2fa5fb8fdf6f-dns-svc\") pod \"dnsmasq-dns-bb8f8b9c9-cwqc2\" (UID: \"a93d8b8c-58bf-47f4-b880-2fa5fb8fdf6f\") " pod="openstack/dnsmasq-dns-bb8f8b9c9-cwqc2" Jan 21 11:19:46 crc kubenswrapper[4881]: I0121 11:19:46.825701 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6pfn2\" (UniqueName: 
\"kubernetes.io/projected/71dc95ca-296b-4989-8b57-db806091feea-kube-api-access-6pfn2\") pod \"horizon-f67997f9f-4cvfc\" (UID: \"71dc95ca-296b-4989-8b57-db806091feea\") " pod="openstack/horizon-f67997f9f-4cvfc" Jan 21 11:19:46 crc kubenswrapper[4881]: I0121 11:19:46.827452 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a93d8b8c-58bf-47f4-b880-2fa5fb8fdf6f-dns-swift-storage-0\") pod \"dnsmasq-dns-bb8f8b9c9-cwqc2\" (UID: \"a93d8b8c-58bf-47f4-b880-2fa5fb8fdf6f\") " pod="openstack/dnsmasq-dns-bb8f8b9c9-cwqc2" Jan 21 11:19:46 crc kubenswrapper[4881]: I0121 11:19:46.827561 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a93d8b8c-58bf-47f4-b880-2fa5fb8fdf6f-ovsdbserver-sb\") pod \"dnsmasq-dns-bb8f8b9c9-cwqc2\" (UID: \"a93d8b8c-58bf-47f4-b880-2fa5fb8fdf6f\") " pod="openstack/dnsmasq-dns-bb8f8b9c9-cwqc2" Jan 21 11:19:46 crc kubenswrapper[4881]: I0121 11:19:46.830100 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a93d8b8c-58bf-47f4-b880-2fa5fb8fdf6f-dns-svc\") pod \"dnsmasq-dns-bb8f8b9c9-cwqc2\" (UID: \"a93d8b8c-58bf-47f4-b880-2fa5fb8fdf6f\") " pod="openstack/dnsmasq-dns-bb8f8b9c9-cwqc2" Jan 21 11:19:46 crc kubenswrapper[4881]: I0121 11:19:46.835983 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a93d8b8c-58bf-47f4-b880-2fa5fb8fdf6f-ovsdbserver-nb\") pod \"dnsmasq-dns-bb8f8b9c9-cwqc2\" (UID: \"a93d8b8c-58bf-47f4-b880-2fa5fb8fdf6f\") " pod="openstack/dnsmasq-dns-bb8f8b9c9-cwqc2" Jan 21 11:19:46 crc kubenswrapper[4881]: I0121 11:19:46.846449 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a93d8b8c-58bf-47f4-b880-2fa5fb8fdf6f-config\") pod \"dnsmasq-dns-bb8f8b9c9-cwqc2\" (UID: \"a93d8b8c-58bf-47f4-b880-2fa5fb8fdf6f\") " pod="openstack/dnsmasq-dns-bb8f8b9c9-cwqc2" Jan 21 11:19:46 crc kubenswrapper[4881]: I0121 11:19:46.867744 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9gpsz\" (UniqueName: \"kubernetes.io/projected/a93d8b8c-58bf-47f4-b880-2fa5fb8fdf6f-kube-api-access-9gpsz\") pod \"dnsmasq-dns-bb8f8b9c9-cwqc2\" (UID: \"a93d8b8c-58bf-47f4-b880-2fa5fb8fdf6f\") " pod="openstack/dnsmasq-dns-bb8f8b9c9-cwqc2" Jan 21 11:19:46 crc kubenswrapper[4881]: I0121 11:19:46.954274 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/71dc95ca-296b-4989-8b57-db806091feea-scripts\") pod \"horizon-f67997f9f-4cvfc\" (UID: \"71dc95ca-296b-4989-8b57-db806091feea\") " pod="openstack/horizon-f67997f9f-4cvfc" Jan 21 11:19:46 crc kubenswrapper[4881]: I0121 11:19:46.954345 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/71dc95ca-296b-4989-8b57-db806091feea-config-data\") pod \"horizon-f67997f9f-4cvfc\" (UID: \"71dc95ca-296b-4989-8b57-db806091feea\") " pod="openstack/horizon-f67997f9f-4cvfc" Jan 21 11:19:46 crc kubenswrapper[4881]: I0121 11:19:46.954535 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/71dc95ca-296b-4989-8b57-db806091feea-logs\") pod \"horizon-f67997f9f-4cvfc\" (UID: \"71dc95ca-296b-4989-8b57-db806091feea\") " 
pod="openstack/horizon-f67997f9f-4cvfc" Jan 21 11:19:46 crc kubenswrapper[4881]: I0121 11:19:46.954593 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/71dc95ca-296b-4989-8b57-db806091feea-horizon-secret-key\") pod \"horizon-f67997f9f-4cvfc\" (UID: \"71dc95ca-296b-4989-8b57-db806091feea\") " pod="openstack/horizon-f67997f9f-4cvfc" Jan 21 11:19:46 crc kubenswrapper[4881]: I0121 11:19:46.954634 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6pfn2\" (UniqueName: \"kubernetes.io/projected/71dc95ca-296b-4989-8b57-db806091feea-kube-api-access-6pfn2\") pod \"horizon-f67997f9f-4cvfc\" (UID: \"71dc95ca-296b-4989-8b57-db806091feea\") " pod="openstack/horizon-f67997f9f-4cvfc" Jan 21 11:19:46 crc kubenswrapper[4881]: I0121 11:19:46.955641 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/71dc95ca-296b-4989-8b57-db806091feea-scripts\") pod \"horizon-f67997f9f-4cvfc\" (UID: \"71dc95ca-296b-4989-8b57-db806091feea\") " pod="openstack/horizon-f67997f9f-4cvfc" Jan 21 11:19:46 crc kubenswrapper[4881]: I0121 11:19:46.958249 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/71dc95ca-296b-4989-8b57-db806091feea-logs\") pod \"horizon-f67997f9f-4cvfc\" (UID: \"71dc95ca-296b-4989-8b57-db806091feea\") " pod="openstack/horizon-f67997f9f-4cvfc" Jan 21 11:19:46 crc kubenswrapper[4881]: I0121 11:19:46.960539 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/71dc95ca-296b-4989-8b57-db806091feea-config-data\") pod \"horizon-f67997f9f-4cvfc\" (UID: \"71dc95ca-296b-4989-8b57-db806091feea\") " pod="openstack/horizon-f67997f9f-4cvfc" Jan 21 11:19:46 crc kubenswrapper[4881]: I0121 11:19:46.967006 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/71dc95ca-296b-4989-8b57-db806091feea-horizon-secret-key\") pod \"horizon-f67997f9f-4cvfc\" (UID: \"71dc95ca-296b-4989-8b57-db806091feea\") " pod="openstack/horizon-f67997f9f-4cvfc" Jan 21 11:19:46 crc kubenswrapper[4881]: I0121 11:19:46.992936 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6pfn2\" (UniqueName: \"kubernetes.io/projected/71dc95ca-296b-4989-8b57-db806091feea-kube-api-access-6pfn2\") pod \"horizon-f67997f9f-4cvfc\" (UID: \"71dc95ca-296b-4989-8b57-db806091feea\") " pod="openstack/horizon-f67997f9f-4cvfc" Jan 21 11:19:47 crc kubenswrapper[4881]: I0121 11:19:47.005709 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-kc9jz" Jan 21 11:19:47 crc kubenswrapper[4881]: I0121 11:19:47.093619 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bb8f8b9c9-cwqc2" Jan 21 11:19:47 crc kubenswrapper[4881]: I0121 11:19:47.105042 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-f67997f9f-4cvfc" Jan 21 11:19:47 crc kubenswrapper[4881]: I0121 11:19:47.347926 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-wg7xs"] Jan 21 11:19:47 crc kubenswrapper[4881]: I0121 11:19:47.557508 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-wg7xs" event={"ID":"cc3f2556-7427-4715-a56d-bbd3d7f8422f","Type":"ContainerStarted","Data":"255feaa412fc0f66dab19086ce14a7162b45237578665b2935e062ce5998cebf"} Jan 21 11:19:47 crc kubenswrapper[4881]: I0121 11:19:47.593439 4881 generic.go:334] "Generic (PLEG): container finished" podID="386c2ea0-a9e4-490b-b83d-9106af06cd60" containerID="0b3499279e821abc9972417aed3d7ac5e0fad614ad777b7fffe9719ed70fc705" exitCode=0 Jan 21 11:19:47 crc kubenswrapper[4881]: I0121 11:19:47.593520 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c487768dc-xjcjd" event={"ID":"386c2ea0-a9e4-490b-b83d-9106af06cd60","Type":"ContainerDied","Data":"0b3499279e821abc9972417aed3d7ac5e0fad614ad777b7fffe9719ed70fc705"} Jan 21 11:19:47 crc kubenswrapper[4881]: I0121 11:19:47.593556 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c487768dc-xjcjd" event={"ID":"386c2ea0-a9e4-490b-b83d-9106af06cd60","Type":"ContainerStarted","Data":"09b9ddb4df44086c306b5d7a672d610bbf5c91e71fa1fc554515dc374c5b9ffb"} Jan 21 11:19:47 crc kubenswrapper[4881]: I0121 11:19:47.847907 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 21 11:19:47 crc kubenswrapper[4881]: W0121 11:19:47.858055 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbcec3c24_87bd_4c22_a800_d3835455a38b.slice/crio-254ee6473012064881c3b931949d5889b646c256080246e608ecc4945a005f58 WatchSource:0}: Error finding container 254ee6473012064881c3b931949d5889b646c256080246e608ecc4945a005f58: Status 404 returned error can't find the container with id 254ee6473012064881c3b931949d5889b646c256080246e608ecc4945a005f58 Jan 21 11:19:47 crc kubenswrapper[4881]: I0121 11:19:47.861884 4881 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 11:19:47 crc kubenswrapper[4881]: I0121 11:19:47.879893 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-t6mz2"] Jan 21 11:19:47 crc kubenswrapper[4881]: I0121 11:19:47.907929 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-slhtz"] Jan 21 11:19:47 crc kubenswrapper[4881]: I0121 11:19:47.945388 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-77fb486557-zjtxw"] Jan 21 11:19:47 crc kubenswrapper[4881]: W0121 11:19:47.952779 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd96c79b7_58c4_4bcc_9e56_02f2a8860764.slice/crio-af85a7051ff9ab4c70d7145be172f02be844f0b1a0972620051139b6c311b772 WatchSource:0}: Error finding container af85a7051ff9ab4c70d7145be172f02be844f0b1a0972620051139b6c311b772: Status 404 returned error can't find the container with id af85a7051ff9ab4c70d7145be172f02be844f0b1a0972620051139b6c311b772 Jan 21 11:19:48 crc kubenswrapper[4881]: I0121 11:19:48.059519 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-4wxvl"] Jan 21 11:19:48 crc kubenswrapper[4881]: I0121 11:19:48.398805 4881 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openstack/dnsmasq-dns-bb8f8b9c9-cwqc2"] Jan 21 11:19:48 crc kubenswrapper[4881]: I0121 11:19:48.640077 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-kc9jz"] Jan 21 11:19:48 crc kubenswrapper[4881]: I0121 11:19:48.654384 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-f67997f9f-4cvfc"] Jan 21 11:19:48 crc kubenswrapper[4881]: I0121 11:19:48.725148 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-wg7xs" event={"ID":"cc3f2556-7427-4715-a56d-bbd3d7f8422f","Type":"ContainerStarted","Data":"20252506bf2921633b620e12ae73d258d135c6a818c92bcf4d604ddbc1f5e46d"} Jan 21 11:19:48 crc kubenswrapper[4881]: I0121 11:19:48.726313 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7c487768dc-xjcjd" Jan 21 11:19:48 crc kubenswrapper[4881]: I0121 11:19:48.773343 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-4wxvl" event={"ID":"65250dcf-0f0f-4fa6-8d57-e07d3d29f290","Type":"ContainerStarted","Data":"fcbe801cf2c7f3f9ce63291d49a4353e90c810cdaa5f27e1d6112dedee1eae63"} Jan 21 11:19:48 crc kubenswrapper[4881]: I0121 11:19:48.789385 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-slhtz" event={"ID":"4bf52889-d5f3-44f8-b657-8ff3790962d1","Type":"ContainerStarted","Data":"370f02f399b03911d8ee654e46609c08288e0d57caf3655dba13b0b2e545df19"} Jan 21 11:19:48 crc kubenswrapper[4881]: I0121 11:19:48.807473 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bcec3c24-87bd-4c22-a800-d3835455a38b","Type":"ContainerStarted","Data":"254ee6473012064881c3b931949d5889b646c256080246e608ecc4945a005f58"} Jan 21 11:19:48 crc kubenswrapper[4881]: I0121 11:19:48.832214 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-77fb486557-zjtxw" event={"ID":"d96c79b7-58c4-4bcc-9e56-02f2a8860764","Type":"ContainerStarted","Data":"af85a7051ff9ab4c70d7145be172f02be844f0b1a0972620051139b6c311b772"} Jan 21 11:19:48 crc kubenswrapper[4881]: I0121 11:19:48.840335 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/386c2ea0-a9e4-490b-b83d-9106af06cd60-ovsdbserver-nb\") pod \"386c2ea0-a9e4-490b-b83d-9106af06cd60\" (UID: \"386c2ea0-a9e4-490b-b83d-9106af06cd60\") " Jan 21 11:19:48 crc kubenswrapper[4881]: I0121 11:19:48.840447 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/386c2ea0-a9e4-490b-b83d-9106af06cd60-config\") pod \"386c2ea0-a9e4-490b-b83d-9106af06cd60\" (UID: \"386c2ea0-a9e4-490b-b83d-9106af06cd60\") " Jan 21 11:19:48 crc kubenswrapper[4881]: I0121 11:19:48.840519 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tks72\" (UniqueName: \"kubernetes.io/projected/386c2ea0-a9e4-490b-b83d-9106af06cd60-kube-api-access-tks72\") pod \"386c2ea0-a9e4-490b-b83d-9106af06cd60\" (UID: \"386c2ea0-a9e4-490b-b83d-9106af06cd60\") " Jan 21 11:19:48 crc kubenswrapper[4881]: I0121 11:19:48.840549 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/386c2ea0-a9e4-490b-b83d-9106af06cd60-dns-svc\") pod \"386c2ea0-a9e4-490b-b83d-9106af06cd60\" (UID: \"386c2ea0-a9e4-490b-b83d-9106af06cd60\") " Jan 21 11:19:48 crc kubenswrapper[4881]: I0121 
11:19:48.840772 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/386c2ea0-a9e4-490b-b83d-9106af06cd60-dns-swift-storage-0\") pod \"386c2ea0-a9e4-490b-b83d-9106af06cd60\" (UID: \"386c2ea0-a9e4-490b-b83d-9106af06cd60\") " Jan 21 11:19:48 crc kubenswrapper[4881]: I0121 11:19:48.840822 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/386c2ea0-a9e4-490b-b83d-9106af06cd60-ovsdbserver-sb\") pod \"386c2ea0-a9e4-490b-b83d-9106af06cd60\" (UID: \"386c2ea0-a9e4-490b-b83d-9106af06cd60\") " Jan 21 11:19:48 crc kubenswrapper[4881]: I0121 11:19:48.849342 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bb8f8b9c9-cwqc2" event={"ID":"a93d8b8c-58bf-47f4-b880-2fa5fb8fdf6f","Type":"ContainerStarted","Data":"89b83a73d98285f1ad5dfbcb846ef4a7cc6a0027b6f7fbb5d7b8bc7a7b615ee8"} Jan 21 11:19:48 crc kubenswrapper[4881]: I0121 11:19:48.898205 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-t6mz2" event={"ID":"869a596b-159c-4185-a4ab-0e36c5d130fc","Type":"ContainerStarted","Data":"60332241610e38a80a618de620e24fb0c01532db2d0020dd0177b716555cd915"} Jan 21 11:19:48 crc kubenswrapper[4881]: I0121 11:19:48.916336 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/386c2ea0-a9e4-490b-b83d-9106af06cd60-kube-api-access-tks72" (OuterVolumeSpecName: "kube-api-access-tks72") pod "386c2ea0-a9e4-490b-b83d-9106af06cd60" (UID: "386c2ea0-a9e4-490b-b83d-9106af06cd60"). InnerVolumeSpecName "kube-api-access-tks72". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:19:48 crc kubenswrapper[4881]: I0121 11:19:48.946233 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/386c2ea0-a9e4-490b-b83d-9106af06cd60-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "386c2ea0-a9e4-490b-b83d-9106af06cd60" (UID: "386c2ea0-a9e4-490b-b83d-9106af06cd60"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:19:48 crc kubenswrapper[4881]: I0121 11:19:48.948213 4881 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/386c2ea0-a9e4-490b-b83d-9106af06cd60-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 21 11:19:48 crc kubenswrapper[4881]: I0121 11:19:48.948246 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tks72\" (UniqueName: \"kubernetes.io/projected/386c2ea0-a9e4-490b-b83d-9106af06cd60-kube-api-access-tks72\") on node \"crc\" DevicePath \"\"" Jan 21 11:19:49 crc kubenswrapper[4881]: I0121 11:19:49.037181 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c487768dc-xjcjd" event={"ID":"386c2ea0-a9e4-490b-b83d-9106af06cd60","Type":"ContainerDied","Data":"09b9ddb4df44086c306b5d7a672d610bbf5c91e71fa1fc554515dc374c5b9ffb"} Jan 21 11:19:49 crc kubenswrapper[4881]: I0121 11:19:49.037292 4881 scope.go:117] "RemoveContainer" containerID="0b3499279e821abc9972417aed3d7ac5e0fad614ad777b7fffe9719ed70fc705" Jan 21 11:19:49 crc kubenswrapper[4881]: I0121 11:19:49.037704 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7c487768dc-xjcjd" Jan 21 11:19:49 crc kubenswrapper[4881]: I0121 11:19:49.050583 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/386c2ea0-a9e4-490b-b83d-9106af06cd60-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "386c2ea0-a9e4-490b-b83d-9106af06cd60" (UID: "386c2ea0-a9e4-490b-b83d-9106af06cd60"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:19:49 crc kubenswrapper[4881]: I0121 11:19:49.052553 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/386c2ea0-a9e4-490b-b83d-9106af06cd60-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "386c2ea0-a9e4-490b-b83d-9106af06cd60" (UID: "386c2ea0-a9e4-490b-b83d-9106af06cd60"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:19:49 crc kubenswrapper[4881]: I0121 11:19:49.127655 4881 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/386c2ea0-a9e4-490b-b83d-9106af06cd60-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 21 11:19:49 crc kubenswrapper[4881]: I0121 11:19:49.127695 4881 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/386c2ea0-a9e4-490b-b83d-9106af06cd60-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 21 11:19:49 crc kubenswrapper[4881]: I0121 11:19:49.141305 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-wg7xs" podStartSLOduration=5.141274479 podStartE2EDuration="5.141274479s" podCreationTimestamp="2026-01-21 11:19:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 11:19:48.896600907 +0000 UTC m=+1376.156557376" watchObservedRunningTime="2026-01-21 11:19:49.141274479 +0000 UTC m=+1376.401230978" Jan 21 11:19:49 crc kubenswrapper[4881]: I0121 11:19:49.150926 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-77fb486557-zjtxw"] Jan 21 11:19:49 crc kubenswrapper[4881]: I0121 11:19:49.230436 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 21 11:19:49 crc kubenswrapper[4881]: I0121 11:19:49.256074 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-67c79cd6d5-lrpwx"] Jan 21 11:19:49 crc kubenswrapper[4881]: E0121 11:19:49.256634 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="386c2ea0-a9e4-490b-b83d-9106af06cd60" containerName="init" Jan 21 11:19:49 crc kubenswrapper[4881]: I0121 11:19:49.256651 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="386c2ea0-a9e4-490b-b83d-9106af06cd60" containerName="init" Jan 21 11:19:49 crc kubenswrapper[4881]: I0121 11:19:49.256895 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="386c2ea0-a9e4-490b-b83d-9106af06cd60" containerName="init" Jan 21 11:19:49 crc kubenswrapper[4881]: I0121 11:19:49.258164 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-67c79cd6d5-lrpwx" Jan 21 11:19:49 crc kubenswrapper[4881]: I0121 11:19:49.286916 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/386c2ea0-a9e4-490b-b83d-9106af06cd60-config" (OuterVolumeSpecName: "config") pod "386c2ea0-a9e4-490b-b83d-9106af06cd60" (UID: "386c2ea0-a9e4-490b-b83d-9106af06cd60"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:19:49 crc kubenswrapper[4881]: I0121 11:19:49.293907 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-67c79cd6d5-lrpwx"] Jan 21 11:19:49 crc kubenswrapper[4881]: I0121 11:19:49.307534 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/386c2ea0-a9e4-490b-b83d-9106af06cd60-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "386c2ea0-a9e4-490b-b83d-9106af06cd60" (UID: "386c2ea0-a9e4-490b-b83d-9106af06cd60"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:19:49 crc kubenswrapper[4881]: I0121 11:19:49.332425 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ab2b33fa-d171-4525-b7a6-5bfc3a732fa4-scripts\") pod \"horizon-67c79cd6d5-lrpwx\" (UID: \"ab2b33fa-d171-4525-b7a6-5bfc3a732fa4\") " pod="openstack/horizon-67c79cd6d5-lrpwx" Jan 21 11:19:49 crc kubenswrapper[4881]: I0121 11:19:49.334232 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/ab2b33fa-d171-4525-b7a6-5bfc3a732fa4-horizon-secret-key\") pod \"horizon-67c79cd6d5-lrpwx\" (UID: \"ab2b33fa-d171-4525-b7a6-5bfc3a732fa4\") " pod="openstack/horizon-67c79cd6d5-lrpwx" Jan 21 11:19:49 crc kubenswrapper[4881]: I0121 11:19:49.334336 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-62jn4\" (UniqueName: \"kubernetes.io/projected/ab2b33fa-d171-4525-b7a6-5bfc3a732fa4-kube-api-access-62jn4\") pod \"horizon-67c79cd6d5-lrpwx\" (UID: \"ab2b33fa-d171-4525-b7a6-5bfc3a732fa4\") " pod="openstack/horizon-67c79cd6d5-lrpwx" Jan 21 11:19:49 crc kubenswrapper[4881]: I0121 11:19:49.334465 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ab2b33fa-d171-4525-b7a6-5bfc3a732fa4-config-data\") pod \"horizon-67c79cd6d5-lrpwx\" (UID: \"ab2b33fa-d171-4525-b7a6-5bfc3a732fa4\") " pod="openstack/horizon-67c79cd6d5-lrpwx" Jan 21 11:19:49 crc kubenswrapper[4881]: I0121 11:19:49.340429 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ab2b33fa-d171-4525-b7a6-5bfc3a732fa4-logs\") pod \"horizon-67c79cd6d5-lrpwx\" (UID: \"ab2b33fa-d171-4525-b7a6-5bfc3a732fa4\") " pod="openstack/horizon-67c79cd6d5-lrpwx" Jan 21 11:19:49 crc kubenswrapper[4881]: I0121 11:19:49.340737 4881 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/386c2ea0-a9e4-490b-b83d-9106af06cd60-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 21 11:19:49 crc kubenswrapper[4881]: I0121 11:19:49.340761 4881 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/386c2ea0-a9e4-490b-b83d-9106af06cd60-config\") on node \"crc\" DevicePath \"\"" Jan 21 11:19:49 crc kubenswrapper[4881]: I0121 11:19:49.445079 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ab2b33fa-d171-4525-b7a6-5bfc3a732fa4-logs\") pod \"horizon-67c79cd6d5-lrpwx\" (UID: \"ab2b33fa-d171-4525-b7a6-5bfc3a732fa4\") " pod="openstack/horizon-67c79cd6d5-lrpwx" Jan 21 11:19:49 crc kubenswrapper[4881]: I0121 11:19:49.445596 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ab2b33fa-d171-4525-b7a6-5bfc3a732fa4-logs\") pod \"horizon-67c79cd6d5-lrpwx\" (UID: \"ab2b33fa-d171-4525-b7a6-5bfc3a732fa4\") " pod="openstack/horizon-67c79cd6d5-lrpwx" Jan 21 11:19:49 crc kubenswrapper[4881]: I0121 11:19:49.446692 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ab2b33fa-d171-4525-b7a6-5bfc3a732fa4-scripts\") pod \"horizon-67c79cd6d5-lrpwx\" (UID: \"ab2b33fa-d171-4525-b7a6-5bfc3a732fa4\") " pod="openstack/horizon-67c79cd6d5-lrpwx" Jan 21 11:19:49 crc kubenswrapper[4881]: I0121 11:19:49.453423 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ab2b33fa-d171-4525-b7a6-5bfc3a732fa4-scripts\") pod \"horizon-67c79cd6d5-lrpwx\" (UID: \"ab2b33fa-d171-4525-b7a6-5bfc3a732fa4\") " pod="openstack/horizon-67c79cd6d5-lrpwx" Jan 21 11:19:49 crc kubenswrapper[4881]: I0121 11:19:49.453567 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/ab2b33fa-d171-4525-b7a6-5bfc3a732fa4-horizon-secret-key\") pod \"horizon-67c79cd6d5-lrpwx\" (UID: \"ab2b33fa-d171-4525-b7a6-5bfc3a732fa4\") " pod="openstack/horizon-67c79cd6d5-lrpwx" Jan 21 11:19:49 crc kubenswrapper[4881]: I0121 11:19:49.453636 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-62jn4\" (UniqueName: \"kubernetes.io/projected/ab2b33fa-d171-4525-b7a6-5bfc3a732fa4-kube-api-access-62jn4\") pod \"horizon-67c79cd6d5-lrpwx\" (UID: \"ab2b33fa-d171-4525-b7a6-5bfc3a732fa4\") " pod="openstack/horizon-67c79cd6d5-lrpwx" Jan 21 11:19:49 crc kubenswrapper[4881]: I0121 11:19:49.453684 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ab2b33fa-d171-4525-b7a6-5bfc3a732fa4-config-data\") pod \"horizon-67c79cd6d5-lrpwx\" (UID: \"ab2b33fa-d171-4525-b7a6-5bfc3a732fa4\") " pod="openstack/horizon-67c79cd6d5-lrpwx" Jan 21 11:19:49 crc kubenswrapper[4881]: I0121 11:19:49.455186 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ab2b33fa-d171-4525-b7a6-5bfc3a732fa4-config-data\") pod \"horizon-67c79cd6d5-lrpwx\" (UID: \"ab2b33fa-d171-4525-b7a6-5bfc3a732fa4\") " pod="openstack/horizon-67c79cd6d5-lrpwx" Jan 21 11:19:49 crc kubenswrapper[4881]: I0121 11:19:49.463429 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/ab2b33fa-d171-4525-b7a6-5bfc3a732fa4-horizon-secret-key\") pod \"horizon-67c79cd6d5-lrpwx\" (UID: \"ab2b33fa-d171-4525-b7a6-5bfc3a732fa4\") " pod="openstack/horizon-67c79cd6d5-lrpwx" Jan 21 11:19:49 crc kubenswrapper[4881]: I0121 11:19:49.485697 4881 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-62jn4\" (UniqueName: \"kubernetes.io/projected/ab2b33fa-d171-4525-b7a6-5bfc3a732fa4-kube-api-access-62jn4\") pod \"horizon-67c79cd6d5-lrpwx\" (UID: \"ab2b33fa-d171-4525-b7a6-5bfc3a732fa4\") " pod="openstack/horizon-67c79cd6d5-lrpwx" Jan 21 11:19:49 crc kubenswrapper[4881]: I0121 11:19:49.625424 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7c487768dc-xjcjd"] Jan 21 11:19:49 crc kubenswrapper[4881]: I0121 11:19:49.627286 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-67c79cd6d5-lrpwx" Jan 21 11:19:49 crc kubenswrapper[4881]: I0121 11:19:49.645547 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7c487768dc-xjcjd"] Jan 21 11:19:50 crc kubenswrapper[4881]: I0121 11:19:50.088523 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-f67997f9f-4cvfc" event={"ID":"71dc95ca-296b-4989-8b57-db806091feea","Type":"ContainerStarted","Data":"c28d2087f01d52faf0bfd56ba4bbb293832881e04f8418954c0e024ee5bf824b"} Jan 21 11:19:50 crc kubenswrapper[4881]: I0121 11:19:50.116632 4881 generic.go:334] "Generic (PLEG): container finished" podID="a93d8b8c-58bf-47f4-b880-2fa5fb8fdf6f" containerID="ab477504b6174b1df2cba532dc993abe653a33a827965c0d26c8c5abcd35974f" exitCode=0 Jan 21 11:19:50 crc kubenswrapper[4881]: I0121 11:19:50.116707 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bb8f8b9c9-cwqc2" event={"ID":"a93d8b8c-58bf-47f4-b880-2fa5fb8fdf6f","Type":"ContainerDied","Data":"ab477504b6174b1df2cba532dc993abe653a33a827965c0d26c8c5abcd35974f"} Jan 21 11:19:50 crc kubenswrapper[4881]: I0121 11:19:50.124487 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-t6mz2" event={"ID":"869a596b-159c-4185-a4ab-0e36c5d130fc","Type":"ContainerStarted","Data":"60c7ee63bf67b35a7137c545eb5e36b0ba7f24fe96f583c9314a3bcf2ea933c6"} Jan 21 11:19:50 crc kubenswrapper[4881]: I0121 11:19:50.134109 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-kc9jz" event={"ID":"f568ffda-82a9-4f47-89d3-13b89a35c9b4","Type":"ContainerStarted","Data":"73872e6c614646bff532d76f6a6a2af8c1af4b2996c3b90c9492f6b03925e082"} Jan 21 11:19:50 crc kubenswrapper[4881]: I0121 11:19:50.183870 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-t6mz2" podStartSLOduration=5.18384683 podStartE2EDuration="5.18384683s" podCreationTimestamp="2026-01-21 11:19:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 11:19:50.171379448 +0000 UTC m=+1377.431335907" watchObservedRunningTime="2026-01-21 11:19:50.18384683 +0000 UTC m=+1377.443803289" Jan 21 11:19:50 crc kubenswrapper[4881]: I0121 11:19:50.293246 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-67c79cd6d5-lrpwx"] Jan 21 11:19:50 crc kubenswrapper[4881]: W0121 11:19:50.299911 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podab2b33fa_d171_4525_b7a6_5bfc3a732fa4.slice/crio-c6064c9f2031907151a3a773338a4fc1c8d9b098f896f5cca5bc2a461a7bc91d WatchSource:0}: Error finding container c6064c9f2031907151a3a773338a4fc1c8d9b098f896f5cca5bc2a461a7bc91d: Status 404 returned error can't find the container with id c6064c9f2031907151a3a773338a4fc1c8d9b098f896f5cca5bc2a461a7bc91d 
Jan 21 11:19:51 crc kubenswrapper[4881]: I0121 11:19:51.156315 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-67c79cd6d5-lrpwx" event={"ID":"ab2b33fa-d171-4525-b7a6-5bfc3a732fa4","Type":"ContainerStarted","Data":"c6064c9f2031907151a3a773338a4fc1c8d9b098f896f5cca5bc2a461a7bc91d"} Jan 21 11:19:51 crc kubenswrapper[4881]: I0121 11:19:51.328999 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="386c2ea0-a9e4-490b-b83d-9106af06cd60" path="/var/lib/kubelet/pods/386c2ea0-a9e4-490b-b83d-9106af06cd60/volumes" Jan 21 11:19:52 crc kubenswrapper[4881]: I0121 11:19:52.229063 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bb8f8b9c9-cwqc2" event={"ID":"a93d8b8c-58bf-47f4-b880-2fa5fb8fdf6f","Type":"ContainerStarted","Data":"3c2fbfa61210bf849e04651287e22b6c198d4c12ea96a2312edd5e9f291c7879"} Jan 21 11:19:52 crc kubenswrapper[4881]: I0121 11:19:52.229142 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-bb8f8b9c9-cwqc2" Jan 21 11:19:52 crc kubenswrapper[4881]: I0121 11:19:52.272373 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-bb8f8b9c9-cwqc2" podStartSLOduration=7.272313518 podStartE2EDuration="7.272313518s" podCreationTimestamp="2026-01-21 11:19:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 11:19:52.252880812 +0000 UTC m=+1379.512837281" watchObservedRunningTime="2026-01-21 11:19:52.272313518 +0000 UTC m=+1379.532269987" Jan 21 11:19:53 crc kubenswrapper[4881]: I0121 11:19:53.770992 4881 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="cd1973a5-773b-438b-aab7-709fb828324d" containerName="galera" probeResult="failure" output="command timed out" Jan 21 11:19:53 crc kubenswrapper[4881]: I0121 11:19:53.777999 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-cell1-galera-0" podUID="cd1973a5-773b-438b-aab7-709fb828324d" containerName="galera" probeResult="failure" output="command timed out" Jan 21 11:19:54 crc kubenswrapper[4881]: I0121 11:19:54.771255 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-f67997f9f-4cvfc"] Jan 21 11:19:54 crc kubenswrapper[4881]: I0121 11:19:54.789214 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-69c96776fd-k2z88"] Jan 21 11:19:54 crc kubenswrapper[4881]: I0121 11:19:54.792363 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-69c96776fd-k2z88" Jan 21 11:19:54 crc kubenswrapper[4881]: I0121 11:19:54.796842 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-horizon-svc" Jan 21 11:19:54 crc kubenswrapper[4881]: I0121 11:19:54.815754 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-69c96776fd-k2z88"] Jan 21 11:19:54 crc kubenswrapper[4881]: I0121 11:19:54.860651 4881 generic.go:334] "Generic (PLEG): container finished" podID="bc7e598c-b449-4e8c-9214-44e27cb45e53" containerID="b4ed75bebc3e4f7b35b331a2f216bede613a9086f548aa45e96cbef5724a690a" exitCode=0 Jan 21 11:19:54 crc kubenswrapper[4881]: I0121 11:19:54.860756 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-sync-t4mx7" event={"ID":"bc7e598c-b449-4e8c-9214-44e27cb45e53","Type":"ContainerDied","Data":"b4ed75bebc3e4f7b35b331a2f216bede613a9086f548aa45e96cbef5724a690a"} Jan 21 11:19:54 crc kubenswrapper[4881]: I0121 11:19:54.880017 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2f516fb6-322b-4eee-9d4d-a10176959bbb-config-data\") pod \"horizon-69c96776fd-k2z88\" (UID: \"2f516fb6-322b-4eee-9d4d-a10176959bbb\") " pod="openstack/horizon-69c96776fd-k2z88" Jan 21 11:19:54 crc kubenswrapper[4881]: I0121 11:19:54.880111 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/2f516fb6-322b-4eee-9d4d-a10176959bbb-horizon-tls-certs\") pod \"horizon-69c96776fd-k2z88\" (UID: \"2f516fb6-322b-4eee-9d4d-a10176959bbb\") " pod="openstack/horizon-69c96776fd-k2z88" Jan 21 11:19:54 crc kubenswrapper[4881]: I0121 11:19:54.880184 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/2f516fb6-322b-4eee-9d4d-a10176959bbb-horizon-secret-key\") pod \"horizon-69c96776fd-k2z88\" (UID: \"2f516fb6-322b-4eee-9d4d-a10176959bbb\") " pod="openstack/horizon-69c96776fd-k2z88" Jan 21 11:19:54 crc kubenswrapper[4881]: I0121 11:19:54.880253 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2f516fb6-322b-4eee-9d4d-a10176959bbb-scripts\") pod \"horizon-69c96776fd-k2z88\" (UID: \"2f516fb6-322b-4eee-9d4d-a10176959bbb\") " pod="openstack/horizon-69c96776fd-k2z88" Jan 21 11:19:54 crc kubenswrapper[4881]: I0121 11:19:54.880313 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2f516fb6-322b-4eee-9d4d-a10176959bbb-logs\") pod \"horizon-69c96776fd-k2z88\" (UID: \"2f516fb6-322b-4eee-9d4d-a10176959bbb\") " pod="openstack/horizon-69c96776fd-k2z88" Jan 21 11:19:54 crc kubenswrapper[4881]: I0121 11:19:54.880392 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f516fb6-322b-4eee-9d4d-a10176959bbb-combined-ca-bundle\") pod \"horizon-69c96776fd-k2z88\" (UID: \"2f516fb6-322b-4eee-9d4d-a10176959bbb\") " pod="openstack/horizon-69c96776fd-k2z88" Jan 21 11:19:54 crc kubenswrapper[4881]: I0121 11:19:54.880510 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2lfrt\" (UniqueName: 
\"kubernetes.io/projected/2f516fb6-322b-4eee-9d4d-a10176959bbb-kube-api-access-2lfrt\") pod \"horizon-69c96776fd-k2z88\" (UID: \"2f516fb6-322b-4eee-9d4d-a10176959bbb\") " pod="openstack/horizon-69c96776fd-k2z88" Jan 21 11:19:54 crc kubenswrapper[4881]: I0121 11:19:54.984169 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2f516fb6-322b-4eee-9d4d-a10176959bbb-config-data\") pod \"horizon-69c96776fd-k2z88\" (UID: \"2f516fb6-322b-4eee-9d4d-a10176959bbb\") " pod="openstack/horizon-69c96776fd-k2z88" Jan 21 11:19:54 crc kubenswrapper[4881]: I0121 11:19:54.984227 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/2f516fb6-322b-4eee-9d4d-a10176959bbb-horizon-tls-certs\") pod \"horizon-69c96776fd-k2z88\" (UID: \"2f516fb6-322b-4eee-9d4d-a10176959bbb\") " pod="openstack/horizon-69c96776fd-k2z88" Jan 21 11:19:54 crc kubenswrapper[4881]: I0121 11:19:54.984274 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/2f516fb6-322b-4eee-9d4d-a10176959bbb-horizon-secret-key\") pod \"horizon-69c96776fd-k2z88\" (UID: \"2f516fb6-322b-4eee-9d4d-a10176959bbb\") " pod="openstack/horizon-69c96776fd-k2z88" Jan 21 11:19:54 crc kubenswrapper[4881]: I0121 11:19:54.984320 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2f516fb6-322b-4eee-9d4d-a10176959bbb-scripts\") pod \"horizon-69c96776fd-k2z88\" (UID: \"2f516fb6-322b-4eee-9d4d-a10176959bbb\") " pod="openstack/horizon-69c96776fd-k2z88" Jan 21 11:19:54 crc kubenswrapper[4881]: I0121 11:19:54.984358 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2f516fb6-322b-4eee-9d4d-a10176959bbb-logs\") pod \"horizon-69c96776fd-k2z88\" (UID: \"2f516fb6-322b-4eee-9d4d-a10176959bbb\") " pod="openstack/horizon-69c96776fd-k2z88" Jan 21 11:19:54 crc kubenswrapper[4881]: I0121 11:19:54.984420 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f516fb6-322b-4eee-9d4d-a10176959bbb-combined-ca-bundle\") pod \"horizon-69c96776fd-k2z88\" (UID: \"2f516fb6-322b-4eee-9d4d-a10176959bbb\") " pod="openstack/horizon-69c96776fd-k2z88" Jan 21 11:19:54 crc kubenswrapper[4881]: I0121 11:19:54.984636 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2lfrt\" (UniqueName: \"kubernetes.io/projected/2f516fb6-322b-4eee-9d4d-a10176959bbb-kube-api-access-2lfrt\") pod \"horizon-69c96776fd-k2z88\" (UID: \"2f516fb6-322b-4eee-9d4d-a10176959bbb\") " pod="openstack/horizon-69c96776fd-k2z88" Jan 21 11:19:54 crc kubenswrapper[4881]: I0121 11:19:54.986280 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2f516fb6-322b-4eee-9d4d-a10176959bbb-config-data\") pod \"horizon-69c96776fd-k2z88\" (UID: \"2f516fb6-322b-4eee-9d4d-a10176959bbb\") " pod="openstack/horizon-69c96776fd-k2z88" Jan 21 11:19:54 crc kubenswrapper[4881]: I0121 11:19:54.986606 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2f516fb6-322b-4eee-9d4d-a10176959bbb-logs\") pod \"horizon-69c96776fd-k2z88\" (UID: \"2f516fb6-322b-4eee-9d4d-a10176959bbb\") " 
pod="openstack/horizon-69c96776fd-k2z88" Jan 21 11:19:54 crc kubenswrapper[4881]: I0121 11:19:54.987031 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2f516fb6-322b-4eee-9d4d-a10176959bbb-scripts\") pod \"horizon-69c96776fd-k2z88\" (UID: \"2f516fb6-322b-4eee-9d4d-a10176959bbb\") " pod="openstack/horizon-69c96776fd-k2z88" Jan 21 11:19:55 crc kubenswrapper[4881]: I0121 11:19:55.001315 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/2f516fb6-322b-4eee-9d4d-a10176959bbb-horizon-tls-certs\") pod \"horizon-69c96776fd-k2z88\" (UID: \"2f516fb6-322b-4eee-9d4d-a10176959bbb\") " pod="openstack/horizon-69c96776fd-k2z88" Jan 21 11:19:55 crc kubenswrapper[4881]: I0121 11:19:55.013469 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/2f516fb6-322b-4eee-9d4d-a10176959bbb-horizon-secret-key\") pod \"horizon-69c96776fd-k2z88\" (UID: \"2f516fb6-322b-4eee-9d4d-a10176959bbb\") " pod="openstack/horizon-69c96776fd-k2z88" Jan 21 11:19:55 crc kubenswrapper[4881]: I0121 11:19:55.015691 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f516fb6-322b-4eee-9d4d-a10176959bbb-combined-ca-bundle\") pod \"horizon-69c96776fd-k2z88\" (UID: \"2f516fb6-322b-4eee-9d4d-a10176959bbb\") " pod="openstack/horizon-69c96776fd-k2z88" Jan 21 11:19:55 crc kubenswrapper[4881]: I0121 11:19:55.038356 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2lfrt\" (UniqueName: \"kubernetes.io/projected/2f516fb6-322b-4eee-9d4d-a10176959bbb-kube-api-access-2lfrt\") pod \"horizon-69c96776fd-k2z88\" (UID: \"2f516fb6-322b-4eee-9d4d-a10176959bbb\") " pod="openstack/horizon-69c96776fd-k2z88" Jan 21 11:19:55 crc kubenswrapper[4881]: I0121 11:19:55.049835 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-67c79cd6d5-lrpwx"] Jan 21 11:19:55 crc kubenswrapper[4881]: I0121 11:19:55.096020 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-68b447d964-6llq5"] Jan 21 11:19:55 crc kubenswrapper[4881]: I0121 11:19:55.099775 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-68b447d964-6llq5" Jan 21 11:19:55 crc kubenswrapper[4881]: I0121 11:19:55.122988 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-69c96776fd-k2z88" Jan 21 11:19:55 crc kubenswrapper[4881]: I0121 11:19:55.144654 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-68b447d964-6llq5"] Jan 21 11:19:55 crc kubenswrapper[4881]: I0121 11:19:55.322314 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rlg56\" (UniqueName: \"kubernetes.io/projected/07cdf1a8-aec4-42ca-a564-c91e7132663d-kube-api-access-rlg56\") pod \"horizon-68b447d964-6llq5\" (UID: \"07cdf1a8-aec4-42ca-a564-c91e7132663d\") " pod="openstack/horizon-68b447d964-6llq5" Jan 21 11:19:55 crc kubenswrapper[4881]: I0121 11:19:55.322393 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07cdf1a8-aec4-42ca-a564-c91e7132663d-combined-ca-bundle\") pod \"horizon-68b447d964-6llq5\" (UID: \"07cdf1a8-aec4-42ca-a564-c91e7132663d\") " pod="openstack/horizon-68b447d964-6llq5" Jan 21 11:19:55 crc kubenswrapper[4881]: I0121 11:19:55.322604 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/07cdf1a8-aec4-42ca-a564-c91e7132663d-config-data\") pod \"horizon-68b447d964-6llq5\" (UID: \"07cdf1a8-aec4-42ca-a564-c91e7132663d\") " pod="openstack/horizon-68b447d964-6llq5" Jan 21 11:19:55 crc kubenswrapper[4881]: I0121 11:19:55.322650 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/07cdf1a8-aec4-42ca-a564-c91e7132663d-logs\") pod \"horizon-68b447d964-6llq5\" (UID: \"07cdf1a8-aec4-42ca-a564-c91e7132663d\") " pod="openstack/horizon-68b447d964-6llq5" Jan 21 11:19:55 crc kubenswrapper[4881]: I0121 11:19:55.322744 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/07cdf1a8-aec4-42ca-a564-c91e7132663d-scripts\") pod \"horizon-68b447d964-6llq5\" (UID: \"07cdf1a8-aec4-42ca-a564-c91e7132663d\") " pod="openstack/horizon-68b447d964-6llq5" Jan 21 11:19:55 crc kubenswrapper[4881]: I0121 11:19:55.322835 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/07cdf1a8-aec4-42ca-a564-c91e7132663d-horizon-tls-certs\") pod \"horizon-68b447d964-6llq5\" (UID: \"07cdf1a8-aec4-42ca-a564-c91e7132663d\") " pod="openstack/horizon-68b447d964-6llq5" Jan 21 11:19:55 crc kubenswrapper[4881]: I0121 11:19:55.322864 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/07cdf1a8-aec4-42ca-a564-c91e7132663d-horizon-secret-key\") pod \"horizon-68b447d964-6llq5\" (UID: \"07cdf1a8-aec4-42ca-a564-c91e7132663d\") " pod="openstack/horizon-68b447d964-6llq5" Jan 21 11:19:55 crc kubenswrapper[4881]: I0121 11:19:55.424250 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rlg56\" (UniqueName: \"kubernetes.io/projected/07cdf1a8-aec4-42ca-a564-c91e7132663d-kube-api-access-rlg56\") pod \"horizon-68b447d964-6llq5\" (UID: \"07cdf1a8-aec4-42ca-a564-c91e7132663d\") " pod="openstack/horizon-68b447d964-6llq5" Jan 21 11:19:55 crc kubenswrapper[4881]: I0121 11:19:55.424610 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07cdf1a8-aec4-42ca-a564-c91e7132663d-combined-ca-bundle\") pod \"horizon-68b447d964-6llq5\" (UID: \"07cdf1a8-aec4-42ca-a564-c91e7132663d\") " pod="openstack/horizon-68b447d964-6llq5" Jan 21 11:19:55 crc kubenswrapper[4881]: I0121 11:19:55.424665 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/07cdf1a8-aec4-42ca-a564-c91e7132663d-config-data\") pod \"horizon-68b447d964-6llq5\" (UID: \"07cdf1a8-aec4-42ca-a564-c91e7132663d\") " pod="openstack/horizon-68b447d964-6llq5" Jan 21 11:19:55 crc kubenswrapper[4881]: I0121 11:19:55.424691 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/07cdf1a8-aec4-42ca-a564-c91e7132663d-logs\") pod \"horizon-68b447d964-6llq5\" (UID: \"07cdf1a8-aec4-42ca-a564-c91e7132663d\") " pod="openstack/horizon-68b447d964-6llq5" Jan 21 11:19:55 crc kubenswrapper[4881]: I0121 11:19:55.424732 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/07cdf1a8-aec4-42ca-a564-c91e7132663d-scripts\") pod \"horizon-68b447d964-6llq5\" (UID: \"07cdf1a8-aec4-42ca-a564-c91e7132663d\") " pod="openstack/horizon-68b447d964-6llq5" Jan 21 11:19:55 crc kubenswrapper[4881]: I0121 11:19:55.424819 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/07cdf1a8-aec4-42ca-a564-c91e7132663d-horizon-tls-certs\") pod \"horizon-68b447d964-6llq5\" (UID: \"07cdf1a8-aec4-42ca-a564-c91e7132663d\") " pod="openstack/horizon-68b447d964-6llq5" Jan 21 11:19:55 crc kubenswrapper[4881]: I0121 11:19:55.424838 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/07cdf1a8-aec4-42ca-a564-c91e7132663d-horizon-secret-key\") pod \"horizon-68b447d964-6llq5\" (UID: \"07cdf1a8-aec4-42ca-a564-c91e7132663d\") " pod="openstack/horizon-68b447d964-6llq5" Jan 21 11:19:55 crc kubenswrapper[4881]: I0121 11:19:55.426122 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/07cdf1a8-aec4-42ca-a564-c91e7132663d-logs\") pod \"horizon-68b447d964-6llq5\" (UID: \"07cdf1a8-aec4-42ca-a564-c91e7132663d\") " pod="openstack/horizon-68b447d964-6llq5" Jan 21 11:19:55 crc kubenswrapper[4881]: I0121 11:19:55.426951 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/07cdf1a8-aec4-42ca-a564-c91e7132663d-config-data\") pod \"horizon-68b447d964-6llq5\" (UID: \"07cdf1a8-aec4-42ca-a564-c91e7132663d\") " pod="openstack/horizon-68b447d964-6llq5" Jan 21 11:19:55 crc kubenswrapper[4881]: I0121 11:19:55.427520 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/07cdf1a8-aec4-42ca-a564-c91e7132663d-scripts\") pod \"horizon-68b447d964-6llq5\" (UID: \"07cdf1a8-aec4-42ca-a564-c91e7132663d\") " pod="openstack/horizon-68b447d964-6llq5" Jan 21 11:19:55 crc kubenswrapper[4881]: I0121 11:19:55.432456 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/07cdf1a8-aec4-42ca-a564-c91e7132663d-horizon-tls-certs\") pod \"horizon-68b447d964-6llq5\" (UID: \"07cdf1a8-aec4-42ca-a564-c91e7132663d\") " 
pod="openstack/horizon-68b447d964-6llq5" Jan 21 11:19:55 crc kubenswrapper[4881]: I0121 11:19:55.445265 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/07cdf1a8-aec4-42ca-a564-c91e7132663d-horizon-secret-key\") pod \"horizon-68b447d964-6llq5\" (UID: \"07cdf1a8-aec4-42ca-a564-c91e7132663d\") " pod="openstack/horizon-68b447d964-6llq5" Jan 21 11:19:55 crc kubenswrapper[4881]: I0121 11:19:55.461766 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07cdf1a8-aec4-42ca-a564-c91e7132663d-combined-ca-bundle\") pod \"horizon-68b447d964-6llq5\" (UID: \"07cdf1a8-aec4-42ca-a564-c91e7132663d\") " pod="openstack/horizon-68b447d964-6llq5" Jan 21 11:19:55 crc kubenswrapper[4881]: I0121 11:19:55.462699 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rlg56\" (UniqueName: \"kubernetes.io/projected/07cdf1a8-aec4-42ca-a564-c91e7132663d-kube-api-access-rlg56\") pod \"horizon-68b447d964-6llq5\" (UID: \"07cdf1a8-aec4-42ca-a564-c91e7132663d\") " pod="openstack/horizon-68b447d964-6llq5" Jan 21 11:19:55 crc kubenswrapper[4881]: I0121 11:19:55.755739 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-68b447d964-6llq5" Jan 21 11:19:57 crc kubenswrapper[4881]: I0121 11:19:57.095092 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-bb8f8b9c9-cwqc2" Jan 21 11:19:57 crc kubenswrapper[4881]: I0121 11:19:57.184043 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7c88945fd5-tqqvj"] Jan 21 11:19:57 crc kubenswrapper[4881]: I0121 11:19:57.184293 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7c88945fd5-tqqvj" podUID="e51b074c-ae44-4db9-9ce6-b656a961dfaf" containerName="dnsmasq-dns" containerID="cri-o://942d5c3de6fa62e5024b8e526fb126bf73a64902207ddcb2a51d04aa20661a8c" gracePeriod=10 Jan 21 11:19:57 crc kubenswrapper[4881]: I0121 11:19:57.338826 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-db-sync-t4mx7" Jan 21 11:19:57 crc kubenswrapper[4881]: I0121 11:19:57.480713 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/bc7e598c-b449-4e8c-9214-44e27cb45e53-db-sync-config-data\") pod \"bc7e598c-b449-4e8c-9214-44e27cb45e53\" (UID: \"bc7e598c-b449-4e8c-9214-44e27cb45e53\") " Jan 21 11:19:57 crc kubenswrapper[4881]: I0121 11:19:57.480834 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc7e598c-b449-4e8c-9214-44e27cb45e53-combined-ca-bundle\") pod \"bc7e598c-b449-4e8c-9214-44e27cb45e53\" (UID: \"bc7e598c-b449-4e8c-9214-44e27cb45e53\") " Jan 21 11:19:57 crc kubenswrapper[4881]: I0121 11:19:57.480896 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gd8cs\" (UniqueName: \"kubernetes.io/projected/bc7e598c-b449-4e8c-9214-44e27cb45e53-kube-api-access-gd8cs\") pod \"bc7e598c-b449-4e8c-9214-44e27cb45e53\" (UID: \"bc7e598c-b449-4e8c-9214-44e27cb45e53\") " Jan 21 11:19:57 crc kubenswrapper[4881]: I0121 11:19:57.480969 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc7e598c-b449-4e8c-9214-44e27cb45e53-config-data\") pod \"bc7e598c-b449-4e8c-9214-44e27cb45e53\" (UID: \"bc7e598c-b449-4e8c-9214-44e27cb45e53\") " Jan 21 11:19:57 crc kubenswrapper[4881]: I0121 11:19:57.488213 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc7e598c-b449-4e8c-9214-44e27cb45e53-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "bc7e598c-b449-4e8c-9214-44e27cb45e53" (UID: "bc7e598c-b449-4e8c-9214-44e27cb45e53"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:19:57 crc kubenswrapper[4881]: I0121 11:19:57.509109 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc7e598c-b449-4e8c-9214-44e27cb45e53-kube-api-access-gd8cs" (OuterVolumeSpecName: "kube-api-access-gd8cs") pod "bc7e598c-b449-4e8c-9214-44e27cb45e53" (UID: "bc7e598c-b449-4e8c-9214-44e27cb45e53"). InnerVolumeSpecName "kube-api-access-gd8cs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:19:57 crc kubenswrapper[4881]: I0121 11:19:57.527959 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc7e598c-b449-4e8c-9214-44e27cb45e53-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bc7e598c-b449-4e8c-9214-44e27cb45e53" (UID: "bc7e598c-b449-4e8c-9214-44e27cb45e53"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:19:57 crc kubenswrapper[4881]: I0121 11:19:57.735419 4881 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/bc7e598c-b449-4e8c-9214-44e27cb45e53-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 11:19:57 crc kubenswrapper[4881]: I0121 11:19:57.735487 4881 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc7e598c-b449-4e8c-9214-44e27cb45e53-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 11:19:57 crc kubenswrapper[4881]: I0121 11:19:57.735503 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gd8cs\" (UniqueName: \"kubernetes.io/projected/bc7e598c-b449-4e8c-9214-44e27cb45e53-kube-api-access-gd8cs\") on node \"crc\" DevicePath \"\"" Jan 21 11:19:57 crc kubenswrapper[4881]: I0121 11:19:57.744175 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc7e598c-b449-4e8c-9214-44e27cb45e53-config-data" (OuterVolumeSpecName: "config-data") pod "bc7e598c-b449-4e8c-9214-44e27cb45e53" (UID: "bc7e598c-b449-4e8c-9214-44e27cb45e53"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:19:57 crc kubenswrapper[4881]: I0121 11:19:57.837797 4881 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc7e598c-b449-4e8c-9214-44e27cb45e53-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 11:19:57 crc kubenswrapper[4881]: I0121 11:19:57.893155 4881 generic.go:334] "Generic (PLEG): container finished" podID="e51b074c-ae44-4db9-9ce6-b656a961dfaf" containerID="942d5c3de6fa62e5024b8e526fb126bf73a64902207ddcb2a51d04aa20661a8c" exitCode=0 Jan 21 11:19:57 crc kubenswrapper[4881]: I0121 11:19:57.893239 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c88945fd5-tqqvj" event={"ID":"e51b074c-ae44-4db9-9ce6-b656a961dfaf","Type":"ContainerDied","Data":"942d5c3de6fa62e5024b8e526fb126bf73a64902207ddcb2a51d04aa20661a8c"} Jan 21 11:19:57 crc kubenswrapper[4881]: I0121 11:19:57.894947 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-sync-t4mx7" event={"ID":"bc7e598c-b449-4e8c-9214-44e27cb45e53","Type":"ContainerDied","Data":"7f0bea9e9dc943e576802d8c9a13363afa658fe4236f457e4490a5dbcd4320bd"} Jan 21 11:19:57 crc kubenswrapper[4881]: I0121 11:19:57.894971 4881 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7f0bea9e9dc943e576802d8c9a13363afa658fe4236f457e4490a5dbcd4320bd" Jan 21 11:19:57 crc kubenswrapper[4881]: I0121 11:19:57.895034 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-db-sync-t4mx7" Jan 21 11:19:58 crc kubenswrapper[4881]: I0121 11:19:58.757916 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-api-0"] Jan 21 11:19:58 crc kubenswrapper[4881]: E0121 11:19:58.758389 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc7e598c-b449-4e8c-9214-44e27cb45e53" containerName="watcher-db-sync" Jan 21 11:19:58 crc kubenswrapper[4881]: I0121 11:19:58.758402 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc7e598c-b449-4e8c-9214-44e27cb45e53" containerName="watcher-db-sync" Jan 21 11:19:58 crc kubenswrapper[4881]: I0121 11:19:58.758657 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc7e598c-b449-4e8c-9214-44e27cb45e53" containerName="watcher-db-sync" Jan 21 11:19:58 crc kubenswrapper[4881]: I0121 11:19:58.759869 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-api-0" Jan 21 11:19:58 crc kubenswrapper[4881]: I0121 11:19:58.768883 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-watcher-dockercfg-vlkhp" Jan 21 11:19:58 crc kubenswrapper[4881]: I0121 11:19:58.769385 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-api-config-data" Jan 21 11:19:58 crc kubenswrapper[4881]: I0121 11:19:58.790290 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-applier-0"] Jan 21 11:19:58 crc kubenswrapper[4881]: I0121 11:19:58.806646 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-applier-0" Jan 21 11:19:58 crc kubenswrapper[4881]: I0121 11:19:58.820307 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-applier-config-data" Jan 21 11:19:58 crc kubenswrapper[4881]: I0121 11:19:58.821880 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-api-0"] Jan 21 11:19:58 crc kubenswrapper[4881]: I0121 11:19:58.865925 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6244bcac-82b7-4bd4-b93d-3def53490380-logs\") pod \"watcher-api-0\" (UID: \"6244bcac-82b7-4bd4-b93d-3def53490380\") " pod="openstack/watcher-api-0" Jan 21 11:19:58 crc kubenswrapper[4881]: I0121 11:19:58.865983 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6244bcac-82b7-4bd4-b93d-3def53490380-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"6244bcac-82b7-4bd4-b93d-3def53490380\") " pod="openstack/watcher-api-0" Jan 21 11:19:58 crc kubenswrapper[4881]: I0121 11:19:58.866018 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/6244bcac-82b7-4bd4-b93d-3def53490380-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"6244bcac-82b7-4bd4-b93d-3def53490380\") " pod="openstack/watcher-api-0" Jan 21 11:19:58 crc kubenswrapper[4881]: I0121 11:19:58.866053 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sgt4b\" (UniqueName: \"kubernetes.io/projected/6244bcac-82b7-4bd4-b93d-3def53490380-kube-api-access-sgt4b\") pod \"watcher-api-0\" (UID: \"6244bcac-82b7-4bd4-b93d-3def53490380\") " pod="openstack/watcher-api-0" Jan 21 11:19:58 crc kubenswrapper[4881]: I0121 11:19:58.866305 4881 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6244bcac-82b7-4bd4-b93d-3def53490380-config-data\") pod \"watcher-api-0\" (UID: \"6244bcac-82b7-4bd4-b93d-3def53490380\") " pod="openstack/watcher-api-0" Jan 21 11:19:58 crc kubenswrapper[4881]: I0121 11:19:58.875368 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-applier-0"] Jan 21 11:19:58 crc kubenswrapper[4881]: I0121 11:19:58.917714 4881 generic.go:334] "Generic (PLEG): container finished" podID="cc3f2556-7427-4715-a56d-bbd3d7f8422f" containerID="20252506bf2921633b620e12ae73d258d135c6a818c92bcf4d604ddbc1f5e46d" exitCode=0 Jan 21 11:19:58 crc kubenswrapper[4881]: I0121 11:19:58.917774 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-wg7xs" event={"ID":"cc3f2556-7427-4715-a56d-bbd3d7f8422f","Type":"ContainerDied","Data":"20252506bf2921633b620e12ae73d258d135c6a818c92bcf4d604ddbc1f5e46d"} Jan 21 11:19:58 crc kubenswrapper[4881]: I0121 11:19:58.969001 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6244bcac-82b7-4bd4-b93d-3def53490380-logs\") pod \"watcher-api-0\" (UID: \"6244bcac-82b7-4bd4-b93d-3def53490380\") " pod="openstack/watcher-api-0" Jan 21 11:19:58 crc kubenswrapper[4881]: I0121 11:19:58.969068 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6244bcac-82b7-4bd4-b93d-3def53490380-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"6244bcac-82b7-4bd4-b93d-3def53490380\") " pod="openstack/watcher-api-0" Jan 21 11:19:58 crc kubenswrapper[4881]: I0121 11:19:58.969090 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/6244bcac-82b7-4bd4-b93d-3def53490380-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"6244bcac-82b7-4bd4-b93d-3def53490380\") " pod="openstack/watcher-api-0" Jan 21 11:19:58 crc kubenswrapper[4881]: I0121 11:19:58.969117 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sgt4b\" (UniqueName: \"kubernetes.io/projected/6244bcac-82b7-4bd4-b93d-3def53490380-kube-api-access-sgt4b\") pod \"watcher-api-0\" (UID: \"6244bcac-82b7-4bd4-b93d-3def53490380\") " pod="openstack/watcher-api-0" Jan 21 11:19:58 crc kubenswrapper[4881]: I0121 11:19:58.969211 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/937bcc33-ee83-4f94-ab76-84f534cfd05a-combined-ca-bundle\") pod \"watcher-applier-0\" (UID: \"937bcc33-ee83-4f94-ab76-84f534cfd05a\") " pod="openstack/watcher-applier-0" Jan 21 11:19:58 crc kubenswrapper[4881]: I0121 11:19:58.969256 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/937bcc33-ee83-4f94-ab76-84f534cfd05a-logs\") pod \"watcher-applier-0\" (UID: \"937bcc33-ee83-4f94-ab76-84f534cfd05a\") " pod="openstack/watcher-applier-0" Jan 21 11:19:58 crc kubenswrapper[4881]: I0121 11:19:58.969631 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rlmkr\" (UniqueName: \"kubernetes.io/projected/937bcc33-ee83-4f94-ab76-84f534cfd05a-kube-api-access-rlmkr\") pod \"watcher-applier-0\" (UID: 
\"937bcc33-ee83-4f94-ab76-84f534cfd05a\") " pod="openstack/watcher-applier-0" Jan 21 11:19:58 crc kubenswrapper[4881]: I0121 11:19:58.969715 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/937bcc33-ee83-4f94-ab76-84f534cfd05a-config-data\") pod \"watcher-applier-0\" (UID: \"937bcc33-ee83-4f94-ab76-84f534cfd05a\") " pod="openstack/watcher-applier-0" Jan 21 11:19:58 crc kubenswrapper[4881]: I0121 11:19:58.969745 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6244bcac-82b7-4bd4-b93d-3def53490380-logs\") pod \"watcher-api-0\" (UID: \"6244bcac-82b7-4bd4-b93d-3def53490380\") " pod="openstack/watcher-api-0" Jan 21 11:19:58 crc kubenswrapper[4881]: I0121 11:19:58.969777 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6244bcac-82b7-4bd4-b93d-3def53490380-config-data\") pod \"watcher-api-0\" (UID: \"6244bcac-82b7-4bd4-b93d-3def53490380\") " pod="openstack/watcher-api-0" Jan 21 11:19:58 crc kubenswrapper[4881]: I0121 11:19:58.974961 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6244bcac-82b7-4bd4-b93d-3def53490380-config-data\") pod \"watcher-api-0\" (UID: \"6244bcac-82b7-4bd4-b93d-3def53490380\") " pod="openstack/watcher-api-0" Jan 21 11:19:58 crc kubenswrapper[4881]: I0121 11:19:58.975263 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/6244bcac-82b7-4bd4-b93d-3def53490380-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"6244bcac-82b7-4bd4-b93d-3def53490380\") " pod="openstack/watcher-api-0" Jan 21 11:19:58 crc kubenswrapper[4881]: I0121 11:19:58.994485 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6244bcac-82b7-4bd4-b93d-3def53490380-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"6244bcac-82b7-4bd4-b93d-3def53490380\") " pod="openstack/watcher-api-0" Jan 21 11:19:59 crc kubenswrapper[4881]: I0121 11:19:59.005051 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-decision-engine-0"] Jan 21 11:19:59 crc kubenswrapper[4881]: I0121 11:19:59.010983 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-decision-engine-0" Jan 21 11:19:59 crc kubenswrapper[4881]: I0121 11:19:59.015670 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-decision-engine-config-data" Jan 21 11:19:59 crc kubenswrapper[4881]: I0121 11:19:59.017458 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-decision-engine-0"] Jan 21 11:19:59 crc kubenswrapper[4881]: I0121 11:19:59.020699 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sgt4b\" (UniqueName: \"kubernetes.io/projected/6244bcac-82b7-4bd4-b93d-3def53490380-kube-api-access-sgt4b\") pod \"watcher-api-0\" (UID: \"6244bcac-82b7-4bd4-b93d-3def53490380\") " pod="openstack/watcher-api-0" Jan 21 11:19:59 crc kubenswrapper[4881]: I0121 11:19:59.074452 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/937bcc33-ee83-4f94-ab76-84f534cfd05a-config-data\") pod \"watcher-applier-0\" (UID: \"937bcc33-ee83-4f94-ab76-84f534cfd05a\") " pod="openstack/watcher-applier-0" Jan 21 11:19:59 crc kubenswrapper[4881]: I0121 11:19:59.074747 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/937bcc33-ee83-4f94-ab76-84f534cfd05a-combined-ca-bundle\") pod \"watcher-applier-0\" (UID: \"937bcc33-ee83-4f94-ab76-84f534cfd05a\") " pod="openstack/watcher-applier-0" Jan 21 11:19:59 crc kubenswrapper[4881]: I0121 11:19:59.075480 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/937bcc33-ee83-4f94-ab76-84f534cfd05a-logs\") pod \"watcher-applier-0\" (UID: \"937bcc33-ee83-4f94-ab76-84f534cfd05a\") " pod="openstack/watcher-applier-0" Jan 21 11:19:59 crc kubenswrapper[4881]: I0121 11:19:59.076423 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/937bcc33-ee83-4f94-ab76-84f534cfd05a-logs\") pod \"watcher-applier-0\" (UID: \"937bcc33-ee83-4f94-ab76-84f534cfd05a\") " pod="openstack/watcher-applier-0" Jan 21 11:19:59 crc kubenswrapper[4881]: I0121 11:19:59.076541 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rlmkr\" (UniqueName: \"kubernetes.io/projected/937bcc33-ee83-4f94-ab76-84f534cfd05a-kube-api-access-rlmkr\") pod \"watcher-applier-0\" (UID: \"937bcc33-ee83-4f94-ab76-84f534cfd05a\") " pod="openstack/watcher-applier-0" Jan 21 11:19:59 crc kubenswrapper[4881]: I0121 11:19:59.079376 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/937bcc33-ee83-4f94-ab76-84f534cfd05a-combined-ca-bundle\") pod \"watcher-applier-0\" (UID: \"937bcc33-ee83-4f94-ab76-84f534cfd05a\") " pod="openstack/watcher-applier-0" Jan 21 11:19:59 crc kubenswrapper[4881]: I0121 11:19:59.079493 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/937bcc33-ee83-4f94-ab76-84f534cfd05a-config-data\") pod \"watcher-applier-0\" (UID: \"937bcc33-ee83-4f94-ab76-84f534cfd05a\") " pod="openstack/watcher-applier-0" Jan 21 11:19:59 crc kubenswrapper[4881]: I0121 11:19:59.086714 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-api-0" Jan 21 11:19:59 crc kubenswrapper[4881]: I0121 11:19:59.097198 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rlmkr\" (UniqueName: \"kubernetes.io/projected/937bcc33-ee83-4f94-ab76-84f534cfd05a-kube-api-access-rlmkr\") pod \"watcher-applier-0\" (UID: \"937bcc33-ee83-4f94-ab76-84f534cfd05a\") " pod="openstack/watcher-applier-0" Jan 21 11:19:59 crc kubenswrapper[4881]: I0121 11:19:59.147323 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-applier-0" Jan 21 11:19:59 crc kubenswrapper[4881]: I0121 11:19:59.175711 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-ncbfx"] Jan 21 11:19:59 crc kubenswrapper[4881]: I0121 11:19:59.181820 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/ee4e7116-c2cd-43d5-af6b-9f30b5053e0e-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"ee4e7116-c2cd-43d5-af6b-9f30b5053e0e\") " pod="openstack/watcher-decision-engine-0" Jan 21 11:19:59 crc kubenswrapper[4881]: I0121 11:19:59.181936 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee4e7116-c2cd-43d5-af6b-9f30b5053e0e-config-data\") pod \"watcher-decision-engine-0\" (UID: \"ee4e7116-c2cd-43d5-af6b-9f30b5053e0e\") " pod="openstack/watcher-decision-engine-0" Jan 21 11:19:59 crc kubenswrapper[4881]: I0121 11:19:59.181968 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ee4e7116-c2cd-43d5-af6b-9f30b5053e0e-logs\") pod \"watcher-decision-engine-0\" (UID: \"ee4e7116-c2cd-43d5-af6b-9f30b5053e0e\") " pod="openstack/watcher-decision-engine-0" Jan 21 11:19:59 crc kubenswrapper[4881]: I0121 11:19:59.182012 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wffxr\" (UniqueName: \"kubernetes.io/projected/ee4e7116-c2cd-43d5-af6b-9f30b5053e0e-kube-api-access-wffxr\") pod \"watcher-decision-engine-0\" (UID: \"ee4e7116-c2cd-43d5-af6b-9f30b5053e0e\") " pod="openstack/watcher-decision-engine-0" Jan 21 11:19:59 crc kubenswrapper[4881]: I0121 11:19:59.183423 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee4e7116-c2cd-43d5-af6b-9f30b5053e0e-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"ee4e7116-c2cd-43d5-af6b-9f30b5053e0e\") " pod="openstack/watcher-decision-engine-0" Jan 21 11:19:59 crc kubenswrapper[4881]: I0121 11:19:59.196828 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-ncbfx" Jan 21 11:19:59 crc kubenswrapper[4881]: I0121 11:19:59.220566 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ncbfx"] Jan 21 11:19:59 crc kubenswrapper[4881]: I0121 11:19:59.286508 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-79rxx\" (UniqueName: \"kubernetes.io/projected/6a8083e9-c68d-40ca-bde9-b84e43b65ab8-kube-api-access-79rxx\") pod \"redhat-operators-ncbfx\" (UID: \"6a8083e9-c68d-40ca-bde9-b84e43b65ab8\") " pod="openshift-marketplace/redhat-operators-ncbfx" Jan 21 11:19:59 crc kubenswrapper[4881]: I0121 11:19:59.286890 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/ee4e7116-c2cd-43d5-af6b-9f30b5053e0e-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"ee4e7116-c2cd-43d5-af6b-9f30b5053e0e\") " pod="openstack/watcher-decision-engine-0" Jan 21 11:19:59 crc kubenswrapper[4881]: I0121 11:19:59.287142 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee4e7116-c2cd-43d5-af6b-9f30b5053e0e-config-data\") pod \"watcher-decision-engine-0\" (UID: \"ee4e7116-c2cd-43d5-af6b-9f30b5053e0e\") " pod="openstack/watcher-decision-engine-0" Jan 21 11:19:59 crc kubenswrapper[4881]: I0121 11:19:59.287201 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ee4e7116-c2cd-43d5-af6b-9f30b5053e0e-logs\") pod \"watcher-decision-engine-0\" (UID: \"ee4e7116-c2cd-43d5-af6b-9f30b5053e0e\") " pod="openstack/watcher-decision-engine-0" Jan 21 11:19:59 crc kubenswrapper[4881]: I0121 11:19:59.287262 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6a8083e9-c68d-40ca-bde9-b84e43b65ab8-catalog-content\") pod \"redhat-operators-ncbfx\" (UID: \"6a8083e9-c68d-40ca-bde9-b84e43b65ab8\") " pod="openshift-marketplace/redhat-operators-ncbfx" Jan 21 11:19:59 crc kubenswrapper[4881]: I0121 11:19:59.287309 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wffxr\" (UniqueName: \"kubernetes.io/projected/ee4e7116-c2cd-43d5-af6b-9f30b5053e0e-kube-api-access-wffxr\") pod \"watcher-decision-engine-0\" (UID: \"ee4e7116-c2cd-43d5-af6b-9f30b5053e0e\") " pod="openstack/watcher-decision-engine-0" Jan 21 11:19:59 crc kubenswrapper[4881]: I0121 11:19:59.287693 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6a8083e9-c68d-40ca-bde9-b84e43b65ab8-utilities\") pod \"redhat-operators-ncbfx\" (UID: \"6a8083e9-c68d-40ca-bde9-b84e43b65ab8\") " pod="openshift-marketplace/redhat-operators-ncbfx" Jan 21 11:19:59 crc kubenswrapper[4881]: I0121 11:19:59.288188 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee4e7116-c2cd-43d5-af6b-9f30b5053e0e-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"ee4e7116-c2cd-43d5-af6b-9f30b5053e0e\") " pod="openstack/watcher-decision-engine-0" Jan 21 11:19:59 crc kubenswrapper[4881]: I0121 11:19:59.291728 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/ee4e7116-c2cd-43d5-af6b-9f30b5053e0e-logs\") pod \"watcher-decision-engine-0\" (UID: \"ee4e7116-c2cd-43d5-af6b-9f30b5053e0e\") " pod="openstack/watcher-decision-engine-0" Jan 21 11:19:59 crc kubenswrapper[4881]: I0121 11:19:59.304660 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/ee4e7116-c2cd-43d5-af6b-9f30b5053e0e-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"ee4e7116-c2cd-43d5-af6b-9f30b5053e0e\") " pod="openstack/watcher-decision-engine-0" Jan 21 11:19:59 crc kubenswrapper[4881]: I0121 11:19:59.307298 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee4e7116-c2cd-43d5-af6b-9f30b5053e0e-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"ee4e7116-c2cd-43d5-af6b-9f30b5053e0e\") " pod="openstack/watcher-decision-engine-0" Jan 21 11:19:59 crc kubenswrapper[4881]: I0121 11:19:59.312108 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wffxr\" (UniqueName: \"kubernetes.io/projected/ee4e7116-c2cd-43d5-af6b-9f30b5053e0e-kube-api-access-wffxr\") pod \"watcher-decision-engine-0\" (UID: \"ee4e7116-c2cd-43d5-af6b-9f30b5053e0e\") " pod="openstack/watcher-decision-engine-0" Jan 21 11:19:59 crc kubenswrapper[4881]: I0121 11:19:59.326831 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee4e7116-c2cd-43d5-af6b-9f30b5053e0e-config-data\") pod \"watcher-decision-engine-0\" (UID: \"ee4e7116-c2cd-43d5-af6b-9f30b5053e0e\") " pod="openstack/watcher-decision-engine-0" Jan 21 11:19:59 crc kubenswrapper[4881]: I0121 11:19:59.391114 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6a8083e9-c68d-40ca-bde9-b84e43b65ab8-utilities\") pod \"redhat-operators-ncbfx\" (UID: \"6a8083e9-c68d-40ca-bde9-b84e43b65ab8\") " pod="openshift-marketplace/redhat-operators-ncbfx" Jan 21 11:19:59 crc kubenswrapper[4881]: I0121 11:19:59.391337 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-79rxx\" (UniqueName: \"kubernetes.io/projected/6a8083e9-c68d-40ca-bde9-b84e43b65ab8-kube-api-access-79rxx\") pod \"redhat-operators-ncbfx\" (UID: \"6a8083e9-c68d-40ca-bde9-b84e43b65ab8\") " pod="openshift-marketplace/redhat-operators-ncbfx" Jan 21 11:19:59 crc kubenswrapper[4881]: I0121 11:19:59.391463 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6a8083e9-c68d-40ca-bde9-b84e43b65ab8-catalog-content\") pod \"redhat-operators-ncbfx\" (UID: \"6a8083e9-c68d-40ca-bde9-b84e43b65ab8\") " pod="openshift-marketplace/redhat-operators-ncbfx" Jan 21 11:19:59 crc kubenswrapper[4881]: I0121 11:19:59.391610 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6a8083e9-c68d-40ca-bde9-b84e43b65ab8-utilities\") pod \"redhat-operators-ncbfx\" (UID: \"6a8083e9-c68d-40ca-bde9-b84e43b65ab8\") " pod="openshift-marketplace/redhat-operators-ncbfx" Jan 21 11:19:59 crc kubenswrapper[4881]: I0121 11:19:59.391978 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6a8083e9-c68d-40ca-bde9-b84e43b65ab8-catalog-content\") pod \"redhat-operators-ncbfx\" (UID: 
\"6a8083e9-c68d-40ca-bde9-b84e43b65ab8\") " pod="openshift-marketplace/redhat-operators-ncbfx" Jan 21 11:19:59 crc kubenswrapper[4881]: I0121 11:19:59.418286 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-79rxx\" (UniqueName: \"kubernetes.io/projected/6a8083e9-c68d-40ca-bde9-b84e43b65ab8-kube-api-access-79rxx\") pod \"redhat-operators-ncbfx\" (UID: \"6a8083e9-c68d-40ca-bde9-b84e43b65ab8\") " pod="openshift-marketplace/redhat-operators-ncbfx" Jan 21 11:19:59 crc kubenswrapper[4881]: I0121 11:19:59.496743 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-decision-engine-0" Jan 21 11:19:59 crc kubenswrapper[4881]: I0121 11:19:59.526637 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ncbfx" Jan 21 11:20:00 crc kubenswrapper[4881]: I0121 11:20:00.639830 4881 patch_prober.go:28] interesting pod/machine-config-daemon-fb4fr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 11:20:00 crc kubenswrapper[4881]: I0121 11:20:00.639874 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 11:20:00 crc kubenswrapper[4881]: I0121 11:20:00.644249 4881 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-7c88945fd5-tqqvj" podUID="e51b074c-ae44-4db9-9ce6-b656a961dfaf" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.137:5353: connect: connection refused" Jan 21 11:20:05 crc kubenswrapper[4881]: I0121 11:20:05.604033 4881 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-7c88945fd5-tqqvj" podUID="e51b074c-ae44-4db9-9ce6-b656a961dfaf" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.137:5353: connect: connection refused" Jan 21 11:20:10 crc kubenswrapper[4881]: E0121 11:20:10.069075 4881 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.182:5001/podified-master-centos10/openstack-horizon:watcher_latest" Jan 21 11:20:10 crc kubenswrapper[4881]: E0121 11:20:10.069731 4881 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.182:5001/podified-master-centos10/openstack-horizon:watcher_latest" Jan 21 11:20:10 crc kubenswrapper[4881]: E0121 11:20:10.070039 4881 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:38.102.83.182:5001/podified-master-centos10/openstack-horizon:watcher_latest,Command:[/bin/bash],Args:[-c tail -n+1 -F 
/var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n67dhc5h58dh698h669h54h5bfh557hf9h77h58bh76h5d4h67bh56fh5d9h5f5h68fh5b7h696h544h67fh5c4h56dh57dh584h556h67ch676h589h684hf7q,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:yes,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6pfn2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-f67997f9f-4cvfc_openstack(71dc95ca-296b-4989-8b57-db806091feea): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 11:20:10 crc kubenswrapper[4881]: E0121 11:20:10.073656 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.182:5001/podified-master-centos10/openstack-horizon:watcher_latest\\\"\"]" pod="openstack/horizon-f67997f9f-4cvfc" podUID="71dc95ca-296b-4989-8b57-db806091feea" Jan 21 11:20:10 crc kubenswrapper[4881]: I0121 11:20:10.608810 4881 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-7c88945fd5-tqqvj" podUID="e51b074c-ae44-4db9-9ce6-b656a961dfaf" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.137:5353: connect: connection refused" Jan 21 11:20:10 crc kubenswrapper[4881]: I0121 11:20:10.608973 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7c88945fd5-tqqvj" Jan 21 11:20:12 crc kubenswrapper[4881]: E0121 11:20:12.408440 4881 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.182:5001/podified-master-centos10/openstack-placement-api:watcher_latest" Jan 21 11:20:12 crc kubenswrapper[4881]: E0121 11:20:12.408765 4881 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.182:5001/podified-master-centos10/openstack-placement-api:watcher_latest" Jan 21 
11:20:12 crc kubenswrapper[4881]: E0121 11:20:12.408915 4881 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:placement-db-sync,Image:38.102.83.182:5001/podified-master-centos10/openstack-placement-api:watcher_latest,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/placement,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:placement-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gv7qz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42482,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-db-sync-kc9jz_openstack(f568ffda-82a9-4f47-89d3-13b89a35c9b4): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 11:20:12 crc kubenswrapper[4881]: E0121 11:20:12.410070 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/placement-db-sync-kc9jz" podUID="f568ffda-82a9-4f47-89d3-13b89a35c9b4" Jan 21 11:20:12 crc kubenswrapper[4881]: E0121 11:20:12.621093 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.182:5001/podified-master-centos10/openstack-placement-api:watcher_latest\\\"\"" pod="openstack/placement-db-sync-kc9jz" podUID="f568ffda-82a9-4f47-89d3-13b89a35c9b4" Jan 21 11:20:14 crc kubenswrapper[4881]: E0121 11:20:14.223720 4881 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.182:5001/podified-master-centos10/openstack-glance-api:watcher_latest" Jan 21 11:20:14 crc kubenswrapper[4881]: E0121 11:20:14.224107 4881 kuberuntime_image.go:55] 
"Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.182:5001/podified-master-centos10/openstack-glance-api:watcher_latest" Jan 21 11:20:14 crc kubenswrapper[4881]: E0121 11:20:14.224261 4881 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:glance-db-sync,Image:38.102.83.182:5001/podified-master-centos10/openstack-glance-api:watcher_latest,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/glance/glance.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gvn9r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42415,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42415,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-db-sync-mxb97_openstack(349e8898-8b7c-414a-8357-d431c8b81bf4): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 11:20:14 crc kubenswrapper[4881]: E0121 11:20:14.225565 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/glance-db-sync-mxb97" podUID="349e8898-8b7c-414a-8357-d431c8b81bf4" Jan 21 11:20:14 crc kubenswrapper[4881]: E0121 11:20:14.242517 4881 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.182:5001/podified-master-centos10/openstack-horizon:watcher_latest" Jan 21 11:20:14 crc kubenswrapper[4881]: E0121 11:20:14.242642 4881 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.182:5001/podified-master-centos10/openstack-horizon:watcher_latest" Jan 21 11:20:14 crc kubenswrapper[4881]: E0121 11:20:14.242808 4881 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:horizon-log,Image:38.102.83.182:5001/podified-master-centos10/openstack-horizon:watcher_latest,Command:[/bin/bash],Args:[-c tail -n+1 -F /var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n5cfh694h8fh56ch9dh578h658h8dh58h5ch59ch5f6hd6h54ch88h57ch66fh596h8h5cbh576h547h84h5c8h654hcch55fh5b7h678h5b6h9dh78q,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:yes,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-62jn4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-67c79cd6d5-lrpwx_openstack(ab2b33fa-d171-4525-b7a6-5bfc3a732fa4): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 11:20:14 crc kubenswrapper[4881]: E0121 11:20:14.245379 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.182:5001/podified-master-centos10/openstack-horizon:watcher_latest\\\"\"]" pod="openstack/horizon-67c79cd6d5-lrpwx" podUID="ab2b33fa-d171-4525-b7a6-5bfc3a732fa4" Jan 21 11:20:14 crc kubenswrapper[4881]: E0121 11:20:14.250149 4881 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.182:5001/podified-master-centos10/openstack-horizon:watcher_latest" Jan 21 11:20:14 crc kubenswrapper[4881]: E0121 11:20:14.250234 4881 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.182:5001/podified-master-centos10/openstack-horizon:watcher_latest" Jan 21 11:20:14 crc kubenswrapper[4881]: E0121 11:20:14.250382 4881 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:38.102.83.182:5001/podified-master-centos10/openstack-horizon:watcher_latest,Command:[/bin/bash],Args:[-c tail -n+1 -F 
/var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n77h668h67h548h54fhd5hd6h5f5h578h79h5dh87h95h99h59bh568h689h65ch5dbh74h554h5d6h5fbh9bh586h566h5b8h5f4h76h5c6h565h6bq,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:yes,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-t6lrj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-77fb486557-zjtxw_openstack(d96c79b7-58c4-4bcc-9e56-02f2a8860764): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 11:20:14 crc kubenswrapper[4881]: E0121 11:20:14.253607 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.182:5001/podified-master-centos10/openstack-horizon:watcher_latest\\\"\"]" pod="openstack/horizon-77fb486557-zjtxw" podUID="d96c79b7-58c4-4bcc-9e56-02f2a8860764" Jan 21 11:20:14 crc kubenswrapper[4881]: I0121 11:20:14.318730 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-wg7xs" Jan 21 11:20:14 crc kubenswrapper[4881]: I0121 11:20:14.375599 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc3f2556-7427-4715-a56d-bbd3d7f8422f-combined-ca-bundle\") pod \"cc3f2556-7427-4715-a56d-bbd3d7f8422f\" (UID: \"cc3f2556-7427-4715-a56d-bbd3d7f8422f\") " Jan 21 11:20:14 crc kubenswrapper[4881]: I0121 11:20:14.375720 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6vbj2\" (UniqueName: \"kubernetes.io/projected/cc3f2556-7427-4715-a56d-bbd3d7f8422f-kube-api-access-6vbj2\") pod \"cc3f2556-7427-4715-a56d-bbd3d7f8422f\" (UID: \"cc3f2556-7427-4715-a56d-bbd3d7f8422f\") " Jan 21 11:20:14 crc kubenswrapper[4881]: I0121 11:20:14.375822 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cc3f2556-7427-4715-a56d-bbd3d7f8422f-scripts\") pod \"cc3f2556-7427-4715-a56d-bbd3d7f8422f\" (UID: \"cc3f2556-7427-4715-a56d-bbd3d7f8422f\") " Jan 21 11:20:14 crc kubenswrapper[4881]: I0121 11:20:14.375871 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cc3f2556-7427-4715-a56d-bbd3d7f8422f-config-data\") pod \"cc3f2556-7427-4715-a56d-bbd3d7f8422f\" (UID: \"cc3f2556-7427-4715-a56d-bbd3d7f8422f\") " Jan 21 11:20:14 crc kubenswrapper[4881]: I0121 11:20:14.375927 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/cc3f2556-7427-4715-a56d-bbd3d7f8422f-credential-keys\") pod \"cc3f2556-7427-4715-a56d-bbd3d7f8422f\" (UID: \"cc3f2556-7427-4715-a56d-bbd3d7f8422f\") " Jan 21 11:20:14 crc kubenswrapper[4881]: I0121 11:20:14.375945 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/cc3f2556-7427-4715-a56d-bbd3d7f8422f-fernet-keys\") pod \"cc3f2556-7427-4715-a56d-bbd3d7f8422f\" (UID: \"cc3f2556-7427-4715-a56d-bbd3d7f8422f\") " Jan 21 11:20:14 crc kubenswrapper[4881]: I0121 11:20:14.385012 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc3f2556-7427-4715-a56d-bbd3d7f8422f-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "cc3f2556-7427-4715-a56d-bbd3d7f8422f" (UID: "cc3f2556-7427-4715-a56d-bbd3d7f8422f"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:20:14 crc kubenswrapper[4881]: I0121 11:20:14.385753 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc3f2556-7427-4715-a56d-bbd3d7f8422f-scripts" (OuterVolumeSpecName: "scripts") pod "cc3f2556-7427-4715-a56d-bbd3d7f8422f" (UID: "cc3f2556-7427-4715-a56d-bbd3d7f8422f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:20:14 crc kubenswrapper[4881]: I0121 11:20:14.385860 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc3f2556-7427-4715-a56d-bbd3d7f8422f-kube-api-access-6vbj2" (OuterVolumeSpecName: "kube-api-access-6vbj2") pod "cc3f2556-7427-4715-a56d-bbd3d7f8422f" (UID: "cc3f2556-7427-4715-a56d-bbd3d7f8422f"). InnerVolumeSpecName "kube-api-access-6vbj2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:20:14 crc kubenswrapper[4881]: I0121 11:20:14.388463 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc3f2556-7427-4715-a56d-bbd3d7f8422f-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "cc3f2556-7427-4715-a56d-bbd3d7f8422f" (UID: "cc3f2556-7427-4715-a56d-bbd3d7f8422f"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:20:14 crc kubenswrapper[4881]: I0121 11:20:14.448271 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc3f2556-7427-4715-a56d-bbd3d7f8422f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cc3f2556-7427-4715-a56d-bbd3d7f8422f" (UID: "cc3f2556-7427-4715-a56d-bbd3d7f8422f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:20:14 crc kubenswrapper[4881]: I0121 11:20:14.455971 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc3f2556-7427-4715-a56d-bbd3d7f8422f-config-data" (OuterVolumeSpecName: "config-data") pod "cc3f2556-7427-4715-a56d-bbd3d7f8422f" (UID: "cc3f2556-7427-4715-a56d-bbd3d7f8422f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:20:14 crc kubenswrapper[4881]: I0121 11:20:14.483559 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6vbj2\" (UniqueName: \"kubernetes.io/projected/cc3f2556-7427-4715-a56d-bbd3d7f8422f-kube-api-access-6vbj2\") on node \"crc\" DevicePath \"\"" Jan 21 11:20:14 crc kubenswrapper[4881]: I0121 11:20:14.483636 4881 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cc3f2556-7427-4715-a56d-bbd3d7f8422f-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 11:20:14 crc kubenswrapper[4881]: I0121 11:20:14.483651 4881 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cc3f2556-7427-4715-a56d-bbd3d7f8422f-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 11:20:14 crc kubenswrapper[4881]: I0121 11:20:14.483663 4881 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/cc3f2556-7427-4715-a56d-bbd3d7f8422f-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 21 11:20:14 crc kubenswrapper[4881]: I0121 11:20:14.483674 4881 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/cc3f2556-7427-4715-a56d-bbd3d7f8422f-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 21 11:20:14 crc kubenswrapper[4881]: I0121 11:20:14.483686 4881 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc3f2556-7427-4715-a56d-bbd3d7f8422f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 11:20:14 crc kubenswrapper[4881]: I0121 11:20:14.670873 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-wg7xs" Jan 21 11:20:14 crc kubenswrapper[4881]: I0121 11:20:14.676980 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-wg7xs" event={"ID":"cc3f2556-7427-4715-a56d-bbd3d7f8422f","Type":"ContainerDied","Data":"255feaa412fc0f66dab19086ce14a7162b45237578665b2935e062ce5998cebf"} Jan 21 11:20:14 crc kubenswrapper[4881]: I0121 11:20:14.677076 4881 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="255feaa412fc0f66dab19086ce14a7162b45237578665b2935e062ce5998cebf" Jan 21 11:20:15 crc kubenswrapper[4881]: I0121 11:20:15.437536 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-wg7xs"] Jan 21 11:20:15 crc kubenswrapper[4881]: I0121 11:20:15.445829 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-wg7xs"] Jan 21 11:20:15 crc kubenswrapper[4881]: I0121 11:20:15.524349 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-mzhtm"] Jan 21 11:20:15 crc kubenswrapper[4881]: E0121 11:20:15.524993 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc3f2556-7427-4715-a56d-bbd3d7f8422f" containerName="keystone-bootstrap" Jan 21 11:20:15 crc kubenswrapper[4881]: I0121 11:20:15.525016 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc3f2556-7427-4715-a56d-bbd3d7f8422f" containerName="keystone-bootstrap" Jan 21 11:20:15 crc kubenswrapper[4881]: I0121 11:20:15.525267 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc3f2556-7427-4715-a56d-bbd3d7f8422f" containerName="keystone-bootstrap" Jan 21 11:20:15 crc kubenswrapper[4881]: I0121 11:20:15.526207 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-mzhtm" Jan 21 11:20:15 crc kubenswrapper[4881]: I0121 11:20:15.530690 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 21 11:20:15 crc kubenswrapper[4881]: I0121 11:20:15.530865 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 21 11:20:15 crc kubenswrapper[4881]: I0121 11:20:15.531031 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 21 11:20:15 crc kubenswrapper[4881]: I0121 11:20:15.531172 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 21 11:20:15 crc kubenswrapper[4881]: I0121 11:20:15.531279 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-j54nk" Jan 21 11:20:15 crc kubenswrapper[4881]: I0121 11:20:15.542044 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-mzhtm"] Jan 21 11:20:15 crc kubenswrapper[4881]: I0121 11:20:15.721172 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/33f9442b-24ee-47d4-b914-19d32a5cad74-fernet-keys\") pod \"keystone-bootstrap-mzhtm\" (UID: \"33f9442b-24ee-47d4-b914-19d32a5cad74\") " pod="openstack/keystone-bootstrap-mzhtm" Jan 21 11:20:15 crc kubenswrapper[4881]: I0121 11:20:15.723775 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33f9442b-24ee-47d4-b914-19d32a5cad74-combined-ca-bundle\") pod \"keystone-bootstrap-mzhtm\" (UID: \"33f9442b-24ee-47d4-b914-19d32a5cad74\") " pod="openstack/keystone-bootstrap-mzhtm" Jan 21 11:20:15 crc kubenswrapper[4881]: I0121 11:20:15.723970 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/33f9442b-24ee-47d4-b914-19d32a5cad74-credential-keys\") pod \"keystone-bootstrap-mzhtm\" (UID: \"33f9442b-24ee-47d4-b914-19d32a5cad74\") " pod="openstack/keystone-bootstrap-mzhtm" Jan 21 11:20:15 crc kubenswrapper[4881]: I0121 11:20:15.724118 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n2wvg\" (UniqueName: \"kubernetes.io/projected/33f9442b-24ee-47d4-b914-19d32a5cad74-kube-api-access-n2wvg\") pod \"keystone-bootstrap-mzhtm\" (UID: \"33f9442b-24ee-47d4-b914-19d32a5cad74\") " pod="openstack/keystone-bootstrap-mzhtm" Jan 21 11:20:15 crc kubenswrapper[4881]: I0121 11:20:15.724221 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/33f9442b-24ee-47d4-b914-19d32a5cad74-scripts\") pod \"keystone-bootstrap-mzhtm\" (UID: \"33f9442b-24ee-47d4-b914-19d32a5cad74\") " pod="openstack/keystone-bootstrap-mzhtm" Jan 21 11:20:15 crc kubenswrapper[4881]: I0121 11:20:15.724375 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/33f9442b-24ee-47d4-b914-19d32a5cad74-config-data\") pod \"keystone-bootstrap-mzhtm\" (UID: \"33f9442b-24ee-47d4-b914-19d32a5cad74\") " pod="openstack/keystone-bootstrap-mzhtm" Jan 21 11:20:15 crc kubenswrapper[4881]: I0121 11:20:15.827177 4881 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/33f9442b-24ee-47d4-b914-19d32a5cad74-fernet-keys\") pod \"keystone-bootstrap-mzhtm\" (UID: \"33f9442b-24ee-47d4-b914-19d32a5cad74\") " pod="openstack/keystone-bootstrap-mzhtm" Jan 21 11:20:15 crc kubenswrapper[4881]: I0121 11:20:15.827251 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33f9442b-24ee-47d4-b914-19d32a5cad74-combined-ca-bundle\") pod \"keystone-bootstrap-mzhtm\" (UID: \"33f9442b-24ee-47d4-b914-19d32a5cad74\") " pod="openstack/keystone-bootstrap-mzhtm" Jan 21 11:20:15 crc kubenswrapper[4881]: I0121 11:20:15.827289 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/33f9442b-24ee-47d4-b914-19d32a5cad74-credential-keys\") pod \"keystone-bootstrap-mzhtm\" (UID: \"33f9442b-24ee-47d4-b914-19d32a5cad74\") " pod="openstack/keystone-bootstrap-mzhtm" Jan 21 11:20:15 crc kubenswrapper[4881]: I0121 11:20:15.827350 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n2wvg\" (UniqueName: \"kubernetes.io/projected/33f9442b-24ee-47d4-b914-19d32a5cad74-kube-api-access-n2wvg\") pod \"keystone-bootstrap-mzhtm\" (UID: \"33f9442b-24ee-47d4-b914-19d32a5cad74\") " pod="openstack/keystone-bootstrap-mzhtm" Jan 21 11:20:15 crc kubenswrapper[4881]: I0121 11:20:15.827390 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/33f9442b-24ee-47d4-b914-19d32a5cad74-scripts\") pod \"keystone-bootstrap-mzhtm\" (UID: \"33f9442b-24ee-47d4-b914-19d32a5cad74\") " pod="openstack/keystone-bootstrap-mzhtm" Jan 21 11:20:15 crc kubenswrapper[4881]: I0121 11:20:15.827462 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/33f9442b-24ee-47d4-b914-19d32a5cad74-config-data\") pod \"keystone-bootstrap-mzhtm\" (UID: \"33f9442b-24ee-47d4-b914-19d32a5cad74\") " pod="openstack/keystone-bootstrap-mzhtm" Jan 21 11:20:15 crc kubenswrapper[4881]: I0121 11:20:15.833604 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33f9442b-24ee-47d4-b914-19d32a5cad74-combined-ca-bundle\") pod \"keystone-bootstrap-mzhtm\" (UID: \"33f9442b-24ee-47d4-b914-19d32a5cad74\") " pod="openstack/keystone-bootstrap-mzhtm" Jan 21 11:20:15 crc kubenswrapper[4881]: I0121 11:20:15.833699 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/33f9442b-24ee-47d4-b914-19d32a5cad74-config-data\") pod \"keystone-bootstrap-mzhtm\" (UID: \"33f9442b-24ee-47d4-b914-19d32a5cad74\") " pod="openstack/keystone-bootstrap-mzhtm" Jan 21 11:20:15 crc kubenswrapper[4881]: I0121 11:20:15.834947 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/33f9442b-24ee-47d4-b914-19d32a5cad74-credential-keys\") pod \"keystone-bootstrap-mzhtm\" (UID: \"33f9442b-24ee-47d4-b914-19d32a5cad74\") " pod="openstack/keystone-bootstrap-mzhtm" Jan 21 11:20:15 crc kubenswrapper[4881]: I0121 11:20:15.847236 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n2wvg\" (UniqueName: \"kubernetes.io/projected/33f9442b-24ee-47d4-b914-19d32a5cad74-kube-api-access-n2wvg\") pod \"keystone-bootstrap-mzhtm\" (UID: 
\"33f9442b-24ee-47d4-b914-19d32a5cad74\") " pod="openstack/keystone-bootstrap-mzhtm" Jan 21 11:20:15 crc kubenswrapper[4881]: I0121 11:20:15.850777 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/33f9442b-24ee-47d4-b914-19d32a5cad74-scripts\") pod \"keystone-bootstrap-mzhtm\" (UID: \"33f9442b-24ee-47d4-b914-19d32a5cad74\") " pod="openstack/keystone-bootstrap-mzhtm" Jan 21 11:20:15 crc kubenswrapper[4881]: I0121 11:20:15.854301 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/33f9442b-24ee-47d4-b914-19d32a5cad74-fernet-keys\") pod \"keystone-bootstrap-mzhtm\" (UID: \"33f9442b-24ee-47d4-b914-19d32a5cad74\") " pod="openstack/keystone-bootstrap-mzhtm" Jan 21 11:20:15 crc kubenswrapper[4881]: I0121 11:20:15.858895 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-mzhtm" Jan 21 11:20:17 crc kubenswrapper[4881]: I0121 11:20:17.327564 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cc3f2556-7427-4715-a56d-bbd3d7f8422f" path="/var/lib/kubelet/pods/cc3f2556-7427-4715-a56d-bbd3d7f8422f/volumes" Jan 21 11:20:20 crc kubenswrapper[4881]: I0121 11:20:20.607051 4881 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-7c88945fd5-tqqvj" podUID="e51b074c-ae44-4db9-9ce6-b656a961dfaf" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.137:5353: i/o timeout" Jan 21 11:20:24 crc kubenswrapper[4881]: I0121 11:20:24.532758 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-f67997f9f-4cvfc" Jan 21 11:20:24 crc kubenswrapper[4881]: I0121 11:20:24.636776 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/71dc95ca-296b-4989-8b57-db806091feea-config-data\") pod \"71dc95ca-296b-4989-8b57-db806091feea\" (UID: \"71dc95ca-296b-4989-8b57-db806091feea\") " Jan 21 11:20:24 crc kubenswrapper[4881]: I0121 11:20:24.636966 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/71dc95ca-296b-4989-8b57-db806091feea-scripts\") pod \"71dc95ca-296b-4989-8b57-db806091feea\" (UID: \"71dc95ca-296b-4989-8b57-db806091feea\") " Jan 21 11:20:24 crc kubenswrapper[4881]: I0121 11:20:24.637187 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/71dc95ca-296b-4989-8b57-db806091feea-horizon-secret-key\") pod \"71dc95ca-296b-4989-8b57-db806091feea\" (UID: \"71dc95ca-296b-4989-8b57-db806091feea\") " Jan 21 11:20:24 crc kubenswrapper[4881]: I0121 11:20:24.637245 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/71dc95ca-296b-4989-8b57-db806091feea-logs\") pod \"71dc95ca-296b-4989-8b57-db806091feea\" (UID: \"71dc95ca-296b-4989-8b57-db806091feea\") " Jan 21 11:20:24 crc kubenswrapper[4881]: I0121 11:20:24.637318 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6pfn2\" (UniqueName: \"kubernetes.io/projected/71dc95ca-296b-4989-8b57-db806091feea-kube-api-access-6pfn2\") pod \"71dc95ca-296b-4989-8b57-db806091feea\" (UID: \"71dc95ca-296b-4989-8b57-db806091feea\") " Jan 21 11:20:24 crc kubenswrapper[4881]: I0121 11:20:24.637593 4881 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/71dc95ca-296b-4989-8b57-db806091feea-scripts" (OuterVolumeSpecName: "scripts") pod "71dc95ca-296b-4989-8b57-db806091feea" (UID: "71dc95ca-296b-4989-8b57-db806091feea"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:20:24 crc kubenswrapper[4881]: I0121 11:20:24.637749 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/71dc95ca-296b-4989-8b57-db806091feea-logs" (OuterVolumeSpecName: "logs") pod "71dc95ca-296b-4989-8b57-db806091feea" (UID: "71dc95ca-296b-4989-8b57-db806091feea"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 11:20:24 crc kubenswrapper[4881]: I0121 11:20:24.638347 4881 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/71dc95ca-296b-4989-8b57-db806091feea-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 11:20:24 crc kubenswrapper[4881]: I0121 11:20:24.638374 4881 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/71dc95ca-296b-4989-8b57-db806091feea-logs\") on node \"crc\" DevicePath \"\"" Jan 21 11:20:24 crc kubenswrapper[4881]: I0121 11:20:24.638564 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/71dc95ca-296b-4989-8b57-db806091feea-config-data" (OuterVolumeSpecName: "config-data") pod "71dc95ca-296b-4989-8b57-db806091feea" (UID: "71dc95ca-296b-4989-8b57-db806091feea"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:20:24 crc kubenswrapper[4881]: I0121 11:20:24.643503 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/71dc95ca-296b-4989-8b57-db806091feea-kube-api-access-6pfn2" (OuterVolumeSpecName: "kube-api-access-6pfn2") pod "71dc95ca-296b-4989-8b57-db806091feea" (UID: "71dc95ca-296b-4989-8b57-db806091feea"). InnerVolumeSpecName "kube-api-access-6pfn2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:20:24 crc kubenswrapper[4881]: I0121 11:20:24.645542 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/71dc95ca-296b-4989-8b57-db806091feea-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "71dc95ca-296b-4989-8b57-db806091feea" (UID: "71dc95ca-296b-4989-8b57-db806091feea"). InnerVolumeSpecName "horizon-secret-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:20:24 crc kubenswrapper[4881]: I0121 11:20:24.740292 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6pfn2\" (UniqueName: \"kubernetes.io/projected/71dc95ca-296b-4989-8b57-db806091feea-kube-api-access-6pfn2\") on node \"crc\" DevicePath \"\"" Jan 21 11:20:24 crc kubenswrapper[4881]: I0121 11:20:24.740336 4881 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/71dc95ca-296b-4989-8b57-db806091feea-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 11:20:24 crc kubenswrapper[4881]: I0121 11:20:24.740348 4881 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/71dc95ca-296b-4989-8b57-db806091feea-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 21 11:20:24 crc kubenswrapper[4881]: I0121 11:20:24.793459 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-f67997f9f-4cvfc" event={"ID":"71dc95ca-296b-4989-8b57-db806091feea","Type":"ContainerDied","Data":"c28d2087f01d52faf0bfd56ba4bbb293832881e04f8418954c0e024ee5bf824b"} Jan 21 11:20:24 crc kubenswrapper[4881]: I0121 11:20:24.793527 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-f67997f9f-4cvfc" Jan 21 11:20:24 crc kubenswrapper[4881]: I0121 11:20:24.881750 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-f67997f9f-4cvfc"] Jan 21 11:20:24 crc kubenswrapper[4881]: I0121 11:20:24.889676 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-f67997f9f-4cvfc"] Jan 21 11:20:25 crc kubenswrapper[4881]: E0121 11:20:25.064256 4881 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.182:5001/podified-master-centos10/openstack-barbican-api:watcher_latest" Jan 21 11:20:25 crc kubenswrapper[4881]: E0121 11:20:25.064313 4881 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.182:5001/podified-master-centos10/openstack-barbican-api:watcher_latest" Jan 21 11:20:25 crc kubenswrapper[4881]: E0121 11:20:25.064456 4881 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-db-sync,Image:38.102.83.182:5001/podified-master-centos10/openstack-barbican-api:watcher_latest,Command:[/bin/bash],Args:[-c barbican-manage db 
upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-j7pcb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-slhtz_openstack(4bf52889-d5f3-44f8-b657-8ff3790962d1): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 11:20:25 crc kubenswrapper[4881]: E0121 11:20:25.065646 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-slhtz" podUID="4bf52889-d5f3-44f8-b657-8ff3790962d1" Jan 21 11:20:25 crc kubenswrapper[4881]: I0121 11:20:25.252472 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7c88945fd5-tqqvj" Jan 21 11:20:25 crc kubenswrapper[4881]: I0121 11:20:25.262502 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-67c79cd6d5-lrpwx" Jan 21 11:20:25 crc kubenswrapper[4881]: I0121 11:20:25.296942 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-77fb486557-zjtxw" Jan 21 11:20:25 crc kubenswrapper[4881]: I0121 11:20:25.353175 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e51b074c-ae44-4db9-9ce6-b656a961dfaf-dns-svc\") pod \"e51b074c-ae44-4db9-9ce6-b656a961dfaf\" (UID: \"e51b074c-ae44-4db9-9ce6-b656a961dfaf\") " Jan 21 11:20:25 crc kubenswrapper[4881]: I0121 11:20:25.353428 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e51b074c-ae44-4db9-9ce6-b656a961dfaf-config\") pod \"e51b074c-ae44-4db9-9ce6-b656a961dfaf\" (UID: \"e51b074c-ae44-4db9-9ce6-b656a961dfaf\") " Jan 21 11:20:25 crc kubenswrapper[4881]: I0121 11:20:25.353468 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m4gqq\" (UniqueName: \"kubernetes.io/projected/e51b074c-ae44-4db9-9ce6-b656a961dfaf-kube-api-access-m4gqq\") pod \"e51b074c-ae44-4db9-9ce6-b656a961dfaf\" (UID: \"e51b074c-ae44-4db9-9ce6-b656a961dfaf\") " Jan 21 11:20:25 crc kubenswrapper[4881]: I0121 11:20:25.353491 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e51b074c-ae44-4db9-9ce6-b656a961dfaf-ovsdbserver-nb\") pod \"e51b074c-ae44-4db9-9ce6-b656a961dfaf\" (UID: \"e51b074c-ae44-4db9-9ce6-b656a961dfaf\") " Jan 21 11:20:25 crc kubenswrapper[4881]: I0121 11:20:25.353526 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e51b074c-ae44-4db9-9ce6-b656a961dfaf-dns-swift-storage-0\") pod \"e51b074c-ae44-4db9-9ce6-b656a961dfaf\" (UID: \"e51b074c-ae44-4db9-9ce6-b656a961dfaf\") " Jan 21 11:20:25 crc kubenswrapper[4881]: I0121 11:20:25.353641 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e51b074c-ae44-4db9-9ce6-b656a961dfaf-ovsdbserver-sb\") pod \"e51b074c-ae44-4db9-9ce6-b656a961dfaf\" (UID: \"e51b074c-ae44-4db9-9ce6-b656a961dfaf\") " Jan 21 11:20:25 crc kubenswrapper[4881]: I0121 11:20:25.355032 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="71dc95ca-296b-4989-8b57-db806091feea" path="/var/lib/kubelet/pods/71dc95ca-296b-4989-8b57-db806091feea/volumes" Jan 21 11:20:25 crc kubenswrapper[4881]: I0121 11:20:25.365291 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e51b074c-ae44-4db9-9ce6-b656a961dfaf-kube-api-access-m4gqq" (OuterVolumeSpecName: "kube-api-access-m4gqq") pod "e51b074c-ae44-4db9-9ce6-b656a961dfaf" (UID: "e51b074c-ae44-4db9-9ce6-b656a961dfaf"). InnerVolumeSpecName "kube-api-access-m4gqq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:20:25 crc kubenswrapper[4881]: I0121 11:20:25.420720 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e51b074c-ae44-4db9-9ce6-b656a961dfaf-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "e51b074c-ae44-4db9-9ce6-b656a961dfaf" (UID: "e51b074c-ae44-4db9-9ce6-b656a961dfaf"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:20:25 crc kubenswrapper[4881]: I0121 11:20:25.420892 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e51b074c-ae44-4db9-9ce6-b656a961dfaf-config" (OuterVolumeSpecName: "config") pod "e51b074c-ae44-4db9-9ce6-b656a961dfaf" (UID: "e51b074c-ae44-4db9-9ce6-b656a961dfaf"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:20:25 crc kubenswrapper[4881]: I0121 11:20:25.423681 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e51b074c-ae44-4db9-9ce6-b656a961dfaf-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "e51b074c-ae44-4db9-9ce6-b656a961dfaf" (UID: "e51b074c-ae44-4db9-9ce6-b656a961dfaf"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:20:25 crc kubenswrapper[4881]: I0121 11:20:25.432777 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e51b074c-ae44-4db9-9ce6-b656a961dfaf-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "e51b074c-ae44-4db9-9ce6-b656a961dfaf" (UID: "e51b074c-ae44-4db9-9ce6-b656a961dfaf"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:20:25 crc kubenswrapper[4881]: I0121 11:20:25.435070 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e51b074c-ae44-4db9-9ce6-b656a961dfaf-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "e51b074c-ae44-4db9-9ce6-b656a961dfaf" (UID: "e51b074c-ae44-4db9-9ce6-b656a961dfaf"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:20:25 crc kubenswrapper[4881]: I0121 11:20:25.455573 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d96c79b7-58c4-4bcc-9e56-02f2a8860764-scripts\") pod \"d96c79b7-58c4-4bcc-9e56-02f2a8860764\" (UID: \"d96c79b7-58c4-4bcc-9e56-02f2a8860764\") " Jan 21 11:20:25 crc kubenswrapper[4881]: I0121 11:20:25.455691 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/d96c79b7-58c4-4bcc-9e56-02f2a8860764-horizon-secret-key\") pod \"d96c79b7-58c4-4bcc-9e56-02f2a8860764\" (UID: \"d96c79b7-58c4-4bcc-9e56-02f2a8860764\") " Jan 21 11:20:25 crc kubenswrapper[4881]: I0121 11:20:25.455740 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ab2b33fa-d171-4525-b7a6-5bfc3a732fa4-config-data\") pod \"ab2b33fa-d171-4525-b7a6-5bfc3a732fa4\" (UID: \"ab2b33fa-d171-4525-b7a6-5bfc3a732fa4\") " Jan 21 11:20:25 crc kubenswrapper[4881]: I0121 11:20:25.455760 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ab2b33fa-d171-4525-b7a6-5bfc3a732fa4-scripts\") pod \"ab2b33fa-d171-4525-b7a6-5bfc3a732fa4\" (UID: \"ab2b33fa-d171-4525-b7a6-5bfc3a732fa4\") " Jan 21 11:20:25 crc kubenswrapper[4881]: I0121 11:20:25.455807 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-62jn4\" (UniqueName: \"kubernetes.io/projected/ab2b33fa-d171-4525-b7a6-5bfc3a732fa4-kube-api-access-62jn4\") pod \"ab2b33fa-d171-4525-b7a6-5bfc3a732fa4\" (UID: \"ab2b33fa-d171-4525-b7a6-5bfc3a732fa4\") " Jan 21 11:20:25 crc 
kubenswrapper[4881]: I0121 11:20:25.455836 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ab2b33fa-d171-4525-b7a6-5bfc3a732fa4-logs\") pod \"ab2b33fa-d171-4525-b7a6-5bfc3a732fa4\" (UID: \"ab2b33fa-d171-4525-b7a6-5bfc3a732fa4\") " Jan 21 11:20:25 crc kubenswrapper[4881]: I0121 11:20:25.455878 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t6lrj\" (UniqueName: \"kubernetes.io/projected/d96c79b7-58c4-4bcc-9e56-02f2a8860764-kube-api-access-t6lrj\") pod \"d96c79b7-58c4-4bcc-9e56-02f2a8860764\" (UID: \"d96c79b7-58c4-4bcc-9e56-02f2a8860764\") " Jan 21 11:20:25 crc kubenswrapper[4881]: I0121 11:20:25.455946 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/ab2b33fa-d171-4525-b7a6-5bfc3a732fa4-horizon-secret-key\") pod \"ab2b33fa-d171-4525-b7a6-5bfc3a732fa4\" (UID: \"ab2b33fa-d171-4525-b7a6-5bfc3a732fa4\") " Jan 21 11:20:25 crc kubenswrapper[4881]: I0121 11:20:25.456019 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d96c79b7-58c4-4bcc-9e56-02f2a8860764-logs\") pod \"d96c79b7-58c4-4bcc-9e56-02f2a8860764\" (UID: \"d96c79b7-58c4-4bcc-9e56-02f2a8860764\") " Jan 21 11:20:25 crc kubenswrapper[4881]: I0121 11:20:25.456085 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d96c79b7-58c4-4bcc-9e56-02f2a8860764-scripts" (OuterVolumeSpecName: "scripts") pod "d96c79b7-58c4-4bcc-9e56-02f2a8860764" (UID: "d96c79b7-58c4-4bcc-9e56-02f2a8860764"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:20:25 crc kubenswrapper[4881]: I0121 11:20:25.456109 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d96c79b7-58c4-4bcc-9e56-02f2a8860764-config-data\") pod \"d96c79b7-58c4-4bcc-9e56-02f2a8860764\" (UID: \"d96c79b7-58c4-4bcc-9e56-02f2a8860764\") " Jan 21 11:20:25 crc kubenswrapper[4881]: I0121 11:20:25.456494 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ab2b33fa-d171-4525-b7a6-5bfc3a732fa4-logs" (OuterVolumeSpecName: "logs") pod "ab2b33fa-d171-4525-b7a6-5bfc3a732fa4" (UID: "ab2b33fa-d171-4525-b7a6-5bfc3a732fa4"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 11:20:25 crc kubenswrapper[4881]: I0121 11:20:25.456564 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ab2b33fa-d171-4525-b7a6-5bfc3a732fa4-scripts" (OuterVolumeSpecName: "scripts") pod "ab2b33fa-d171-4525-b7a6-5bfc3a732fa4" (UID: "ab2b33fa-d171-4525-b7a6-5bfc3a732fa4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:20:25 crc kubenswrapper[4881]: I0121 11:20:25.456766 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d96c79b7-58c4-4bcc-9e56-02f2a8860764-logs" (OuterVolumeSpecName: "logs") pod "d96c79b7-58c4-4bcc-9e56-02f2a8860764" (UID: "d96c79b7-58c4-4bcc-9e56-02f2a8860764"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 11:20:25 crc kubenswrapper[4881]: I0121 11:20:25.456834 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ab2b33fa-d171-4525-b7a6-5bfc3a732fa4-config-data" (OuterVolumeSpecName: "config-data") pod "ab2b33fa-d171-4525-b7a6-5bfc3a732fa4" (UID: "ab2b33fa-d171-4525-b7a6-5bfc3a732fa4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:20:25 crc kubenswrapper[4881]: I0121 11:20:25.456895 4881 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e51b074c-ae44-4db9-9ce6-b656a961dfaf-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 21 11:20:25 crc kubenswrapper[4881]: I0121 11:20:25.456917 4881 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d96c79b7-58c4-4bcc-9e56-02f2a8860764-logs\") on node \"crc\" DevicePath \"\"" Jan 21 11:20:25 crc kubenswrapper[4881]: I0121 11:20:25.456932 4881 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e51b074c-ae44-4db9-9ce6-b656a961dfaf-config\") on node \"crc\" DevicePath \"\"" Jan 21 11:20:25 crc kubenswrapper[4881]: I0121 11:20:25.456943 4881 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d96c79b7-58c4-4bcc-9e56-02f2a8860764-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 11:20:25 crc kubenswrapper[4881]: I0121 11:20:25.456955 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m4gqq\" (UniqueName: \"kubernetes.io/projected/e51b074c-ae44-4db9-9ce6-b656a961dfaf-kube-api-access-m4gqq\") on node \"crc\" DevicePath \"\"" Jan 21 11:20:25 crc kubenswrapper[4881]: I0121 11:20:25.456969 4881 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e51b074c-ae44-4db9-9ce6-b656a961dfaf-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 21 11:20:25 crc kubenswrapper[4881]: I0121 11:20:25.456980 4881 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e51b074c-ae44-4db9-9ce6-b656a961dfaf-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 21 11:20:25 crc kubenswrapper[4881]: I0121 11:20:25.456993 4881 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ab2b33fa-d171-4525-b7a6-5bfc3a732fa4-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 11:20:25 crc kubenswrapper[4881]: I0121 11:20:25.457003 4881 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ab2b33fa-d171-4525-b7a6-5bfc3a732fa4-logs\") on node \"crc\" DevicePath \"\"" Jan 21 11:20:25 crc kubenswrapper[4881]: I0121 11:20:25.457013 4881 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e51b074c-ae44-4db9-9ce6-b656a961dfaf-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 21 11:20:25 crc kubenswrapper[4881]: I0121 11:20:25.457254 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d96c79b7-58c4-4bcc-9e56-02f2a8860764-config-data" (OuterVolumeSpecName: "config-data") pod "d96c79b7-58c4-4bcc-9e56-02f2a8860764" (UID: "d96c79b7-58c4-4bcc-9e56-02f2a8860764"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:20:25 crc kubenswrapper[4881]: I0121 11:20:25.460304 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab2b33fa-d171-4525-b7a6-5bfc3a732fa4-kube-api-access-62jn4" (OuterVolumeSpecName: "kube-api-access-62jn4") pod "ab2b33fa-d171-4525-b7a6-5bfc3a732fa4" (UID: "ab2b33fa-d171-4525-b7a6-5bfc3a732fa4"). InnerVolumeSpecName "kube-api-access-62jn4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:20:25 crc kubenswrapper[4881]: I0121 11:20:25.460585 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d96c79b7-58c4-4bcc-9e56-02f2a8860764-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "d96c79b7-58c4-4bcc-9e56-02f2a8860764" (UID: "d96c79b7-58c4-4bcc-9e56-02f2a8860764"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:20:25 crc kubenswrapper[4881]: I0121 11:20:25.460688 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab2b33fa-d171-4525-b7a6-5bfc3a732fa4-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "ab2b33fa-d171-4525-b7a6-5bfc3a732fa4" (UID: "ab2b33fa-d171-4525-b7a6-5bfc3a732fa4"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:20:25 crc kubenswrapper[4881]: I0121 11:20:25.468220 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d96c79b7-58c4-4bcc-9e56-02f2a8860764-kube-api-access-t6lrj" (OuterVolumeSpecName: "kube-api-access-t6lrj") pod "d96c79b7-58c4-4bcc-9e56-02f2a8860764" (UID: "d96c79b7-58c4-4bcc-9e56-02f2a8860764"). InnerVolumeSpecName "kube-api-access-t6lrj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:20:25 crc kubenswrapper[4881]: I0121 11:20:25.559470 4881 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/d96c79b7-58c4-4bcc-9e56-02f2a8860764-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 21 11:20:25 crc kubenswrapper[4881]: I0121 11:20:25.560208 4881 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ab2b33fa-d171-4525-b7a6-5bfc3a732fa4-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 11:20:25 crc kubenswrapper[4881]: I0121 11:20:25.560268 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-62jn4\" (UniqueName: \"kubernetes.io/projected/ab2b33fa-d171-4525-b7a6-5bfc3a732fa4-kube-api-access-62jn4\") on node \"crc\" DevicePath \"\"" Jan 21 11:20:25 crc kubenswrapper[4881]: I0121 11:20:25.560352 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t6lrj\" (UniqueName: \"kubernetes.io/projected/d96c79b7-58c4-4bcc-9e56-02f2a8860764-kube-api-access-t6lrj\") on node \"crc\" DevicePath \"\"" Jan 21 11:20:25 crc kubenswrapper[4881]: I0121 11:20:25.560407 4881 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/ab2b33fa-d171-4525-b7a6-5bfc3a732fa4-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 21 11:20:25 crc kubenswrapper[4881]: I0121 11:20:25.560469 4881 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d96c79b7-58c4-4bcc-9e56-02f2a8860764-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 11:20:25 crc kubenswrapper[4881]: I0121 11:20:25.612067 4881 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-7c88945fd5-tqqvj" podUID="e51b074c-ae44-4db9-9ce6-b656a961dfaf" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.137:5353: i/o timeout" Jan 21 11:20:25 crc kubenswrapper[4881]: I0121 11:20:25.655435 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-68b447d964-6llq5"] Jan 21 11:20:25 crc kubenswrapper[4881]: I0121 11:20:25.805150 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c88945fd5-tqqvj" event={"ID":"e51b074c-ae44-4db9-9ce6-b656a961dfaf","Type":"ContainerDied","Data":"485dc8c96eb7030a8e95c465abb23eb90b718f53333b55d575fff9445925584c"} Jan 21 11:20:25 crc kubenswrapper[4881]: I0121 11:20:25.805525 4881 scope.go:117] "RemoveContainer" containerID="942d5c3de6fa62e5024b8e526fb126bf73a64902207ddcb2a51d04aa20661a8c" Jan 21 11:20:25 crc kubenswrapper[4881]: I0121 11:20:25.805195 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7c88945fd5-tqqvj" Jan 21 11:20:25 crc kubenswrapper[4881]: I0121 11:20:25.807595 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-67c79cd6d5-lrpwx" event={"ID":"ab2b33fa-d171-4525-b7a6-5bfc3a732fa4","Type":"ContainerDied","Data":"c6064c9f2031907151a3a773338a4fc1c8d9b098f896f5cca5bc2a461a7bc91d"} Jan 21 11:20:25 crc kubenswrapper[4881]: I0121 11:20:25.807696 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-67c79cd6d5-lrpwx" Jan 21 11:20:25 crc kubenswrapper[4881]: I0121 11:20:25.835421 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-77fb486557-zjtxw" event={"ID":"d96c79b7-58c4-4bcc-9e56-02f2a8860764","Type":"ContainerDied","Data":"af85a7051ff9ab4c70d7145be172f02be844f0b1a0972620051139b6c311b772"} Jan 21 11:20:25 crc kubenswrapper[4881]: I0121 11:20:25.835468 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-77fb486557-zjtxw" Jan 21 11:20:25 crc kubenswrapper[4881]: E0121 11:20:25.837565 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.182:5001/podified-master-centos10/openstack-barbican-api:watcher_latest\\\"\"" pod="openstack/barbican-db-sync-slhtz" podUID="4bf52889-d5f3-44f8-b657-8ff3790962d1" Jan 21 11:20:25 crc kubenswrapper[4881]: I0121 11:20:25.937466 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-67c79cd6d5-lrpwx"] Jan 21 11:20:25 crc kubenswrapper[4881]: I0121 11:20:25.961374 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-67c79cd6d5-lrpwx"] Jan 21 11:20:25 crc kubenswrapper[4881]: I0121 11:20:25.971085 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7c88945fd5-tqqvj"] Jan 21 11:20:25 crc kubenswrapper[4881]: I0121 11:20:25.979457 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7c88945fd5-tqqvj"] Jan 21 11:20:25 crc kubenswrapper[4881]: I0121 11:20:25.996950 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-77fb486557-zjtxw"] Jan 21 11:20:26 crc kubenswrapper[4881]: I0121 11:20:26.006599 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-77fb486557-zjtxw"] Jan 21 11:20:27 crc kubenswrapper[4881]: E0121 11:20:27.313075 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.182:5001/podified-master-centos10/openstack-glance-api:watcher_latest\\\"\"" pod="openstack/glance-db-sync-mxb97" podUID="349e8898-8b7c-414a-8357-d431c8b81bf4" Jan 21 11:20:27 crc kubenswrapper[4881]: I0121 11:20:27.324408 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ab2b33fa-d171-4525-b7a6-5bfc3a732fa4" path="/var/lib/kubelet/pods/ab2b33fa-d171-4525-b7a6-5bfc3a732fa4/volumes" Jan 21 11:20:27 crc kubenswrapper[4881]: I0121 11:20:27.325025 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d96c79b7-58c4-4bcc-9e56-02f2a8860764" path="/var/lib/kubelet/pods/d96c79b7-58c4-4bcc-9e56-02f2a8860764/volumes" Jan 21 11:20:27 crc kubenswrapper[4881]: I0121 11:20:27.325486 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e51b074c-ae44-4db9-9ce6-b656a961dfaf" path="/var/lib/kubelet/pods/e51b074c-ae44-4db9-9ce6-b656a961dfaf/volumes" Jan 21 11:20:29 crc kubenswrapper[4881]: I0121 11:20:29.851319 4881 patch_prober.go:28] interesting pod/machine-config-daemon-fb4fr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 11:20:29 crc kubenswrapper[4881]: I0121 11:20:29.851880 4881 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 11:20:31 crc kubenswrapper[4881]: E0121 11:20:31.493267 4881 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.182:5001/podified-master-centos10/openstack-cinder-api:watcher_latest" Jan 21 11:20:31 crc kubenswrapper[4881]: E0121 11:20:31.493768 4881 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.182:5001/podified-master-centos10/openstack-cinder-api:watcher_latest" Jan 21 11:20:31 crc kubenswrapper[4881]: E0121 11:20:31.493984 4881 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:38.102.83.182:5001/podified-master-centos10/openstack-cinder-api:watcher_latest,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ltkw6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-4wxvl_openstack(65250dcf-0f0f-4fa6-8d57-e07d3d29f290): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 11:20:31 crc 
kubenswrapper[4881]: E0121 11:20:31.496555 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-4wxvl" podUID="65250dcf-0f0f-4fa6-8d57-e07d3d29f290" Jan 21 11:20:31 crc kubenswrapper[4881]: W0121 11:20:31.502825 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod07cdf1a8_aec4_42ca_a564_c91e7132663d.slice/crio-b3a69110b13ed57551e9e7b2d409e0ce6c41734f7980f8a68242d767ea7507c3 WatchSource:0}: Error finding container b3a69110b13ed57551e9e7b2d409e0ce6c41734f7980f8a68242d767ea7507c3: Status 404 returned error can't find the container with id b3a69110b13ed57551e9e7b2d409e0ce6c41734f7980f8a68242d767ea7507c3 Jan 21 11:20:31 crc kubenswrapper[4881]: I0121 11:20:31.520989 4881 scope.go:117] "RemoveContainer" containerID="596eab5e695f6c4af1ee0501f1a922c8b4ac8e567cedab5865035324bb33f0cb" Jan 21 11:20:31 crc kubenswrapper[4881]: I0121 11:20:31.902890 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-68b447d964-6llq5" event={"ID":"07cdf1a8-aec4-42ca-a564-c91e7132663d","Type":"ContainerStarted","Data":"b3a69110b13ed57551e9e7b2d409e0ce6c41734f7980f8a68242d767ea7507c3"} Jan 21 11:20:31 crc kubenswrapper[4881]: E0121 11:20:31.907808 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.182:5001/podified-master-centos10/openstack-cinder-api:watcher_latest\\\"\"" pod="openstack/cinder-db-sync-4wxvl" podUID="65250dcf-0f0f-4fa6-8d57-e07d3d29f290" Jan 21 11:20:32 crc kubenswrapper[4881]: I0121 11:20:32.039409 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-69c96776fd-k2z88"] Jan 21 11:20:32 crc kubenswrapper[4881]: I0121 11:20:32.057212 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ncbfx"] Jan 21 11:20:32 crc kubenswrapper[4881]: I0121 11:20:32.130721 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-mzhtm"] Jan 21 11:20:32 crc kubenswrapper[4881]: W0121 11:20:32.136448 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod33f9442b_24ee_47d4_b914_19d32a5cad74.slice/crio-5eb630cacdc975524e9b6b35c212c8b27a6bcc9b84c6f9d78fe4ce312021f066 WatchSource:0}: Error finding container 5eb630cacdc975524e9b6b35c212c8b27a6bcc9b84c6f9d78fe4ce312021f066: Status 404 returned error can't find the container with id 5eb630cacdc975524e9b6b35c212c8b27a6bcc9b84c6f9d78fe4ce312021f066 Jan 21 11:20:32 crc kubenswrapper[4881]: I0121 11:20:32.207323 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-api-0"] Jan 21 11:20:32 crc kubenswrapper[4881]: I0121 11:20:32.218038 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-decision-engine-0"] Jan 21 11:20:32 crc kubenswrapper[4881]: I0121 11:20:32.281325 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-applier-0"] Jan 21 11:20:32 crc kubenswrapper[4881]: I0121 11:20:32.918538 4881 generic.go:334] "Generic (PLEG): container finished" podID="6a8083e9-c68d-40ca-bde9-b84e43b65ab8" containerID="932fbf80100df4b5aa3c652842e044641d2f0a31589d5beff4fb8c850ca3a5fe" exitCode=0 Jan 21 11:20:32 crc kubenswrapper[4881]: 
I0121 11:20:32.918602 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ncbfx" event={"ID":"6a8083e9-c68d-40ca-bde9-b84e43b65ab8","Type":"ContainerDied","Data":"932fbf80100df4b5aa3c652842e044641d2f0a31589d5beff4fb8c850ca3a5fe"} Jan 21 11:20:32 crc kubenswrapper[4881]: I0121 11:20:32.919208 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ncbfx" event={"ID":"6a8083e9-c68d-40ca-bde9-b84e43b65ab8","Type":"ContainerStarted","Data":"a06c31c201ce60f211d95724861d78b4cdd096d87a4ed5b0a3ede7c018cd2b3c"} Jan 21 11:20:32 crc kubenswrapper[4881]: I0121 11:20:32.924659 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"937bcc33-ee83-4f94-ab76-84f534cfd05a","Type":"ContainerStarted","Data":"997fa5dba21bdf7b6f00e7dc8dc9683ca1d4ab25cea9e4061e18e3bf275550a5"} Jan 21 11:20:32 crc kubenswrapper[4881]: I0121 11:20:32.931079 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bcec3c24-87bd-4c22-a800-d3835455a38b","Type":"ContainerStarted","Data":"04c2a8411b86bd02035922d4fe1ad96f1a1dbf240fbfa10221b52bc6ac101706"} Jan 21 11:20:32 crc kubenswrapper[4881]: I0121 11:20:32.934194 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-mzhtm" event={"ID":"33f9442b-24ee-47d4-b914-19d32a5cad74","Type":"ContainerStarted","Data":"b750c2c4c79eaa65d01394c5ce39a3b9970863a1b04d7248173d08889a7ae0be"} Jan 21 11:20:32 crc kubenswrapper[4881]: I0121 11:20:32.934223 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-mzhtm" event={"ID":"33f9442b-24ee-47d4-b914-19d32a5cad74","Type":"ContainerStarted","Data":"5eb630cacdc975524e9b6b35c212c8b27a6bcc9b84c6f9d78fe4ce312021f066"} Jan 21 11:20:32 crc kubenswrapper[4881]: I0121 11:20:32.936392 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-69c96776fd-k2z88" event={"ID":"2f516fb6-322b-4eee-9d4d-a10176959bbb","Type":"ContainerStarted","Data":"c37cb0dabfc7bd198de45353bd7d592c9381160bf0f186350e93353fe2ea4470"} Jan 21 11:20:32 crc kubenswrapper[4881]: I0121 11:20:32.936418 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-69c96776fd-k2z88" event={"ID":"2f516fb6-322b-4eee-9d4d-a10176959bbb","Type":"ContainerStarted","Data":"1c1c6837f2242fbd603bbb32074adc55de9c3121097b94c5088bc30db69ba787"} Jan 21 11:20:32 crc kubenswrapper[4881]: I0121 11:20:32.938522 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"6244bcac-82b7-4bd4-b93d-3def53490380","Type":"ContainerStarted","Data":"438e3940a5181d3570b1ef008c9096ec2907b5d774f74ab745931fd0122208c4"} Jan 21 11:20:32 crc kubenswrapper[4881]: I0121 11:20:32.939133 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"6244bcac-82b7-4bd4-b93d-3def53490380","Type":"ContainerStarted","Data":"97811bb6b6cd1ac4b1dbc5094a9eed081460120416cffcb6a63fe48350301d28"} Jan 21 11:20:32 crc kubenswrapper[4881]: I0121 11:20:32.946522 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-kc9jz" event={"ID":"f568ffda-82a9-4f47-89d3-13b89a35c9b4","Type":"ContainerStarted","Data":"e31e701604fd33a6bb82c0b6900e3f3bdeaa0b71abb7488fd4edd2c71ed37a56"} Jan 21 11:20:32 crc kubenswrapper[4881]: I0121 11:20:32.951725 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-68b447d964-6llq5" 
event={"ID":"07cdf1a8-aec4-42ca-a564-c91e7132663d","Type":"ContainerStarted","Data":"1ca550c7d5401e7c4177774caca16529ac7e810b26de193d9119b30ce371973d"} Jan 21 11:20:32 crc kubenswrapper[4881]: I0121 11:20:32.951802 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-68b447d964-6llq5" event={"ID":"07cdf1a8-aec4-42ca-a564-c91e7132663d","Type":"ContainerStarted","Data":"d08b5a01336542626157ff229e969c250cd28df9c3cb1c31d812c84ee47db821"} Jan 21 11:20:32 crc kubenswrapper[4881]: I0121 11:20:32.952658 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"ee4e7116-c2cd-43d5-af6b-9f30b5053e0e","Type":"ContainerStarted","Data":"29d3adbd836eae43fe470435c7cc82a51d0ed6187ef1f30da41d37c41cb401fb"} Jan 21 11:20:32 crc kubenswrapper[4881]: I0121 11:20:32.977522 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-mzhtm" podStartSLOduration=17.977498167 podStartE2EDuration="17.977498167s" podCreationTimestamp="2026-01-21 11:20:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 11:20:32.971544498 +0000 UTC m=+1420.231500967" watchObservedRunningTime="2026-01-21 11:20:32.977498167 +0000 UTC m=+1420.237454636" Jan 21 11:20:32 crc kubenswrapper[4881]: I0121 11:20:32.998646 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-68b447d964-6llq5" podStartSLOduration=38.544068883 podStartE2EDuration="38.998624995s" podCreationTimestamp="2026-01-21 11:19:54 +0000 UTC" firstStartedPulling="2026-01-21 11:20:31.521114652 +0000 UTC m=+1418.781071121" lastFinishedPulling="2026-01-21 11:20:31.975670764 +0000 UTC m=+1419.235627233" observedRunningTime="2026-01-21 11:20:32.993495937 +0000 UTC m=+1420.253452416" watchObservedRunningTime="2026-01-21 11:20:32.998624995 +0000 UTC m=+1420.258581464" Jan 21 11:20:33 crc kubenswrapper[4881]: I0121 11:20:33.025866 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-kc9jz" podStartSLOduration=5.202317943 podStartE2EDuration="48.025837116s" podCreationTimestamp="2026-01-21 11:19:45 +0000 UTC" firstStartedPulling="2026-01-21 11:19:48.730220375 +0000 UTC m=+1375.990176844" lastFinishedPulling="2026-01-21 11:20:31.553739548 +0000 UTC m=+1418.813696017" observedRunningTime="2026-01-21 11:20:33.015578119 +0000 UTC m=+1420.275534588" watchObservedRunningTime="2026-01-21 11:20:33.025837116 +0000 UTC m=+1420.285793595" Jan 21 11:20:34 crc kubenswrapper[4881]: I0121 11:20:34.977848 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ncbfx" event={"ID":"6a8083e9-c68d-40ca-bde9-b84e43b65ab8","Type":"ContainerStarted","Data":"c80c8a89877e92046c31c2139dd4330c1447f9d23ecebf26a3928b9515ff61af"} Jan 21 11:20:34 crc kubenswrapper[4881]: I0121 11:20:34.979121 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"937bcc33-ee83-4f94-ab76-84f534cfd05a","Type":"ContainerStarted","Data":"c3bbd97ebdf9aca32eeb94781f993e7cfdd9203a6bf9ab481c3c0b8ff6f0ae1e"} Jan 21 11:20:34 crc kubenswrapper[4881]: I0121 11:20:34.981344 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bcec3c24-87bd-4c22-a800-d3835455a38b","Type":"ContainerStarted","Data":"b14382df533ca3054b8542bddeff2d41d2f1e579142ea3b20b1a7a9c276362b8"} Jan 21 11:20:34 crc kubenswrapper[4881]: 
I0121 11:20:34.983177 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"ee4e7116-c2cd-43d5-af6b-9f30b5053e0e","Type":"ContainerStarted","Data":"5db7a5c0d23dd82d2a5258870db858ab9345870f09ad31cd41b42f8d9eaa1f90"} Jan 21 11:20:34 crc kubenswrapper[4881]: I0121 11:20:34.985646 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-69c96776fd-k2z88" event={"ID":"2f516fb6-322b-4eee-9d4d-a10176959bbb","Type":"ContainerStarted","Data":"20e9501e200b98586a1c9e7d12e2adf41d01903bd2505ab83e7f8f0fc5404f52"} Jan 21 11:20:34 crc kubenswrapper[4881]: I0121 11:20:34.987258 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"6244bcac-82b7-4bd4-b93d-3def53490380","Type":"ContainerStarted","Data":"d951ea6875808772b952ed153f0f2d5544ca533b9519802da41d11d9d1f68396"} Jan 21 11:20:34 crc kubenswrapper[4881]: I0121 11:20:34.987446 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-api-0" Jan 21 11:20:35 crc kubenswrapper[4881]: I0121 11:20:35.028365 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-applier-0" podStartSLOduration=34.925924616 podStartE2EDuration="37.028343213s" podCreationTimestamp="2026-01-21 11:19:58 +0000 UTC" firstStartedPulling="2026-01-21 11:20:32.298699645 +0000 UTC m=+1419.558656104" lastFinishedPulling="2026-01-21 11:20:34.401118232 +0000 UTC m=+1421.661074701" observedRunningTime="2026-01-21 11:20:35.023463911 +0000 UTC m=+1422.283420390" watchObservedRunningTime="2026-01-21 11:20:35.028343213 +0000 UTC m=+1422.288299682" Jan 21 11:20:35 crc kubenswrapper[4881]: I0121 11:20:35.059598 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-decision-engine-0" podStartSLOduration=34.999073806 podStartE2EDuration="37.059581044s" podCreationTimestamp="2026-01-21 11:19:58 +0000 UTC" firstStartedPulling="2026-01-21 11:20:32.256609262 +0000 UTC m=+1419.516565731" lastFinishedPulling="2026-01-21 11:20:34.3171165 +0000 UTC m=+1421.577072969" observedRunningTime="2026-01-21 11:20:35.052488147 +0000 UTC m=+1422.312444616" watchObservedRunningTime="2026-01-21 11:20:35.059581044 +0000 UTC m=+1422.319537513" Jan 21 11:20:35 crc kubenswrapper[4881]: I0121 11:20:35.111719 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-69c96776fd-k2z88" podStartSLOduration=41.111696658 podStartE2EDuration="41.111696658s" podCreationTimestamp="2026-01-21 11:19:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 11:20:35.107443062 +0000 UTC m=+1422.367399551" watchObservedRunningTime="2026-01-21 11:20:35.111696658 +0000 UTC m=+1422.371653127" Jan 21 11:20:35 crc kubenswrapper[4881]: I0121 11:20:35.124639 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-69c96776fd-k2z88" Jan 21 11:20:35 crc kubenswrapper[4881]: I0121 11:20:35.124740 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-69c96776fd-k2z88" Jan 21 11:20:35 crc kubenswrapper[4881]: I0121 11:20:35.133363 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-api-0" podStartSLOduration=37.13333779 podStartE2EDuration="37.13333779s" podCreationTimestamp="2026-01-21 11:19:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 11:20:35.082445516 +0000 UTC m=+1422.342401985" watchObservedRunningTime="2026-01-21 11:20:35.13333779 +0000 UTC m=+1422.393294269" Jan 21 11:20:35 crc kubenswrapper[4881]: I0121 11:20:35.756746 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-68b447d964-6llq5" Jan 21 11:20:35 crc kubenswrapper[4881]: I0121 11:20:35.756820 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-68b447d964-6llq5" Jan 21 11:20:37 crc kubenswrapper[4881]: I0121 11:20:37.022089 4881 generic.go:334] "Generic (PLEG): container finished" podID="6a8083e9-c68d-40ca-bde9-b84e43b65ab8" containerID="c80c8a89877e92046c31c2139dd4330c1447f9d23ecebf26a3928b9515ff61af" exitCode=0 Jan 21 11:20:37 crc kubenswrapper[4881]: I0121 11:20:37.022143 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ncbfx" event={"ID":"6a8083e9-c68d-40ca-bde9-b84e43b65ab8","Type":"ContainerDied","Data":"c80c8a89877e92046c31c2139dd4330c1447f9d23ecebf26a3928b9515ff61af"} Jan 21 11:20:37 crc kubenswrapper[4881]: I0121 11:20:37.803554 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-api-0" Jan 21 11:20:39 crc kubenswrapper[4881]: I0121 11:20:39.087254 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-api-0" Jan 21 11:20:39 crc kubenswrapper[4881]: I0121 11:20:39.087382 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-api-0" Jan 21 11:20:39 crc kubenswrapper[4881]: I0121 11:20:39.092440 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-api-0" Jan 21 11:20:39 crc kubenswrapper[4881]: I0121 11:20:39.148613 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-applier-0" Jan 21 11:20:39 crc kubenswrapper[4881]: I0121 11:20:39.149294 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-applier-0" Jan 21 11:20:39 crc kubenswrapper[4881]: I0121 11:20:39.192744 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-applier-0" Jan 21 11:20:39 crc kubenswrapper[4881]: I0121 11:20:39.498745 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Jan 21 11:20:39 crc kubenswrapper[4881]: I0121 11:20:39.534728 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-decision-engine-0" Jan 21 11:20:40 crc kubenswrapper[4881]: I0121 11:20:40.051165 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-decision-engine-0" Jan 21 11:20:40 crc kubenswrapper[4881]: I0121 11:20:40.060039 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-api-0" Jan 21 11:20:40 crc kubenswrapper[4881]: I0121 11:20:40.098351 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-decision-engine-0" Jan 21 11:20:40 crc kubenswrapper[4881]: I0121 11:20:40.114083 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-applier-0" Jan 21 11:20:41 crc kubenswrapper[4881]: I0121 11:20:41.062994 4881 generic.go:334] "Generic (PLEG): container finished" podID="33f9442b-24ee-47d4-b914-19d32a5cad74" 
containerID="b750c2c4c79eaa65d01394c5ce39a3b9970863a1b04d7248173d08889a7ae0be" exitCode=0 Jan 21 11:20:41 crc kubenswrapper[4881]: I0121 11:20:41.063097 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-mzhtm" event={"ID":"33f9442b-24ee-47d4-b914-19d32a5cad74","Type":"ContainerDied","Data":"b750c2c4c79eaa65d01394c5ce39a3b9970863a1b04d7248173d08889a7ae0be"} Jan 21 11:20:43 crc kubenswrapper[4881]: I0121 11:20:43.091620 4881 generic.go:334] "Generic (PLEG): container finished" podID="f568ffda-82a9-4f47-89d3-13b89a35c9b4" containerID="e31e701604fd33a6bb82c0b6900e3f3bdeaa0b71abb7488fd4edd2c71ed37a56" exitCode=0 Jan 21 11:20:43 crc kubenswrapper[4881]: I0121 11:20:43.091724 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-kc9jz" event={"ID":"f568ffda-82a9-4f47-89d3-13b89a35c9b4","Type":"ContainerDied","Data":"e31e701604fd33a6bb82c0b6900e3f3bdeaa0b71abb7488fd4edd2c71ed37a56"} Jan 21 11:20:43 crc kubenswrapper[4881]: I0121 11:20:43.479841 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-api-0"] Jan 21 11:20:43 crc kubenswrapper[4881]: I0121 11:20:43.480093 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/watcher-api-0" podUID="6244bcac-82b7-4bd4-b93d-3def53490380" containerName="watcher-api-log" containerID="cri-o://438e3940a5181d3570b1ef008c9096ec2907b5d774f74ab745931fd0122208c4" gracePeriod=30 Jan 21 11:20:43 crc kubenswrapper[4881]: I0121 11:20:43.480178 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/watcher-api-0" podUID="6244bcac-82b7-4bd4-b93d-3def53490380" containerName="watcher-api" containerID="cri-o://d951ea6875808772b952ed153f0f2d5544ca533b9519802da41d11d9d1f68396" gracePeriod=30 Jan 21 11:20:44 crc kubenswrapper[4881]: I0121 11:20:44.107663 4881 generic.go:334] "Generic (PLEG): container finished" podID="6244bcac-82b7-4bd4-b93d-3def53490380" containerID="438e3940a5181d3570b1ef008c9096ec2907b5d774f74ab745931fd0122208c4" exitCode=143 Jan 21 11:20:44 crc kubenswrapper[4881]: I0121 11:20:44.107755 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"6244bcac-82b7-4bd4-b93d-3def53490380","Type":"ContainerDied","Data":"438e3940a5181d3570b1ef008c9096ec2907b5d774f74ab745931fd0122208c4"} Jan 21 11:20:45 crc kubenswrapper[4881]: I0121 11:20:45.127998 4881 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-69c96776fd-k2z88" podUID="2f516fb6-322b-4eee-9d4d-a10176959bbb" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.160:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.160:8443: connect: connection refused" Jan 21 11:20:45 crc kubenswrapper[4881]: I0121 11:20:45.131214 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-mzhtm" event={"ID":"33f9442b-24ee-47d4-b914-19d32a5cad74","Type":"ContainerDied","Data":"5eb630cacdc975524e9b6b35c212c8b27a6bcc9b84c6f9d78fe4ce312021f066"} Jan 21 11:20:45 crc kubenswrapper[4881]: I0121 11:20:45.131346 4881 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5eb630cacdc975524e9b6b35c212c8b27a6bcc9b84c6f9d78fe4ce312021f066" Jan 21 11:20:45 crc kubenswrapper[4881]: I0121 11:20:45.138505 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-kc9jz" Jan 21 11:20:45 crc kubenswrapper[4881]: I0121 11:20:45.139122 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-kc9jz" event={"ID":"f568ffda-82a9-4f47-89d3-13b89a35c9b4","Type":"ContainerDied","Data":"73872e6c614646bff532d76f6a6a2af8c1af4b2996c3b90c9492f6b03925e082"} Jan 21 11:20:45 crc kubenswrapper[4881]: I0121 11:20:45.139211 4881 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="73872e6c614646bff532d76f6a6a2af8c1af4b2996c3b90c9492f6b03925e082" Jan 21 11:20:45 crc kubenswrapper[4881]: I0121 11:20:45.139907 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-mzhtm" Jan 21 11:20:45 crc kubenswrapper[4881]: I0121 11:20:45.140372 4881 generic.go:334] "Generic (PLEG): container finished" podID="ee4e7116-c2cd-43d5-af6b-9f30b5053e0e" containerID="5db7a5c0d23dd82d2a5258870db858ab9345870f09ad31cd41b42f8d9eaa1f90" exitCode=1 Jan 21 11:20:45 crc kubenswrapper[4881]: I0121 11:20:45.140456 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"ee4e7116-c2cd-43d5-af6b-9f30b5053e0e","Type":"ContainerDied","Data":"5db7a5c0d23dd82d2a5258870db858ab9345870f09ad31cd41b42f8d9eaa1f90"} Jan 21 11:20:45 crc kubenswrapper[4881]: I0121 11:20:45.141209 4881 scope.go:117] "RemoveContainer" containerID="5db7a5c0d23dd82d2a5258870db858ab9345870f09ad31cd41b42f8d9eaa1f90" Jan 21 11:20:45 crc kubenswrapper[4881]: I0121 11:20:45.158357 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n2wvg\" (UniqueName: \"kubernetes.io/projected/33f9442b-24ee-47d4-b914-19d32a5cad74-kube-api-access-n2wvg\") pod \"33f9442b-24ee-47d4-b914-19d32a5cad74\" (UID: \"33f9442b-24ee-47d4-b914-19d32a5cad74\") " Jan 21 11:20:45 crc kubenswrapper[4881]: I0121 11:20:45.158651 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/33f9442b-24ee-47d4-b914-19d32a5cad74-config-data\") pod \"33f9442b-24ee-47d4-b914-19d32a5cad74\" (UID: \"33f9442b-24ee-47d4-b914-19d32a5cad74\") " Jan 21 11:20:45 crc kubenswrapper[4881]: I0121 11:20:45.158770 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f568ffda-82a9-4f47-89d3-13b89a35c9b4-combined-ca-bundle\") pod \"f568ffda-82a9-4f47-89d3-13b89a35c9b4\" (UID: \"f568ffda-82a9-4f47-89d3-13b89a35c9b4\") " Jan 21 11:20:45 crc kubenswrapper[4881]: I0121 11:20:45.158909 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/33f9442b-24ee-47d4-b914-19d32a5cad74-scripts\") pod \"33f9442b-24ee-47d4-b914-19d32a5cad74\" (UID: \"33f9442b-24ee-47d4-b914-19d32a5cad74\") " Jan 21 11:20:45 crc kubenswrapper[4881]: I0121 11:20:45.159129 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/33f9442b-24ee-47d4-b914-19d32a5cad74-fernet-keys\") pod \"33f9442b-24ee-47d4-b914-19d32a5cad74\" (UID: \"33f9442b-24ee-47d4-b914-19d32a5cad74\") " Jan 21 11:20:45 crc kubenswrapper[4881]: I0121 11:20:45.159194 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/33f9442b-24ee-47d4-b914-19d32a5cad74-credential-keys\") pod 
\"33f9442b-24ee-47d4-b914-19d32a5cad74\" (UID: \"33f9442b-24ee-47d4-b914-19d32a5cad74\") " Jan 21 11:20:45 crc kubenswrapper[4881]: I0121 11:20:45.159282 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f568ffda-82a9-4f47-89d3-13b89a35c9b4-logs\") pod \"f568ffda-82a9-4f47-89d3-13b89a35c9b4\" (UID: \"f568ffda-82a9-4f47-89d3-13b89a35c9b4\") " Jan 21 11:20:45 crc kubenswrapper[4881]: I0121 11:20:45.159355 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gv7qz\" (UniqueName: \"kubernetes.io/projected/f568ffda-82a9-4f47-89d3-13b89a35c9b4-kube-api-access-gv7qz\") pod \"f568ffda-82a9-4f47-89d3-13b89a35c9b4\" (UID: \"f568ffda-82a9-4f47-89d3-13b89a35c9b4\") " Jan 21 11:20:45 crc kubenswrapper[4881]: I0121 11:20:45.159460 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33f9442b-24ee-47d4-b914-19d32a5cad74-combined-ca-bundle\") pod \"33f9442b-24ee-47d4-b914-19d32a5cad74\" (UID: \"33f9442b-24ee-47d4-b914-19d32a5cad74\") " Jan 21 11:20:45 crc kubenswrapper[4881]: I0121 11:20:45.159560 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f568ffda-82a9-4f47-89d3-13b89a35c9b4-config-data\") pod \"f568ffda-82a9-4f47-89d3-13b89a35c9b4\" (UID: \"f568ffda-82a9-4f47-89d3-13b89a35c9b4\") " Jan 21 11:20:45 crc kubenswrapper[4881]: I0121 11:20:45.160075 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f568ffda-82a9-4f47-89d3-13b89a35c9b4-scripts\") pod \"f568ffda-82a9-4f47-89d3-13b89a35c9b4\" (UID: \"f568ffda-82a9-4f47-89d3-13b89a35c9b4\") " Jan 21 11:20:45 crc kubenswrapper[4881]: I0121 11:20:45.168475 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f568ffda-82a9-4f47-89d3-13b89a35c9b4-logs" (OuterVolumeSpecName: "logs") pod "f568ffda-82a9-4f47-89d3-13b89a35c9b4" (UID: "f568ffda-82a9-4f47-89d3-13b89a35c9b4"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 11:20:45 crc kubenswrapper[4881]: I0121 11:20:45.169913 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/33f9442b-24ee-47d4-b914-19d32a5cad74-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "33f9442b-24ee-47d4-b914-19d32a5cad74" (UID: "33f9442b-24ee-47d4-b914-19d32a5cad74"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:20:45 crc kubenswrapper[4881]: I0121 11:20:45.170383 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/33f9442b-24ee-47d4-b914-19d32a5cad74-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "33f9442b-24ee-47d4-b914-19d32a5cad74" (UID: "33f9442b-24ee-47d4-b914-19d32a5cad74"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:20:45 crc kubenswrapper[4881]: I0121 11:20:45.172843 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/33f9442b-24ee-47d4-b914-19d32a5cad74-kube-api-access-n2wvg" (OuterVolumeSpecName: "kube-api-access-n2wvg") pod "33f9442b-24ee-47d4-b914-19d32a5cad74" (UID: "33f9442b-24ee-47d4-b914-19d32a5cad74"). InnerVolumeSpecName "kube-api-access-n2wvg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:20:45 crc kubenswrapper[4881]: I0121 11:20:45.177463 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/33f9442b-24ee-47d4-b914-19d32a5cad74-scripts" (OuterVolumeSpecName: "scripts") pod "33f9442b-24ee-47d4-b914-19d32a5cad74" (UID: "33f9442b-24ee-47d4-b914-19d32a5cad74"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:20:45 crc kubenswrapper[4881]: I0121 11:20:45.178126 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f568ffda-82a9-4f47-89d3-13b89a35c9b4-kube-api-access-gv7qz" (OuterVolumeSpecName: "kube-api-access-gv7qz") pod "f568ffda-82a9-4f47-89d3-13b89a35c9b4" (UID: "f568ffda-82a9-4f47-89d3-13b89a35c9b4"). InnerVolumeSpecName "kube-api-access-gv7qz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:20:45 crc kubenswrapper[4881]: I0121 11:20:45.193207 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f568ffda-82a9-4f47-89d3-13b89a35c9b4-scripts" (OuterVolumeSpecName: "scripts") pod "f568ffda-82a9-4f47-89d3-13b89a35c9b4" (UID: "f568ffda-82a9-4f47-89d3-13b89a35c9b4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:20:45 crc kubenswrapper[4881]: I0121 11:20:45.209047 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/33f9442b-24ee-47d4-b914-19d32a5cad74-config-data" (OuterVolumeSpecName: "config-data") pod "33f9442b-24ee-47d4-b914-19d32a5cad74" (UID: "33f9442b-24ee-47d4-b914-19d32a5cad74"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:20:45 crc kubenswrapper[4881]: I0121 11:20:45.213114 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/33f9442b-24ee-47d4-b914-19d32a5cad74-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "33f9442b-24ee-47d4-b914-19d32a5cad74" (UID: "33f9442b-24ee-47d4-b914-19d32a5cad74"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:20:45 crc kubenswrapper[4881]: I0121 11:20:45.248097 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f568ffda-82a9-4f47-89d3-13b89a35c9b4-config-data" (OuterVolumeSpecName: "config-data") pod "f568ffda-82a9-4f47-89d3-13b89a35c9b4" (UID: "f568ffda-82a9-4f47-89d3-13b89a35c9b4"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:20:45 crc kubenswrapper[4881]: I0121 11:20:45.268221 4881 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/33f9442b-24ee-47d4-b914-19d32a5cad74-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 21 11:20:45 crc kubenswrapper[4881]: I0121 11:20:45.268262 4881 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/33f9442b-24ee-47d4-b914-19d32a5cad74-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 21 11:20:45 crc kubenswrapper[4881]: I0121 11:20:45.268273 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gv7qz\" (UniqueName: \"kubernetes.io/projected/f568ffda-82a9-4f47-89d3-13b89a35c9b4-kube-api-access-gv7qz\") on node \"crc\" DevicePath \"\"" Jan 21 11:20:45 crc kubenswrapper[4881]: I0121 11:20:45.268284 4881 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f568ffda-82a9-4f47-89d3-13b89a35c9b4-logs\") on node \"crc\" DevicePath \"\"" Jan 21 11:20:45 crc kubenswrapper[4881]: I0121 11:20:45.268295 4881 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33f9442b-24ee-47d4-b914-19d32a5cad74-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 11:20:45 crc kubenswrapper[4881]: I0121 11:20:45.268304 4881 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f568ffda-82a9-4f47-89d3-13b89a35c9b4-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 11:20:45 crc kubenswrapper[4881]: I0121 11:20:45.268312 4881 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f568ffda-82a9-4f47-89d3-13b89a35c9b4-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 11:20:45 crc kubenswrapper[4881]: I0121 11:20:45.268321 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n2wvg\" (UniqueName: \"kubernetes.io/projected/33f9442b-24ee-47d4-b914-19d32a5cad74-kube-api-access-n2wvg\") on node \"crc\" DevicePath \"\"" Jan 21 11:20:45 crc kubenswrapper[4881]: I0121 11:20:45.268329 4881 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/33f9442b-24ee-47d4-b914-19d32a5cad74-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 11:20:45 crc kubenswrapper[4881]: I0121 11:20:45.268337 4881 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/33f9442b-24ee-47d4-b914-19d32a5cad74-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 11:20:45 crc kubenswrapper[4881]: I0121 11:20:45.291935 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f568ffda-82a9-4f47-89d3-13b89a35c9b4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f568ffda-82a9-4f47-89d3-13b89a35c9b4" (UID: "f568ffda-82a9-4f47-89d3-13b89a35c9b4"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:20:45 crc kubenswrapper[4881]: I0121 11:20:45.370469 4881 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f568ffda-82a9-4f47-89d3-13b89a35c9b4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 11:20:45 crc kubenswrapper[4881]: I0121 11:20:45.518205 4881 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="6244bcac-82b7-4bd4-b93d-3def53490380" containerName="watcher-api" probeResult="failure" output="Get \"http://10.217.0.162:9322/\": read tcp 10.217.0.2:50344->10.217.0.162:9322: read: connection reset by peer" Jan 21 11:20:45 crc kubenswrapper[4881]: I0121 11:20:45.518259 4881 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="6244bcac-82b7-4bd4-b93d-3def53490380" containerName="watcher-api-log" probeResult="failure" output="Get \"http://10.217.0.162:9322/\": read tcp 10.217.0.2:50334->10.217.0.162:9322: read: connection reset by peer" Jan 21 11:20:45 crc kubenswrapper[4881]: I0121 11:20:45.766049 4881 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-68b447d964-6llq5" podUID="07cdf1a8-aec4-42ca-a564-c91e7132663d" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.161:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.161:8443: connect: connection refused" Jan 21 11:20:46 crc kubenswrapper[4881]: I0121 11:20:46.221317 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ncbfx" event={"ID":"6a8083e9-c68d-40ca-bde9-b84e43b65ab8","Type":"ContainerStarted","Data":"bb51d30f717ade21f99893a221476158fedbab913c5592a0655c1dfba33d69c7"} Jan 21 11:20:46 crc kubenswrapper[4881]: I0121 11:20:46.232863 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-slhtz" event={"ID":"4bf52889-d5f3-44f8-b657-8ff3790962d1","Type":"ContainerStarted","Data":"3a796b1b54b7432132400a5a214afb4cf61aaada5f5054cc747d5e74194d9dae"} Jan 21 11:20:46 crc kubenswrapper[4881]: I0121 11:20:46.225678 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-api-0" Jan 21 11:20:46 crc kubenswrapper[4881]: I0121 11:20:46.293692 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bcec3c24-87bd-4c22-a800-d3835455a38b","Type":"ContainerStarted","Data":"ca18caa0fee509128e7ffae2755d6b5b1126bfe1c63366090fd0947db93d8443"} Jan 21 11:20:46 crc kubenswrapper[4881]: I0121 11:20:46.332079 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6244bcac-82b7-4bd4-b93d-3def53490380-logs\") pod \"6244bcac-82b7-4bd4-b93d-3def53490380\" (UID: \"6244bcac-82b7-4bd4-b93d-3def53490380\") " Jan 21 11:20:46 crc kubenswrapper[4881]: I0121 11:20:46.332156 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6244bcac-82b7-4bd4-b93d-3def53490380-config-data\") pod \"6244bcac-82b7-4bd4-b93d-3def53490380\" (UID: \"6244bcac-82b7-4bd4-b93d-3def53490380\") " Jan 21 11:20:46 crc kubenswrapper[4881]: I0121 11:20:46.332207 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/6244bcac-82b7-4bd4-b93d-3def53490380-custom-prometheus-ca\") pod \"6244bcac-82b7-4bd4-b93d-3def53490380\" (UID: \"6244bcac-82b7-4bd4-b93d-3def53490380\") " Jan 21 11:20:46 crc kubenswrapper[4881]: I0121 11:20:46.332231 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6244bcac-82b7-4bd4-b93d-3def53490380-combined-ca-bundle\") pod \"6244bcac-82b7-4bd4-b93d-3def53490380\" (UID: \"6244bcac-82b7-4bd4-b93d-3def53490380\") " Jan 21 11:20:46 crc kubenswrapper[4881]: I0121 11:20:46.332327 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sgt4b\" (UniqueName: \"kubernetes.io/projected/6244bcac-82b7-4bd4-b93d-3def53490380-kube-api-access-sgt4b\") pod \"6244bcac-82b7-4bd4-b93d-3def53490380\" (UID: \"6244bcac-82b7-4bd4-b93d-3def53490380\") " Jan 21 11:20:46 crc kubenswrapper[4881]: I0121 11:20:46.333959 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6244bcac-82b7-4bd4-b93d-3def53490380-logs" (OuterVolumeSpecName: "logs") pod "6244bcac-82b7-4bd4-b93d-3def53490380" (UID: "6244bcac-82b7-4bd4-b93d-3def53490380"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 11:20:46 crc kubenswrapper[4881]: I0121 11:20:46.340136 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"ee4e7116-c2cd-43d5-af6b-9f30b5053e0e","Type":"ContainerStarted","Data":"61f6b4008e5afe3c84bc4dbf116ba996728224955a2729f3dc2de6c1a2eeb445"} Jan 21 11:20:46 crc kubenswrapper[4881]: I0121 11:20:46.359986 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6244bcac-82b7-4bd4-b93d-3def53490380-kube-api-access-sgt4b" (OuterVolumeSpecName: "kube-api-access-sgt4b") pod "6244bcac-82b7-4bd4-b93d-3def53490380" (UID: "6244bcac-82b7-4bd4-b93d-3def53490380"). InnerVolumeSpecName "kube-api-access-sgt4b". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:20:46 crc kubenswrapper[4881]: I0121 11:20:46.386066 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-ncbfx" podStartSLOduration=41.675279952 podStartE2EDuration="47.386036259s" podCreationTimestamp="2026-01-21 11:19:59 +0000 UTC" firstStartedPulling="2026-01-21 11:20:33.224095146 +0000 UTC m=+1420.484051615" lastFinishedPulling="2026-01-21 11:20:38.934851453 +0000 UTC m=+1426.194807922" observedRunningTime="2026-01-21 11:20:46.260536945 +0000 UTC m=+1433.520493424" watchObservedRunningTime="2026-01-21 11:20:46.386036259 +0000 UTC m=+1433.645992728" Jan 21 11:20:46 crc kubenswrapper[4881]: I0121 11:20:46.399303 4881 generic.go:334] "Generic (PLEG): container finished" podID="6244bcac-82b7-4bd4-b93d-3def53490380" containerID="d951ea6875808772b952ed153f0f2d5544ca533b9519802da41d11d9d1f68396" exitCode=0 Jan 21 11:20:46 crc kubenswrapper[4881]: I0121 11:20:46.401006 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-mzhtm" Jan 21 11:20:46 crc kubenswrapper[4881]: I0121 11:20:46.399383 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-api-0" Jan 21 11:20:46 crc kubenswrapper[4881]: I0121 11:20:46.402040 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-kc9jz" Jan 21 11:20:46 crc kubenswrapper[4881]: I0121 11:20:46.399407 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"6244bcac-82b7-4bd4-b93d-3def53490380","Type":"ContainerDied","Data":"d951ea6875808772b952ed153f0f2d5544ca533b9519802da41d11d9d1f68396"} Jan 21 11:20:46 crc kubenswrapper[4881]: I0121 11:20:46.402481 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"6244bcac-82b7-4bd4-b93d-3def53490380","Type":"ContainerDied","Data":"97811bb6b6cd1ac4b1dbc5094a9eed081460120416cffcb6a63fe48350301d28"} Jan 21 11:20:46 crc kubenswrapper[4881]: I0121 11:20:46.402501 4881 scope.go:117] "RemoveContainer" containerID="d951ea6875808772b952ed153f0f2d5544ca533b9519802da41d11d9d1f68396" Jan 21 11:20:46 crc kubenswrapper[4881]: I0121 11:20:46.429943 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6244bcac-82b7-4bd4-b93d-3def53490380-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "6244bcac-82b7-4bd4-b93d-3def53490380" (UID: "6244bcac-82b7-4bd4-b93d-3def53490380"). InnerVolumeSpecName "custom-prometheus-ca". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:20:46 crc kubenswrapper[4881]: I0121 11:20:46.437591 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-slhtz" podStartSLOduration=3.811970868 podStartE2EDuration="1m1.437565981s" podCreationTimestamp="2026-01-21 11:19:45 +0000 UTC" firstStartedPulling="2026-01-21 11:19:47.926706403 +0000 UTC m=+1375.186662872" lastFinishedPulling="2026-01-21 11:20:45.552301516 +0000 UTC m=+1432.812257985" observedRunningTime="2026-01-21 11:20:46.300053778 +0000 UTC m=+1433.560010247" watchObservedRunningTime="2026-01-21 11:20:46.437565981 +0000 UTC m=+1433.697522450" Jan 21 11:20:46 crc kubenswrapper[4881]: I0121 11:20:46.440958 4881 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6244bcac-82b7-4bd4-b93d-3def53490380-logs\") on node \"crc\" DevicePath \"\"" Jan 21 11:20:46 crc kubenswrapper[4881]: I0121 11:20:46.441237 4881 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/6244bcac-82b7-4bd4-b93d-3def53490380-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Jan 21 11:20:46 crc kubenswrapper[4881]: I0121 11:20:46.441252 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sgt4b\" (UniqueName: \"kubernetes.io/projected/6244bcac-82b7-4bd4-b93d-3def53490380-kube-api-access-sgt4b\") on node \"crc\" DevicePath \"\"" Jan 21 11:20:46 crc kubenswrapper[4881]: I0121 11:20:46.466855 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-857c5cc966-ggkc4"] Jan 21 11:20:46 crc kubenswrapper[4881]: E0121 11:20:46.467357 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33f9442b-24ee-47d4-b914-19d32a5cad74" containerName="keystone-bootstrap" Jan 21 11:20:46 crc kubenswrapper[4881]: I0121 11:20:46.467377 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="33f9442b-24ee-47d4-b914-19d32a5cad74" containerName="keystone-bootstrap" Jan 21 11:20:46 crc kubenswrapper[4881]: E0121 11:20:46.467395 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e51b074c-ae44-4db9-9ce6-b656a961dfaf" containerName="init" Jan 21 11:20:46 crc kubenswrapper[4881]: I0121 11:20:46.467404 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="e51b074c-ae44-4db9-9ce6-b656a961dfaf" containerName="init" Jan 21 11:20:46 crc kubenswrapper[4881]: E0121 11:20:46.467419 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e51b074c-ae44-4db9-9ce6-b656a961dfaf" containerName="dnsmasq-dns" Jan 21 11:20:46 crc kubenswrapper[4881]: I0121 11:20:46.467429 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="e51b074c-ae44-4db9-9ce6-b656a961dfaf" containerName="dnsmasq-dns" Jan 21 11:20:46 crc kubenswrapper[4881]: E0121 11:20:46.467454 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6244bcac-82b7-4bd4-b93d-3def53490380" containerName="watcher-api-log" Jan 21 11:20:46 crc kubenswrapper[4881]: I0121 11:20:46.467462 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="6244bcac-82b7-4bd4-b93d-3def53490380" containerName="watcher-api-log" Jan 21 11:20:46 crc kubenswrapper[4881]: E0121 11:20:46.467486 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f568ffda-82a9-4f47-89d3-13b89a35c9b4" containerName="placement-db-sync" Jan 21 11:20:46 crc kubenswrapper[4881]: I0121 11:20:46.467494 4881 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="f568ffda-82a9-4f47-89d3-13b89a35c9b4" containerName="placement-db-sync" Jan 21 11:20:46 crc kubenswrapper[4881]: E0121 11:20:46.467514 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6244bcac-82b7-4bd4-b93d-3def53490380" containerName="watcher-api" Jan 21 11:20:46 crc kubenswrapper[4881]: I0121 11:20:46.467520 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="6244bcac-82b7-4bd4-b93d-3def53490380" containerName="watcher-api" Jan 21 11:20:46 crc kubenswrapper[4881]: I0121 11:20:46.467724 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="f568ffda-82a9-4f47-89d3-13b89a35c9b4" containerName="placement-db-sync" Jan 21 11:20:46 crc kubenswrapper[4881]: I0121 11:20:46.467744 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="6244bcac-82b7-4bd4-b93d-3def53490380" containerName="watcher-api" Jan 21 11:20:46 crc kubenswrapper[4881]: I0121 11:20:46.467760 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="e51b074c-ae44-4db9-9ce6-b656a961dfaf" containerName="dnsmasq-dns" Jan 21 11:20:46 crc kubenswrapper[4881]: I0121 11:20:46.467794 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="6244bcac-82b7-4bd4-b93d-3def53490380" containerName="watcher-api-log" Jan 21 11:20:46 crc kubenswrapper[4881]: I0121 11:20:46.467818 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="33f9442b-24ee-47d4-b914-19d32a5cad74" containerName="keystone-bootstrap" Jan 21 11:20:46 crc kubenswrapper[4881]: I0121 11:20:46.468595 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-857c5cc966-ggkc4" Jan 21 11:20:46 crc kubenswrapper[4881]: I0121 11:20:46.475040 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Jan 21 11:20:46 crc kubenswrapper[4881]: I0121 11:20:46.475254 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6244bcac-82b7-4bd4-b93d-3def53490380-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6244bcac-82b7-4bd4-b93d-3def53490380" (UID: "6244bcac-82b7-4bd4-b93d-3def53490380"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:20:46 crc kubenswrapper[4881]: I0121 11:20:46.475504 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 21 11:20:46 crc kubenswrapper[4881]: I0121 11:20:46.475597 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 21 11:20:46 crc kubenswrapper[4881]: I0121 11:20:46.475654 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-j54nk" Jan 21 11:20:46 crc kubenswrapper[4881]: I0121 11:20:46.479181 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 21 11:20:46 crc kubenswrapper[4881]: I0121 11:20:46.484437 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-59bf6c8c7b-wvc46"] Jan 21 11:20:46 crc kubenswrapper[4881]: I0121 11:20:46.488921 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Jan 21 11:20:46 crc kubenswrapper[4881]: I0121 11:20:46.489659 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-59bf6c8c7b-wvc46" Jan 21 11:20:46 crc kubenswrapper[4881]: I0121 11:20:46.501755 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-857c5cc966-ggkc4"] Jan 21 11:20:46 crc kubenswrapper[4881]: I0121 11:20:46.501775 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6244bcac-82b7-4bd4-b93d-3def53490380-config-data" (OuterVolumeSpecName: "config-data") pod "6244bcac-82b7-4bd4-b93d-3def53490380" (UID: "6244bcac-82b7-4bd4-b93d-3def53490380"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:20:46 crc kubenswrapper[4881]: I0121 11:20:46.504611 4881 scope.go:117] "RemoveContainer" containerID="438e3940a5181d3570b1ef008c9096ec2907b5d774f74ab745931fd0122208c4" Jan 21 11:20:46 crc kubenswrapper[4881]: I0121 11:20:46.506258 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Jan 21 11:20:46 crc kubenswrapper[4881]: I0121 11:20:46.506281 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Jan 21 11:20:46 crc kubenswrapper[4881]: I0121 11:20:46.506434 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Jan 21 11:20:46 crc kubenswrapper[4881]: I0121 11:20:46.506632 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Jan 21 11:20:46 crc kubenswrapper[4881]: I0121 11:20:46.506895 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-dndng" Jan 21 11:20:46 crc kubenswrapper[4881]: I0121 11:20:46.543171 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jpfnt\" (UniqueName: \"kubernetes.io/projected/cacf36ac-8c52-43a6-9fcb-2cfc5b27a952-kube-api-access-jpfnt\") pod \"keystone-857c5cc966-ggkc4\" (UID: \"cacf36ac-8c52-43a6-9fcb-2cfc5b27a952\") " pod="openstack/keystone-857c5cc966-ggkc4" Jan 21 11:20:46 crc kubenswrapper[4881]: I0121 11:20:46.543227 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9358f706-24c3-46c5-8490-89402a85e9a4-config-data\") pod \"placement-59bf6c8c7b-wvc46\" (UID: \"9358f706-24c3-46c5-8490-89402a85e9a4\") " pod="openstack/placement-59bf6c8c7b-wvc46" Jan 21 11:20:46 crc kubenswrapper[4881]: I0121 11:20:46.543259 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9358f706-24c3-46c5-8490-89402a85e9a4-scripts\") pod \"placement-59bf6c8c7b-wvc46\" (UID: \"9358f706-24c3-46c5-8490-89402a85e9a4\") " pod="openstack/placement-59bf6c8c7b-wvc46" Jan 21 11:20:46 crc kubenswrapper[4881]: I0121 11:20:46.543292 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9358f706-24c3-46c5-8490-89402a85e9a4-logs\") pod \"placement-59bf6c8c7b-wvc46\" (UID: \"9358f706-24c3-46c5-8490-89402a85e9a4\") " pod="openstack/placement-59bf6c8c7b-wvc46" Jan 21 11:20:46 crc kubenswrapper[4881]: I0121 11:20:46.543324 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9358f706-24c3-46c5-8490-89402a85e9a4-public-tls-certs\") pod 
\"placement-59bf6c8c7b-wvc46\" (UID: \"9358f706-24c3-46c5-8490-89402a85e9a4\") " pod="openstack/placement-59bf6c8c7b-wvc46" Jan 21 11:20:46 crc kubenswrapper[4881]: I0121 11:20:46.543348 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f6jns\" (UniqueName: \"kubernetes.io/projected/9358f706-24c3-46c5-8490-89402a85e9a4-kube-api-access-f6jns\") pod \"placement-59bf6c8c7b-wvc46\" (UID: \"9358f706-24c3-46c5-8490-89402a85e9a4\") " pod="openstack/placement-59bf6c8c7b-wvc46" Jan 21 11:20:46 crc kubenswrapper[4881]: I0121 11:20:46.543406 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/cacf36ac-8c52-43a6-9fcb-2cfc5b27a952-credential-keys\") pod \"keystone-857c5cc966-ggkc4\" (UID: \"cacf36ac-8c52-43a6-9fcb-2cfc5b27a952\") " pod="openstack/keystone-857c5cc966-ggkc4" Jan 21 11:20:46 crc kubenswrapper[4881]: I0121 11:20:46.543428 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cacf36ac-8c52-43a6-9fcb-2cfc5b27a952-public-tls-certs\") pod \"keystone-857c5cc966-ggkc4\" (UID: \"cacf36ac-8c52-43a6-9fcb-2cfc5b27a952\") " pod="openstack/keystone-857c5cc966-ggkc4" Jan 21 11:20:46 crc kubenswrapper[4881]: I0121 11:20:46.543506 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cacf36ac-8c52-43a6-9fcb-2cfc5b27a952-combined-ca-bundle\") pod \"keystone-857c5cc966-ggkc4\" (UID: \"cacf36ac-8c52-43a6-9fcb-2cfc5b27a952\") " pod="openstack/keystone-857c5cc966-ggkc4" Jan 21 11:20:46 crc kubenswrapper[4881]: I0121 11:20:46.543574 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cacf36ac-8c52-43a6-9fcb-2cfc5b27a952-internal-tls-certs\") pod \"keystone-857c5cc966-ggkc4\" (UID: \"cacf36ac-8c52-43a6-9fcb-2cfc5b27a952\") " pod="openstack/keystone-857c5cc966-ggkc4" Jan 21 11:20:46 crc kubenswrapper[4881]: I0121 11:20:46.543629 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/cacf36ac-8c52-43a6-9fcb-2cfc5b27a952-fernet-keys\") pod \"keystone-857c5cc966-ggkc4\" (UID: \"cacf36ac-8c52-43a6-9fcb-2cfc5b27a952\") " pod="openstack/keystone-857c5cc966-ggkc4" Jan 21 11:20:46 crc kubenswrapper[4881]: I0121 11:20:46.543667 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9358f706-24c3-46c5-8490-89402a85e9a4-combined-ca-bundle\") pod \"placement-59bf6c8c7b-wvc46\" (UID: \"9358f706-24c3-46c5-8490-89402a85e9a4\") " pod="openstack/placement-59bf6c8c7b-wvc46" Jan 21 11:20:46 crc kubenswrapper[4881]: I0121 11:20:46.543708 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cacf36ac-8c52-43a6-9fcb-2cfc5b27a952-scripts\") pod \"keystone-857c5cc966-ggkc4\" (UID: \"cacf36ac-8c52-43a6-9fcb-2cfc5b27a952\") " pod="openstack/keystone-857c5cc966-ggkc4" Jan 21 11:20:46 crc kubenswrapper[4881]: I0121 11:20:46.543739 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/9358f706-24c3-46c5-8490-89402a85e9a4-internal-tls-certs\") pod \"placement-59bf6c8c7b-wvc46\" (UID: \"9358f706-24c3-46c5-8490-89402a85e9a4\") " pod="openstack/placement-59bf6c8c7b-wvc46" Jan 21 11:20:46 crc kubenswrapper[4881]: I0121 11:20:46.543761 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cacf36ac-8c52-43a6-9fcb-2cfc5b27a952-config-data\") pod \"keystone-857c5cc966-ggkc4\" (UID: \"cacf36ac-8c52-43a6-9fcb-2cfc5b27a952\") " pod="openstack/keystone-857c5cc966-ggkc4" Jan 21 11:20:46 crc kubenswrapper[4881]: I0121 11:20:46.543899 4881 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6244bcac-82b7-4bd4-b93d-3def53490380-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 11:20:46 crc kubenswrapper[4881]: I0121 11:20:46.543916 4881 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6244bcac-82b7-4bd4-b93d-3def53490380-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 11:20:46 crc kubenswrapper[4881]: I0121 11:20:46.563993 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-59bf6c8c7b-wvc46"] Jan 21 11:20:46 crc kubenswrapper[4881]: I0121 11:20:46.587618 4881 scope.go:117] "RemoveContainer" containerID="d951ea6875808772b952ed153f0f2d5544ca533b9519802da41d11d9d1f68396" Jan 21 11:20:46 crc kubenswrapper[4881]: E0121 11:20:46.591926 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d951ea6875808772b952ed153f0f2d5544ca533b9519802da41d11d9d1f68396\": container with ID starting with d951ea6875808772b952ed153f0f2d5544ca533b9519802da41d11d9d1f68396 not found: ID does not exist" containerID="d951ea6875808772b952ed153f0f2d5544ca533b9519802da41d11d9d1f68396" Jan 21 11:20:46 crc kubenswrapper[4881]: I0121 11:20:46.591984 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d951ea6875808772b952ed153f0f2d5544ca533b9519802da41d11d9d1f68396"} err="failed to get container status \"d951ea6875808772b952ed153f0f2d5544ca533b9519802da41d11d9d1f68396\": rpc error: code = NotFound desc = could not find container \"d951ea6875808772b952ed153f0f2d5544ca533b9519802da41d11d9d1f68396\": container with ID starting with d951ea6875808772b952ed153f0f2d5544ca533b9519802da41d11d9d1f68396 not found: ID does not exist" Jan 21 11:20:46 crc kubenswrapper[4881]: I0121 11:20:46.592021 4881 scope.go:117] "RemoveContainer" containerID="438e3940a5181d3570b1ef008c9096ec2907b5d774f74ab745931fd0122208c4" Jan 21 11:20:46 crc kubenswrapper[4881]: E0121 11:20:46.593948 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"438e3940a5181d3570b1ef008c9096ec2907b5d774f74ab745931fd0122208c4\": container with ID starting with 438e3940a5181d3570b1ef008c9096ec2907b5d774f74ab745931fd0122208c4 not found: ID does not exist" containerID="438e3940a5181d3570b1ef008c9096ec2907b5d774f74ab745931fd0122208c4" Jan 21 11:20:46 crc kubenswrapper[4881]: I0121 11:20:46.593981 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"438e3940a5181d3570b1ef008c9096ec2907b5d774f74ab745931fd0122208c4"} err="failed to get container status \"438e3940a5181d3570b1ef008c9096ec2907b5d774f74ab745931fd0122208c4\": rpc error: code = NotFound desc = could 
not find container \"438e3940a5181d3570b1ef008c9096ec2907b5d774f74ab745931fd0122208c4\": container with ID starting with 438e3940a5181d3570b1ef008c9096ec2907b5d774f74ab745931fd0122208c4 not found: ID does not exist" Jan 21 11:20:46 crc kubenswrapper[4881]: I0121 11:20:46.645412 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/cacf36ac-8c52-43a6-9fcb-2cfc5b27a952-fernet-keys\") pod \"keystone-857c5cc966-ggkc4\" (UID: \"cacf36ac-8c52-43a6-9fcb-2cfc5b27a952\") " pod="openstack/keystone-857c5cc966-ggkc4" Jan 21 11:20:46 crc kubenswrapper[4881]: I0121 11:20:46.645473 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9358f706-24c3-46c5-8490-89402a85e9a4-combined-ca-bundle\") pod \"placement-59bf6c8c7b-wvc46\" (UID: \"9358f706-24c3-46c5-8490-89402a85e9a4\") " pod="openstack/placement-59bf6c8c7b-wvc46" Jan 21 11:20:46 crc kubenswrapper[4881]: I0121 11:20:46.645507 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cacf36ac-8c52-43a6-9fcb-2cfc5b27a952-scripts\") pod \"keystone-857c5cc966-ggkc4\" (UID: \"cacf36ac-8c52-43a6-9fcb-2cfc5b27a952\") " pod="openstack/keystone-857c5cc966-ggkc4" Jan 21 11:20:46 crc kubenswrapper[4881]: I0121 11:20:46.645538 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9358f706-24c3-46c5-8490-89402a85e9a4-internal-tls-certs\") pod \"placement-59bf6c8c7b-wvc46\" (UID: \"9358f706-24c3-46c5-8490-89402a85e9a4\") " pod="openstack/placement-59bf6c8c7b-wvc46" Jan 21 11:20:46 crc kubenswrapper[4881]: I0121 11:20:46.645561 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cacf36ac-8c52-43a6-9fcb-2cfc5b27a952-config-data\") pod \"keystone-857c5cc966-ggkc4\" (UID: \"cacf36ac-8c52-43a6-9fcb-2cfc5b27a952\") " pod="openstack/keystone-857c5cc966-ggkc4" Jan 21 11:20:46 crc kubenswrapper[4881]: I0121 11:20:46.645616 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jpfnt\" (UniqueName: \"kubernetes.io/projected/cacf36ac-8c52-43a6-9fcb-2cfc5b27a952-kube-api-access-jpfnt\") pod \"keystone-857c5cc966-ggkc4\" (UID: \"cacf36ac-8c52-43a6-9fcb-2cfc5b27a952\") " pod="openstack/keystone-857c5cc966-ggkc4" Jan 21 11:20:46 crc kubenswrapper[4881]: I0121 11:20:46.645644 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9358f706-24c3-46c5-8490-89402a85e9a4-config-data\") pod \"placement-59bf6c8c7b-wvc46\" (UID: \"9358f706-24c3-46c5-8490-89402a85e9a4\") " pod="openstack/placement-59bf6c8c7b-wvc46" Jan 21 11:20:46 crc kubenswrapper[4881]: I0121 11:20:46.645667 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9358f706-24c3-46c5-8490-89402a85e9a4-scripts\") pod \"placement-59bf6c8c7b-wvc46\" (UID: \"9358f706-24c3-46c5-8490-89402a85e9a4\") " pod="openstack/placement-59bf6c8c7b-wvc46" Jan 21 11:20:46 crc kubenswrapper[4881]: I0121 11:20:46.645691 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9358f706-24c3-46c5-8490-89402a85e9a4-logs\") pod \"placement-59bf6c8c7b-wvc46\" (UID: 
\"9358f706-24c3-46c5-8490-89402a85e9a4\") " pod="openstack/placement-59bf6c8c7b-wvc46" Jan 21 11:20:46 crc kubenswrapper[4881]: I0121 11:20:46.645718 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9358f706-24c3-46c5-8490-89402a85e9a4-public-tls-certs\") pod \"placement-59bf6c8c7b-wvc46\" (UID: \"9358f706-24c3-46c5-8490-89402a85e9a4\") " pod="openstack/placement-59bf6c8c7b-wvc46" Jan 21 11:20:46 crc kubenswrapper[4881]: I0121 11:20:46.645744 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f6jns\" (UniqueName: \"kubernetes.io/projected/9358f706-24c3-46c5-8490-89402a85e9a4-kube-api-access-f6jns\") pod \"placement-59bf6c8c7b-wvc46\" (UID: \"9358f706-24c3-46c5-8490-89402a85e9a4\") " pod="openstack/placement-59bf6c8c7b-wvc46" Jan 21 11:20:46 crc kubenswrapper[4881]: I0121 11:20:46.645808 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/cacf36ac-8c52-43a6-9fcb-2cfc5b27a952-credential-keys\") pod \"keystone-857c5cc966-ggkc4\" (UID: \"cacf36ac-8c52-43a6-9fcb-2cfc5b27a952\") " pod="openstack/keystone-857c5cc966-ggkc4" Jan 21 11:20:46 crc kubenswrapper[4881]: I0121 11:20:46.645831 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cacf36ac-8c52-43a6-9fcb-2cfc5b27a952-public-tls-certs\") pod \"keystone-857c5cc966-ggkc4\" (UID: \"cacf36ac-8c52-43a6-9fcb-2cfc5b27a952\") " pod="openstack/keystone-857c5cc966-ggkc4" Jan 21 11:20:46 crc kubenswrapper[4881]: I0121 11:20:46.645900 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cacf36ac-8c52-43a6-9fcb-2cfc5b27a952-combined-ca-bundle\") pod \"keystone-857c5cc966-ggkc4\" (UID: \"cacf36ac-8c52-43a6-9fcb-2cfc5b27a952\") " pod="openstack/keystone-857c5cc966-ggkc4" Jan 21 11:20:46 crc kubenswrapper[4881]: I0121 11:20:46.645951 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cacf36ac-8c52-43a6-9fcb-2cfc5b27a952-internal-tls-certs\") pod \"keystone-857c5cc966-ggkc4\" (UID: \"cacf36ac-8c52-43a6-9fcb-2cfc5b27a952\") " pod="openstack/keystone-857c5cc966-ggkc4" Jan 21 11:20:46 crc kubenswrapper[4881]: I0121 11:20:46.649854 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9358f706-24c3-46c5-8490-89402a85e9a4-logs\") pod \"placement-59bf6c8c7b-wvc46\" (UID: \"9358f706-24c3-46c5-8490-89402a85e9a4\") " pod="openstack/placement-59bf6c8c7b-wvc46" Jan 21 11:20:46 crc kubenswrapper[4881]: I0121 11:20:46.656992 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9358f706-24c3-46c5-8490-89402a85e9a4-combined-ca-bundle\") pod \"placement-59bf6c8c7b-wvc46\" (UID: \"9358f706-24c3-46c5-8490-89402a85e9a4\") " pod="openstack/placement-59bf6c8c7b-wvc46" Jan 21 11:20:46 crc kubenswrapper[4881]: I0121 11:20:46.657508 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/cacf36ac-8c52-43a6-9fcb-2cfc5b27a952-fernet-keys\") pod \"keystone-857c5cc966-ggkc4\" (UID: \"cacf36ac-8c52-43a6-9fcb-2cfc5b27a952\") " pod="openstack/keystone-857c5cc966-ggkc4" Jan 21 11:20:46 crc 
kubenswrapper[4881]: I0121 11:20:46.659111 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cacf36ac-8c52-43a6-9fcb-2cfc5b27a952-combined-ca-bundle\") pod \"keystone-857c5cc966-ggkc4\" (UID: \"cacf36ac-8c52-43a6-9fcb-2cfc5b27a952\") " pod="openstack/keystone-857c5cc966-ggkc4" Jan 21 11:20:46 crc kubenswrapper[4881]: I0121 11:20:46.659352 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9358f706-24c3-46c5-8490-89402a85e9a4-scripts\") pod \"placement-59bf6c8c7b-wvc46\" (UID: \"9358f706-24c3-46c5-8490-89402a85e9a4\") " pod="openstack/placement-59bf6c8c7b-wvc46" Jan 21 11:20:46 crc kubenswrapper[4881]: I0121 11:20:46.659419 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9358f706-24c3-46c5-8490-89402a85e9a4-internal-tls-certs\") pod \"placement-59bf6c8c7b-wvc46\" (UID: \"9358f706-24c3-46c5-8490-89402a85e9a4\") " pod="openstack/placement-59bf6c8c7b-wvc46" Jan 21 11:20:46 crc kubenswrapper[4881]: I0121 11:20:46.660015 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/cacf36ac-8c52-43a6-9fcb-2cfc5b27a952-credential-keys\") pod \"keystone-857c5cc966-ggkc4\" (UID: \"cacf36ac-8c52-43a6-9fcb-2cfc5b27a952\") " pod="openstack/keystone-857c5cc966-ggkc4" Jan 21 11:20:46 crc kubenswrapper[4881]: I0121 11:20:46.660990 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cacf36ac-8c52-43a6-9fcb-2cfc5b27a952-internal-tls-certs\") pod \"keystone-857c5cc966-ggkc4\" (UID: \"cacf36ac-8c52-43a6-9fcb-2cfc5b27a952\") " pod="openstack/keystone-857c5cc966-ggkc4" Jan 21 11:20:46 crc kubenswrapper[4881]: I0121 11:20:46.662376 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9358f706-24c3-46c5-8490-89402a85e9a4-config-data\") pod \"placement-59bf6c8c7b-wvc46\" (UID: \"9358f706-24c3-46c5-8490-89402a85e9a4\") " pod="openstack/placement-59bf6c8c7b-wvc46" Jan 21 11:20:46 crc kubenswrapper[4881]: I0121 11:20:46.664461 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cacf36ac-8c52-43a6-9fcb-2cfc5b27a952-scripts\") pod \"keystone-857c5cc966-ggkc4\" (UID: \"cacf36ac-8c52-43a6-9fcb-2cfc5b27a952\") " pod="openstack/keystone-857c5cc966-ggkc4" Jan 21 11:20:46 crc kubenswrapper[4881]: I0121 11:20:46.664631 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9358f706-24c3-46c5-8490-89402a85e9a4-public-tls-certs\") pod \"placement-59bf6c8c7b-wvc46\" (UID: \"9358f706-24c3-46c5-8490-89402a85e9a4\") " pod="openstack/placement-59bf6c8c7b-wvc46" Jan 21 11:20:46 crc kubenswrapper[4881]: I0121 11:20:46.665619 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cacf36ac-8c52-43a6-9fcb-2cfc5b27a952-public-tls-certs\") pod \"keystone-857c5cc966-ggkc4\" (UID: \"cacf36ac-8c52-43a6-9fcb-2cfc5b27a952\") " pod="openstack/keystone-857c5cc966-ggkc4" Jan 21 11:20:46 crc kubenswrapper[4881]: I0121 11:20:46.668156 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/cacf36ac-8c52-43a6-9fcb-2cfc5b27a952-config-data\") pod \"keystone-857c5cc966-ggkc4\" (UID: \"cacf36ac-8c52-43a6-9fcb-2cfc5b27a952\") " pod="openstack/keystone-857c5cc966-ggkc4" Jan 21 11:20:46 crc kubenswrapper[4881]: I0121 11:20:46.668749 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f6jns\" (UniqueName: \"kubernetes.io/projected/9358f706-24c3-46c5-8490-89402a85e9a4-kube-api-access-f6jns\") pod \"placement-59bf6c8c7b-wvc46\" (UID: \"9358f706-24c3-46c5-8490-89402a85e9a4\") " pod="openstack/placement-59bf6c8c7b-wvc46" Jan 21 11:20:46 crc kubenswrapper[4881]: I0121 11:20:46.671149 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jpfnt\" (UniqueName: \"kubernetes.io/projected/cacf36ac-8c52-43a6-9fcb-2cfc5b27a952-kube-api-access-jpfnt\") pod \"keystone-857c5cc966-ggkc4\" (UID: \"cacf36ac-8c52-43a6-9fcb-2cfc5b27a952\") " pod="openstack/keystone-857c5cc966-ggkc4" Jan 21 11:20:46 crc kubenswrapper[4881]: I0121 11:20:46.737326 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-api-0"] Jan 21 11:20:46 crc kubenswrapper[4881]: I0121 11:20:46.746135 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-api-0"] Jan 21 11:20:46 crc kubenswrapper[4881]: I0121 11:20:46.763090 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-api-0"] Jan 21 11:20:46 crc kubenswrapper[4881]: I0121 11:20:46.773555 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-api-0" Jan 21 11:20:46 crc kubenswrapper[4881]: I0121 11:20:46.779650 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-watcher-public-svc" Jan 21 11:20:46 crc kubenswrapper[4881]: I0121 11:20:46.779859 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-watcher-internal-svc" Jan 21 11:20:46 crc kubenswrapper[4881]: I0121 11:20:46.779953 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-api-config-data" Jan 21 11:20:46 crc kubenswrapper[4881]: I0121 11:20:46.793247 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-api-0"] Jan 21 11:20:46 crc kubenswrapper[4881]: I0121 11:20:46.847416 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-857c5cc966-ggkc4" Jan 21 11:20:46 crc kubenswrapper[4881]: I0121 11:20:46.857981 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qmr72\" (UniqueName: \"kubernetes.io/projected/bf14e65c-4c95-4766-a2e2-57b040e9f192-kube-api-access-qmr72\") pod \"watcher-api-0\" (UID: \"bf14e65c-4c95-4766-a2e2-57b040e9f192\") " pod="openstack/watcher-api-0" Jan 21 11:20:46 crc kubenswrapper[4881]: I0121 11:20:46.858063 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bf14e65c-4c95-4766-a2e2-57b040e9f192-logs\") pod \"watcher-api-0\" (UID: \"bf14e65c-4c95-4766-a2e2-57b040e9f192\") " pod="openstack/watcher-api-0" Jan 21 11:20:46 crc kubenswrapper[4881]: I0121 11:20:46.858090 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf14e65c-4c95-4766-a2e2-57b040e9f192-config-data\") pod \"watcher-api-0\" (UID: \"bf14e65c-4c95-4766-a2e2-57b040e9f192\") " pod="openstack/watcher-api-0" Jan 21 11:20:46 crc kubenswrapper[4881]: I0121 11:20:46.858141 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/bf14e65c-4c95-4766-a2e2-57b040e9f192-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"bf14e65c-4c95-4766-a2e2-57b040e9f192\") " pod="openstack/watcher-api-0" Jan 21 11:20:46 crc kubenswrapper[4881]: I0121 11:20:46.858237 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bf14e65c-4c95-4766-a2e2-57b040e9f192-internal-tls-certs\") pod \"watcher-api-0\" (UID: \"bf14e65c-4c95-4766-a2e2-57b040e9f192\") " pod="openstack/watcher-api-0" Jan 21 11:20:46 crc kubenswrapper[4881]: I0121 11:20:46.858287 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf14e65c-4c95-4766-a2e2-57b040e9f192-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"bf14e65c-4c95-4766-a2e2-57b040e9f192\") " pod="openstack/watcher-api-0" Jan 21 11:20:46 crc kubenswrapper[4881]: I0121 11:20:46.858373 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bf14e65c-4c95-4766-a2e2-57b040e9f192-public-tls-certs\") pod \"watcher-api-0\" (UID: \"bf14e65c-4c95-4766-a2e2-57b040e9f192\") " pod="openstack/watcher-api-0" Jan 21 11:20:46 crc kubenswrapper[4881]: I0121 11:20:46.883756 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-59bf6c8c7b-wvc46" Jan 21 11:20:46 crc kubenswrapper[4881]: I0121 11:20:46.959756 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bf14e65c-4c95-4766-a2e2-57b040e9f192-logs\") pod \"watcher-api-0\" (UID: \"bf14e65c-4c95-4766-a2e2-57b040e9f192\") " pod="openstack/watcher-api-0" Jan 21 11:20:46 crc kubenswrapper[4881]: I0121 11:20:46.959819 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf14e65c-4c95-4766-a2e2-57b040e9f192-config-data\") pod \"watcher-api-0\" (UID: \"bf14e65c-4c95-4766-a2e2-57b040e9f192\") " pod="openstack/watcher-api-0" Jan 21 11:20:46 crc kubenswrapper[4881]: I0121 11:20:46.959853 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/bf14e65c-4c95-4766-a2e2-57b040e9f192-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"bf14e65c-4c95-4766-a2e2-57b040e9f192\") " pod="openstack/watcher-api-0" Jan 21 11:20:46 crc kubenswrapper[4881]: I0121 11:20:46.959908 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bf14e65c-4c95-4766-a2e2-57b040e9f192-internal-tls-certs\") pod \"watcher-api-0\" (UID: \"bf14e65c-4c95-4766-a2e2-57b040e9f192\") " pod="openstack/watcher-api-0" Jan 21 11:20:46 crc kubenswrapper[4881]: I0121 11:20:46.959931 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf14e65c-4c95-4766-a2e2-57b040e9f192-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"bf14e65c-4c95-4766-a2e2-57b040e9f192\") " pod="openstack/watcher-api-0" Jan 21 11:20:46 crc kubenswrapper[4881]: I0121 11:20:46.959975 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bf14e65c-4c95-4766-a2e2-57b040e9f192-public-tls-certs\") pod \"watcher-api-0\" (UID: \"bf14e65c-4c95-4766-a2e2-57b040e9f192\") " pod="openstack/watcher-api-0" Jan 21 11:20:46 crc kubenswrapper[4881]: I0121 11:20:46.960057 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qmr72\" (UniqueName: \"kubernetes.io/projected/bf14e65c-4c95-4766-a2e2-57b040e9f192-kube-api-access-qmr72\") pod \"watcher-api-0\" (UID: \"bf14e65c-4c95-4766-a2e2-57b040e9f192\") " pod="openstack/watcher-api-0" Jan 21 11:20:46 crc kubenswrapper[4881]: I0121 11:20:46.960211 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bf14e65c-4c95-4766-a2e2-57b040e9f192-logs\") pod \"watcher-api-0\" (UID: \"bf14e65c-4c95-4766-a2e2-57b040e9f192\") " pod="openstack/watcher-api-0" Jan 21 11:20:46 crc kubenswrapper[4881]: I0121 11:20:46.970745 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf14e65c-4c95-4766-a2e2-57b040e9f192-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"bf14e65c-4c95-4766-a2e2-57b040e9f192\") " pod="openstack/watcher-api-0" Jan 21 11:20:46 crc kubenswrapper[4881]: I0121 11:20:46.971494 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf14e65c-4c95-4766-a2e2-57b040e9f192-config-data\") pod \"watcher-api-0\" (UID: 
\"bf14e65c-4c95-4766-a2e2-57b040e9f192\") " pod="openstack/watcher-api-0" Jan 21 11:20:46 crc kubenswrapper[4881]: I0121 11:20:46.971858 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bf14e65c-4c95-4766-a2e2-57b040e9f192-internal-tls-certs\") pod \"watcher-api-0\" (UID: \"bf14e65c-4c95-4766-a2e2-57b040e9f192\") " pod="openstack/watcher-api-0" Jan 21 11:20:46 crc kubenswrapper[4881]: I0121 11:20:46.978484 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qmr72\" (UniqueName: \"kubernetes.io/projected/bf14e65c-4c95-4766-a2e2-57b040e9f192-kube-api-access-qmr72\") pod \"watcher-api-0\" (UID: \"bf14e65c-4c95-4766-a2e2-57b040e9f192\") " pod="openstack/watcher-api-0" Jan 21 11:20:46 crc kubenswrapper[4881]: I0121 11:20:46.979993 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/bf14e65c-4c95-4766-a2e2-57b040e9f192-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"bf14e65c-4c95-4766-a2e2-57b040e9f192\") " pod="openstack/watcher-api-0" Jan 21 11:20:46 crc kubenswrapper[4881]: I0121 11:20:46.981496 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bf14e65c-4c95-4766-a2e2-57b040e9f192-public-tls-certs\") pod \"watcher-api-0\" (UID: \"bf14e65c-4c95-4766-a2e2-57b040e9f192\") " pod="openstack/watcher-api-0" Jan 21 11:20:47 crc kubenswrapper[4881]: I0121 11:20:47.125543 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-api-0" Jan 21 11:20:47 crc kubenswrapper[4881]: I0121 11:20:47.329482 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6244bcac-82b7-4bd4-b93d-3def53490380" path="/var/lib/kubelet/pods/6244bcac-82b7-4bd4-b93d-3def53490380/volumes" Jan 21 11:20:47 crc kubenswrapper[4881]: I0121 11:20:47.489384 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-mxb97" event={"ID":"349e8898-8b7c-414a-8357-d431c8b81bf4","Type":"ContainerStarted","Data":"c648692c811ad6f54f474e55240cf83d10bccce020989330faa953f52c62836c"} Jan 21 11:20:47 crc kubenswrapper[4881]: I0121 11:20:47.501142 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-857c5cc966-ggkc4"] Jan 21 11:20:47 crc kubenswrapper[4881]: I0121 11:20:47.552400 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-mxb97" podStartSLOduration=3.578856455 podStartE2EDuration="1m26.552374701s" podCreationTimestamp="2026-01-21 11:19:21 +0000 UTC" firstStartedPulling="2026-01-21 11:19:22.581563109 +0000 UTC m=+1349.841519568" lastFinishedPulling="2026-01-21 11:20:45.555081345 +0000 UTC m=+1432.815037814" observedRunningTime="2026-01-21 11:20:47.535957912 +0000 UTC m=+1434.795914381" watchObservedRunningTime="2026-01-21 11:20:47.552374701 +0000 UTC m=+1434.812331180" Jan 21 11:20:47 crc kubenswrapper[4881]: I0121 11:20:47.630741 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-59bf6c8c7b-wvc46"] Jan 21 11:20:47 crc kubenswrapper[4881]: W0121 11:20:47.705740 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9358f706_24c3_46c5_8490_89402a85e9a4.slice/crio-51cdd1269b38f5140e053e8d16ad4f55fb2eb455fa7567d79efdfa9a592d3a75 WatchSource:0}: Error finding container 
51cdd1269b38f5140e053e8d16ad4f55fb2eb455fa7567d79efdfa9a592d3a75: Status 404 returned error can't find the container with id 51cdd1269b38f5140e053e8d16ad4f55fb2eb455fa7567d79efdfa9a592d3a75 Jan 21 11:20:48 crc kubenswrapper[4881]: I0121 11:20:48.174525 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-api-0"] Jan 21 11:20:48 crc kubenswrapper[4881]: I0121 11:20:48.506551 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"bf14e65c-4c95-4766-a2e2-57b040e9f192","Type":"ContainerStarted","Data":"6b80183fa2b269acf09d29b84e08613370a4044c48a698df3a6c8b59e8ebfec7"} Jan 21 11:20:48 crc kubenswrapper[4881]: I0121 11:20:48.509681 4881 generic.go:334] "Generic (PLEG): container finished" podID="869a596b-159c-4185-a4ab-0e36c5d130fc" containerID="60c7ee63bf67b35a7137c545eb5e36b0ba7f24fe96f583c9314a3bcf2ea933c6" exitCode=0 Jan 21 11:20:48 crc kubenswrapper[4881]: I0121 11:20:48.509747 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-t6mz2" event={"ID":"869a596b-159c-4185-a4ab-0e36c5d130fc","Type":"ContainerDied","Data":"60c7ee63bf67b35a7137c545eb5e36b0ba7f24fe96f583c9314a3bcf2ea933c6"} Jan 21 11:20:48 crc kubenswrapper[4881]: I0121 11:20:48.521876 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-59bf6c8c7b-wvc46" event={"ID":"9358f706-24c3-46c5-8490-89402a85e9a4","Type":"ContainerStarted","Data":"f5edee1d07e346d14eb5323aedec597a7a2da39a3e6b4d62b96bd2921e5c2f54"} Jan 21 11:20:48 crc kubenswrapper[4881]: I0121 11:20:48.521920 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-59bf6c8c7b-wvc46" event={"ID":"9358f706-24c3-46c5-8490-89402a85e9a4","Type":"ContainerStarted","Data":"51cdd1269b38f5140e053e8d16ad4f55fb2eb455fa7567d79efdfa9a592d3a75"} Jan 21 11:20:48 crc kubenswrapper[4881]: I0121 11:20:48.536761 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-4wxvl" event={"ID":"65250dcf-0f0f-4fa6-8d57-e07d3d29f290","Type":"ContainerStarted","Data":"6641f95a17dea3fe9aff6d4faf3bd17425257c19253868f2b83b7d7d759a48fd"} Jan 21 11:20:48 crc kubenswrapper[4881]: I0121 11:20:48.551952 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-857c5cc966-ggkc4" event={"ID":"cacf36ac-8c52-43a6-9fcb-2cfc5b27a952","Type":"ContainerStarted","Data":"5fb9d1c4eabc2cf0819a1fa3677c7d9fe8945f3612149fe9af8c01e80ad3006a"} Jan 21 11:20:48 crc kubenswrapper[4881]: I0121 11:20:48.551991 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-857c5cc966-ggkc4" event={"ID":"cacf36ac-8c52-43a6-9fcb-2cfc5b27a952","Type":"ContainerStarted","Data":"89e3b8f2fee171d30e8a7e5bbdb1527af0e178f6abf0bb7076780ed8e2c03cd2"} Jan 21 11:20:48 crc kubenswrapper[4881]: I0121 11:20:48.558321 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-857c5cc966-ggkc4" Jan 21 11:20:48 crc kubenswrapper[4881]: I0121 11:20:48.583312 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-4wxvl" podStartSLOduration=6.123219748 podStartE2EDuration="1m3.583291221s" podCreationTimestamp="2026-01-21 11:19:45 +0000 UTC" firstStartedPulling="2026-01-21 11:19:48.090696275 +0000 UTC m=+1375.350652744" lastFinishedPulling="2026-01-21 11:20:45.550767748 +0000 UTC m=+1432.810724217" observedRunningTime="2026-01-21 11:20:48.567804855 +0000 UTC m=+1435.827761324" watchObservedRunningTime="2026-01-21 11:20:48.583291221 +0000 UTC m=+1435.843247690" Jan 
21 11:20:48 crc kubenswrapper[4881]: I0121 11:20:48.615255 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-857c5cc966-ggkc4" podStartSLOduration=2.6152269759999998 podStartE2EDuration="2.615226976s" podCreationTimestamp="2026-01-21 11:20:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 11:20:48.596655484 +0000 UTC m=+1435.856611963" watchObservedRunningTime="2026-01-21 11:20:48.615226976 +0000 UTC m=+1435.875183445" Jan 21 11:20:49 crc kubenswrapper[4881]: I0121 11:20:49.498668 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Jan 21 11:20:49 crc kubenswrapper[4881]: I0121 11:20:49.527054 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-ncbfx" Jan 21 11:20:49 crc kubenswrapper[4881]: I0121 11:20:49.528841 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-ncbfx" Jan 21 11:20:49 crc kubenswrapper[4881]: I0121 11:20:49.534315 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-decision-engine-0" Jan 21 11:20:49 crc kubenswrapper[4881]: I0121 11:20:49.587725 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-59bf6c8c7b-wvc46" event={"ID":"9358f706-24c3-46c5-8490-89402a85e9a4","Type":"ContainerStarted","Data":"efb662df28813811348cba77f05d7d8acb958e1416f129d11a16e0b31591d4b8"} Jan 21 11:20:49 crc kubenswrapper[4881]: I0121 11:20:49.587837 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-59bf6c8c7b-wvc46" Jan 21 11:20:49 crc kubenswrapper[4881]: I0121 11:20:49.587875 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-59bf6c8c7b-wvc46" Jan 21 11:20:49 crc kubenswrapper[4881]: I0121 11:20:49.591161 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"bf14e65c-4c95-4766-a2e2-57b040e9f192","Type":"ContainerStarted","Data":"8a5aad798e8071a262f3a24177b130f3e97233d2d837f365a875625312c98420"} Jan 21 11:20:49 crc kubenswrapper[4881]: I0121 11:20:49.591201 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"bf14e65c-4c95-4766-a2e2-57b040e9f192","Type":"ContainerStarted","Data":"a3b941a2ad0b66190a31ef6f2915a1a156d561cd57311dc9b96d730cd5bfc66c"} Jan 21 11:20:49 crc kubenswrapper[4881]: I0121 11:20:49.591630 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-decision-engine-0" Jan 21 11:20:49 crc kubenswrapper[4881]: I0121 11:20:49.624198 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-59bf6c8c7b-wvc46" podStartSLOduration=3.624175069 podStartE2EDuration="3.624175069s" podCreationTimestamp="2026-01-21 11:20:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 11:20:49.615190636 +0000 UTC m=+1436.875147105" watchObservedRunningTime="2026-01-21 11:20:49.624175069 +0000 UTC m=+1436.884131538" Jan 21 11:20:49 crc kubenswrapper[4881]: I0121 11:20:49.659116 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-decision-engine-0" Jan 21 11:20:49 crc kubenswrapper[4881]: I0121 11:20:49.667402 4881 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-api-0" podStartSLOduration=3.667376984 podStartE2EDuration="3.667376984s" podCreationTimestamp="2026-01-21 11:20:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 11:20:49.644385532 +0000 UTC m=+1436.904342001" watchObservedRunningTime="2026-01-21 11:20:49.667376984 +0000 UTC m=+1436.927333443" Jan 21 11:20:50 crc kubenswrapper[4881]: I0121 11:20:50.091887 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-t6mz2" Jan 21 11:20:50 crc kubenswrapper[4881]: I0121 11:20:50.281509 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/869a596b-159c-4185-a4ab-0e36c5d130fc-config\") pod \"869a596b-159c-4185-a4ab-0e36c5d130fc\" (UID: \"869a596b-159c-4185-a4ab-0e36c5d130fc\") " Jan 21 11:20:50 crc kubenswrapper[4881]: I0121 11:20:50.281734 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/869a596b-159c-4185-a4ab-0e36c5d130fc-combined-ca-bundle\") pod \"869a596b-159c-4185-a4ab-0e36c5d130fc\" (UID: \"869a596b-159c-4185-a4ab-0e36c5d130fc\") " Jan 21 11:20:50 crc kubenswrapper[4881]: I0121 11:20:50.281780 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dscc6\" (UniqueName: \"kubernetes.io/projected/869a596b-159c-4185-a4ab-0e36c5d130fc-kube-api-access-dscc6\") pod \"869a596b-159c-4185-a4ab-0e36c5d130fc\" (UID: \"869a596b-159c-4185-a4ab-0e36c5d130fc\") " Jan 21 11:20:50 crc kubenswrapper[4881]: I0121 11:20:50.304552 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/869a596b-159c-4185-a4ab-0e36c5d130fc-kube-api-access-dscc6" (OuterVolumeSpecName: "kube-api-access-dscc6") pod "869a596b-159c-4185-a4ab-0e36c5d130fc" (UID: "869a596b-159c-4185-a4ab-0e36c5d130fc"). InnerVolumeSpecName "kube-api-access-dscc6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:20:50 crc kubenswrapper[4881]: I0121 11:20:50.312646 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/869a596b-159c-4185-a4ab-0e36c5d130fc-config" (OuterVolumeSpecName: "config") pod "869a596b-159c-4185-a4ab-0e36c5d130fc" (UID: "869a596b-159c-4185-a4ab-0e36c5d130fc"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:20:50 crc kubenswrapper[4881]: I0121 11:20:50.313100 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/869a596b-159c-4185-a4ab-0e36c5d130fc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "869a596b-159c-4185-a4ab-0e36c5d130fc" (UID: "869a596b-159c-4185-a4ab-0e36c5d130fc"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:20:50 crc kubenswrapper[4881]: I0121 11:20:50.385246 4881 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/869a596b-159c-4185-a4ab-0e36c5d130fc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 11:20:50 crc kubenswrapper[4881]: I0121 11:20:50.385312 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dscc6\" (UniqueName: \"kubernetes.io/projected/869a596b-159c-4185-a4ab-0e36c5d130fc-kube-api-access-dscc6\") on node \"crc\" DevicePath \"\"" Jan 21 11:20:50 crc kubenswrapper[4881]: I0121 11:20:50.385330 4881 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/869a596b-159c-4185-a4ab-0e36c5d130fc-config\") on node \"crc\" DevicePath \"\"" Jan 21 11:20:50 crc kubenswrapper[4881]: I0121 11:20:50.606464 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-t6mz2" Jan 21 11:20:50 crc kubenswrapper[4881]: I0121 11:20:50.615873 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-t6mz2" event={"ID":"869a596b-159c-4185-a4ab-0e36c5d130fc","Type":"ContainerDied","Data":"60332241610e38a80a618de620e24fb0c01532db2d0020dd0177b716555cd915"} Jan 21 11:20:50 crc kubenswrapper[4881]: I0121 11:20:50.615932 4881 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="60332241610e38a80a618de620e24fb0c01532db2d0020dd0177b716555cd915" Jan 21 11:20:50 crc kubenswrapper[4881]: I0121 11:20:50.616287 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-api-0" Jan 21 11:20:50 crc kubenswrapper[4881]: I0121 11:20:50.636552 4881 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-ncbfx" podUID="6a8083e9-c68d-40ca-bde9-b84e43b65ab8" containerName="registry-server" probeResult="failure" output=< Jan 21 11:20:50 crc kubenswrapper[4881]: timeout: failed to connect service ":50051" within 1s Jan 21 11:20:50 crc kubenswrapper[4881]: > Jan 21 11:20:50 crc kubenswrapper[4881]: I0121 11:20:50.891135 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-66498f95d9-n6nvg"] Jan 21 11:20:50 crc kubenswrapper[4881]: E0121 11:20:50.897853 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="869a596b-159c-4185-a4ab-0e36c5d130fc" containerName="neutron-db-sync" Jan 21 11:20:50 crc kubenswrapper[4881]: I0121 11:20:50.898281 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="869a596b-159c-4185-a4ab-0e36c5d130fc" containerName="neutron-db-sync" Jan 21 11:20:50 crc kubenswrapper[4881]: I0121 11:20:50.904088 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="869a596b-159c-4185-a4ab-0e36c5d130fc" containerName="neutron-db-sync" Jan 21 11:20:50 crc kubenswrapper[4881]: I0121 11:20:50.910250 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-66498f95d9-n6nvg" Jan 21 11:20:50 crc kubenswrapper[4881]: I0121 11:20:50.991522 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-66498f95d9-n6nvg"] Jan 21 11:20:51 crc kubenswrapper[4881]: I0121 11:20:51.003189 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3a4d2e63-3d53-44ef-8968-22a7ced8d0fe-ovsdbserver-sb\") pod \"dnsmasq-dns-66498f95d9-n6nvg\" (UID: \"3a4d2e63-3d53-44ef-8968-22a7ced8d0fe\") " pod="openstack/dnsmasq-dns-66498f95d9-n6nvg" Jan 21 11:20:51 crc kubenswrapper[4881]: I0121 11:20:51.003346 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zcslt\" (UniqueName: \"kubernetes.io/projected/3a4d2e63-3d53-44ef-8968-22a7ced8d0fe-kube-api-access-zcslt\") pod \"dnsmasq-dns-66498f95d9-n6nvg\" (UID: \"3a4d2e63-3d53-44ef-8968-22a7ced8d0fe\") " pod="openstack/dnsmasq-dns-66498f95d9-n6nvg" Jan 21 11:20:51 crc kubenswrapper[4881]: I0121 11:20:51.003434 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3a4d2e63-3d53-44ef-8968-22a7ced8d0fe-config\") pod \"dnsmasq-dns-66498f95d9-n6nvg\" (UID: \"3a4d2e63-3d53-44ef-8968-22a7ced8d0fe\") " pod="openstack/dnsmasq-dns-66498f95d9-n6nvg" Jan 21 11:20:51 crc kubenswrapper[4881]: I0121 11:20:51.003482 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3a4d2e63-3d53-44ef-8968-22a7ced8d0fe-dns-svc\") pod \"dnsmasq-dns-66498f95d9-n6nvg\" (UID: \"3a4d2e63-3d53-44ef-8968-22a7ced8d0fe\") " pod="openstack/dnsmasq-dns-66498f95d9-n6nvg" Jan 21 11:20:51 crc kubenswrapper[4881]: I0121 11:20:51.003507 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3a4d2e63-3d53-44ef-8968-22a7ced8d0fe-ovsdbserver-nb\") pod \"dnsmasq-dns-66498f95d9-n6nvg\" (UID: \"3a4d2e63-3d53-44ef-8968-22a7ced8d0fe\") " pod="openstack/dnsmasq-dns-66498f95d9-n6nvg" Jan 21 11:20:51 crc kubenswrapper[4881]: I0121 11:20:51.003538 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3a4d2e63-3d53-44ef-8968-22a7ced8d0fe-dns-swift-storage-0\") pod \"dnsmasq-dns-66498f95d9-n6nvg\" (UID: \"3a4d2e63-3d53-44ef-8968-22a7ced8d0fe\") " pod="openstack/dnsmasq-dns-66498f95d9-n6nvg" Jan 21 11:20:51 crc kubenswrapper[4881]: I0121 11:20:51.022948 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-796dd99876-gb7nt"] Jan 21 11:20:51 crc kubenswrapper[4881]: I0121 11:20:51.024981 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-796dd99876-gb7nt" Jan 21 11:20:51 crc kubenswrapper[4881]: I0121 11:20:51.031682 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-kj7bj" Jan 21 11:20:51 crc kubenswrapper[4881]: I0121 11:20:51.032481 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Jan 21 11:20:51 crc kubenswrapper[4881]: I0121 11:20:51.033263 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Jan 21 11:20:51 crc kubenswrapper[4881]: I0121 11:20:51.033580 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Jan 21 11:20:51 crc kubenswrapper[4881]: I0121 11:20:51.041405 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-796dd99876-gb7nt"] Jan 21 11:20:51 crc kubenswrapper[4881]: I0121 11:20:51.105925 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zcslt\" (UniqueName: \"kubernetes.io/projected/3a4d2e63-3d53-44ef-8968-22a7ced8d0fe-kube-api-access-zcslt\") pod \"dnsmasq-dns-66498f95d9-n6nvg\" (UID: \"3a4d2e63-3d53-44ef-8968-22a7ced8d0fe\") " pod="openstack/dnsmasq-dns-66498f95d9-n6nvg" Jan 21 11:20:51 crc kubenswrapper[4881]: I0121 11:20:51.106021 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3a4d2e63-3d53-44ef-8968-22a7ced8d0fe-config\") pod \"dnsmasq-dns-66498f95d9-n6nvg\" (UID: \"3a4d2e63-3d53-44ef-8968-22a7ced8d0fe\") " pod="openstack/dnsmasq-dns-66498f95d9-n6nvg" Jan 21 11:20:51 crc kubenswrapper[4881]: I0121 11:20:51.106080 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3a4d2e63-3d53-44ef-8968-22a7ced8d0fe-ovsdbserver-nb\") pod \"dnsmasq-dns-66498f95d9-n6nvg\" (UID: \"3a4d2e63-3d53-44ef-8968-22a7ced8d0fe\") " pod="openstack/dnsmasq-dns-66498f95d9-n6nvg" Jan 21 11:20:51 crc kubenswrapper[4881]: I0121 11:20:51.106101 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3a4d2e63-3d53-44ef-8968-22a7ced8d0fe-dns-svc\") pod \"dnsmasq-dns-66498f95d9-n6nvg\" (UID: \"3a4d2e63-3d53-44ef-8968-22a7ced8d0fe\") " pod="openstack/dnsmasq-dns-66498f95d9-n6nvg" Jan 21 11:20:51 crc kubenswrapper[4881]: I0121 11:20:51.106129 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3a4d2e63-3d53-44ef-8968-22a7ced8d0fe-dns-swift-storage-0\") pod \"dnsmasq-dns-66498f95d9-n6nvg\" (UID: \"3a4d2e63-3d53-44ef-8968-22a7ced8d0fe\") " pod="openstack/dnsmasq-dns-66498f95d9-n6nvg" Jan 21 11:20:51 crc kubenswrapper[4881]: I0121 11:20:51.106214 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3a4d2e63-3d53-44ef-8968-22a7ced8d0fe-ovsdbserver-sb\") pod \"dnsmasq-dns-66498f95d9-n6nvg\" (UID: \"3a4d2e63-3d53-44ef-8968-22a7ced8d0fe\") " pod="openstack/dnsmasq-dns-66498f95d9-n6nvg" Jan 21 11:20:51 crc kubenswrapper[4881]: I0121 11:20:51.107497 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3a4d2e63-3d53-44ef-8968-22a7ced8d0fe-config\") pod \"dnsmasq-dns-66498f95d9-n6nvg\" (UID: \"3a4d2e63-3d53-44ef-8968-22a7ced8d0fe\") " 
pod="openstack/dnsmasq-dns-66498f95d9-n6nvg" Jan 21 11:20:51 crc kubenswrapper[4881]: I0121 11:20:51.107687 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3a4d2e63-3d53-44ef-8968-22a7ced8d0fe-dns-svc\") pod \"dnsmasq-dns-66498f95d9-n6nvg\" (UID: \"3a4d2e63-3d53-44ef-8968-22a7ced8d0fe\") " pod="openstack/dnsmasq-dns-66498f95d9-n6nvg" Jan 21 11:20:51 crc kubenswrapper[4881]: I0121 11:20:51.107880 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3a4d2e63-3d53-44ef-8968-22a7ced8d0fe-ovsdbserver-nb\") pod \"dnsmasq-dns-66498f95d9-n6nvg\" (UID: \"3a4d2e63-3d53-44ef-8968-22a7ced8d0fe\") " pod="openstack/dnsmasq-dns-66498f95d9-n6nvg" Jan 21 11:20:51 crc kubenswrapper[4881]: I0121 11:20:51.108024 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3a4d2e63-3d53-44ef-8968-22a7ced8d0fe-dns-swift-storage-0\") pod \"dnsmasq-dns-66498f95d9-n6nvg\" (UID: \"3a4d2e63-3d53-44ef-8968-22a7ced8d0fe\") " pod="openstack/dnsmasq-dns-66498f95d9-n6nvg" Jan 21 11:20:51 crc kubenswrapper[4881]: I0121 11:20:51.108132 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3a4d2e63-3d53-44ef-8968-22a7ced8d0fe-ovsdbserver-sb\") pod \"dnsmasq-dns-66498f95d9-n6nvg\" (UID: \"3a4d2e63-3d53-44ef-8968-22a7ced8d0fe\") " pod="openstack/dnsmasq-dns-66498f95d9-n6nvg" Jan 21 11:20:51 crc kubenswrapper[4881]: I0121 11:20:51.131712 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zcslt\" (UniqueName: \"kubernetes.io/projected/3a4d2e63-3d53-44ef-8968-22a7ced8d0fe-kube-api-access-zcslt\") pod \"dnsmasq-dns-66498f95d9-n6nvg\" (UID: \"3a4d2e63-3d53-44ef-8968-22a7ced8d0fe\") " pod="openstack/dnsmasq-dns-66498f95d9-n6nvg" Jan 21 11:20:51 crc kubenswrapper[4881]: I0121 11:20:51.207994 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f51f915e-f553-4130-a16b-9e6af68a5a15-combined-ca-bundle\") pod \"neutron-796dd99876-gb7nt\" (UID: \"f51f915e-f553-4130-a16b-9e6af68a5a15\") " pod="openstack/neutron-796dd99876-gb7nt" Jan 21 11:20:51 crc kubenswrapper[4881]: I0121 11:20:51.208067 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/f51f915e-f553-4130-a16b-9e6af68a5a15-ovndb-tls-certs\") pod \"neutron-796dd99876-gb7nt\" (UID: \"f51f915e-f553-4130-a16b-9e6af68a5a15\") " pod="openstack/neutron-796dd99876-gb7nt" Jan 21 11:20:51 crc kubenswrapper[4881]: I0121 11:20:51.208147 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/f51f915e-f553-4130-a16b-9e6af68a5a15-httpd-config\") pod \"neutron-796dd99876-gb7nt\" (UID: \"f51f915e-f553-4130-a16b-9e6af68a5a15\") " pod="openstack/neutron-796dd99876-gb7nt" Jan 21 11:20:51 crc kubenswrapper[4881]: I0121 11:20:51.208561 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/f51f915e-f553-4130-a16b-9e6af68a5a15-config\") pod \"neutron-796dd99876-gb7nt\" (UID: \"f51f915e-f553-4130-a16b-9e6af68a5a15\") " pod="openstack/neutron-796dd99876-gb7nt" Jan 21 11:20:51 
crc kubenswrapper[4881]: I0121 11:20:51.208606 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lgwv9\" (UniqueName: \"kubernetes.io/projected/f51f915e-f553-4130-a16b-9e6af68a5a15-kube-api-access-lgwv9\") pod \"neutron-796dd99876-gb7nt\" (UID: \"f51f915e-f553-4130-a16b-9e6af68a5a15\") " pod="openstack/neutron-796dd99876-gb7nt" Jan 21 11:20:51 crc kubenswrapper[4881]: I0121 11:20:51.275193 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-66498f95d9-n6nvg" Jan 21 11:20:51 crc kubenswrapper[4881]: I0121 11:20:51.311413 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/f51f915e-f553-4130-a16b-9e6af68a5a15-config\") pod \"neutron-796dd99876-gb7nt\" (UID: \"f51f915e-f553-4130-a16b-9e6af68a5a15\") " pod="openstack/neutron-796dd99876-gb7nt" Jan 21 11:20:51 crc kubenswrapper[4881]: I0121 11:20:51.312186 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lgwv9\" (UniqueName: \"kubernetes.io/projected/f51f915e-f553-4130-a16b-9e6af68a5a15-kube-api-access-lgwv9\") pod \"neutron-796dd99876-gb7nt\" (UID: \"f51f915e-f553-4130-a16b-9e6af68a5a15\") " pod="openstack/neutron-796dd99876-gb7nt" Jan 21 11:20:51 crc kubenswrapper[4881]: I0121 11:20:51.312303 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f51f915e-f553-4130-a16b-9e6af68a5a15-combined-ca-bundle\") pod \"neutron-796dd99876-gb7nt\" (UID: \"f51f915e-f553-4130-a16b-9e6af68a5a15\") " pod="openstack/neutron-796dd99876-gb7nt" Jan 21 11:20:51 crc kubenswrapper[4881]: I0121 11:20:51.312346 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/f51f915e-f553-4130-a16b-9e6af68a5a15-ovndb-tls-certs\") pod \"neutron-796dd99876-gb7nt\" (UID: \"f51f915e-f553-4130-a16b-9e6af68a5a15\") " pod="openstack/neutron-796dd99876-gb7nt" Jan 21 11:20:51 crc kubenswrapper[4881]: I0121 11:20:51.312476 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/f51f915e-f553-4130-a16b-9e6af68a5a15-httpd-config\") pod \"neutron-796dd99876-gb7nt\" (UID: \"f51f915e-f553-4130-a16b-9e6af68a5a15\") " pod="openstack/neutron-796dd99876-gb7nt" Jan 21 11:20:51 crc kubenswrapper[4881]: I0121 11:20:51.318897 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/f51f915e-f553-4130-a16b-9e6af68a5a15-config\") pod \"neutron-796dd99876-gb7nt\" (UID: \"f51f915e-f553-4130-a16b-9e6af68a5a15\") " pod="openstack/neutron-796dd99876-gb7nt" Jan 21 11:20:51 crc kubenswrapper[4881]: I0121 11:20:51.319549 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/f51f915e-f553-4130-a16b-9e6af68a5a15-httpd-config\") pod \"neutron-796dd99876-gb7nt\" (UID: \"f51f915e-f553-4130-a16b-9e6af68a5a15\") " pod="openstack/neutron-796dd99876-gb7nt" Jan 21 11:20:51 crc kubenswrapper[4881]: I0121 11:20:51.323329 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f51f915e-f553-4130-a16b-9e6af68a5a15-combined-ca-bundle\") pod \"neutron-796dd99876-gb7nt\" (UID: \"f51f915e-f553-4130-a16b-9e6af68a5a15\") " 
pod="openstack/neutron-796dd99876-gb7nt" Jan 21 11:20:51 crc kubenswrapper[4881]: I0121 11:20:51.327915 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/f51f915e-f553-4130-a16b-9e6af68a5a15-ovndb-tls-certs\") pod \"neutron-796dd99876-gb7nt\" (UID: \"f51f915e-f553-4130-a16b-9e6af68a5a15\") " pod="openstack/neutron-796dd99876-gb7nt" Jan 21 11:20:51 crc kubenswrapper[4881]: I0121 11:20:51.331426 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lgwv9\" (UniqueName: \"kubernetes.io/projected/f51f915e-f553-4130-a16b-9e6af68a5a15-kube-api-access-lgwv9\") pod \"neutron-796dd99876-gb7nt\" (UID: \"f51f915e-f553-4130-a16b-9e6af68a5a15\") " pod="openstack/neutron-796dd99876-gb7nt" Jan 21 11:20:51 crc kubenswrapper[4881]: I0121 11:20:51.363600 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-796dd99876-gb7nt" Jan 21 11:20:51 crc kubenswrapper[4881]: I0121 11:20:51.972532 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-66498f95d9-n6nvg"] Jan 21 11:20:51 crc kubenswrapper[4881]: W0121 11:20:51.985675 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3a4d2e63_3d53_44ef_8968_22a7ced8d0fe.slice/crio-53bbfd2a49add8edadc389aeebfde92d8828c88f0f666671d93498d8d53c2567 WatchSource:0}: Error finding container 53bbfd2a49add8edadc389aeebfde92d8828c88f0f666671d93498d8d53c2567: Status 404 returned error can't find the container with id 53bbfd2a49add8edadc389aeebfde92d8828c88f0f666671d93498d8d53c2567 Jan 21 11:20:52 crc kubenswrapper[4881]: I0121 11:20:52.129813 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-api-0" Jan 21 11:20:52 crc kubenswrapper[4881]: I0121 11:20:52.226858 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-796dd99876-gb7nt"] Jan 21 11:20:52 crc kubenswrapper[4881]: W0121 11:20:52.254484 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf51f915e_f553_4130_a16b_9e6af68a5a15.slice/crio-2e4be17fa483a6184f2eda034f9fc33ec23230c3292d5bb3f6f80cd50bfff6e9 WatchSource:0}: Error finding container 2e4be17fa483a6184f2eda034f9fc33ec23230c3292d5bb3f6f80cd50bfff6e9: Status 404 returned error can't find the container with id 2e4be17fa483a6184f2eda034f9fc33ec23230c3292d5bb3f6f80cd50bfff6e9 Jan 21 11:20:52 crc kubenswrapper[4881]: I0121 11:20:52.643863 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-796dd99876-gb7nt" event={"ID":"f51f915e-f553-4130-a16b-9e6af68a5a15","Type":"ContainerStarted","Data":"2e4be17fa483a6184f2eda034f9fc33ec23230c3292d5bb3f6f80cd50bfff6e9"} Jan 21 11:20:52 crc kubenswrapper[4881]: I0121 11:20:52.647830 4881 generic.go:334] "Generic (PLEG): container finished" podID="3a4d2e63-3d53-44ef-8968-22a7ced8d0fe" containerID="fa55b39990f74afb936b29eb6ca3dc719ebcf2a4b47a29af77516eac502e8d26" exitCode=0 Jan 21 11:20:52 crc kubenswrapper[4881]: I0121 11:20:52.647933 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-66498f95d9-n6nvg" event={"ID":"3a4d2e63-3d53-44ef-8968-22a7ced8d0fe","Type":"ContainerDied","Data":"fa55b39990f74afb936b29eb6ca3dc719ebcf2a4b47a29af77516eac502e8d26"} Jan 21 11:20:52 crc kubenswrapper[4881]: I0121 11:20:52.647966 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-66498f95d9-n6nvg" event={"ID":"3a4d2e63-3d53-44ef-8968-22a7ced8d0fe","Type":"ContainerStarted","Data":"53bbfd2a49add8edadc389aeebfde92d8828c88f0f666671d93498d8d53c2567"} Jan 21 11:20:52 crc kubenswrapper[4881]: I0121 11:20:52.652887 4881 generic.go:334] "Generic (PLEG): container finished" podID="ee4e7116-c2cd-43d5-af6b-9f30b5053e0e" containerID="61f6b4008e5afe3c84bc4dbf116ba996728224955a2729f3dc2de6c1a2eeb445" exitCode=1 Jan 21 11:20:52 crc kubenswrapper[4881]: I0121 11:20:52.652985 4881 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 21 11:20:52 crc kubenswrapper[4881]: I0121 11:20:52.653174 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"ee4e7116-c2cd-43d5-af6b-9f30b5053e0e","Type":"ContainerDied","Data":"61f6b4008e5afe3c84bc4dbf116ba996728224955a2729f3dc2de6c1a2eeb445"} Jan 21 11:20:52 crc kubenswrapper[4881]: I0121 11:20:52.653224 4881 scope.go:117] "RemoveContainer" containerID="5db7a5c0d23dd82d2a5258870db858ab9345870f09ad31cd41b42f8d9eaa1f90" Jan 21 11:20:52 crc kubenswrapper[4881]: I0121 11:20:52.653580 4881 scope.go:117] "RemoveContainer" containerID="61f6b4008e5afe3c84bc4dbf116ba996728224955a2729f3dc2de6c1a2eeb445" Jan 21 11:20:52 crc kubenswrapper[4881]: E0121 11:20:52.653775 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 10s restarting failed container=watcher-decision-engine pod=watcher-decision-engine-0_openstack(ee4e7116-c2cd-43d5-af6b-9f30b5053e0e)\"" pod="openstack/watcher-decision-engine-0" podUID="ee4e7116-c2cd-43d5-af6b-9f30b5053e0e" Jan 21 11:20:53 crc kubenswrapper[4881]: I0121 11:20:53.376948 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-667d9dbbbc-pcbhd"] Jan 21 11:20:53 crc kubenswrapper[4881]: I0121 11:20:53.397412 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-667d9dbbbc-pcbhd" Jan 21 11:20:53 crc kubenswrapper[4881]: I0121 11:20:53.400154 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Jan 21 11:20:53 crc kubenswrapper[4881]: I0121 11:20:53.429510 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Jan 21 11:20:53 crc kubenswrapper[4881]: I0121 11:20:53.441349 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-667d9dbbbc-pcbhd"] Jan 21 11:20:53 crc kubenswrapper[4881]: I0121 11:20:53.509081 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/3bad57e2-bab6-4a19-a223-ec9bc2c3c9f9-config\") pod \"neutron-667d9dbbbc-pcbhd\" (UID: \"3bad57e2-bab6-4a19-a223-ec9bc2c3c9f9\") " pod="openstack/neutron-667d9dbbbc-pcbhd" Jan 21 11:20:53 crc kubenswrapper[4881]: I0121 11:20:53.509500 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/3bad57e2-bab6-4a19-a223-ec9bc2c3c9f9-ovndb-tls-certs\") pod \"neutron-667d9dbbbc-pcbhd\" (UID: \"3bad57e2-bab6-4a19-a223-ec9bc2c3c9f9\") " pod="openstack/neutron-667d9dbbbc-pcbhd" Jan 21 11:20:53 crc kubenswrapper[4881]: I0121 11:20:53.509654 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/3bad57e2-bab6-4a19-a223-ec9bc2c3c9f9-httpd-config\") pod \"neutron-667d9dbbbc-pcbhd\" (UID: \"3bad57e2-bab6-4a19-a223-ec9bc2c3c9f9\") " pod="openstack/neutron-667d9dbbbc-pcbhd" Jan 21 11:20:53 crc kubenswrapper[4881]: I0121 11:20:53.509749 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3bad57e2-bab6-4a19-a223-ec9bc2c3c9f9-internal-tls-certs\") pod \"neutron-667d9dbbbc-pcbhd\" (UID: \"3bad57e2-bab6-4a19-a223-ec9bc2c3c9f9\") " pod="openstack/neutron-667d9dbbbc-pcbhd" Jan 21 11:20:53 crc kubenswrapper[4881]: I0121 11:20:53.509851 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3bad57e2-bab6-4a19-a223-ec9bc2c3c9f9-public-tls-certs\") pod \"neutron-667d9dbbbc-pcbhd\" (UID: \"3bad57e2-bab6-4a19-a223-ec9bc2c3c9f9\") " pod="openstack/neutron-667d9dbbbc-pcbhd" Jan 21 11:20:53 crc kubenswrapper[4881]: I0121 11:20:53.509930 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3bad57e2-bab6-4a19-a223-ec9bc2c3c9f9-combined-ca-bundle\") pod \"neutron-667d9dbbbc-pcbhd\" (UID: \"3bad57e2-bab6-4a19-a223-ec9bc2c3c9f9\") " pod="openstack/neutron-667d9dbbbc-pcbhd" Jan 21 11:20:53 crc kubenswrapper[4881]: I0121 11:20:53.510070 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fdf29\" (UniqueName: \"kubernetes.io/projected/3bad57e2-bab6-4a19-a223-ec9bc2c3c9f9-kube-api-access-fdf29\") pod \"neutron-667d9dbbbc-pcbhd\" (UID: \"3bad57e2-bab6-4a19-a223-ec9bc2c3c9f9\") " pod="openstack/neutron-667d9dbbbc-pcbhd" Jan 21 11:20:53 crc kubenswrapper[4881]: I0121 11:20:53.612066 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: 
\"kubernetes.io/secret/3bad57e2-bab6-4a19-a223-ec9bc2c3c9f9-httpd-config\") pod \"neutron-667d9dbbbc-pcbhd\" (UID: \"3bad57e2-bab6-4a19-a223-ec9bc2c3c9f9\") " pod="openstack/neutron-667d9dbbbc-pcbhd" Jan 21 11:20:53 crc kubenswrapper[4881]: I0121 11:20:53.612128 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3bad57e2-bab6-4a19-a223-ec9bc2c3c9f9-internal-tls-certs\") pod \"neutron-667d9dbbbc-pcbhd\" (UID: \"3bad57e2-bab6-4a19-a223-ec9bc2c3c9f9\") " pod="openstack/neutron-667d9dbbbc-pcbhd" Jan 21 11:20:53 crc kubenswrapper[4881]: I0121 11:20:53.612163 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3bad57e2-bab6-4a19-a223-ec9bc2c3c9f9-combined-ca-bundle\") pod \"neutron-667d9dbbbc-pcbhd\" (UID: \"3bad57e2-bab6-4a19-a223-ec9bc2c3c9f9\") " pod="openstack/neutron-667d9dbbbc-pcbhd" Jan 21 11:20:53 crc kubenswrapper[4881]: I0121 11:20:53.612177 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3bad57e2-bab6-4a19-a223-ec9bc2c3c9f9-public-tls-certs\") pod \"neutron-667d9dbbbc-pcbhd\" (UID: \"3bad57e2-bab6-4a19-a223-ec9bc2c3c9f9\") " pod="openstack/neutron-667d9dbbbc-pcbhd" Jan 21 11:20:53 crc kubenswrapper[4881]: I0121 11:20:53.612251 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fdf29\" (UniqueName: \"kubernetes.io/projected/3bad57e2-bab6-4a19-a223-ec9bc2c3c9f9-kube-api-access-fdf29\") pod \"neutron-667d9dbbbc-pcbhd\" (UID: \"3bad57e2-bab6-4a19-a223-ec9bc2c3c9f9\") " pod="openstack/neutron-667d9dbbbc-pcbhd" Jan 21 11:20:53 crc kubenswrapper[4881]: I0121 11:20:53.612300 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/3bad57e2-bab6-4a19-a223-ec9bc2c3c9f9-config\") pod \"neutron-667d9dbbbc-pcbhd\" (UID: \"3bad57e2-bab6-4a19-a223-ec9bc2c3c9f9\") " pod="openstack/neutron-667d9dbbbc-pcbhd" Jan 21 11:20:53 crc kubenswrapper[4881]: I0121 11:20:53.612345 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/3bad57e2-bab6-4a19-a223-ec9bc2c3c9f9-ovndb-tls-certs\") pod \"neutron-667d9dbbbc-pcbhd\" (UID: \"3bad57e2-bab6-4a19-a223-ec9bc2c3c9f9\") " pod="openstack/neutron-667d9dbbbc-pcbhd" Jan 21 11:20:53 crc kubenswrapper[4881]: I0121 11:20:53.618879 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/3bad57e2-bab6-4a19-a223-ec9bc2c3c9f9-ovndb-tls-certs\") pod \"neutron-667d9dbbbc-pcbhd\" (UID: \"3bad57e2-bab6-4a19-a223-ec9bc2c3c9f9\") " pod="openstack/neutron-667d9dbbbc-pcbhd" Jan 21 11:20:53 crc kubenswrapper[4881]: I0121 11:20:53.622709 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/3bad57e2-bab6-4a19-a223-ec9bc2c3c9f9-httpd-config\") pod \"neutron-667d9dbbbc-pcbhd\" (UID: \"3bad57e2-bab6-4a19-a223-ec9bc2c3c9f9\") " pod="openstack/neutron-667d9dbbbc-pcbhd" Jan 21 11:20:53 crc kubenswrapper[4881]: I0121 11:20:53.623135 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3bad57e2-bab6-4a19-a223-ec9bc2c3c9f9-combined-ca-bundle\") pod \"neutron-667d9dbbbc-pcbhd\" (UID: 
\"3bad57e2-bab6-4a19-a223-ec9bc2c3c9f9\") " pod="openstack/neutron-667d9dbbbc-pcbhd" Jan 21 11:20:53 crc kubenswrapper[4881]: I0121 11:20:53.627654 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3bad57e2-bab6-4a19-a223-ec9bc2c3c9f9-internal-tls-certs\") pod \"neutron-667d9dbbbc-pcbhd\" (UID: \"3bad57e2-bab6-4a19-a223-ec9bc2c3c9f9\") " pod="openstack/neutron-667d9dbbbc-pcbhd" Jan 21 11:20:53 crc kubenswrapper[4881]: I0121 11:20:53.628356 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3bad57e2-bab6-4a19-a223-ec9bc2c3c9f9-public-tls-certs\") pod \"neutron-667d9dbbbc-pcbhd\" (UID: \"3bad57e2-bab6-4a19-a223-ec9bc2c3c9f9\") " pod="openstack/neutron-667d9dbbbc-pcbhd" Jan 21 11:20:53 crc kubenswrapper[4881]: I0121 11:20:53.637650 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/3bad57e2-bab6-4a19-a223-ec9bc2c3c9f9-config\") pod \"neutron-667d9dbbbc-pcbhd\" (UID: \"3bad57e2-bab6-4a19-a223-ec9bc2c3c9f9\") " pod="openstack/neutron-667d9dbbbc-pcbhd" Jan 21 11:20:53 crc kubenswrapper[4881]: I0121 11:20:53.640666 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fdf29\" (UniqueName: \"kubernetes.io/projected/3bad57e2-bab6-4a19-a223-ec9bc2c3c9f9-kube-api-access-fdf29\") pod \"neutron-667d9dbbbc-pcbhd\" (UID: \"3bad57e2-bab6-4a19-a223-ec9bc2c3c9f9\") " pod="openstack/neutron-667d9dbbbc-pcbhd" Jan 21 11:20:53 crc kubenswrapper[4881]: I0121 11:20:53.673105 4881 generic.go:334] "Generic (PLEG): container finished" podID="4bf52889-d5f3-44f8-b657-8ff3790962d1" containerID="3a796b1b54b7432132400a5a214afb4cf61aaada5f5054cc747d5e74194d9dae" exitCode=0 Jan 21 11:20:53 crc kubenswrapper[4881]: I0121 11:20:53.673215 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-slhtz" event={"ID":"4bf52889-d5f3-44f8-b657-8ff3790962d1","Type":"ContainerDied","Data":"3a796b1b54b7432132400a5a214afb4cf61aaada5f5054cc747d5e74194d9dae"} Jan 21 11:20:53 crc kubenswrapper[4881]: I0121 11:20:53.681145 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-796dd99876-gb7nt" event={"ID":"f51f915e-f553-4130-a16b-9e6af68a5a15","Type":"ContainerStarted","Data":"3a9e17862c5ff2f64ddcb7cb3eb9d73424fbbcd62c695e9a6f00fe4f1a20f86b"} Jan 21 11:20:53 crc kubenswrapper[4881]: I0121 11:20:53.766877 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-667d9dbbbc-pcbhd" Jan 21 11:20:54 crc kubenswrapper[4881]: I0121 11:20:54.471586 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-api-0" Jan 21 11:20:57 crc kubenswrapper[4881]: I0121 11:20:57.127877 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-api-0" Jan 21 11:20:57 crc kubenswrapper[4881]: I0121 11:20:57.173283 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-api-0" Jan 21 11:20:57 crc kubenswrapper[4881]: I0121 11:20:57.727654 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-69c96776fd-k2z88" Jan 21 11:20:57 crc kubenswrapper[4881]: I0121 11:20:57.755306 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-api-0" Jan 21 11:20:58 crc kubenswrapper[4881]: I0121 11:20:58.003410 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-68b447d964-6llq5" Jan 21 11:20:58 crc kubenswrapper[4881]: I0121 11:20:58.964167 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-slhtz" Jan 21 11:20:59 crc kubenswrapper[4881]: I0121 11:20:59.058448 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/4bf52889-d5f3-44f8-b657-8ff3790962d1-db-sync-config-data\") pod \"4bf52889-d5f3-44f8-b657-8ff3790962d1\" (UID: \"4bf52889-d5f3-44f8-b657-8ff3790962d1\") " Jan 21 11:20:59 crc kubenswrapper[4881]: I0121 11:20:59.058659 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4bf52889-d5f3-44f8-b657-8ff3790962d1-combined-ca-bundle\") pod \"4bf52889-d5f3-44f8-b657-8ff3790962d1\" (UID: \"4bf52889-d5f3-44f8-b657-8ff3790962d1\") " Jan 21 11:20:59 crc kubenswrapper[4881]: I0121 11:20:59.058825 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j7pcb\" (UniqueName: \"kubernetes.io/projected/4bf52889-d5f3-44f8-b657-8ff3790962d1-kube-api-access-j7pcb\") pod \"4bf52889-d5f3-44f8-b657-8ff3790962d1\" (UID: \"4bf52889-d5f3-44f8-b657-8ff3790962d1\") " Jan 21 11:20:59 crc kubenswrapper[4881]: I0121 11:20:59.068619 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bf52889-d5f3-44f8-b657-8ff3790962d1-kube-api-access-j7pcb" (OuterVolumeSpecName: "kube-api-access-j7pcb") pod "4bf52889-d5f3-44f8-b657-8ff3790962d1" (UID: "4bf52889-d5f3-44f8-b657-8ff3790962d1"). InnerVolumeSpecName "kube-api-access-j7pcb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:20:59 crc kubenswrapper[4881]: I0121 11:20:59.071394 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4bf52889-d5f3-44f8-b657-8ff3790962d1-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "4bf52889-d5f3-44f8-b657-8ff3790962d1" (UID: "4bf52889-d5f3-44f8-b657-8ff3790962d1"). InnerVolumeSpecName "db-sync-config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:20:59 crc kubenswrapper[4881]: I0121 11:20:59.109964 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4bf52889-d5f3-44f8-b657-8ff3790962d1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4bf52889-d5f3-44f8-b657-8ff3790962d1" (UID: "4bf52889-d5f3-44f8-b657-8ff3790962d1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:20:59 crc kubenswrapper[4881]: I0121 11:20:59.165474 4881 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/4bf52889-d5f3-44f8-b657-8ff3790962d1-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 11:20:59 crc kubenswrapper[4881]: I0121 11:20:59.165509 4881 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4bf52889-d5f3-44f8-b657-8ff3790962d1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 11:20:59 crc kubenswrapper[4881]: I0121 11:20:59.165523 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j7pcb\" (UniqueName: \"kubernetes.io/projected/4bf52889-d5f3-44f8-b657-8ff3790962d1-kube-api-access-j7pcb\") on node \"crc\" DevicePath \"\"" Jan 21 11:20:59 crc kubenswrapper[4881]: I0121 11:20:59.498529 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-decision-engine-0" Jan 21 11:20:59 crc kubenswrapper[4881]: I0121 11:20:59.498592 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Jan 21 11:20:59 crc kubenswrapper[4881]: I0121 11:20:59.498610 4881 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/watcher-decision-engine-0" Jan 21 11:20:59 crc kubenswrapper[4881]: I0121 11:20:59.499509 4881 scope.go:117] "RemoveContainer" containerID="61f6b4008e5afe3c84bc4dbf116ba996728224955a2729f3dc2de6c1a2eeb445" Jan 21 11:20:59 crc kubenswrapper[4881]: E0121 11:20:59.499921 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 10s restarting failed container=watcher-decision-engine pod=watcher-decision-engine-0_openstack(ee4e7116-c2cd-43d5-af6b-9f30b5053e0e)\"" pod="openstack/watcher-decision-engine-0" podUID="ee4e7116-c2cd-43d5-af6b-9f30b5053e0e" Jan 21 11:20:59 crc kubenswrapper[4881]: I0121 11:20:59.818305 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-slhtz" event={"ID":"4bf52889-d5f3-44f8-b657-8ff3790962d1","Type":"ContainerDied","Data":"370f02f399b03911d8ee654e46609c08288e0d57caf3655dba13b0b2e545df19"} Jan 21 11:20:59 crc kubenswrapper[4881]: I0121 11:20:59.818686 4881 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="370f02f399b03911d8ee654e46609c08288e0d57caf3655dba13b0b2e545df19" Jan 21 11:20:59 crc kubenswrapper[4881]: I0121 11:20:59.818767 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-slhtz" Jan 21 11:20:59 crc kubenswrapper[4881]: I0121 11:20:59.851701 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-69c96776fd-k2z88" Jan 21 11:20:59 crc kubenswrapper[4881]: I0121 11:20:59.851989 4881 patch_prober.go:28] interesting pod/machine-config-daemon-fb4fr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 11:20:59 crc kubenswrapper[4881]: I0121 11:20:59.852134 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 11:20:59 crc kubenswrapper[4881]: I0121 11:20:59.852268 4881 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" Jan 21 11:20:59 crc kubenswrapper[4881]: I0121 11:20:59.853173 4881 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7331cbf4e5c1ebad90ff508798581f83536e17ac3c1ee9a79afc3f65f6e8ad1a"} pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 11:20:59 crc kubenswrapper[4881]: I0121 11:20:59.853232 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" containerName="machine-config-daemon" containerID="cri-o://7331cbf4e5c1ebad90ff508798581f83536e17ac3c1ee9a79afc3f65f6e8ad1a" gracePeriod=600 Jan 21 11:21:00 crc kubenswrapper[4881]: I0121 11:21:00.069301 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-68b447d964-6llq5" Jan 21 11:21:00 crc kubenswrapper[4881]: I0121 11:21:00.149874 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-69c96776fd-k2z88"] Jan 21 11:21:00 crc kubenswrapper[4881]: I0121 11:21:00.318836 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-55755579c5-csgz2"] Jan 21 11:21:00 crc kubenswrapper[4881]: E0121 11:21:00.320193 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4bf52889-d5f3-44f8-b657-8ff3790962d1" containerName="barbican-db-sync" Jan 21 11:21:00 crc kubenswrapper[4881]: I0121 11:21:00.320223 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="4bf52889-d5f3-44f8-b657-8ff3790962d1" containerName="barbican-db-sync" Jan 21 11:21:00 crc kubenswrapper[4881]: I0121 11:21:00.320539 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="4bf52889-d5f3-44f8-b657-8ff3790962d1" containerName="barbican-db-sync" Jan 21 11:21:00 crc kubenswrapper[4881]: I0121 11:21:00.329630 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-55755579c5-csgz2" Jan 21 11:21:00 crc kubenswrapper[4881]: I0121 11:21:00.333830 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 21 11:21:00 crc kubenswrapper[4881]: I0121 11:21:00.334099 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-cl6xz" Jan 21 11:21:00 crc kubenswrapper[4881]: I0121 11:21:00.335196 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Jan 21 11:21:00 crc kubenswrapper[4881]: I0121 11:21:00.345836 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/90253f07-2dfb-48b3-9b75-34a653836589-config-data-custom\") pod \"barbican-worker-55755579c5-csgz2\" (UID: \"90253f07-2dfb-48b3-9b75-34a653836589\") " pod="openstack/barbican-worker-55755579c5-csgz2" Jan 21 11:21:00 crc kubenswrapper[4881]: I0121 11:21:00.346028 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6982w\" (UniqueName: \"kubernetes.io/projected/90253f07-2dfb-48b3-9b75-34a653836589-kube-api-access-6982w\") pod \"barbican-worker-55755579c5-csgz2\" (UID: \"90253f07-2dfb-48b3-9b75-34a653836589\") " pod="openstack/barbican-worker-55755579c5-csgz2" Jan 21 11:21:00 crc kubenswrapper[4881]: I0121 11:21:00.346109 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/90253f07-2dfb-48b3-9b75-34a653836589-config-data\") pod \"barbican-worker-55755579c5-csgz2\" (UID: \"90253f07-2dfb-48b3-9b75-34a653836589\") " pod="openstack/barbican-worker-55755579c5-csgz2" Jan 21 11:21:00 crc kubenswrapper[4881]: I0121 11:21:00.346288 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/90253f07-2dfb-48b3-9b75-34a653836589-logs\") pod \"barbican-worker-55755579c5-csgz2\" (UID: \"90253f07-2dfb-48b3-9b75-34a653836589\") " pod="openstack/barbican-worker-55755579c5-csgz2" Jan 21 11:21:00 crc kubenswrapper[4881]: I0121 11:21:00.346654 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90253f07-2dfb-48b3-9b75-34a653836589-combined-ca-bundle\") pod \"barbican-worker-55755579c5-csgz2\" (UID: \"90253f07-2dfb-48b3-9b75-34a653836589\") " pod="openstack/barbican-worker-55755579c5-csgz2" Jan 21 11:21:00 crc kubenswrapper[4881]: I0121 11:21:00.347325 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-54f549c774-rnptw"] Jan 21 11:21:00 crc kubenswrapper[4881]: I0121 11:21:00.361957 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-55755579c5-csgz2"] Jan 21 11:21:00 crc kubenswrapper[4881]: I0121 11:21:00.362168 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-54f549c774-rnptw" Jan 21 11:21:00 crc kubenswrapper[4881]: I0121 11:21:00.370884 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Jan 21 11:21:00 crc kubenswrapper[4881]: I0121 11:21:00.380709 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-54f549c774-rnptw"] Jan 21 11:21:00 crc kubenswrapper[4881]: I0121 11:21:00.436466 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-667d9dbbbc-pcbhd"] Jan 21 11:21:00 crc kubenswrapper[4881]: I0121 11:21:00.458397 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/90253f07-2dfb-48b3-9b75-34a653836589-config-data-custom\") pod \"barbican-worker-55755579c5-csgz2\" (UID: \"90253f07-2dfb-48b3-9b75-34a653836589\") " pod="openstack/barbican-worker-55755579c5-csgz2" Jan 21 11:21:00 crc kubenswrapper[4881]: I0121 11:21:00.458587 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6982w\" (UniqueName: \"kubernetes.io/projected/90253f07-2dfb-48b3-9b75-34a653836589-kube-api-access-6982w\") pod \"barbican-worker-55755579c5-csgz2\" (UID: \"90253f07-2dfb-48b3-9b75-34a653836589\") " pod="openstack/barbican-worker-55755579c5-csgz2" Jan 21 11:21:00 crc kubenswrapper[4881]: I0121 11:21:00.458673 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/90253f07-2dfb-48b3-9b75-34a653836589-config-data\") pod \"barbican-worker-55755579c5-csgz2\" (UID: \"90253f07-2dfb-48b3-9b75-34a653836589\") " pod="openstack/barbican-worker-55755579c5-csgz2" Jan 21 11:21:00 crc kubenswrapper[4881]: I0121 11:21:00.458706 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/90253f07-2dfb-48b3-9b75-34a653836589-logs\") pod \"barbican-worker-55755579c5-csgz2\" (UID: \"90253f07-2dfb-48b3-9b75-34a653836589\") " pod="openstack/barbican-worker-55755579c5-csgz2" Jan 21 11:21:00 crc kubenswrapper[4881]: I0121 11:21:00.458844 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90253f07-2dfb-48b3-9b75-34a653836589-combined-ca-bundle\") pod \"barbican-worker-55755579c5-csgz2\" (UID: \"90253f07-2dfb-48b3-9b75-34a653836589\") " pod="openstack/barbican-worker-55755579c5-csgz2" Jan 21 11:21:00 crc kubenswrapper[4881]: I0121 11:21:00.460120 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/90253f07-2dfb-48b3-9b75-34a653836589-logs\") pod \"barbican-worker-55755579c5-csgz2\" (UID: \"90253f07-2dfb-48b3-9b75-34a653836589\") " pod="openstack/barbican-worker-55755579c5-csgz2" Jan 21 11:21:00 crc kubenswrapper[4881]: I0121 11:21:00.467291 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90253f07-2dfb-48b3-9b75-34a653836589-combined-ca-bundle\") pod \"barbican-worker-55755579c5-csgz2\" (UID: \"90253f07-2dfb-48b3-9b75-34a653836589\") " pod="openstack/barbican-worker-55755579c5-csgz2" Jan 21 11:21:00 crc kubenswrapper[4881]: I0121 11:21:00.482511 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/90253f07-2dfb-48b3-9b75-34a653836589-config-data-custom\") pod \"barbican-worker-55755579c5-csgz2\" (UID: \"90253f07-2dfb-48b3-9b75-34a653836589\") " pod="openstack/barbican-worker-55755579c5-csgz2" Jan 21 11:21:00 crc kubenswrapper[4881]: I0121 11:21:00.492569 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6982w\" (UniqueName: \"kubernetes.io/projected/90253f07-2dfb-48b3-9b75-34a653836589-kube-api-access-6982w\") pod \"barbican-worker-55755579c5-csgz2\" (UID: \"90253f07-2dfb-48b3-9b75-34a653836589\") " pod="openstack/barbican-worker-55755579c5-csgz2" Jan 21 11:21:00 crc kubenswrapper[4881]: I0121 11:21:00.495477 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/90253f07-2dfb-48b3-9b75-34a653836589-config-data\") pod \"barbican-worker-55755579c5-csgz2\" (UID: \"90253f07-2dfb-48b3-9b75-34a653836589\") " pod="openstack/barbican-worker-55755579c5-csgz2" Jan 21 11:21:00 crc kubenswrapper[4881]: I0121 11:21:00.556470 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-66498f95d9-n6nvg"] Jan 21 11:21:00 crc kubenswrapper[4881]: I0121 11:21:00.572099 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6e80f53a-8873-4c07-b738-2854d9b8b089-logs\") pod \"barbican-keystone-listener-54f549c774-rnptw\" (UID: \"6e80f53a-8873-4c07-b738-2854d9b8b089\") " pod="openstack/barbican-keystone-listener-54f549c774-rnptw" Jan 21 11:21:00 crc kubenswrapper[4881]: I0121 11:21:00.572228 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wb2gq\" (UniqueName: \"kubernetes.io/projected/6e80f53a-8873-4c07-b738-2854d9b8b089-kube-api-access-wb2gq\") pod \"barbican-keystone-listener-54f549c774-rnptw\" (UID: \"6e80f53a-8873-4c07-b738-2854d9b8b089\") " pod="openstack/barbican-keystone-listener-54f549c774-rnptw" Jan 21 11:21:00 crc kubenswrapper[4881]: I0121 11:21:00.572306 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e80f53a-8873-4c07-b738-2854d9b8b089-config-data\") pod \"barbican-keystone-listener-54f549c774-rnptw\" (UID: \"6e80f53a-8873-4c07-b738-2854d9b8b089\") " pod="openstack/barbican-keystone-listener-54f549c774-rnptw" Jan 21 11:21:00 crc kubenswrapper[4881]: I0121 11:21:00.572428 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e80f53a-8873-4c07-b738-2854d9b8b089-combined-ca-bundle\") pod \"barbican-keystone-listener-54f549c774-rnptw\" (UID: \"6e80f53a-8873-4c07-b738-2854d9b8b089\") " pod="openstack/barbican-keystone-listener-54f549c774-rnptw" Jan 21 11:21:00 crc kubenswrapper[4881]: I0121 11:21:00.572566 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6e80f53a-8873-4c07-b738-2854d9b8b089-config-data-custom\") pod \"barbican-keystone-listener-54f549c774-rnptw\" (UID: \"6e80f53a-8873-4c07-b738-2854d9b8b089\") " pod="openstack/barbican-keystone-listener-54f549c774-rnptw" Jan 21 11:21:00 crc kubenswrapper[4881]: I0121 11:21:00.583779 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-69f96db49f-qzf9p"] Jan 21 11:21:00 crc 
kubenswrapper[4881]: I0121 11:21:00.587087 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-69f96db49f-qzf9p" Jan 21 11:21:00 crc kubenswrapper[4881]: I0121 11:21:00.588244 4881 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-ncbfx" podUID="6a8083e9-c68d-40ca-bde9-b84e43b65ab8" containerName="registry-server" probeResult="failure" output=< Jan 21 11:21:00 crc kubenswrapper[4881]: timeout: failed to connect service ":50051" within 1s Jan 21 11:21:00 crc kubenswrapper[4881]: > Jan 21 11:21:00 crc kubenswrapper[4881]: I0121 11:21:00.633688 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-69f96db49f-qzf9p"] Jan 21 11:21:00 crc kubenswrapper[4881]: I0121 11:21:00.652373 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-55755579c5-csgz2" Jan 21 11:21:00 crc kubenswrapper[4881]: I0121 11:21:00.675090 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6e80f53a-8873-4c07-b738-2854d9b8b089-logs\") pod \"barbican-keystone-listener-54f549c774-rnptw\" (UID: \"6e80f53a-8873-4c07-b738-2854d9b8b089\") " pod="openstack/barbican-keystone-listener-54f549c774-rnptw" Jan 21 11:21:00 crc kubenswrapper[4881]: I0121 11:21:00.675482 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wb2gq\" (UniqueName: \"kubernetes.io/projected/6e80f53a-8873-4c07-b738-2854d9b8b089-kube-api-access-wb2gq\") pod \"barbican-keystone-listener-54f549c774-rnptw\" (UID: \"6e80f53a-8873-4c07-b738-2854d9b8b089\") " pod="openstack/barbican-keystone-listener-54f549c774-rnptw" Jan 21 11:21:00 crc kubenswrapper[4881]: I0121 11:21:00.675533 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e80f53a-8873-4c07-b738-2854d9b8b089-config-data\") pod \"barbican-keystone-listener-54f549c774-rnptw\" (UID: \"6e80f53a-8873-4c07-b738-2854d9b8b089\") " pod="openstack/barbican-keystone-listener-54f549c774-rnptw" Jan 21 11:21:00 crc kubenswrapper[4881]: I0121 11:21:00.675589 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e80f53a-8873-4c07-b738-2854d9b8b089-combined-ca-bundle\") pod \"barbican-keystone-listener-54f549c774-rnptw\" (UID: \"6e80f53a-8873-4c07-b738-2854d9b8b089\") " pod="openstack/barbican-keystone-listener-54f549c774-rnptw" Jan 21 11:21:00 crc kubenswrapper[4881]: I0121 11:21:00.675640 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6e80f53a-8873-4c07-b738-2854d9b8b089-config-data-custom\") pod \"barbican-keystone-listener-54f549c774-rnptw\" (UID: \"6e80f53a-8873-4c07-b738-2854d9b8b089\") " pod="openstack/barbican-keystone-listener-54f549c774-rnptw" Jan 21 11:21:00 crc kubenswrapper[4881]: I0121 11:21:00.676564 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6e80f53a-8873-4c07-b738-2854d9b8b089-logs\") pod \"barbican-keystone-listener-54f549c774-rnptw\" (UID: \"6e80f53a-8873-4c07-b738-2854d9b8b089\") " pod="openstack/barbican-keystone-listener-54f549c774-rnptw" Jan 21 11:21:00 crc kubenswrapper[4881]: I0121 11:21:00.683898 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e80f53a-8873-4c07-b738-2854d9b8b089-combined-ca-bundle\") pod \"barbican-keystone-listener-54f549c774-rnptw\" (UID: \"6e80f53a-8873-4c07-b738-2854d9b8b089\") " pod="openstack/barbican-keystone-listener-54f549c774-rnptw" Jan 21 11:21:00 crc kubenswrapper[4881]: I0121 11:21:00.685561 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e80f53a-8873-4c07-b738-2854d9b8b089-config-data\") pod \"barbican-keystone-listener-54f549c774-rnptw\" (UID: \"6e80f53a-8873-4c07-b738-2854d9b8b089\") " pod="openstack/barbican-keystone-listener-54f549c774-rnptw" Jan 21 11:21:00 crc kubenswrapper[4881]: I0121 11:21:00.689471 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6e80f53a-8873-4c07-b738-2854d9b8b089-config-data-custom\") pod \"barbican-keystone-listener-54f549c774-rnptw\" (UID: \"6e80f53a-8873-4c07-b738-2854d9b8b089\") " pod="openstack/barbican-keystone-listener-54f549c774-rnptw" Jan 21 11:21:00 crc kubenswrapper[4881]: I0121 11:21:00.704136 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-6cbb6fc6b6-tlfhj"] Jan 21 11:21:00 crc kubenswrapper[4881]: I0121 11:21:00.706376 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-6cbb6fc6b6-tlfhj" Jan 21 11:21:00 crc kubenswrapper[4881]: I0121 11:21:00.709220 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wb2gq\" (UniqueName: \"kubernetes.io/projected/6e80f53a-8873-4c07-b738-2854d9b8b089-kube-api-access-wb2gq\") pod \"barbican-keystone-listener-54f549c774-rnptw\" (UID: \"6e80f53a-8873-4c07-b738-2854d9b8b089\") " pod="openstack/barbican-keystone-listener-54f549c774-rnptw" Jan 21 11:21:00 crc kubenswrapper[4881]: I0121 11:21:00.716652 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Jan 21 11:21:00 crc kubenswrapper[4881]: I0121 11:21:00.721959 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-6cbb6fc6b6-tlfhj"] Jan 21 11:21:00 crc kubenswrapper[4881]: I0121 11:21:00.778759 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d2ecfd63-c654-42e9-b324-22c02d21b506-ovsdbserver-sb\") pod \"dnsmasq-dns-69f96db49f-qzf9p\" (UID: \"d2ecfd63-c654-42e9-b324-22c02d21b506\") " pod="openstack/dnsmasq-dns-69f96db49f-qzf9p" Jan 21 11:21:00 crc kubenswrapper[4881]: I0121 11:21:00.778863 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d2ecfd63-c654-42e9-b324-22c02d21b506-dns-svc\") pod \"dnsmasq-dns-69f96db49f-qzf9p\" (UID: \"d2ecfd63-c654-42e9-b324-22c02d21b506\") " pod="openstack/dnsmasq-dns-69f96db49f-qzf9p" Jan 21 11:21:00 crc kubenswrapper[4881]: I0121 11:21:00.779034 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2ecfd63-c654-42e9-b324-22c02d21b506-config\") pod \"dnsmasq-dns-69f96db49f-qzf9p\" (UID: \"d2ecfd63-c654-42e9-b324-22c02d21b506\") " pod="openstack/dnsmasq-dns-69f96db49f-qzf9p" Jan 21 11:21:00 crc kubenswrapper[4881]: I0121 11:21:00.779142 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d2ecfd63-c654-42e9-b324-22c02d21b506-ovsdbserver-nb\") pod \"dnsmasq-dns-69f96db49f-qzf9p\" (UID: \"d2ecfd63-c654-42e9-b324-22c02d21b506\") " pod="openstack/dnsmasq-dns-69f96db49f-qzf9p" Jan 21 11:21:00 crc kubenswrapper[4881]: I0121 11:21:00.779250 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d2ecfd63-c654-42e9-b324-22c02d21b506-dns-swift-storage-0\") pod \"dnsmasq-dns-69f96db49f-qzf9p\" (UID: \"d2ecfd63-c654-42e9-b324-22c02d21b506\") " pod="openstack/dnsmasq-dns-69f96db49f-qzf9p" Jan 21 11:21:00 crc kubenswrapper[4881]: I0121 11:21:00.779623 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sfwwk\" (UniqueName: \"kubernetes.io/projected/d2ecfd63-c654-42e9-b324-22c02d21b506-kube-api-access-sfwwk\") pod \"dnsmasq-dns-69f96db49f-qzf9p\" (UID: \"d2ecfd63-c654-42e9-b324-22c02d21b506\") " pod="openstack/dnsmasq-dns-69f96db49f-qzf9p" Jan 21 11:21:00 crc kubenswrapper[4881]: I0121 11:21:00.856091 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-796dd99876-gb7nt" event={"ID":"f51f915e-f553-4130-a16b-9e6af68a5a15","Type":"ContainerStarted","Data":"d69bb72f9eba472479b5b854a392dd678dcf12a1e5ab100dffbf954eda114573"} Jan 21 11:21:00 crc kubenswrapper[4881]: I0121 11:21:00.856317 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-796dd99876-gb7nt" Jan 21 11:21:00 crc kubenswrapper[4881]: I0121 11:21:00.870085 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-66498f95d9-n6nvg" event={"ID":"3a4d2e63-3d53-44ef-8968-22a7ced8d0fe","Type":"ContainerStarted","Data":"502e6f906f1978cd73b6fd52aa270b0a25fe565d624b6874af91148a542bee58"} Jan 21 11:21:00 crc kubenswrapper[4881]: I0121 11:21:00.870909 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-66498f95d9-n6nvg" Jan 21 11:21:00 crc kubenswrapper[4881]: I0121 11:21:00.882940 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sfwwk\" (UniqueName: \"kubernetes.io/projected/d2ecfd63-c654-42e9-b324-22c02d21b506-kube-api-access-sfwwk\") pod \"dnsmasq-dns-69f96db49f-qzf9p\" (UID: \"d2ecfd63-c654-42e9-b324-22c02d21b506\") " pod="openstack/dnsmasq-dns-69f96db49f-qzf9p" Jan 21 11:21:00 crc kubenswrapper[4881]: I0121 11:21:00.882997 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/85f05121-bd30-4b3f-936d-dc20e30fca06-config-data\") pod \"barbican-api-6cbb6fc6b6-tlfhj\" (UID: \"85f05121-bd30-4b3f-936d-dc20e30fca06\") " pod="openstack/barbican-api-6cbb6fc6b6-tlfhj" Jan 21 11:21:00 crc kubenswrapper[4881]: I0121 11:21:00.883036 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d2ecfd63-c654-42e9-b324-22c02d21b506-ovsdbserver-sb\") pod \"dnsmasq-dns-69f96db49f-qzf9p\" (UID: \"d2ecfd63-c654-42e9-b324-22c02d21b506\") " pod="openstack/dnsmasq-dns-69f96db49f-qzf9p" Jan 21 11:21:00 crc kubenswrapper[4881]: I0121 11:21:00.883064 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d2ecfd63-c654-42e9-b324-22c02d21b506-dns-svc\") pod \"dnsmasq-dns-69f96db49f-qzf9p\" (UID: 
\"d2ecfd63-c654-42e9-b324-22c02d21b506\") " pod="openstack/dnsmasq-dns-69f96db49f-qzf9p" Jan 21 11:21:00 crc kubenswrapper[4881]: I0121 11:21:00.883102 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rsnsj\" (UniqueName: \"kubernetes.io/projected/85f05121-bd30-4b3f-936d-dc20e30fca06-kube-api-access-rsnsj\") pod \"barbican-api-6cbb6fc6b6-tlfhj\" (UID: \"85f05121-bd30-4b3f-936d-dc20e30fca06\") " pod="openstack/barbican-api-6cbb6fc6b6-tlfhj" Jan 21 11:21:00 crc kubenswrapper[4881]: I0121 11:21:00.883146 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2ecfd63-c654-42e9-b324-22c02d21b506-config\") pod \"dnsmasq-dns-69f96db49f-qzf9p\" (UID: \"d2ecfd63-c654-42e9-b324-22c02d21b506\") " pod="openstack/dnsmasq-dns-69f96db49f-qzf9p" Jan 21 11:21:00 crc kubenswrapper[4881]: I0121 11:21:00.883170 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/85f05121-bd30-4b3f-936d-dc20e30fca06-logs\") pod \"barbican-api-6cbb6fc6b6-tlfhj\" (UID: \"85f05121-bd30-4b3f-936d-dc20e30fca06\") " pod="openstack/barbican-api-6cbb6fc6b6-tlfhj" Jan 21 11:21:00 crc kubenswrapper[4881]: I0121 11:21:00.883208 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d2ecfd63-c654-42e9-b324-22c02d21b506-ovsdbserver-nb\") pod \"dnsmasq-dns-69f96db49f-qzf9p\" (UID: \"d2ecfd63-c654-42e9-b324-22c02d21b506\") " pod="openstack/dnsmasq-dns-69f96db49f-qzf9p" Jan 21 11:21:00 crc kubenswrapper[4881]: I0121 11:21:00.883237 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85f05121-bd30-4b3f-936d-dc20e30fca06-combined-ca-bundle\") pod \"barbican-api-6cbb6fc6b6-tlfhj\" (UID: \"85f05121-bd30-4b3f-936d-dc20e30fca06\") " pod="openstack/barbican-api-6cbb6fc6b6-tlfhj" Jan 21 11:21:00 crc kubenswrapper[4881]: I0121 11:21:00.883258 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/85f05121-bd30-4b3f-936d-dc20e30fca06-config-data-custom\") pod \"barbican-api-6cbb6fc6b6-tlfhj\" (UID: \"85f05121-bd30-4b3f-936d-dc20e30fca06\") " pod="openstack/barbican-api-6cbb6fc6b6-tlfhj" Jan 21 11:21:00 crc kubenswrapper[4881]: I0121 11:21:00.883287 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d2ecfd63-c654-42e9-b324-22c02d21b506-dns-swift-storage-0\") pod \"dnsmasq-dns-69f96db49f-qzf9p\" (UID: \"d2ecfd63-c654-42e9-b324-22c02d21b506\") " pod="openstack/dnsmasq-dns-69f96db49f-qzf9p" Jan 21 11:21:00 crc kubenswrapper[4881]: I0121 11:21:00.884728 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d2ecfd63-c654-42e9-b324-22c02d21b506-dns-swift-storage-0\") pod \"dnsmasq-dns-69f96db49f-qzf9p\" (UID: \"d2ecfd63-c654-42e9-b324-22c02d21b506\") " pod="openstack/dnsmasq-dns-69f96db49f-qzf9p" Jan 21 11:21:00 crc kubenswrapper[4881]: I0121 11:21:00.885817 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d2ecfd63-c654-42e9-b324-22c02d21b506-ovsdbserver-sb\") pod 
\"dnsmasq-dns-69f96db49f-qzf9p\" (UID: \"d2ecfd63-c654-42e9-b324-22c02d21b506\") " pod="openstack/dnsmasq-dns-69f96db49f-qzf9p" Jan 21 11:21:00 crc kubenswrapper[4881]: I0121 11:21:00.889846 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bcec3c24-87bd-4c22-a800-d3835455a38b","Type":"ContainerStarted","Data":"7a2597fbfe970937452b64ccef79f25aaeee72972449d78e0549c998d5351134"} Jan 21 11:21:00 crc kubenswrapper[4881]: I0121 11:21:00.890247 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="bcec3c24-87bd-4c22-a800-d3835455a38b" containerName="ceilometer-central-agent" containerID="cri-o://04c2a8411b86bd02035922d4fe1ad96f1a1dbf240fbfa10221b52bc6ac101706" gracePeriod=30 Jan 21 11:21:00 crc kubenswrapper[4881]: I0121 11:21:00.890289 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="bcec3c24-87bd-4c22-a800-d3835455a38b" containerName="sg-core" containerID="cri-o://ca18caa0fee509128e7ffae2755d6b5b1126bfe1c63366090fd0947db93d8443" gracePeriod=30 Jan 21 11:21:00 crc kubenswrapper[4881]: I0121 11:21:00.890276 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d2ecfd63-c654-42e9-b324-22c02d21b506-dns-svc\") pod \"dnsmasq-dns-69f96db49f-qzf9p\" (UID: \"d2ecfd63-c654-42e9-b324-22c02d21b506\") " pod="openstack/dnsmasq-dns-69f96db49f-qzf9p" Jan 21 11:21:00 crc kubenswrapper[4881]: I0121 11:21:00.890385 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="bcec3c24-87bd-4c22-a800-d3835455a38b" containerName="ceilometer-notification-agent" containerID="cri-o://b14382df533ca3054b8542bddeff2d41d2f1e579142ea3b20b1a7a9c276362b8" gracePeriod=30 Jan 21 11:21:00 crc kubenswrapper[4881]: I0121 11:21:00.890492 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 21 11:21:00 crc kubenswrapper[4881]: I0121 11:21:00.890440 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="bcec3c24-87bd-4c22-a800-d3835455a38b" containerName="proxy-httpd" containerID="cri-o://7a2597fbfe970937452b64ccef79f25aaeee72972449d78e0549c998d5351134" gracePeriod=30 Jan 21 11:21:00 crc kubenswrapper[4881]: I0121 11:21:00.895045 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2ecfd63-c654-42e9-b324-22c02d21b506-config\") pod \"dnsmasq-dns-69f96db49f-qzf9p\" (UID: \"d2ecfd63-c654-42e9-b324-22c02d21b506\") " pod="openstack/dnsmasq-dns-69f96db49f-qzf9p" Jan 21 11:21:00 crc kubenswrapper[4881]: I0121 11:21:00.894442 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d2ecfd63-c654-42e9-b324-22c02d21b506-ovsdbserver-nb\") pod \"dnsmasq-dns-69f96db49f-qzf9p\" (UID: \"d2ecfd63-c654-42e9-b324-22c02d21b506\") " pod="openstack/dnsmasq-dns-69f96db49f-qzf9p" Jan 21 11:21:00 crc kubenswrapper[4881]: I0121 11:21:00.904521 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-796dd99876-gb7nt" podStartSLOduration=10.904498468 podStartE2EDuration="10.904498468s" podCreationTimestamp="2026-01-21 11:20:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 11:21:00.889106745 
+0000 UTC m=+1448.149063214" watchObservedRunningTime="2026-01-21 11:21:00.904498468 +0000 UTC m=+1448.164454937" Jan 21 11:21:00 crc kubenswrapper[4881]: I0121 11:21:00.914424 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-667d9dbbbc-pcbhd" event={"ID":"3bad57e2-bab6-4a19-a223-ec9bc2c3c9f9","Type":"ContainerStarted","Data":"5c0eca339ef26596d70dd7e8649e504d14255f85aa417abb37517636935e7473"} Jan 21 11:21:00 crc kubenswrapper[4881]: I0121 11:21:00.925208 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sfwwk\" (UniqueName: \"kubernetes.io/projected/d2ecfd63-c654-42e9-b324-22c02d21b506-kube-api-access-sfwwk\") pod \"dnsmasq-dns-69f96db49f-qzf9p\" (UID: \"d2ecfd63-c654-42e9-b324-22c02d21b506\") " pod="openstack/dnsmasq-dns-69f96db49f-qzf9p" Jan 21 11:21:00 crc kubenswrapper[4881]: I0121 11:21:00.946220 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-66498f95d9-n6nvg" podStartSLOduration=10.946195106 podStartE2EDuration="10.946195106s" podCreationTimestamp="2026-01-21 11:20:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 11:21:00.922569668 +0000 UTC m=+1448.182526137" watchObservedRunningTime="2026-01-21 11:21:00.946195106 +0000 UTC m=+1448.206151585" Jan 21 11:21:00 crc kubenswrapper[4881]: I0121 11:21:00.957127 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-69f96db49f-qzf9p" Jan 21 11:21:00 crc kubenswrapper[4881]: I0121 11:21:00.975569 4881 generic.go:334] "Generic (PLEG): container finished" podID="3687b313-1df2-4274-80db-8c758b51bf2d" containerID="7331cbf4e5c1ebad90ff508798581f83536e17ac3c1ee9a79afc3f65f6e8ad1a" exitCode=0 Jan 21 11:21:00 crc kubenswrapper[4881]: I0121 11:21:00.975844 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-69c96776fd-k2z88" podUID="2f516fb6-322b-4eee-9d4d-a10176959bbb" containerName="horizon-log" containerID="cri-o://c37cb0dabfc7bd198de45353bd7d592c9381160bf0f186350e93353fe2ea4470" gracePeriod=30 Jan 21 11:21:00 crc kubenswrapper[4881]: I0121 11:21:00.976131 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" event={"ID":"3687b313-1df2-4274-80db-8c758b51bf2d","Type":"ContainerDied","Data":"7331cbf4e5c1ebad90ff508798581f83536e17ac3c1ee9a79afc3f65f6e8ad1a"} Jan 21 11:21:00 crc kubenswrapper[4881]: I0121 11:21:00.976185 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" event={"ID":"3687b313-1df2-4274-80db-8c758b51bf2d","Type":"ContainerStarted","Data":"8d01a71cc3cebcfb692e7385a1f123f20c56f33df75b2fdeed7ba4c65dcb43ca"} Jan 21 11:21:00 crc kubenswrapper[4881]: I0121 11:21:00.976208 4881 scope.go:117] "RemoveContainer" containerID="d0f3ab6355e31b97e337f7f21fb84796e3dea68bac874475991ce7eb43a93a82" Jan 21 11:21:00 crc kubenswrapper[4881]: I0121 11:21:00.976658 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-69c96776fd-k2z88" podUID="2f516fb6-322b-4eee-9d4d-a10176959bbb" containerName="horizon" containerID="cri-o://20e9501e200b98586a1c9e7d12e2adf41d01903bd2505ab83e7f8f0fc5404f52" gracePeriod=30 Jan 21 11:21:00 crc kubenswrapper[4881]: I0121 11:21:00.987678 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85f05121-bd30-4b3f-936d-dc20e30fca06-combined-ca-bundle\") pod \"barbican-api-6cbb6fc6b6-tlfhj\" (UID: \"85f05121-bd30-4b3f-936d-dc20e30fca06\") " pod="openstack/barbican-api-6cbb6fc6b6-tlfhj" Jan 21 11:21:00 crc kubenswrapper[4881]: I0121 11:21:00.987735 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/85f05121-bd30-4b3f-936d-dc20e30fca06-config-data-custom\") pod \"barbican-api-6cbb6fc6b6-tlfhj\" (UID: \"85f05121-bd30-4b3f-936d-dc20e30fca06\") " pod="openstack/barbican-api-6cbb6fc6b6-tlfhj" Jan 21 11:21:00 crc kubenswrapper[4881]: I0121 11:21:00.988032 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/85f05121-bd30-4b3f-936d-dc20e30fca06-config-data\") pod \"barbican-api-6cbb6fc6b6-tlfhj\" (UID: \"85f05121-bd30-4b3f-936d-dc20e30fca06\") " pod="openstack/barbican-api-6cbb6fc6b6-tlfhj" Jan 21 11:21:00 crc kubenswrapper[4881]: I0121 11:21:00.988227 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rsnsj\" (UniqueName: \"kubernetes.io/projected/85f05121-bd30-4b3f-936d-dc20e30fca06-kube-api-access-rsnsj\") pod \"barbican-api-6cbb6fc6b6-tlfhj\" (UID: \"85f05121-bd30-4b3f-936d-dc20e30fca06\") " pod="openstack/barbican-api-6cbb6fc6b6-tlfhj" Jan 21 11:21:00 crc kubenswrapper[4881]: I0121 11:21:00.988372 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/85f05121-bd30-4b3f-936d-dc20e30fca06-logs\") pod \"barbican-api-6cbb6fc6b6-tlfhj\" (UID: \"85f05121-bd30-4b3f-936d-dc20e30fca06\") " pod="openstack/barbican-api-6cbb6fc6b6-tlfhj" Jan 21 11:21:00 crc kubenswrapper[4881]: I0121 11:21:00.992107 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/85f05121-bd30-4b3f-936d-dc20e30fca06-logs\") pod \"barbican-api-6cbb6fc6b6-tlfhj\" (UID: \"85f05121-bd30-4b3f-936d-dc20e30fca06\") " pod="openstack/barbican-api-6cbb6fc6b6-tlfhj" Jan 21 11:21:01 crc kubenswrapper[4881]: I0121 11:21:00.997220 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85f05121-bd30-4b3f-936d-dc20e30fca06-combined-ca-bundle\") pod \"barbican-api-6cbb6fc6b6-tlfhj\" (UID: \"85f05121-bd30-4b3f-936d-dc20e30fca06\") " pod="openstack/barbican-api-6cbb6fc6b6-tlfhj" Jan 21 11:21:01 crc kubenswrapper[4881]: I0121 11:21:00.997713 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-54f549c774-rnptw" Jan 21 11:21:01 crc kubenswrapper[4881]: I0121 11:21:01.006884 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/85f05121-bd30-4b3f-936d-dc20e30fca06-config-data\") pod \"barbican-api-6cbb6fc6b6-tlfhj\" (UID: \"85f05121-bd30-4b3f-936d-dc20e30fca06\") " pod="openstack/barbican-api-6cbb6fc6b6-tlfhj" Jan 21 11:21:01 crc kubenswrapper[4881]: I0121 11:21:01.019804 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/85f05121-bd30-4b3f-936d-dc20e30fca06-config-data-custom\") pod \"barbican-api-6cbb6fc6b6-tlfhj\" (UID: \"85f05121-bd30-4b3f-936d-dc20e30fca06\") " pod="openstack/barbican-api-6cbb6fc6b6-tlfhj" Jan 21 11:21:01 crc kubenswrapper[4881]: I0121 11:21:01.022427 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=4.022803688 podStartE2EDuration="1m16.022404422s" podCreationTimestamp="2026-01-21 11:19:45 +0000 UTC" firstStartedPulling="2026-01-21 11:19:47.861601855 +0000 UTC m=+1375.121558324" lastFinishedPulling="2026-01-21 11:20:59.861202589 +0000 UTC m=+1447.121159058" observedRunningTime="2026-01-21 11:21:00.972246634 +0000 UTC m=+1448.232203103" watchObservedRunningTime="2026-01-21 11:21:01.022404422 +0000 UTC m=+1448.282360891" Jan 21 11:21:01 crc kubenswrapper[4881]: I0121 11:21:01.022905 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rsnsj\" (UniqueName: \"kubernetes.io/projected/85f05121-bd30-4b3f-936d-dc20e30fca06-kube-api-access-rsnsj\") pod \"barbican-api-6cbb6fc6b6-tlfhj\" (UID: \"85f05121-bd30-4b3f-936d-dc20e30fca06\") " pod="openstack/barbican-api-6cbb6fc6b6-tlfhj" Jan 21 11:21:01 crc kubenswrapper[4881]: I0121 11:21:01.054848 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-6cbb6fc6b6-tlfhj" Jan 21 11:21:01 crc kubenswrapper[4881]: I0121 11:21:01.308099 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-55755579c5-csgz2"] Jan 21 11:21:01 crc kubenswrapper[4881]: I0121 11:21:01.644296 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-69f96db49f-qzf9p"] Jan 21 11:21:01 crc kubenswrapper[4881]: W0121 11:21:01.645926 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd2ecfd63_c654_42e9_b324_22c02d21b506.slice/crio-b2d41124075aed0e5d3723eb39479bb34ae77563466138e26829e292a42a163c WatchSource:0}: Error finding container b2d41124075aed0e5d3723eb39479bb34ae77563466138e26829e292a42a163c: Status 404 returned error can't find the container with id b2d41124075aed0e5d3723eb39479bb34ae77563466138e26829e292a42a163c Jan 21 11:21:01 crc kubenswrapper[4881]: I0121 11:21:01.837748 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-54f549c774-rnptw"] Jan 21 11:21:01 crc kubenswrapper[4881]: E0121 11:21:01.852900 4881 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbcec3c24_87bd_4c22_a800_d3835455a38b.slice/crio-04c2a8411b86bd02035922d4fe1ad96f1a1dbf240fbfa10221b52bc6ac101706.scope\": RecentStats: unable to find data in memory cache]" Jan 21 11:21:02 crc kubenswrapper[4881]: I0121 11:21:02.016390 4881 generic.go:334] "Generic (PLEG): container finished" podID="bcec3c24-87bd-4c22-a800-d3835455a38b" containerID="7a2597fbfe970937452b64ccef79f25aaeee72972449d78e0549c998d5351134" exitCode=0 Jan 21 11:21:02 crc kubenswrapper[4881]: I0121 11:21:02.016826 4881 generic.go:334] "Generic (PLEG): container finished" podID="bcec3c24-87bd-4c22-a800-d3835455a38b" containerID="ca18caa0fee509128e7ffae2755d6b5b1126bfe1c63366090fd0947db93d8443" exitCode=2 Jan 21 11:21:02 crc kubenswrapper[4881]: I0121 11:21:02.016835 4881 generic.go:334] "Generic (PLEG): container finished" podID="bcec3c24-87bd-4c22-a800-d3835455a38b" containerID="04c2a8411b86bd02035922d4fe1ad96f1a1dbf240fbfa10221b52bc6ac101706" exitCode=0 Jan 21 11:21:02 crc kubenswrapper[4881]: I0121 11:21:02.016918 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bcec3c24-87bd-4c22-a800-d3835455a38b","Type":"ContainerDied","Data":"7a2597fbfe970937452b64ccef79f25aaeee72972449d78e0549c998d5351134"} Jan 21 11:21:02 crc kubenswrapper[4881]: I0121 11:21:02.016949 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bcec3c24-87bd-4c22-a800-d3835455a38b","Type":"ContainerDied","Data":"ca18caa0fee509128e7ffae2755d6b5b1126bfe1c63366090fd0947db93d8443"} Jan 21 11:21:02 crc kubenswrapper[4881]: I0121 11:21:02.016960 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bcec3c24-87bd-4c22-a800-d3835455a38b","Type":"ContainerDied","Data":"04c2a8411b86bd02035922d4fe1ad96f1a1dbf240fbfa10221b52bc6ac101706"} Jan 21 11:21:02 crc kubenswrapper[4881]: I0121 11:21:02.028539 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-6cbb6fc6b6-tlfhj"] Jan 21 11:21:02 crc kubenswrapper[4881]: I0121 11:21:02.038353 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-54f549c774-rnptw" 
event={"ID":"6e80f53a-8873-4c07-b738-2854d9b8b089","Type":"ContainerStarted","Data":"8a851cacbff6f63fdcd19b9d99dcd44f0beccc9b727794d59895fbd1d06b5e2b"} Jan 21 11:21:02 crc kubenswrapper[4881]: I0121 11:21:02.046567 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-667d9dbbbc-pcbhd" event={"ID":"3bad57e2-bab6-4a19-a223-ec9bc2c3c9f9","Type":"ContainerStarted","Data":"7df1602479c0737d3c8958d570f9cbeba35e5715f926583fb77d0ec87c7486e1"} Jan 21 11:21:02 crc kubenswrapper[4881]: I0121 11:21:02.046623 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-667d9dbbbc-pcbhd" event={"ID":"3bad57e2-bab6-4a19-a223-ec9bc2c3c9f9","Type":"ContainerStarted","Data":"6220354aa4ade8d0f046ca74d11c614ca92041bd84251682dd52e97d0f4995f7"} Jan 21 11:21:02 crc kubenswrapper[4881]: I0121 11:21:02.046678 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-667d9dbbbc-pcbhd" Jan 21 11:21:02 crc kubenswrapper[4881]: I0121 11:21:02.055532 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-69f96db49f-qzf9p" event={"ID":"d2ecfd63-c654-42e9-b324-22c02d21b506","Type":"ContainerStarted","Data":"b2d41124075aed0e5d3723eb39479bb34ae77563466138e26829e292a42a163c"} Jan 21 11:21:02 crc kubenswrapper[4881]: I0121 11:21:02.086509 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-667d9dbbbc-pcbhd" podStartSLOduration=9.086486018 podStartE2EDuration="9.086486018s" podCreationTimestamp="2026-01-21 11:20:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 11:21:02.082018887 +0000 UTC m=+1449.341975356" watchObservedRunningTime="2026-01-21 11:21:02.086486018 +0000 UTC m=+1449.346442487" Jan 21 11:21:02 crc kubenswrapper[4881]: I0121 11:21:02.093920 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-55755579c5-csgz2" event={"ID":"90253f07-2dfb-48b3-9b75-34a653836589","Type":"ContainerStarted","Data":"01ff14eb6c7415d70cc8495bf9f82913d21e21a3010c85315863fc04a400d197"} Jan 21 11:21:02 crc kubenswrapper[4881]: I0121 11:21:02.094141 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-66498f95d9-n6nvg" podUID="3a4d2e63-3d53-44ef-8968-22a7ced8d0fe" containerName="dnsmasq-dns" containerID="cri-o://502e6f906f1978cd73b6fd52aa270b0a25fe565d624b6874af91148a542bee58" gracePeriod=10 Jan 21 11:21:03 crc kubenswrapper[4881]: I0121 11:21:03.111484 4881 generic.go:334] "Generic (PLEG): container finished" podID="bcec3c24-87bd-4c22-a800-d3835455a38b" containerID="b14382df533ca3054b8542bddeff2d41d2f1e579142ea3b20b1a7a9c276362b8" exitCode=0 Jan 21 11:21:03 crc kubenswrapper[4881]: I0121 11:21:03.111914 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bcec3c24-87bd-4c22-a800-d3835455a38b","Type":"ContainerDied","Data":"b14382df533ca3054b8542bddeff2d41d2f1e579142ea3b20b1a7a9c276362b8"} Jan 21 11:21:03 crc kubenswrapper[4881]: I0121 11:21:03.114252 4881 generic.go:334] "Generic (PLEG): container finished" podID="d2ecfd63-c654-42e9-b324-22c02d21b506" containerID="ab96b5d1c6a41e54c1b2168c0a309330a7285a8a3d539c811f7b6cd696883974" exitCode=0 Jan 21 11:21:03 crc kubenswrapper[4881]: I0121 11:21:03.114360 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-69f96db49f-qzf9p" 
event={"ID":"d2ecfd63-c654-42e9-b324-22c02d21b506","Type":"ContainerDied","Data":"ab96b5d1c6a41e54c1b2168c0a309330a7285a8a3d539c811f7b6cd696883974"} Jan 21 11:21:03 crc kubenswrapper[4881]: I0121 11:21:03.116451 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6cbb6fc6b6-tlfhj" event={"ID":"85f05121-bd30-4b3f-936d-dc20e30fca06","Type":"ContainerStarted","Data":"9af42ead045471788f06fad27bb79fcdf735280d710e2b7eaa693c5e2301f9f2"} Jan 21 11:21:03 crc kubenswrapper[4881]: I0121 11:21:03.116497 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6cbb6fc6b6-tlfhj" event={"ID":"85f05121-bd30-4b3f-936d-dc20e30fca06","Type":"ContainerStarted","Data":"7876bc29105eec2a39d493ced73df7df6c703880a81ffba5229cbe6f92400377"} Jan 21 11:21:03 crc kubenswrapper[4881]: I0121 11:21:03.121896 4881 generic.go:334] "Generic (PLEG): container finished" podID="2f516fb6-322b-4eee-9d4d-a10176959bbb" containerID="20e9501e200b98586a1c9e7d12e2adf41d01903bd2505ab83e7f8f0fc5404f52" exitCode=0 Jan 21 11:21:03 crc kubenswrapper[4881]: I0121 11:21:03.121970 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-69c96776fd-k2z88" event={"ID":"2f516fb6-322b-4eee-9d4d-a10176959bbb","Type":"ContainerDied","Data":"20e9501e200b98586a1c9e7d12e2adf41d01903bd2505ab83e7f8f0fc5404f52"} Jan 21 11:21:03 crc kubenswrapper[4881]: I0121 11:21:03.125851 4881 generic.go:334] "Generic (PLEG): container finished" podID="3a4d2e63-3d53-44ef-8968-22a7ced8d0fe" containerID="502e6f906f1978cd73b6fd52aa270b0a25fe565d624b6874af91148a542bee58" exitCode=0 Jan 21 11:21:03 crc kubenswrapper[4881]: I0121 11:21:03.125942 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-66498f95d9-n6nvg" event={"ID":"3a4d2e63-3d53-44ef-8968-22a7ced8d0fe","Type":"ContainerDied","Data":"502e6f906f1978cd73b6fd52aa270b0a25fe565d624b6874af91148a542bee58"} Jan 21 11:21:03 crc kubenswrapper[4881]: I0121 11:21:03.519317 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 21 11:21:03 crc kubenswrapper[4881]: I0121 11:21:03.599078 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bcec3c24-87bd-4c22-a800-d3835455a38b-run-httpd\") pod \"bcec3c24-87bd-4c22-a800-d3835455a38b\" (UID: \"bcec3c24-87bd-4c22-a800-d3835455a38b\") " Jan 21 11:21:03 crc kubenswrapper[4881]: I0121 11:21:03.599231 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bcec3c24-87bd-4c22-a800-d3835455a38b-scripts\") pod \"bcec3c24-87bd-4c22-a800-d3835455a38b\" (UID: \"bcec3c24-87bd-4c22-a800-d3835455a38b\") " Jan 21 11:21:03 crc kubenswrapper[4881]: I0121 11:21:03.599337 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bcec3c24-87bd-4c22-a800-d3835455a38b-config-data\") pod \"bcec3c24-87bd-4c22-a800-d3835455a38b\" (UID: \"bcec3c24-87bd-4c22-a800-d3835455a38b\") " Jan 21 11:21:03 crc kubenswrapper[4881]: I0121 11:21:03.599459 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bcec3c24-87bd-4c22-a800-d3835455a38b-combined-ca-bundle\") pod \"bcec3c24-87bd-4c22-a800-d3835455a38b\" (UID: \"bcec3c24-87bd-4c22-a800-d3835455a38b\") " Jan 21 11:21:03 crc kubenswrapper[4881]: I0121 11:21:03.599507 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bcec3c24-87bd-4c22-a800-d3835455a38b-sg-core-conf-yaml\") pod \"bcec3c24-87bd-4c22-a800-d3835455a38b\" (UID: \"bcec3c24-87bd-4c22-a800-d3835455a38b\") " Jan 21 11:21:03 crc kubenswrapper[4881]: I0121 11:21:03.599589 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bcec3c24-87bd-4c22-a800-d3835455a38b-log-httpd\") pod \"bcec3c24-87bd-4c22-a800-d3835455a38b\" (UID: \"bcec3c24-87bd-4c22-a800-d3835455a38b\") " Jan 21 11:21:03 crc kubenswrapper[4881]: I0121 11:21:03.599663 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bj6cp\" (UniqueName: \"kubernetes.io/projected/bcec3c24-87bd-4c22-a800-d3835455a38b-kube-api-access-bj6cp\") pod \"bcec3c24-87bd-4c22-a800-d3835455a38b\" (UID: \"bcec3c24-87bd-4c22-a800-d3835455a38b\") " Jan 21 11:21:03 crc kubenswrapper[4881]: I0121 11:21:03.600198 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bcec3c24-87bd-4c22-a800-d3835455a38b-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "bcec3c24-87bd-4c22-a800-d3835455a38b" (UID: "bcec3c24-87bd-4c22-a800-d3835455a38b"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 11:21:03 crc kubenswrapper[4881]: I0121 11:21:03.600500 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bcec3c24-87bd-4c22-a800-d3835455a38b-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "bcec3c24-87bd-4c22-a800-d3835455a38b" (UID: "bcec3c24-87bd-4c22-a800-d3835455a38b"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 11:21:03 crc kubenswrapper[4881]: I0121 11:21:03.601451 4881 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bcec3c24-87bd-4c22-a800-d3835455a38b-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 21 11:21:03 crc kubenswrapper[4881]: I0121 11:21:03.601557 4881 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bcec3c24-87bd-4c22-a800-d3835455a38b-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 21 11:21:03 crc kubenswrapper[4881]: I0121 11:21:03.615486 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bcec3c24-87bd-4c22-a800-d3835455a38b-scripts" (OuterVolumeSpecName: "scripts") pod "bcec3c24-87bd-4c22-a800-d3835455a38b" (UID: "bcec3c24-87bd-4c22-a800-d3835455a38b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:21:03 crc kubenswrapper[4881]: I0121 11:21:03.618873 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bcec3c24-87bd-4c22-a800-d3835455a38b-kube-api-access-bj6cp" (OuterVolumeSpecName: "kube-api-access-bj6cp") pod "bcec3c24-87bd-4c22-a800-d3835455a38b" (UID: "bcec3c24-87bd-4c22-a800-d3835455a38b"). InnerVolumeSpecName "kube-api-access-bj6cp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:21:03 crc kubenswrapper[4881]: I0121 11:21:03.629542 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-66498f95d9-n6nvg" Jan 21 11:21:03 crc kubenswrapper[4881]: I0121 11:21:03.663969 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bcec3c24-87bd-4c22-a800-d3835455a38b-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "bcec3c24-87bd-4c22-a800-d3835455a38b" (UID: "bcec3c24-87bd-4c22-a800-d3835455a38b"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:21:03 crc kubenswrapper[4881]: I0121 11:21:03.703420 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3a4d2e63-3d53-44ef-8968-22a7ced8d0fe-dns-svc\") pod \"3a4d2e63-3d53-44ef-8968-22a7ced8d0fe\" (UID: \"3a4d2e63-3d53-44ef-8968-22a7ced8d0fe\") " Jan 21 11:21:03 crc kubenswrapper[4881]: I0121 11:21:03.703536 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3a4d2e63-3d53-44ef-8968-22a7ced8d0fe-ovsdbserver-sb\") pod \"3a4d2e63-3d53-44ef-8968-22a7ced8d0fe\" (UID: \"3a4d2e63-3d53-44ef-8968-22a7ced8d0fe\") " Jan 21 11:21:03 crc kubenswrapper[4881]: I0121 11:21:03.703607 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zcslt\" (UniqueName: \"kubernetes.io/projected/3a4d2e63-3d53-44ef-8968-22a7ced8d0fe-kube-api-access-zcslt\") pod \"3a4d2e63-3d53-44ef-8968-22a7ced8d0fe\" (UID: \"3a4d2e63-3d53-44ef-8968-22a7ced8d0fe\") " Jan 21 11:21:03 crc kubenswrapper[4881]: I0121 11:21:03.703714 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3a4d2e63-3d53-44ef-8968-22a7ced8d0fe-config\") pod \"3a4d2e63-3d53-44ef-8968-22a7ced8d0fe\" (UID: \"3a4d2e63-3d53-44ef-8968-22a7ced8d0fe\") " Jan 21 11:21:03 crc kubenswrapper[4881]: I0121 11:21:03.703932 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3a4d2e63-3d53-44ef-8968-22a7ced8d0fe-dns-swift-storage-0\") pod \"3a4d2e63-3d53-44ef-8968-22a7ced8d0fe\" (UID: \"3a4d2e63-3d53-44ef-8968-22a7ced8d0fe\") " Jan 21 11:21:03 crc kubenswrapper[4881]: I0121 11:21:03.703982 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3a4d2e63-3d53-44ef-8968-22a7ced8d0fe-ovsdbserver-nb\") pod \"3a4d2e63-3d53-44ef-8968-22a7ced8d0fe\" (UID: \"3a4d2e63-3d53-44ef-8968-22a7ced8d0fe\") " Jan 21 11:21:03 crc kubenswrapper[4881]: I0121 11:21:03.704557 4881 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bcec3c24-87bd-4c22-a800-d3835455a38b-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 21 11:21:03 crc kubenswrapper[4881]: I0121 11:21:03.704584 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bj6cp\" (UniqueName: \"kubernetes.io/projected/bcec3c24-87bd-4c22-a800-d3835455a38b-kube-api-access-bj6cp\") on node \"crc\" DevicePath \"\"" Jan 21 11:21:03 crc kubenswrapper[4881]: I0121 11:21:03.704600 4881 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bcec3c24-87bd-4c22-a800-d3835455a38b-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 11:21:03 crc kubenswrapper[4881]: I0121 11:21:03.714897 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3a4d2e63-3d53-44ef-8968-22a7ced8d0fe-kube-api-access-zcslt" (OuterVolumeSpecName: "kube-api-access-zcslt") pod "3a4d2e63-3d53-44ef-8968-22a7ced8d0fe" (UID: "3a4d2e63-3d53-44ef-8968-22a7ced8d0fe"). InnerVolumeSpecName "kube-api-access-zcslt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:21:03 crc kubenswrapper[4881]: I0121 11:21:03.783543 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3a4d2e63-3d53-44ef-8968-22a7ced8d0fe-config" (OuterVolumeSpecName: "config") pod "3a4d2e63-3d53-44ef-8968-22a7ced8d0fe" (UID: "3a4d2e63-3d53-44ef-8968-22a7ced8d0fe"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:21:03 crc kubenswrapper[4881]: I0121 11:21:03.801267 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bcec3c24-87bd-4c22-a800-d3835455a38b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bcec3c24-87bd-4c22-a800-d3835455a38b" (UID: "bcec3c24-87bd-4c22-a800-d3835455a38b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:21:03 crc kubenswrapper[4881]: I0121 11:21:03.802080 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3a4d2e63-3d53-44ef-8968-22a7ced8d0fe-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "3a4d2e63-3d53-44ef-8968-22a7ced8d0fe" (UID: "3a4d2e63-3d53-44ef-8968-22a7ced8d0fe"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:21:03 crc kubenswrapper[4881]: I0121 11:21:03.806210 4881 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3a4d2e63-3d53-44ef-8968-22a7ced8d0fe-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 21 11:21:03 crc kubenswrapper[4881]: I0121 11:21:03.806239 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zcslt\" (UniqueName: \"kubernetes.io/projected/3a4d2e63-3d53-44ef-8968-22a7ced8d0fe-kube-api-access-zcslt\") on node \"crc\" DevicePath \"\"" Jan 21 11:21:03 crc kubenswrapper[4881]: I0121 11:21:03.806250 4881 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bcec3c24-87bd-4c22-a800-d3835455a38b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 11:21:03 crc kubenswrapper[4881]: I0121 11:21:03.806266 4881 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3a4d2e63-3d53-44ef-8968-22a7ced8d0fe-config\") on node \"crc\" DevicePath \"\"" Jan 21 11:21:03 crc kubenswrapper[4881]: I0121 11:21:03.807211 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bcec3c24-87bd-4c22-a800-d3835455a38b-config-data" (OuterVolumeSpecName: "config-data") pod "bcec3c24-87bd-4c22-a800-d3835455a38b" (UID: "bcec3c24-87bd-4c22-a800-d3835455a38b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:21:03 crc kubenswrapper[4881]: I0121 11:21:03.811020 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3a4d2e63-3d53-44ef-8968-22a7ced8d0fe-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "3a4d2e63-3d53-44ef-8968-22a7ced8d0fe" (UID: "3a4d2e63-3d53-44ef-8968-22a7ced8d0fe"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:21:03 crc kubenswrapper[4881]: I0121 11:21:03.836668 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3a4d2e63-3d53-44ef-8968-22a7ced8d0fe-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "3a4d2e63-3d53-44ef-8968-22a7ced8d0fe" (UID: "3a4d2e63-3d53-44ef-8968-22a7ced8d0fe"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:21:03 crc kubenswrapper[4881]: I0121 11:21:03.863553 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3a4d2e63-3d53-44ef-8968-22a7ced8d0fe-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "3a4d2e63-3d53-44ef-8968-22a7ced8d0fe" (UID: "3a4d2e63-3d53-44ef-8968-22a7ced8d0fe"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:21:03 crc kubenswrapper[4881]: I0121 11:21:03.907846 4881 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3a4d2e63-3d53-44ef-8968-22a7ced8d0fe-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 21 11:21:03 crc kubenswrapper[4881]: I0121 11:21:03.907882 4881 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3a4d2e63-3d53-44ef-8968-22a7ced8d0fe-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 21 11:21:03 crc kubenswrapper[4881]: I0121 11:21:03.907892 4881 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3a4d2e63-3d53-44ef-8968-22a7ced8d0fe-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 21 11:21:03 crc kubenswrapper[4881]: I0121 11:21:03.907902 4881 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bcec3c24-87bd-4c22-a800-d3835455a38b-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 11:21:04 crc kubenswrapper[4881]: I0121 11:21:04.078844 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-7d6f7f4cc8-c4tt4"] Jan 21 11:21:04 crc kubenswrapper[4881]: E0121 11:21:04.079376 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bcec3c24-87bd-4c22-a800-d3835455a38b" containerName="sg-core" Jan 21 11:21:04 crc kubenswrapper[4881]: I0121 11:21:04.079400 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="bcec3c24-87bd-4c22-a800-d3835455a38b" containerName="sg-core" Jan 21 11:21:04 crc kubenswrapper[4881]: E0121 11:21:04.079420 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bcec3c24-87bd-4c22-a800-d3835455a38b" containerName="ceilometer-central-agent" Jan 21 11:21:04 crc kubenswrapper[4881]: I0121 11:21:04.079429 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="bcec3c24-87bd-4c22-a800-d3835455a38b" containerName="ceilometer-central-agent" Jan 21 11:21:04 crc kubenswrapper[4881]: E0121 11:21:04.079441 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a4d2e63-3d53-44ef-8968-22a7ced8d0fe" containerName="init" Jan 21 11:21:04 crc kubenswrapper[4881]: I0121 11:21:04.079448 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a4d2e63-3d53-44ef-8968-22a7ced8d0fe" containerName="init" Jan 21 11:21:04 crc kubenswrapper[4881]: E0121 11:21:04.079462 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a4d2e63-3d53-44ef-8968-22a7ced8d0fe" containerName="dnsmasq-dns" Jan 21 
11:21:04 crc kubenswrapper[4881]: I0121 11:21:04.079468 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a4d2e63-3d53-44ef-8968-22a7ced8d0fe" containerName="dnsmasq-dns" Jan 21 11:21:04 crc kubenswrapper[4881]: E0121 11:21:04.079480 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bcec3c24-87bd-4c22-a800-d3835455a38b" containerName="ceilometer-notification-agent" Jan 21 11:21:04 crc kubenswrapper[4881]: I0121 11:21:04.079488 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="bcec3c24-87bd-4c22-a800-d3835455a38b" containerName="ceilometer-notification-agent" Jan 21 11:21:04 crc kubenswrapper[4881]: E0121 11:21:04.079505 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bcec3c24-87bd-4c22-a800-d3835455a38b" containerName="proxy-httpd" Jan 21 11:21:04 crc kubenswrapper[4881]: I0121 11:21:04.079511 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="bcec3c24-87bd-4c22-a800-d3835455a38b" containerName="proxy-httpd" Jan 21 11:21:04 crc kubenswrapper[4881]: I0121 11:21:04.079728 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="bcec3c24-87bd-4c22-a800-d3835455a38b" containerName="sg-core" Jan 21 11:21:04 crc kubenswrapper[4881]: I0121 11:21:04.079760 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="3a4d2e63-3d53-44ef-8968-22a7ced8d0fe" containerName="dnsmasq-dns" Jan 21 11:21:04 crc kubenswrapper[4881]: I0121 11:21:04.079800 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="bcec3c24-87bd-4c22-a800-d3835455a38b" containerName="proxy-httpd" Jan 21 11:21:04 crc kubenswrapper[4881]: I0121 11:21:04.079821 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="bcec3c24-87bd-4c22-a800-d3835455a38b" containerName="ceilometer-notification-agent" Jan 21 11:21:04 crc kubenswrapper[4881]: I0121 11:21:04.079839 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="bcec3c24-87bd-4c22-a800-d3835455a38b" containerName="ceilometer-central-agent" Jan 21 11:21:04 crc kubenswrapper[4881]: I0121 11:21:04.083127 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-7d6f7f4cc8-c4tt4" Jan 21 11:21:04 crc kubenswrapper[4881]: I0121 11:21:04.087595 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Jan 21 11:21:04 crc kubenswrapper[4881]: I0121 11:21:04.089941 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Jan 21 11:21:04 crc kubenswrapper[4881]: I0121 11:21:04.099648 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-7d6f7f4cc8-c4tt4"] Jan 21 11:21:04 crc kubenswrapper[4881]: I0121 11:21:04.158591 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-66498f95d9-n6nvg" event={"ID":"3a4d2e63-3d53-44ef-8968-22a7ced8d0fe","Type":"ContainerDied","Data":"53bbfd2a49add8edadc389aeebfde92d8828c88f0f666671d93498d8d53c2567"} Jan 21 11:21:04 crc kubenswrapper[4881]: I0121 11:21:04.158653 4881 scope.go:117] "RemoveContainer" containerID="502e6f906f1978cd73b6fd52aa270b0a25fe565d624b6874af91148a542bee58" Jan 21 11:21:04 crc kubenswrapper[4881]: I0121 11:21:04.158824 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-66498f95d9-n6nvg" Jan 21 11:21:04 crc kubenswrapper[4881]: I0121 11:21:04.184004 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bcec3c24-87bd-4c22-a800-d3835455a38b","Type":"ContainerDied","Data":"254ee6473012064881c3b931949d5889b646c256080246e608ecc4945a005f58"} Jan 21 11:21:04 crc kubenswrapper[4881]: I0121 11:21:04.184104 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 21 11:21:04 crc kubenswrapper[4881]: I0121 11:21:04.218718 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9bc5ed6a-2607-4a28-8bd3-949b0f0c761d-combined-ca-bundle\") pod \"barbican-api-7d6f7f4cc8-c4tt4\" (UID: \"9bc5ed6a-2607-4a28-8bd3-949b0f0c761d\") " pod="openstack/barbican-api-7d6f7f4cc8-c4tt4" Jan 21 11:21:04 crc kubenswrapper[4881]: I0121 11:21:04.218870 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9bc5ed6a-2607-4a28-8bd3-949b0f0c761d-config-data-custom\") pod \"barbican-api-7d6f7f4cc8-c4tt4\" (UID: \"9bc5ed6a-2607-4a28-8bd3-949b0f0c761d\") " pod="openstack/barbican-api-7d6f7f4cc8-c4tt4" Jan 21 11:21:04 crc kubenswrapper[4881]: I0121 11:21:04.219149 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9bc5ed6a-2607-4a28-8bd3-949b0f0c761d-config-data\") pod \"barbican-api-7d6f7f4cc8-c4tt4\" (UID: \"9bc5ed6a-2607-4a28-8bd3-949b0f0c761d\") " pod="openstack/barbican-api-7d6f7f4cc8-c4tt4" Jan 21 11:21:04 crc kubenswrapper[4881]: I0121 11:21:04.219274 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9bc5ed6a-2607-4a28-8bd3-949b0f0c761d-public-tls-certs\") pod \"barbican-api-7d6f7f4cc8-c4tt4\" (UID: \"9bc5ed6a-2607-4a28-8bd3-949b0f0c761d\") " pod="openstack/barbican-api-7d6f7f4cc8-c4tt4" Jan 21 11:21:04 crc kubenswrapper[4881]: I0121 11:21:04.219314 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9bc5ed6a-2607-4a28-8bd3-949b0f0c761d-logs\") pod \"barbican-api-7d6f7f4cc8-c4tt4\" (UID: \"9bc5ed6a-2607-4a28-8bd3-949b0f0c761d\") " pod="openstack/barbican-api-7d6f7f4cc8-c4tt4" Jan 21 11:21:04 crc kubenswrapper[4881]: I0121 11:21:04.219359 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w6vwv\" (UniqueName: \"kubernetes.io/projected/9bc5ed6a-2607-4a28-8bd3-949b0f0c761d-kube-api-access-w6vwv\") pod \"barbican-api-7d6f7f4cc8-c4tt4\" (UID: \"9bc5ed6a-2607-4a28-8bd3-949b0f0c761d\") " pod="openstack/barbican-api-7d6f7f4cc8-c4tt4" Jan 21 11:21:04 crc kubenswrapper[4881]: I0121 11:21:04.219400 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9bc5ed6a-2607-4a28-8bd3-949b0f0c761d-internal-tls-certs\") pod \"barbican-api-7d6f7f4cc8-c4tt4\" (UID: \"9bc5ed6a-2607-4a28-8bd3-949b0f0c761d\") " pod="openstack/barbican-api-7d6f7f4cc8-c4tt4" Jan 21 11:21:04 crc kubenswrapper[4881]: I0121 11:21:04.231611 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/dnsmasq-dns-66498f95d9-n6nvg"] Jan 21 11:21:04 crc kubenswrapper[4881]: I0121 11:21:04.252902 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-66498f95d9-n6nvg"] Jan 21 11:21:04 crc kubenswrapper[4881]: I0121 11:21:04.282212 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 21 11:21:04 crc kubenswrapper[4881]: I0121 11:21:04.307989 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 21 11:21:04 crc kubenswrapper[4881]: I0121 11:21:04.321285 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9bc5ed6a-2607-4a28-8bd3-949b0f0c761d-config-data\") pod \"barbican-api-7d6f7f4cc8-c4tt4\" (UID: \"9bc5ed6a-2607-4a28-8bd3-949b0f0c761d\") " pod="openstack/barbican-api-7d6f7f4cc8-c4tt4" Jan 21 11:21:04 crc kubenswrapper[4881]: I0121 11:21:04.321938 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9bc5ed6a-2607-4a28-8bd3-949b0f0c761d-public-tls-certs\") pod \"barbican-api-7d6f7f4cc8-c4tt4\" (UID: \"9bc5ed6a-2607-4a28-8bd3-949b0f0c761d\") " pod="openstack/barbican-api-7d6f7f4cc8-c4tt4" Jan 21 11:21:04 crc kubenswrapper[4881]: I0121 11:21:04.322060 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9bc5ed6a-2607-4a28-8bd3-949b0f0c761d-logs\") pod \"barbican-api-7d6f7f4cc8-c4tt4\" (UID: \"9bc5ed6a-2607-4a28-8bd3-949b0f0c761d\") " pod="openstack/barbican-api-7d6f7f4cc8-c4tt4" Jan 21 11:21:04 crc kubenswrapper[4881]: I0121 11:21:04.322148 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w6vwv\" (UniqueName: \"kubernetes.io/projected/9bc5ed6a-2607-4a28-8bd3-949b0f0c761d-kube-api-access-w6vwv\") pod \"barbican-api-7d6f7f4cc8-c4tt4\" (UID: \"9bc5ed6a-2607-4a28-8bd3-949b0f0c761d\") " pod="openstack/barbican-api-7d6f7f4cc8-c4tt4" Jan 21 11:21:04 crc kubenswrapper[4881]: I0121 11:21:04.322230 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9bc5ed6a-2607-4a28-8bd3-949b0f0c761d-internal-tls-certs\") pod \"barbican-api-7d6f7f4cc8-c4tt4\" (UID: \"9bc5ed6a-2607-4a28-8bd3-949b0f0c761d\") " pod="openstack/barbican-api-7d6f7f4cc8-c4tt4" Jan 21 11:21:04 crc kubenswrapper[4881]: I0121 11:21:04.322332 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9bc5ed6a-2607-4a28-8bd3-949b0f0c761d-combined-ca-bundle\") pod \"barbican-api-7d6f7f4cc8-c4tt4\" (UID: \"9bc5ed6a-2607-4a28-8bd3-949b0f0c761d\") " pod="openstack/barbican-api-7d6f7f4cc8-c4tt4" Jan 21 11:21:04 crc kubenswrapper[4881]: I0121 11:21:04.322415 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9bc5ed6a-2607-4a28-8bd3-949b0f0c761d-config-data-custom\") pod \"barbican-api-7d6f7f4cc8-c4tt4\" (UID: \"9bc5ed6a-2607-4a28-8bd3-949b0f0c761d\") " pod="openstack/barbican-api-7d6f7f4cc8-c4tt4" Jan 21 11:21:04 crc kubenswrapper[4881]: I0121 11:21:04.325552 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9bc5ed6a-2607-4a28-8bd3-949b0f0c761d-logs\") pod \"barbican-api-7d6f7f4cc8-c4tt4\" (UID: 
\"9bc5ed6a-2607-4a28-8bd3-949b0f0c761d\") " pod="openstack/barbican-api-7d6f7f4cc8-c4tt4" Jan 21 11:21:04 crc kubenswrapper[4881]: I0121 11:21:04.331514 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9bc5ed6a-2607-4a28-8bd3-949b0f0c761d-internal-tls-certs\") pod \"barbican-api-7d6f7f4cc8-c4tt4\" (UID: \"9bc5ed6a-2607-4a28-8bd3-949b0f0c761d\") " pod="openstack/barbican-api-7d6f7f4cc8-c4tt4" Jan 21 11:21:04 crc kubenswrapper[4881]: I0121 11:21:04.331988 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9bc5ed6a-2607-4a28-8bd3-949b0f0c761d-public-tls-certs\") pod \"barbican-api-7d6f7f4cc8-c4tt4\" (UID: \"9bc5ed6a-2607-4a28-8bd3-949b0f0c761d\") " pod="openstack/barbican-api-7d6f7f4cc8-c4tt4" Jan 21 11:21:04 crc kubenswrapper[4881]: I0121 11:21:04.332552 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9bc5ed6a-2607-4a28-8bd3-949b0f0c761d-config-data-custom\") pod \"barbican-api-7d6f7f4cc8-c4tt4\" (UID: \"9bc5ed6a-2607-4a28-8bd3-949b0f0c761d\") " pod="openstack/barbican-api-7d6f7f4cc8-c4tt4" Jan 21 11:21:04 crc kubenswrapper[4881]: I0121 11:21:04.335352 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9bc5ed6a-2607-4a28-8bd3-949b0f0c761d-combined-ca-bundle\") pod \"barbican-api-7d6f7f4cc8-c4tt4\" (UID: \"9bc5ed6a-2607-4a28-8bd3-949b0f0c761d\") " pod="openstack/barbican-api-7d6f7f4cc8-c4tt4" Jan 21 11:21:04 crc kubenswrapper[4881]: I0121 11:21:04.337881 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9bc5ed6a-2607-4a28-8bd3-949b0f0c761d-config-data\") pod \"barbican-api-7d6f7f4cc8-c4tt4\" (UID: \"9bc5ed6a-2607-4a28-8bd3-949b0f0c761d\") " pod="openstack/barbican-api-7d6f7f4cc8-c4tt4" Jan 21 11:21:04 crc kubenswrapper[4881]: I0121 11:21:04.357823 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w6vwv\" (UniqueName: \"kubernetes.io/projected/9bc5ed6a-2607-4a28-8bd3-949b0f0c761d-kube-api-access-w6vwv\") pod \"barbican-api-7d6f7f4cc8-c4tt4\" (UID: \"9bc5ed6a-2607-4a28-8bd3-949b0f0c761d\") " pod="openstack/barbican-api-7d6f7f4cc8-c4tt4" Jan 21 11:21:04 crc kubenswrapper[4881]: I0121 11:21:04.366299 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 21 11:21:04 crc kubenswrapper[4881]: I0121 11:21:04.369752 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 21 11:21:04 crc kubenswrapper[4881]: I0121 11:21:04.383675 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 21 11:21:04 crc kubenswrapper[4881]: I0121 11:21:04.383835 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 21 11:21:04 crc kubenswrapper[4881]: I0121 11:21:04.400962 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 21 11:21:04 crc kubenswrapper[4881]: I0121 11:21:04.424432 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/75119e97-b896-4b71-ab1f-28db45a4606d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"75119e97-b896-4b71-ab1f-28db45a4606d\") " pod="openstack/ceilometer-0" Jan 21 11:21:04 crc kubenswrapper[4881]: I0121 11:21:04.424541 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75119e97-b896-4b71-ab1f-28db45a4606d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"75119e97-b896-4b71-ab1f-28db45a4606d\") " pod="openstack/ceilometer-0" Jan 21 11:21:04 crc kubenswrapper[4881]: I0121 11:21:04.424597 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/75119e97-b896-4b71-ab1f-28db45a4606d-log-httpd\") pod \"ceilometer-0\" (UID: \"75119e97-b896-4b71-ab1f-28db45a4606d\") " pod="openstack/ceilometer-0" Jan 21 11:21:04 crc kubenswrapper[4881]: I0121 11:21:04.424626 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/75119e97-b896-4b71-ab1f-28db45a4606d-run-httpd\") pod \"ceilometer-0\" (UID: \"75119e97-b896-4b71-ab1f-28db45a4606d\") " pod="openstack/ceilometer-0" Jan 21 11:21:04 crc kubenswrapper[4881]: I0121 11:21:04.424678 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2cmwf\" (UniqueName: \"kubernetes.io/projected/75119e97-b896-4b71-ab1f-28db45a4606d-kube-api-access-2cmwf\") pod \"ceilometer-0\" (UID: \"75119e97-b896-4b71-ab1f-28db45a4606d\") " pod="openstack/ceilometer-0" Jan 21 11:21:04 crc kubenswrapper[4881]: I0121 11:21:04.424729 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75119e97-b896-4b71-ab1f-28db45a4606d-config-data\") pod \"ceilometer-0\" (UID: \"75119e97-b896-4b71-ab1f-28db45a4606d\") " pod="openstack/ceilometer-0" Jan 21 11:21:04 crc kubenswrapper[4881]: I0121 11:21:04.424767 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/75119e97-b896-4b71-ab1f-28db45a4606d-scripts\") pod \"ceilometer-0\" (UID: \"75119e97-b896-4b71-ab1f-28db45a4606d\") " pod="openstack/ceilometer-0" Jan 21 11:21:04 crc kubenswrapper[4881]: I0121 11:21:04.456599 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-7d6f7f4cc8-c4tt4" Jan 21 11:21:04 crc kubenswrapper[4881]: I0121 11:21:04.526711 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75119e97-b896-4b71-ab1f-28db45a4606d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"75119e97-b896-4b71-ab1f-28db45a4606d\") " pod="openstack/ceilometer-0" Jan 21 11:21:04 crc kubenswrapper[4881]: I0121 11:21:04.526890 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/75119e97-b896-4b71-ab1f-28db45a4606d-log-httpd\") pod \"ceilometer-0\" (UID: \"75119e97-b896-4b71-ab1f-28db45a4606d\") " pod="openstack/ceilometer-0" Jan 21 11:21:04 crc kubenswrapper[4881]: I0121 11:21:04.526924 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/75119e97-b896-4b71-ab1f-28db45a4606d-run-httpd\") pod \"ceilometer-0\" (UID: \"75119e97-b896-4b71-ab1f-28db45a4606d\") " pod="openstack/ceilometer-0" Jan 21 11:21:04 crc kubenswrapper[4881]: I0121 11:21:04.526988 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2cmwf\" (UniqueName: \"kubernetes.io/projected/75119e97-b896-4b71-ab1f-28db45a4606d-kube-api-access-2cmwf\") pod \"ceilometer-0\" (UID: \"75119e97-b896-4b71-ab1f-28db45a4606d\") " pod="openstack/ceilometer-0" Jan 21 11:21:04 crc kubenswrapper[4881]: I0121 11:21:04.527054 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75119e97-b896-4b71-ab1f-28db45a4606d-config-data\") pod \"ceilometer-0\" (UID: \"75119e97-b896-4b71-ab1f-28db45a4606d\") " pod="openstack/ceilometer-0" Jan 21 11:21:04 crc kubenswrapper[4881]: I0121 11:21:04.527102 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/75119e97-b896-4b71-ab1f-28db45a4606d-scripts\") pod \"ceilometer-0\" (UID: \"75119e97-b896-4b71-ab1f-28db45a4606d\") " pod="openstack/ceilometer-0" Jan 21 11:21:04 crc kubenswrapper[4881]: I0121 11:21:04.527178 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/75119e97-b896-4b71-ab1f-28db45a4606d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"75119e97-b896-4b71-ab1f-28db45a4606d\") " pod="openstack/ceilometer-0" Jan 21 11:21:04 crc kubenswrapper[4881]: I0121 11:21:04.527439 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/75119e97-b896-4b71-ab1f-28db45a4606d-log-httpd\") pod \"ceilometer-0\" (UID: \"75119e97-b896-4b71-ab1f-28db45a4606d\") " pod="openstack/ceilometer-0" Jan 21 11:21:04 crc kubenswrapper[4881]: I0121 11:21:04.527459 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/75119e97-b896-4b71-ab1f-28db45a4606d-run-httpd\") pod \"ceilometer-0\" (UID: \"75119e97-b896-4b71-ab1f-28db45a4606d\") " pod="openstack/ceilometer-0" Jan 21 11:21:04 crc kubenswrapper[4881]: I0121 11:21:04.532416 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/75119e97-b896-4b71-ab1f-28db45a4606d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"75119e97-b896-4b71-ab1f-28db45a4606d\") " 
pod="openstack/ceilometer-0" Jan 21 11:21:04 crc kubenswrapper[4881]: I0121 11:21:04.535360 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/75119e97-b896-4b71-ab1f-28db45a4606d-scripts\") pod \"ceilometer-0\" (UID: \"75119e97-b896-4b71-ab1f-28db45a4606d\") " pod="openstack/ceilometer-0" Jan 21 11:21:04 crc kubenswrapper[4881]: I0121 11:21:04.535681 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75119e97-b896-4b71-ab1f-28db45a4606d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"75119e97-b896-4b71-ab1f-28db45a4606d\") " pod="openstack/ceilometer-0" Jan 21 11:21:04 crc kubenswrapper[4881]: I0121 11:21:04.537743 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75119e97-b896-4b71-ab1f-28db45a4606d-config-data\") pod \"ceilometer-0\" (UID: \"75119e97-b896-4b71-ab1f-28db45a4606d\") " pod="openstack/ceilometer-0" Jan 21 11:21:04 crc kubenswrapper[4881]: I0121 11:21:04.556112 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2cmwf\" (UniqueName: \"kubernetes.io/projected/75119e97-b896-4b71-ab1f-28db45a4606d-kube-api-access-2cmwf\") pod \"ceilometer-0\" (UID: \"75119e97-b896-4b71-ab1f-28db45a4606d\") " pod="openstack/ceilometer-0" Jan 21 11:21:04 crc kubenswrapper[4881]: I0121 11:21:04.614488 4881 scope.go:117] "RemoveContainer" containerID="fa55b39990f74afb936b29eb6ca3dc719ebcf2a4b47a29af77516eac502e8d26" Jan 21 11:21:04 crc kubenswrapper[4881]: I0121 11:21:04.742649 4881 scope.go:117] "RemoveContainer" containerID="7a2597fbfe970937452b64ccef79f25aaeee72972449d78e0549c998d5351134" Jan 21 11:21:04 crc kubenswrapper[4881]: I0121 11:21:04.812536 4881 scope.go:117] "RemoveContainer" containerID="ca18caa0fee509128e7ffae2755d6b5b1126bfe1c63366090fd0947db93d8443" Jan 21 11:21:04 crc kubenswrapper[4881]: I0121 11:21:04.827252 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 21 11:21:05 crc kubenswrapper[4881]: I0121 11:21:05.124160 4881 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-69c96776fd-k2z88" podUID="2f516fb6-322b-4eee-9d4d-a10176959bbb" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.160:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.160:8443: connect: connection refused" Jan 21 11:21:05 crc kubenswrapper[4881]: I0121 11:21:05.125539 4881 scope.go:117] "RemoveContainer" containerID="b14382df533ca3054b8542bddeff2d41d2f1e579142ea3b20b1a7a9c276362b8" Jan 21 11:21:05 crc kubenswrapper[4881]: I0121 11:21:05.222961 4881 generic.go:334] "Generic (PLEG): container finished" podID="65250dcf-0f0f-4fa6-8d57-e07d3d29f290" containerID="6641f95a17dea3fe9aff6d4faf3bd17425257c19253868f2b83b7d7d759a48fd" exitCode=0 Jan 21 11:21:05 crc kubenswrapper[4881]: I0121 11:21:05.223097 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-4wxvl" event={"ID":"65250dcf-0f0f-4fa6-8d57-e07d3d29f290","Type":"ContainerDied","Data":"6641f95a17dea3fe9aff6d4faf3bd17425257c19253868f2b83b7d7d759a48fd"} Jan 21 11:21:05 crc kubenswrapper[4881]: I0121 11:21:05.250932 4881 scope.go:117] "RemoveContainer" containerID="04c2a8411b86bd02035922d4fe1ad96f1a1dbf240fbfa10221b52bc6ac101706" Jan 21 11:21:05 crc kubenswrapper[4881]: I0121 11:21:05.343456 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3a4d2e63-3d53-44ef-8968-22a7ced8d0fe" path="/var/lib/kubelet/pods/3a4d2e63-3d53-44ef-8968-22a7ced8d0fe/volumes" Jan 21 11:21:05 crc kubenswrapper[4881]: I0121 11:21:05.345694 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bcec3c24-87bd-4c22-a800-d3835455a38b" path="/var/lib/kubelet/pods/bcec3c24-87bd-4c22-a800-d3835455a38b/volumes" Jan 21 11:21:05 crc kubenswrapper[4881]: I0121 11:21:05.384708 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-7d6f7f4cc8-c4tt4"] Jan 21 11:21:05 crc kubenswrapper[4881]: I0121 11:21:05.529165 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 21 11:21:06 crc kubenswrapper[4881]: I0121 11:21:06.240120 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-54f549c774-rnptw" event={"ID":"6e80f53a-8873-4c07-b738-2854d9b8b089","Type":"ContainerStarted","Data":"d8791563c1ca72988ffa5c7dd6721abff63dc81e7d6af0726e4381840048b729"} Jan 21 11:21:06 crc kubenswrapper[4881]: I0121 11:21:06.240647 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-54f549c774-rnptw" event={"ID":"6e80f53a-8873-4c07-b738-2854d9b8b089","Type":"ContainerStarted","Data":"3c5a26b98954f78ce7a8ff7f8fcf9dc2e852f1f67ae837fcec1bb082944e5a82"} Jan 21 11:21:06 crc kubenswrapper[4881]: I0121 11:21:06.242980 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7d6f7f4cc8-c4tt4" event={"ID":"9bc5ed6a-2607-4a28-8bd3-949b0f0c761d","Type":"ContainerStarted","Data":"57adf152bcc2268a1ab736b8d2425c489a664b9e1996850dcca6047b3be237f2"} Jan 21 11:21:06 crc kubenswrapper[4881]: I0121 11:21:06.243027 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7d6f7f4cc8-c4tt4" event={"ID":"9bc5ed6a-2607-4a28-8bd3-949b0f0c761d","Type":"ContainerStarted","Data":"36fe06b953dbd0c2746adb410072bdf8e6dc67fad566cb6d5ab0d5b768131c92"} Jan 21 11:21:06 crc kubenswrapper[4881]: I0121 11:21:06.248304 4881 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-69f96db49f-qzf9p" event={"ID":"d2ecfd63-c654-42e9-b324-22c02d21b506","Type":"ContainerStarted","Data":"be5a6f1470e765f48f097fc450f52d809f8dde1c774ca2b5463ea172b9bb0587"} Jan 21 11:21:06 crc kubenswrapper[4881]: I0121 11:21:06.248419 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-69f96db49f-qzf9p" Jan 21 11:21:06 crc kubenswrapper[4881]: I0121 11:21:06.257208 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6cbb6fc6b6-tlfhj" event={"ID":"85f05121-bd30-4b3f-936d-dc20e30fca06","Type":"ContainerStarted","Data":"791785eb6fe44e62deb830a72f9b0fb2d75b8a52cfe9209138c6ef5d0b47ed74"} Jan 21 11:21:06 crc kubenswrapper[4881]: I0121 11:21:06.258697 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-6cbb6fc6b6-tlfhj" Jan 21 11:21:06 crc kubenswrapper[4881]: I0121 11:21:06.258726 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-6cbb6fc6b6-tlfhj" Jan 21 11:21:06 crc kubenswrapper[4881]: I0121 11:21:06.265664 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-54f549c774-rnptw" podStartSLOduration=3.404066624 podStartE2EDuration="6.265648462s" podCreationTimestamp="2026-01-21 11:21:00 +0000 UTC" firstStartedPulling="2026-01-21 11:21:01.881806094 +0000 UTC m=+1449.141762553" lastFinishedPulling="2026-01-21 11:21:04.743387922 +0000 UTC m=+1452.003344391" observedRunningTime="2026-01-21 11:21:06.265187751 +0000 UTC m=+1453.525144230" watchObservedRunningTime="2026-01-21 11:21:06.265648462 +0000 UTC m=+1453.525604931" Jan 21 11:21:06 crc kubenswrapper[4881]: I0121 11:21:06.267684 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"75119e97-b896-4b71-ab1f-28db45a4606d","Type":"ContainerStarted","Data":"bc7224d9bf84f344828f19a13fb8096ac19d517cb3bb70d8fce495b5aa46625b"} Jan 21 11:21:06 crc kubenswrapper[4881]: I0121 11:21:06.267747 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"75119e97-b896-4b71-ab1f-28db45a4606d","Type":"ContainerStarted","Data":"9b7298fa3a3fcd477e8d84c1587f761e32e00a24d488249df9cca1ca349c7bc0"} Jan 21 11:21:06 crc kubenswrapper[4881]: I0121 11:21:06.271153 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-55755579c5-csgz2" event={"ID":"90253f07-2dfb-48b3-9b75-34a653836589","Type":"ContainerStarted","Data":"17dedd4e1860567e14390962a5f62dfcb62566e788a9c94218631794328be6d0"} Jan 21 11:21:06 crc kubenswrapper[4881]: I0121 11:21:06.271312 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-55755579c5-csgz2" event={"ID":"90253f07-2dfb-48b3-9b75-34a653836589","Type":"ContainerStarted","Data":"5e3bbdab8b8364a2eeaa709840c0197cabd9dda1a1b1cfd6ea9d0e61abb1fc04"} Jan 21 11:21:06 crc kubenswrapper[4881]: I0121 11:21:06.302869 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-69f96db49f-qzf9p" podStartSLOduration=6.302848649 podStartE2EDuration="6.302848649s" podCreationTimestamp="2026-01-21 11:21:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 11:21:06.292666634 +0000 UTC m=+1453.552623103" watchObservedRunningTime="2026-01-21 11:21:06.302848649 +0000 UTC m=+1453.562805118" Jan 21 11:21:06 crc 
kubenswrapper[4881]: I0121 11:21:06.320197 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-6cbb6fc6b6-tlfhj" podStartSLOduration=6.3201785600000004 podStartE2EDuration="6.32017856s" podCreationTimestamp="2026-01-21 11:21:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 11:21:06.31977722 +0000 UTC m=+1453.579733689" watchObservedRunningTime="2026-01-21 11:21:06.32017856 +0000 UTC m=+1453.580135029" Jan 21 11:21:06 crc kubenswrapper[4881]: I0121 11:21:06.345092 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-55755579c5-csgz2" podStartSLOduration=3.004303124 podStartE2EDuration="6.345074379s" podCreationTimestamp="2026-01-21 11:21:00 +0000 UTC" firstStartedPulling="2026-01-21 11:21:01.381271015 +0000 UTC m=+1448.641227484" lastFinishedPulling="2026-01-21 11:21:04.72204227 +0000 UTC m=+1451.981998739" observedRunningTime="2026-01-21 11:21:06.34109196 +0000 UTC m=+1453.601048429" watchObservedRunningTime="2026-01-21 11:21:06.345074379 +0000 UTC m=+1453.605030848" Jan 21 11:21:06 crc kubenswrapper[4881]: I0121 11:21:06.863036 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-4wxvl" Jan 21 11:21:06 crc kubenswrapper[4881]: I0121 11:21:06.934734 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/65250dcf-0f0f-4fa6-8d57-e07d3d29f290-kube-api-access-ltkw6" (OuterVolumeSpecName: "kube-api-access-ltkw6") pod "65250dcf-0f0f-4fa6-8d57-e07d3d29f290" (UID: "65250dcf-0f0f-4fa6-8d57-e07d3d29f290"). InnerVolumeSpecName "kube-api-access-ltkw6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:21:06 crc kubenswrapper[4881]: I0121 11:21:06.938437 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ltkw6\" (UniqueName: \"kubernetes.io/projected/65250dcf-0f0f-4fa6-8d57-e07d3d29f290-kube-api-access-ltkw6\") pod \"65250dcf-0f0f-4fa6-8d57-e07d3d29f290\" (UID: \"65250dcf-0f0f-4fa6-8d57-e07d3d29f290\") " Jan 21 11:21:06 crc kubenswrapper[4881]: I0121 11:21:06.938643 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65250dcf-0f0f-4fa6-8d57-e07d3d29f290-combined-ca-bundle\") pod \"65250dcf-0f0f-4fa6-8d57-e07d3d29f290\" (UID: \"65250dcf-0f0f-4fa6-8d57-e07d3d29f290\") " Jan 21 11:21:06 crc kubenswrapper[4881]: I0121 11:21:06.938752 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/65250dcf-0f0f-4fa6-8d57-e07d3d29f290-scripts\") pod \"65250dcf-0f0f-4fa6-8d57-e07d3d29f290\" (UID: \"65250dcf-0f0f-4fa6-8d57-e07d3d29f290\") " Jan 21 11:21:06 crc kubenswrapper[4881]: I0121 11:21:06.938801 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/65250dcf-0f0f-4fa6-8d57-e07d3d29f290-etc-machine-id\") pod \"65250dcf-0f0f-4fa6-8d57-e07d3d29f290\" (UID: \"65250dcf-0f0f-4fa6-8d57-e07d3d29f290\") " Jan 21 11:21:06 crc kubenswrapper[4881]: I0121 11:21:06.938863 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/65250dcf-0f0f-4fa6-8d57-e07d3d29f290-config-data\") pod \"65250dcf-0f0f-4fa6-8d57-e07d3d29f290\" (UID: 
\"65250dcf-0f0f-4fa6-8d57-e07d3d29f290\") " Jan 21 11:21:06 crc kubenswrapper[4881]: I0121 11:21:06.938886 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/65250dcf-0f0f-4fa6-8d57-e07d3d29f290-db-sync-config-data\") pod \"65250dcf-0f0f-4fa6-8d57-e07d3d29f290\" (UID: \"65250dcf-0f0f-4fa6-8d57-e07d3d29f290\") " Jan 21 11:21:06 crc kubenswrapper[4881]: I0121 11:21:06.938963 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/65250dcf-0f0f-4fa6-8d57-e07d3d29f290-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "65250dcf-0f0f-4fa6-8d57-e07d3d29f290" (UID: "65250dcf-0f0f-4fa6-8d57-e07d3d29f290"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 11:21:06 crc kubenswrapper[4881]: I0121 11:21:06.939862 4881 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/65250dcf-0f0f-4fa6-8d57-e07d3d29f290-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 21 11:21:06 crc kubenswrapper[4881]: I0121 11:21:06.939886 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ltkw6\" (UniqueName: \"kubernetes.io/projected/65250dcf-0f0f-4fa6-8d57-e07d3d29f290-kube-api-access-ltkw6\") on node \"crc\" DevicePath \"\"" Jan 21 11:21:06 crc kubenswrapper[4881]: I0121 11:21:06.944028 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65250dcf-0f0f-4fa6-8d57-e07d3d29f290-scripts" (OuterVolumeSpecName: "scripts") pod "65250dcf-0f0f-4fa6-8d57-e07d3d29f290" (UID: "65250dcf-0f0f-4fa6-8d57-e07d3d29f290"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:21:06 crc kubenswrapper[4881]: I0121 11:21:06.948930 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65250dcf-0f0f-4fa6-8d57-e07d3d29f290-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "65250dcf-0f0f-4fa6-8d57-e07d3d29f290" (UID: "65250dcf-0f0f-4fa6-8d57-e07d3d29f290"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:21:06 crc kubenswrapper[4881]: I0121 11:21:06.998554 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65250dcf-0f0f-4fa6-8d57-e07d3d29f290-config-data" (OuterVolumeSpecName: "config-data") pod "65250dcf-0f0f-4fa6-8d57-e07d3d29f290" (UID: "65250dcf-0f0f-4fa6-8d57-e07d3d29f290"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:21:07 crc kubenswrapper[4881]: I0121 11:21:07.020648 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65250dcf-0f0f-4fa6-8d57-e07d3d29f290-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "65250dcf-0f0f-4fa6-8d57-e07d3d29f290" (UID: "65250dcf-0f0f-4fa6-8d57-e07d3d29f290"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:21:07 crc kubenswrapper[4881]: I0121 11:21:07.045168 4881 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65250dcf-0f0f-4fa6-8d57-e07d3d29f290-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 11:21:07 crc kubenswrapper[4881]: I0121 11:21:07.045208 4881 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/65250dcf-0f0f-4fa6-8d57-e07d3d29f290-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 11:21:07 crc kubenswrapper[4881]: I0121 11:21:07.045218 4881 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/65250dcf-0f0f-4fa6-8d57-e07d3d29f290-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 11:21:07 crc kubenswrapper[4881]: I0121 11:21:07.045227 4881 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/65250dcf-0f0f-4fa6-8d57-e07d3d29f290-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 11:21:07 crc kubenswrapper[4881]: I0121 11:21:07.299704 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-4wxvl" event={"ID":"65250dcf-0f0f-4fa6-8d57-e07d3d29f290","Type":"ContainerDied","Data":"fcbe801cf2c7f3f9ce63291d49a4353e90c810cdaa5f27e1d6112dedee1eae63"} Jan 21 11:21:07 crc kubenswrapper[4881]: I0121 11:21:07.300127 4881 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fcbe801cf2c7f3f9ce63291d49a4353e90c810cdaa5f27e1d6112dedee1eae63" Jan 21 11:21:07 crc kubenswrapper[4881]: I0121 11:21:07.299721 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-4wxvl" Jan 21 11:21:07 crc kubenswrapper[4881]: I0121 11:21:07.302527 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7d6f7f4cc8-c4tt4" event={"ID":"9bc5ed6a-2607-4a28-8bd3-949b0f0c761d","Type":"ContainerStarted","Data":"81be590057d64d1af247cdbc56979bf76d7783982f1718a281d906ee494d55e6"} Jan 21 11:21:07 crc kubenswrapper[4881]: I0121 11:21:07.302644 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-7d6f7f4cc8-c4tt4" Jan 21 11:21:07 crc kubenswrapper[4881]: I0121 11:21:07.302684 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-7d6f7f4cc8-c4tt4" Jan 21 11:21:07 crc kubenswrapper[4881]: I0121 11:21:07.306415 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"75119e97-b896-4b71-ab1f-28db45a4606d","Type":"ContainerStarted","Data":"53e2fe665bdaeb7b9eb972106db909c519d01d1c08141b3cb40de82bd0536330"} Jan 21 11:21:07 crc kubenswrapper[4881]: I0121 11:21:07.348095 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-7d6f7f4cc8-c4tt4" podStartSLOduration=3.348072165 podStartE2EDuration="3.348072165s" podCreationTimestamp="2026-01-21 11:21:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 11:21:07.342031495 +0000 UTC m=+1454.601987954" watchObservedRunningTime="2026-01-21 11:21:07.348072165 +0000 UTC m=+1454.608028654" Jan 21 11:21:08 crc kubenswrapper[4881]: I0121 11:21:08.182665 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Jan 21 11:21:08 crc kubenswrapper[4881]: E0121 
11:21:08.183587 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65250dcf-0f0f-4fa6-8d57-e07d3d29f290" containerName="cinder-db-sync" Jan 21 11:21:08 crc kubenswrapper[4881]: I0121 11:21:08.183615 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="65250dcf-0f0f-4fa6-8d57-e07d3d29f290" containerName="cinder-db-sync" Jan 21 11:21:08 crc kubenswrapper[4881]: I0121 11:21:08.193250 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="65250dcf-0f0f-4fa6-8d57-e07d3d29f290" containerName="cinder-db-sync" Jan 21 11:21:08 crc kubenswrapper[4881]: I0121 11:21:08.194922 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 21 11:21:08 crc kubenswrapper[4881]: I0121 11:21:08.198495 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Jan 21 11:21:08 crc kubenswrapper[4881]: I0121 11:21:08.199280 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-9r4q7" Jan 21 11:21:08 crc kubenswrapper[4881]: I0121 11:21:08.199615 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Jan 21 11:21:08 crc kubenswrapper[4881]: I0121 11:21:08.200465 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Jan 21 11:21:08 crc kubenswrapper[4881]: I0121 11:21:08.210014 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 21 11:21:08 crc kubenswrapper[4881]: I0121 11:21:08.291662 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h42sc\" (UniqueName: \"kubernetes.io/projected/86045f5e-defd-4c68-a582-c51c9c26e5c7-kube-api-access-h42sc\") pod \"cinder-scheduler-0\" (UID: \"86045f5e-defd-4c68-a582-c51c9c26e5c7\") " pod="openstack/cinder-scheduler-0" Jan 21 11:21:08 crc kubenswrapper[4881]: I0121 11:21:08.291762 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/86045f5e-defd-4c68-a582-c51c9c26e5c7-config-data\") pod \"cinder-scheduler-0\" (UID: \"86045f5e-defd-4c68-a582-c51c9c26e5c7\") " pod="openstack/cinder-scheduler-0" Jan 21 11:21:08 crc kubenswrapper[4881]: I0121 11:21:08.291851 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86045f5e-defd-4c68-a582-c51c9c26e5c7-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"86045f5e-defd-4c68-a582-c51c9c26e5c7\") " pod="openstack/cinder-scheduler-0" Jan 21 11:21:08 crc kubenswrapper[4881]: I0121 11:21:08.291940 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/86045f5e-defd-4c68-a582-c51c9c26e5c7-scripts\") pod \"cinder-scheduler-0\" (UID: \"86045f5e-defd-4c68-a582-c51c9c26e5c7\") " pod="openstack/cinder-scheduler-0" Jan 21 11:21:08 crc kubenswrapper[4881]: I0121 11:21:08.292032 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/86045f5e-defd-4c68-a582-c51c9c26e5c7-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"86045f5e-defd-4c68-a582-c51c9c26e5c7\") " pod="openstack/cinder-scheduler-0" Jan 21 11:21:08 crc kubenswrapper[4881]: I0121 11:21:08.292151 4881 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/86045f5e-defd-4c68-a582-c51c9c26e5c7-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"86045f5e-defd-4c68-a582-c51c9c26e5c7\") " pod="openstack/cinder-scheduler-0" Jan 21 11:21:08 crc kubenswrapper[4881]: I0121 11:21:08.382511 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-69f96db49f-qzf9p"] Jan 21 11:21:08 crc kubenswrapper[4881]: I0121 11:21:08.385028 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-69f96db49f-qzf9p" podUID="d2ecfd63-c654-42e9-b324-22c02d21b506" containerName="dnsmasq-dns" containerID="cri-o://be5a6f1470e765f48f097fc450f52d809f8dde1c774ca2b5463ea172b9bb0587" gracePeriod=10 Jan 21 11:21:08 crc kubenswrapper[4881]: I0121 11:21:08.385413 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"75119e97-b896-4b71-ab1f-28db45a4606d","Type":"ContainerStarted","Data":"899f70ee131f6e530963ca573a67921fd95a35fbdae76709308568e8f0b66d06"} Jan 21 11:21:08 crc kubenswrapper[4881]: I0121 11:21:08.431304 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/86045f5e-defd-4c68-a582-c51c9c26e5c7-scripts\") pod \"cinder-scheduler-0\" (UID: \"86045f5e-defd-4c68-a582-c51c9c26e5c7\") " pod="openstack/cinder-scheduler-0" Jan 21 11:21:08 crc kubenswrapper[4881]: I0121 11:21:08.431509 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/86045f5e-defd-4c68-a582-c51c9c26e5c7-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"86045f5e-defd-4c68-a582-c51c9c26e5c7\") " pod="openstack/cinder-scheduler-0" Jan 21 11:21:08 crc kubenswrapper[4881]: I0121 11:21:08.431706 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/86045f5e-defd-4c68-a582-c51c9c26e5c7-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"86045f5e-defd-4c68-a582-c51c9c26e5c7\") " pod="openstack/cinder-scheduler-0" Jan 21 11:21:08 crc kubenswrapper[4881]: I0121 11:21:08.431755 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h42sc\" (UniqueName: \"kubernetes.io/projected/86045f5e-defd-4c68-a582-c51c9c26e5c7-kube-api-access-h42sc\") pod \"cinder-scheduler-0\" (UID: \"86045f5e-defd-4c68-a582-c51c9c26e5c7\") " pod="openstack/cinder-scheduler-0" Jan 21 11:21:08 crc kubenswrapper[4881]: I0121 11:21:08.431810 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/86045f5e-defd-4c68-a582-c51c9c26e5c7-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"86045f5e-defd-4c68-a582-c51c9c26e5c7\") " pod="openstack/cinder-scheduler-0" Jan 21 11:21:08 crc kubenswrapper[4881]: I0121 11:21:08.434848 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/86045f5e-defd-4c68-a582-c51c9c26e5c7-config-data\") pod \"cinder-scheduler-0\" (UID: \"86045f5e-defd-4c68-a582-c51c9c26e5c7\") " pod="openstack/cinder-scheduler-0" Jan 21 11:21:08 crc kubenswrapper[4881]: I0121 11:21:08.434964 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/86045f5e-defd-4c68-a582-c51c9c26e5c7-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"86045f5e-defd-4c68-a582-c51c9c26e5c7\") " pod="openstack/cinder-scheduler-0" Jan 21 11:21:08 crc kubenswrapper[4881]: I0121 11:21:08.446141 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/86045f5e-defd-4c68-a582-c51c9c26e5c7-scripts\") pod \"cinder-scheduler-0\" (UID: \"86045f5e-defd-4c68-a582-c51c9c26e5c7\") " pod="openstack/cinder-scheduler-0" Jan 21 11:21:08 crc kubenswrapper[4881]: I0121 11:21:08.497665 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86045f5e-defd-4c68-a582-c51c9c26e5c7-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"86045f5e-defd-4c68-a582-c51c9c26e5c7\") " pod="openstack/cinder-scheduler-0" Jan 21 11:21:08 crc kubenswrapper[4881]: I0121 11:21:08.528563 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-77b944d67-mw2nq"] Jan 21 11:21:08 crc kubenswrapper[4881]: I0121 11:21:08.481017 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/86045f5e-defd-4c68-a582-c51c9c26e5c7-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"86045f5e-defd-4c68-a582-c51c9c26e5c7\") " pod="openstack/cinder-scheduler-0" Jan 21 11:21:08 crc kubenswrapper[4881]: I0121 11:21:08.531018 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/86045f5e-defd-4c68-a582-c51c9c26e5c7-config-data\") pod \"cinder-scheduler-0\" (UID: \"86045f5e-defd-4c68-a582-c51c9c26e5c7\") " pod="openstack/cinder-scheduler-0" Jan 21 11:21:08 crc kubenswrapper[4881]: I0121 11:21:08.549934 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-77b944d67-mw2nq"] Jan 21 11:21:08 crc kubenswrapper[4881]: I0121 11:21:08.550088 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-77b944d67-mw2nq" Jan 21 11:21:08 crc kubenswrapper[4881]: I0121 11:21:08.561442 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h42sc\" (UniqueName: \"kubernetes.io/projected/86045f5e-defd-4c68-a582-c51c9c26e5c7-kube-api-access-h42sc\") pod \"cinder-scheduler-0\" (UID: \"86045f5e-defd-4c68-a582-c51c9c26e5c7\") " pod="openstack/cinder-scheduler-0" Jan 21 11:21:08 crc kubenswrapper[4881]: I0121 11:21:08.657653 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h8kc6\" (UniqueName: \"kubernetes.io/projected/b0326de6-1c1a-4e21-9592-ae86b46d7a3f-kube-api-access-h8kc6\") pod \"dnsmasq-dns-77b944d67-mw2nq\" (UID: \"b0326de6-1c1a-4e21-9592-ae86b46d7a3f\") " pod="openstack/dnsmasq-dns-77b944d67-mw2nq" Jan 21 11:21:08 crc kubenswrapper[4881]: I0121 11:21:08.657744 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b0326de6-1c1a-4e21-9592-ae86b46d7a3f-ovsdbserver-sb\") pod \"dnsmasq-dns-77b944d67-mw2nq\" (UID: \"b0326de6-1c1a-4e21-9592-ae86b46d7a3f\") " pod="openstack/dnsmasq-dns-77b944d67-mw2nq" Jan 21 11:21:08 crc kubenswrapper[4881]: I0121 11:21:08.657830 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b0326de6-1c1a-4e21-9592-ae86b46d7a3f-dns-svc\") pod \"dnsmasq-dns-77b944d67-mw2nq\" (UID: \"b0326de6-1c1a-4e21-9592-ae86b46d7a3f\") " pod="openstack/dnsmasq-dns-77b944d67-mw2nq" Jan 21 11:21:08 crc kubenswrapper[4881]: I0121 11:21:08.657852 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b0326de6-1c1a-4e21-9592-ae86b46d7a3f-config\") pod \"dnsmasq-dns-77b944d67-mw2nq\" (UID: \"b0326de6-1c1a-4e21-9592-ae86b46d7a3f\") " pod="openstack/dnsmasq-dns-77b944d67-mw2nq" Jan 21 11:21:08 crc kubenswrapper[4881]: I0121 11:21:08.657879 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b0326de6-1c1a-4e21-9592-ae86b46d7a3f-ovsdbserver-nb\") pod \"dnsmasq-dns-77b944d67-mw2nq\" (UID: \"b0326de6-1c1a-4e21-9592-ae86b46d7a3f\") " pod="openstack/dnsmasq-dns-77b944d67-mw2nq" Jan 21 11:21:08 crc kubenswrapper[4881]: I0121 11:21:08.657940 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b0326de6-1c1a-4e21-9592-ae86b46d7a3f-dns-swift-storage-0\") pod \"dnsmasq-dns-77b944d67-mw2nq\" (UID: \"b0326de6-1c1a-4e21-9592-ae86b46d7a3f\") " pod="openstack/dnsmasq-dns-77b944d67-mw2nq" Jan 21 11:21:08 crc kubenswrapper[4881]: I0121 11:21:08.761335 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h8kc6\" (UniqueName: \"kubernetes.io/projected/b0326de6-1c1a-4e21-9592-ae86b46d7a3f-kube-api-access-h8kc6\") pod \"dnsmasq-dns-77b944d67-mw2nq\" (UID: \"b0326de6-1c1a-4e21-9592-ae86b46d7a3f\") " pod="openstack/dnsmasq-dns-77b944d67-mw2nq" Jan 21 11:21:08 crc kubenswrapper[4881]: I0121 11:21:08.761741 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b0326de6-1c1a-4e21-9592-ae86b46d7a3f-ovsdbserver-sb\") pod 
\"dnsmasq-dns-77b944d67-mw2nq\" (UID: \"b0326de6-1c1a-4e21-9592-ae86b46d7a3f\") " pod="openstack/dnsmasq-dns-77b944d67-mw2nq" Jan 21 11:21:08 crc kubenswrapper[4881]: I0121 11:21:08.761838 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b0326de6-1c1a-4e21-9592-ae86b46d7a3f-dns-svc\") pod \"dnsmasq-dns-77b944d67-mw2nq\" (UID: \"b0326de6-1c1a-4e21-9592-ae86b46d7a3f\") " pod="openstack/dnsmasq-dns-77b944d67-mw2nq" Jan 21 11:21:08 crc kubenswrapper[4881]: I0121 11:21:08.761867 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b0326de6-1c1a-4e21-9592-ae86b46d7a3f-config\") pod \"dnsmasq-dns-77b944d67-mw2nq\" (UID: \"b0326de6-1c1a-4e21-9592-ae86b46d7a3f\") " pod="openstack/dnsmasq-dns-77b944d67-mw2nq" Jan 21 11:21:08 crc kubenswrapper[4881]: I0121 11:21:08.761905 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b0326de6-1c1a-4e21-9592-ae86b46d7a3f-ovsdbserver-nb\") pod \"dnsmasq-dns-77b944d67-mw2nq\" (UID: \"b0326de6-1c1a-4e21-9592-ae86b46d7a3f\") " pod="openstack/dnsmasq-dns-77b944d67-mw2nq" Jan 21 11:21:08 crc kubenswrapper[4881]: I0121 11:21:08.761984 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b0326de6-1c1a-4e21-9592-ae86b46d7a3f-dns-swift-storage-0\") pod \"dnsmasq-dns-77b944d67-mw2nq\" (UID: \"b0326de6-1c1a-4e21-9592-ae86b46d7a3f\") " pod="openstack/dnsmasq-dns-77b944d67-mw2nq" Jan 21 11:21:08 crc kubenswrapper[4881]: I0121 11:21:08.763166 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b0326de6-1c1a-4e21-9592-ae86b46d7a3f-dns-swift-storage-0\") pod \"dnsmasq-dns-77b944d67-mw2nq\" (UID: \"b0326de6-1c1a-4e21-9592-ae86b46d7a3f\") " pod="openstack/dnsmasq-dns-77b944d67-mw2nq" Jan 21 11:21:08 crc kubenswrapper[4881]: I0121 11:21:08.764145 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b0326de6-1c1a-4e21-9592-ae86b46d7a3f-ovsdbserver-sb\") pod \"dnsmasq-dns-77b944d67-mw2nq\" (UID: \"b0326de6-1c1a-4e21-9592-ae86b46d7a3f\") " pod="openstack/dnsmasq-dns-77b944d67-mw2nq" Jan 21 11:21:08 crc kubenswrapper[4881]: I0121 11:21:08.764750 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b0326de6-1c1a-4e21-9592-ae86b46d7a3f-dns-svc\") pod \"dnsmasq-dns-77b944d67-mw2nq\" (UID: \"b0326de6-1c1a-4e21-9592-ae86b46d7a3f\") " pod="openstack/dnsmasq-dns-77b944d67-mw2nq" Jan 21 11:21:08 crc kubenswrapper[4881]: I0121 11:21:08.765548 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b0326de6-1c1a-4e21-9592-ae86b46d7a3f-config\") pod \"dnsmasq-dns-77b944d67-mw2nq\" (UID: \"b0326de6-1c1a-4e21-9592-ae86b46d7a3f\") " pod="openstack/dnsmasq-dns-77b944d67-mw2nq" Jan 21 11:21:08 crc kubenswrapper[4881]: I0121 11:21:08.765968 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b0326de6-1c1a-4e21-9592-ae86b46d7a3f-ovsdbserver-nb\") pod \"dnsmasq-dns-77b944d67-mw2nq\" (UID: \"b0326de6-1c1a-4e21-9592-ae86b46d7a3f\") " pod="openstack/dnsmasq-dns-77b944d67-mw2nq" Jan 21 11:21:08 crc 
kubenswrapper[4881]: I0121 11:21:08.811668 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h8kc6\" (UniqueName: \"kubernetes.io/projected/b0326de6-1c1a-4e21-9592-ae86b46d7a3f-kube-api-access-h8kc6\") pod \"dnsmasq-dns-77b944d67-mw2nq\" (UID: \"b0326de6-1c1a-4e21-9592-ae86b46d7a3f\") " pod="openstack/dnsmasq-dns-77b944d67-mw2nq" Jan 21 11:21:08 crc kubenswrapper[4881]: I0121 11:21:08.848416 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 21 11:21:08 crc kubenswrapper[4881]: I0121 11:21:08.981405 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-77b944d67-mw2nq" Jan 21 11:21:08 crc kubenswrapper[4881]: I0121 11:21:08.981544 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 21 11:21:08 crc kubenswrapper[4881]: I0121 11:21:08.983801 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 21 11:21:08 crc kubenswrapper[4881]: I0121 11:21:08.990311 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 21 11:21:08 crc kubenswrapper[4881]: I0121 11:21:08.992900 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 21 11:21:09 crc kubenswrapper[4881]: I0121 11:21:09.201821 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/48d1a17c-f3f7-4da9-bab3-d60bf8acf261-scripts\") pod \"cinder-api-0\" (UID: \"48d1a17c-f3f7-4da9-bab3-d60bf8acf261\") " pod="openstack/cinder-api-0" Jan 21 11:21:09 crc kubenswrapper[4881]: I0121 11:21:09.203545 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/48d1a17c-f3f7-4da9-bab3-d60bf8acf261-etc-machine-id\") pod \"cinder-api-0\" (UID: \"48d1a17c-f3f7-4da9-bab3-d60bf8acf261\") " pod="openstack/cinder-api-0" Jan 21 11:21:09 crc kubenswrapper[4881]: I0121 11:21:09.203665 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48d1a17c-f3f7-4da9-bab3-d60bf8acf261-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"48d1a17c-f3f7-4da9-bab3-d60bf8acf261\") " pod="openstack/cinder-api-0" Jan 21 11:21:09 crc kubenswrapper[4881]: I0121 11:21:09.203687 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/48d1a17c-f3f7-4da9-bab3-d60bf8acf261-config-data-custom\") pod \"cinder-api-0\" (UID: \"48d1a17c-f3f7-4da9-bab3-d60bf8acf261\") " pod="openstack/cinder-api-0" Jan 21 11:21:09 crc kubenswrapper[4881]: I0121 11:21:09.203712 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wwkdn\" (UniqueName: \"kubernetes.io/projected/48d1a17c-f3f7-4da9-bab3-d60bf8acf261-kube-api-access-wwkdn\") pod \"cinder-api-0\" (UID: \"48d1a17c-f3f7-4da9-bab3-d60bf8acf261\") " pod="openstack/cinder-api-0" Jan 21 11:21:09 crc kubenswrapper[4881]: I0121 11:21:09.203819 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/48d1a17c-f3f7-4da9-bab3-d60bf8acf261-config-data\") pod \"cinder-api-0\" (UID: 
\"48d1a17c-f3f7-4da9-bab3-d60bf8acf261\") " pod="openstack/cinder-api-0" Jan 21 11:21:09 crc kubenswrapper[4881]: I0121 11:21:09.203854 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/48d1a17c-f3f7-4da9-bab3-d60bf8acf261-logs\") pod \"cinder-api-0\" (UID: \"48d1a17c-f3f7-4da9-bab3-d60bf8acf261\") " pod="openstack/cinder-api-0" Jan 21 11:21:09 crc kubenswrapper[4881]: I0121 11:21:09.310902 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48d1a17c-f3f7-4da9-bab3-d60bf8acf261-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"48d1a17c-f3f7-4da9-bab3-d60bf8acf261\") " pod="openstack/cinder-api-0" Jan 21 11:21:09 crc kubenswrapper[4881]: I0121 11:21:09.311009 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/48d1a17c-f3f7-4da9-bab3-d60bf8acf261-config-data-custom\") pod \"cinder-api-0\" (UID: \"48d1a17c-f3f7-4da9-bab3-d60bf8acf261\") " pod="openstack/cinder-api-0" Jan 21 11:21:09 crc kubenswrapper[4881]: I0121 11:21:09.311059 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wwkdn\" (UniqueName: \"kubernetes.io/projected/48d1a17c-f3f7-4da9-bab3-d60bf8acf261-kube-api-access-wwkdn\") pod \"cinder-api-0\" (UID: \"48d1a17c-f3f7-4da9-bab3-d60bf8acf261\") " pod="openstack/cinder-api-0" Jan 21 11:21:09 crc kubenswrapper[4881]: I0121 11:21:09.311166 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/48d1a17c-f3f7-4da9-bab3-d60bf8acf261-config-data\") pod \"cinder-api-0\" (UID: \"48d1a17c-f3f7-4da9-bab3-d60bf8acf261\") " pod="openstack/cinder-api-0" Jan 21 11:21:09 crc kubenswrapper[4881]: I0121 11:21:09.311237 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/48d1a17c-f3f7-4da9-bab3-d60bf8acf261-logs\") pod \"cinder-api-0\" (UID: \"48d1a17c-f3f7-4da9-bab3-d60bf8acf261\") " pod="openstack/cinder-api-0" Jan 21 11:21:09 crc kubenswrapper[4881]: I0121 11:21:09.311323 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/48d1a17c-f3f7-4da9-bab3-d60bf8acf261-scripts\") pod \"cinder-api-0\" (UID: \"48d1a17c-f3f7-4da9-bab3-d60bf8acf261\") " pod="openstack/cinder-api-0" Jan 21 11:21:09 crc kubenswrapper[4881]: I0121 11:21:09.311469 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/48d1a17c-f3f7-4da9-bab3-d60bf8acf261-etc-machine-id\") pod \"cinder-api-0\" (UID: \"48d1a17c-f3f7-4da9-bab3-d60bf8acf261\") " pod="openstack/cinder-api-0" Jan 21 11:21:09 crc kubenswrapper[4881]: I0121 11:21:09.311657 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/48d1a17c-f3f7-4da9-bab3-d60bf8acf261-etc-machine-id\") pod \"cinder-api-0\" (UID: \"48d1a17c-f3f7-4da9-bab3-d60bf8acf261\") " pod="openstack/cinder-api-0" Jan 21 11:21:09 crc kubenswrapper[4881]: I0121 11:21:09.320161 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/48d1a17c-f3f7-4da9-bab3-d60bf8acf261-logs\") pod \"cinder-api-0\" (UID: \"48d1a17c-f3f7-4da9-bab3-d60bf8acf261\") " 
pod="openstack/cinder-api-0" Jan 21 11:21:09 crc kubenswrapper[4881]: I0121 11:21:09.335909 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/48d1a17c-f3f7-4da9-bab3-d60bf8acf261-config-data-custom\") pod \"cinder-api-0\" (UID: \"48d1a17c-f3f7-4da9-bab3-d60bf8acf261\") " pod="openstack/cinder-api-0" Jan 21 11:21:09 crc kubenswrapper[4881]: I0121 11:21:09.343649 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48d1a17c-f3f7-4da9-bab3-d60bf8acf261-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"48d1a17c-f3f7-4da9-bab3-d60bf8acf261\") " pod="openstack/cinder-api-0" Jan 21 11:21:09 crc kubenswrapper[4881]: I0121 11:21:09.346701 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/48d1a17c-f3f7-4da9-bab3-d60bf8acf261-config-data\") pod \"cinder-api-0\" (UID: \"48d1a17c-f3f7-4da9-bab3-d60bf8acf261\") " pod="openstack/cinder-api-0" Jan 21 11:21:09 crc kubenswrapper[4881]: I0121 11:21:09.364000 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wwkdn\" (UniqueName: \"kubernetes.io/projected/48d1a17c-f3f7-4da9-bab3-d60bf8acf261-kube-api-access-wwkdn\") pod \"cinder-api-0\" (UID: \"48d1a17c-f3f7-4da9-bab3-d60bf8acf261\") " pod="openstack/cinder-api-0" Jan 21 11:21:09 crc kubenswrapper[4881]: I0121 11:21:09.378321 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/48d1a17c-f3f7-4da9-bab3-d60bf8acf261-scripts\") pod \"cinder-api-0\" (UID: \"48d1a17c-f3f7-4da9-bab3-d60bf8acf261\") " pod="openstack/cinder-api-0" Jan 21 11:21:09 crc kubenswrapper[4881]: I0121 11:21:09.452948 4881 generic.go:334] "Generic (PLEG): container finished" podID="d2ecfd63-c654-42e9-b324-22c02d21b506" containerID="be5a6f1470e765f48f097fc450f52d809f8dde1c774ca2b5463ea172b9bb0587" exitCode=0 Jan 21 11:21:09 crc kubenswrapper[4881]: I0121 11:21:09.452999 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-69f96db49f-qzf9p" event={"ID":"d2ecfd63-c654-42e9-b324-22c02d21b506","Type":"ContainerDied","Data":"be5a6f1470e765f48f097fc450f52d809f8dde1c774ca2b5463ea172b9bb0587"} Jan 21 11:21:09 crc kubenswrapper[4881]: I0121 11:21:09.615233 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 21 11:21:09 crc kubenswrapper[4881]: I0121 11:21:09.778365 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 21 11:21:10 crc kubenswrapper[4881]: I0121 11:21:10.594641 4881 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-ncbfx" podUID="6a8083e9-c68d-40ca-bde9-b84e43b65ab8" containerName="registry-server" probeResult="failure" output=< Jan 21 11:21:10 crc kubenswrapper[4881]: timeout: failed to connect service ":50051" within 1s Jan 21 11:21:10 crc kubenswrapper[4881]: > Jan 21 11:21:10 crc kubenswrapper[4881]: I0121 11:21:10.833191 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 21 11:21:11 crc kubenswrapper[4881]: I0121 11:21:11.600397 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-77b944d67-mw2nq"] Jan 21 11:21:11 crc kubenswrapper[4881]: I0121 11:21:11.862195 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-6cbb6fc6b6-tlfhj" Jan 21 11:21:12 crc kubenswrapper[4881]: I0121 11:21:12.312066 4881 scope.go:117] "RemoveContainer" containerID="61f6b4008e5afe3c84bc4dbf116ba996728224955a2729f3dc2de6c1a2eeb445" Jan 21 11:21:13 crc kubenswrapper[4881]: I0121 11:21:13.207346 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-6cbb6fc6b6-tlfhj" Jan 21 11:21:15 crc kubenswrapper[4881]: I0121 11:21:15.124465 4881 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-69c96776fd-k2z88" podUID="2f516fb6-322b-4eee-9d4d-a10176959bbb" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.160:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.160:8443: connect: connection refused" Jan 21 11:21:15 crc kubenswrapper[4881]: I0121 11:21:15.959067 4881 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-69f96db49f-qzf9p" podUID="d2ecfd63-c654-42e9-b324-22c02d21b506" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.175:5353: i/o timeout" Jan 21 11:21:16 crc kubenswrapper[4881]: I0121 11:21:16.015880 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-7d6f7f4cc8-c4tt4" Jan 21 11:21:16 crc kubenswrapper[4881]: I0121 11:21:16.113972 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-69f96db49f-qzf9p" Jan 21 11:21:16 crc kubenswrapper[4881]: I0121 11:21:16.176342 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-7d6f7f4cc8-c4tt4" Jan 21 11:21:16 crc kubenswrapper[4881]: I0121 11:21:16.222494 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d2ecfd63-c654-42e9-b324-22c02d21b506-ovsdbserver-sb\") pod \"d2ecfd63-c654-42e9-b324-22c02d21b506\" (UID: \"d2ecfd63-c654-42e9-b324-22c02d21b506\") " Jan 21 11:21:16 crc kubenswrapper[4881]: I0121 11:21:16.222631 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sfwwk\" (UniqueName: \"kubernetes.io/projected/d2ecfd63-c654-42e9-b324-22c02d21b506-kube-api-access-sfwwk\") pod \"d2ecfd63-c654-42e9-b324-22c02d21b506\" (UID: \"d2ecfd63-c654-42e9-b324-22c02d21b506\") " Jan 21 11:21:16 crc kubenswrapper[4881]: I0121 11:21:16.222673 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d2ecfd63-c654-42e9-b324-22c02d21b506-dns-svc\") pod \"d2ecfd63-c654-42e9-b324-22c02d21b506\" (UID: \"d2ecfd63-c654-42e9-b324-22c02d21b506\") " Jan 21 11:21:16 crc kubenswrapper[4881]: I0121 11:21:16.222693 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2ecfd63-c654-42e9-b324-22c02d21b506-config\") pod \"d2ecfd63-c654-42e9-b324-22c02d21b506\" (UID: \"d2ecfd63-c654-42e9-b324-22c02d21b506\") " Jan 21 11:21:16 crc kubenswrapper[4881]: I0121 11:21:16.222946 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d2ecfd63-c654-42e9-b324-22c02d21b506-ovsdbserver-nb\") pod \"d2ecfd63-c654-42e9-b324-22c02d21b506\" (UID: \"d2ecfd63-c654-42e9-b324-22c02d21b506\") " Jan 21 11:21:16 crc kubenswrapper[4881]: I0121 11:21:16.223068 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d2ecfd63-c654-42e9-b324-22c02d21b506-dns-swift-storage-0\") pod \"d2ecfd63-c654-42e9-b324-22c02d21b506\" (UID: \"d2ecfd63-c654-42e9-b324-22c02d21b506\") " Jan 21 11:21:16 crc kubenswrapper[4881]: I0121 11:21:16.270094 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d2ecfd63-c654-42e9-b324-22c02d21b506-kube-api-access-sfwwk" (OuterVolumeSpecName: "kube-api-access-sfwwk") pod "d2ecfd63-c654-42e9-b324-22c02d21b506" (UID: "d2ecfd63-c654-42e9-b324-22c02d21b506"). InnerVolumeSpecName "kube-api-access-sfwwk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:21:16 crc kubenswrapper[4881]: I0121 11:21:16.272296 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-6cbb6fc6b6-tlfhj"] Jan 21 11:21:16 crc kubenswrapper[4881]: I0121 11:21:16.272583 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-6cbb6fc6b6-tlfhj" podUID="85f05121-bd30-4b3f-936d-dc20e30fca06" containerName="barbican-api-log" containerID="cri-o://9af42ead045471788f06fad27bb79fcdf735280d710e2b7eaa693c5e2301f9f2" gracePeriod=30 Jan 21 11:21:16 crc kubenswrapper[4881]: I0121 11:21:16.272808 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-6cbb6fc6b6-tlfhj" podUID="85f05121-bd30-4b3f-936d-dc20e30fca06" containerName="barbican-api" containerID="cri-o://791785eb6fe44e62deb830a72f9b0fb2d75b8a52cfe9209138c6ef5d0b47ed74" gracePeriod=30 Jan 21 11:21:16 crc kubenswrapper[4881]: I0121 11:21:16.327135 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sfwwk\" (UniqueName: \"kubernetes.io/projected/d2ecfd63-c654-42e9-b324-22c02d21b506-kube-api-access-sfwwk\") on node \"crc\" DevicePath \"\"" Jan 21 11:21:16 crc kubenswrapper[4881]: I0121 11:21:16.356647 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d2ecfd63-c654-42e9-b324-22c02d21b506-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "d2ecfd63-c654-42e9-b324-22c02d21b506" (UID: "d2ecfd63-c654-42e9-b324-22c02d21b506"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:21:16 crc kubenswrapper[4881]: I0121 11:21:16.449173 4881 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d2ecfd63-c654-42e9-b324-22c02d21b506-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 21 11:21:16 crc kubenswrapper[4881]: I0121 11:21:16.477870 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d2ecfd63-c654-42e9-b324-22c02d21b506-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "d2ecfd63-c654-42e9-b324-22c02d21b506" (UID: "d2ecfd63-c654-42e9-b324-22c02d21b506"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:21:16 crc kubenswrapper[4881]: I0121 11:21:16.513112 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d2ecfd63-c654-42e9-b324-22c02d21b506-config" (OuterVolumeSpecName: "config") pod "d2ecfd63-c654-42e9-b324-22c02d21b506" (UID: "d2ecfd63-c654-42e9-b324-22c02d21b506"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:21:16 crc kubenswrapper[4881]: I0121 11:21:16.527482 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d2ecfd63-c654-42e9-b324-22c02d21b506-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "d2ecfd63-c654-42e9-b324-22c02d21b506" (UID: "d2ecfd63-c654-42e9-b324-22c02d21b506"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:21:16 crc kubenswrapper[4881]: I0121 11:21:16.528362 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d2ecfd63-c654-42e9-b324-22c02d21b506-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d2ecfd63-c654-42e9-b324-22c02d21b506" (UID: "d2ecfd63-c654-42e9-b324-22c02d21b506"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:21:16 crc kubenswrapper[4881]: I0121 11:21:16.550724 4881 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d2ecfd63-c654-42e9-b324-22c02d21b506-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 21 11:21:16 crc kubenswrapper[4881]: I0121 11:21:16.550759 4881 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d2ecfd63-c654-42e9-b324-22c02d21b506-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 21 11:21:16 crc kubenswrapper[4881]: I0121 11:21:16.550771 4881 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d2ecfd63-c654-42e9-b324-22c02d21b506-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 21 11:21:16 crc kubenswrapper[4881]: I0121 11:21:16.583851 4881 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2ecfd63-c654-42e9-b324-22c02d21b506-config\") on node \"crc\" DevicePath \"\"" Jan 21 11:21:16 crc kubenswrapper[4881]: I0121 11:21:16.618293 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"86045f5e-defd-4c68-a582-c51c9c26e5c7","Type":"ContainerStarted","Data":"37f117f350f4a5bb6279fc8d328dfd979286450f9c150553b8cff2ebf1ef387c"} Jan 21 11:21:16 crc kubenswrapper[4881]: I0121 11:21:16.644217 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"ee4e7116-c2cd-43d5-af6b-9f30b5053e0e","Type":"ContainerStarted","Data":"5ccae223d32b8d30267f4d247c29e77d1942427c122a26bc75e9b00b89fa3bc0"} Jan 21 11:21:16 crc kubenswrapper[4881]: I0121 11:21:16.677006 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-69f96db49f-qzf9p" event={"ID":"d2ecfd63-c654-42e9-b324-22c02d21b506","Type":"ContainerDied","Data":"b2d41124075aed0e5d3723eb39479bb34ae77563466138e26829e292a42a163c"} Jan 21 11:21:16 crc kubenswrapper[4881]: I0121 11:21:16.677075 4881 scope.go:117] "RemoveContainer" containerID="be5a6f1470e765f48f097fc450f52d809f8dde1c774ca2b5463ea172b9bb0587" Jan 21 11:21:16 crc kubenswrapper[4881]: I0121 11:21:16.677355 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-69f96db49f-qzf9p" Jan 21 11:21:16 crc kubenswrapper[4881]: I0121 11:21:16.704107 4881 generic.go:334] "Generic (PLEG): container finished" podID="85f05121-bd30-4b3f-936d-dc20e30fca06" containerID="9af42ead045471788f06fad27bb79fcdf735280d710e2b7eaa693c5e2301f9f2" exitCode=143 Jan 21 11:21:16 crc kubenswrapper[4881]: I0121 11:21:16.704456 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6cbb6fc6b6-tlfhj" event={"ID":"85f05121-bd30-4b3f-936d-dc20e30fca06","Type":"ContainerDied","Data":"9af42ead045471788f06fad27bb79fcdf735280d710e2b7eaa693c5e2301f9f2"} Jan 21 11:21:16 crc kubenswrapper[4881]: I0121 11:21:16.726074 4881 generic.go:334] "Generic (PLEG): container finished" podID="b0326de6-1c1a-4e21-9592-ae86b46d7a3f" containerID="da41cb40adea77808d3ff28a4531a5534241d5f62e3dd8c6c92475b8c399e085" exitCode=0 Jan 21 11:21:16 crc kubenswrapper[4881]: I0121 11:21:16.726200 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77b944d67-mw2nq" event={"ID":"b0326de6-1c1a-4e21-9592-ae86b46d7a3f","Type":"ContainerDied","Data":"da41cb40adea77808d3ff28a4531a5534241d5f62e3dd8c6c92475b8c399e085"} Jan 21 11:21:16 crc kubenswrapper[4881]: I0121 11:21:16.726225 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77b944d67-mw2nq" event={"ID":"b0326de6-1c1a-4e21-9592-ae86b46d7a3f","Type":"ContainerStarted","Data":"74a53a8b6fc2a23210eccd53e198b676934ec49275b7b25077e7e841617ab615"} Jan 21 11:21:16 crc kubenswrapper[4881]: I0121 11:21:16.758740 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-69f96db49f-qzf9p"] Jan 21 11:21:16 crc kubenswrapper[4881]: I0121 11:21:16.800093 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-69f96db49f-qzf9p"] Jan 21 11:21:16 crc kubenswrapper[4881]: I0121 11:21:16.930926 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 21 11:21:16 crc kubenswrapper[4881]: I0121 11:21:16.982109 4881 scope.go:117] "RemoveContainer" containerID="ab96b5d1c6a41e54c1b2168c0a309330a7285a8a3d539c811f7b6cd696883974" Jan 21 11:21:16 crc kubenswrapper[4881]: W0121 11:21:16.994494 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod48d1a17c_f3f7_4da9_bab3_d60bf8acf261.slice/crio-c0535afd842bb7eb500134a0de7821f679adee9739e8044161792e4e82bff780 WatchSource:0}: Error finding container c0535afd842bb7eb500134a0de7821f679adee9739e8044161792e4e82bff780: Status 404 returned error can't find the container with id c0535afd842bb7eb500134a0de7821f679adee9739e8044161792e4e82bff780 Jan 21 11:21:17 crc kubenswrapper[4881]: I0121 11:21:17.327048 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d2ecfd63-c654-42e9-b324-22c02d21b506" path="/var/lib/kubelet/pods/d2ecfd63-c654-42e9-b324-22c02d21b506/volumes" Jan 21 11:21:17 crc kubenswrapper[4881]: I0121 11:21:17.756128 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"48d1a17c-f3f7-4da9-bab3-d60bf8acf261","Type":"ContainerStarted","Data":"c0535afd842bb7eb500134a0de7821f679adee9739e8044161792e4e82bff780"} Jan 21 11:21:18 crc kubenswrapper[4881]: I0121 11:21:18.806294 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"75119e97-b896-4b71-ab1f-28db45a4606d","Type":"ContainerStarted","Data":"80eb788c6d10eab27f68e4afaa093b8aa3a02ead209347f52848e0e84c80db9f"} Jan 21 11:21:18 crc kubenswrapper[4881]: I0121 11:21:18.808272 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 21 11:21:18 crc kubenswrapper[4881]: I0121 11:21:18.816235 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77b944d67-mw2nq" event={"ID":"b0326de6-1c1a-4e21-9592-ae86b46d7a3f","Type":"ContainerStarted","Data":"74a966ab9ba8420c744ac8e1932e9ad473ca91de2100fd5d2f1bf2544fd837be"} Jan 21 11:21:18 crc kubenswrapper[4881]: I0121 11:21:18.816913 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-77b944d67-mw2nq" Jan 21 11:21:18 crc kubenswrapper[4881]: I0121 11:21:18.862436 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.405756021 podStartE2EDuration="14.862408218s" podCreationTimestamp="2026-01-21 11:21:04 +0000 UTC" firstStartedPulling="2026-01-21 11:21:05.531352615 +0000 UTC m=+1452.791309084" lastFinishedPulling="2026-01-21 11:21:16.988004812 +0000 UTC m=+1464.247961281" observedRunningTime="2026-01-21 11:21:18.844144394 +0000 UTC m=+1466.104100863" watchObservedRunningTime="2026-01-21 11:21:18.862408218 +0000 UTC m=+1466.122364687" Jan 21 11:21:18 crc kubenswrapper[4881]: I0121 11:21:18.888858 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-77b944d67-mw2nq" podStartSLOduration=10.888835236 podStartE2EDuration="10.888835236s" podCreationTimestamp="2026-01-21 11:21:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 11:21:18.881134734 +0000 UTC m=+1466.141091203" watchObservedRunningTime="2026-01-21 11:21:18.888835236 +0000 UTC m=+1466.148791695" Jan 21 11:21:19 crc kubenswrapper[4881]: I0121 11:21:19.279965 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-59bf6c8c7b-wvc46" Jan 21 11:21:19 crc kubenswrapper[4881]: I0121 11:21:19.498069 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Jan 21 11:21:19 crc kubenswrapper[4881]: I0121 11:21:19.589547 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-decision-engine-0" Jan 21 11:21:19 crc kubenswrapper[4881]: I0121 11:21:19.661725 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-ncbfx" Jan 21 11:21:19 crc kubenswrapper[4881]: I0121 11:21:19.749887 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-ncbfx" Jan 21 11:21:19 crc kubenswrapper[4881]: I0121 11:21:19.837638 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-59bf6c8c7b-wvc46" Jan 21 11:21:19 crc kubenswrapper[4881]: I0121 11:21:19.915259 4881 generic.go:334] "Generic (PLEG): container finished" podID="85f05121-bd30-4b3f-936d-dc20e30fca06" containerID="791785eb6fe44e62deb830a72f9b0fb2d75b8a52cfe9209138c6ef5d0b47ed74" exitCode=0 Jan 21 11:21:19 crc kubenswrapper[4881]: I0121 11:21:19.915344 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6cbb6fc6b6-tlfhj" 
event={"ID":"85f05121-bd30-4b3f-936d-dc20e30fca06","Type":"ContainerDied","Data":"791785eb6fe44e62deb830a72f9b0fb2d75b8a52cfe9209138c6ef5d0b47ed74"} Jan 21 11:21:19 crc kubenswrapper[4881]: I0121 11:21:19.919508 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"48d1a17c-f3f7-4da9-bab3-d60bf8acf261","Type":"ContainerStarted","Data":"fb1a9c603817e07a4207e571ae05892f26734f9d1dd1a9aa64d1de0e3a66cf96"} Jan 21 11:21:19 crc kubenswrapper[4881]: I0121 11:21:19.920406 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-ncbfx"] Jan 21 11:21:19 crc kubenswrapper[4881]: I0121 11:21:19.922953 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-decision-engine-0" Jan 21 11:21:20 crc kubenswrapper[4881]: I0121 11:21:20.026193 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-decision-engine-0" Jan 21 11:21:20 crc kubenswrapper[4881]: I0121 11:21:20.131083 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-6cbb6fc6b6-tlfhj" Jan 21 11:21:20 crc kubenswrapper[4881]: I0121 11:21:20.220725 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rsnsj\" (UniqueName: \"kubernetes.io/projected/85f05121-bd30-4b3f-936d-dc20e30fca06-kube-api-access-rsnsj\") pod \"85f05121-bd30-4b3f-936d-dc20e30fca06\" (UID: \"85f05121-bd30-4b3f-936d-dc20e30fca06\") " Jan 21 11:21:20 crc kubenswrapper[4881]: I0121 11:21:20.220887 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/85f05121-bd30-4b3f-936d-dc20e30fca06-config-data\") pod \"85f05121-bd30-4b3f-936d-dc20e30fca06\" (UID: \"85f05121-bd30-4b3f-936d-dc20e30fca06\") " Jan 21 11:21:20 crc kubenswrapper[4881]: I0121 11:21:20.220929 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/85f05121-bd30-4b3f-936d-dc20e30fca06-config-data-custom\") pod \"85f05121-bd30-4b3f-936d-dc20e30fca06\" (UID: \"85f05121-bd30-4b3f-936d-dc20e30fca06\") " Jan 21 11:21:20 crc kubenswrapper[4881]: I0121 11:21:20.221096 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/85f05121-bd30-4b3f-936d-dc20e30fca06-logs\") pod \"85f05121-bd30-4b3f-936d-dc20e30fca06\" (UID: \"85f05121-bd30-4b3f-936d-dc20e30fca06\") " Jan 21 11:21:20 crc kubenswrapper[4881]: I0121 11:21:20.221130 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85f05121-bd30-4b3f-936d-dc20e30fca06-combined-ca-bundle\") pod \"85f05121-bd30-4b3f-936d-dc20e30fca06\" (UID: \"85f05121-bd30-4b3f-936d-dc20e30fca06\") " Jan 21 11:21:20 crc kubenswrapper[4881]: I0121 11:21:20.224587 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/85f05121-bd30-4b3f-936d-dc20e30fca06-logs" (OuterVolumeSpecName: "logs") pod "85f05121-bd30-4b3f-936d-dc20e30fca06" (UID: "85f05121-bd30-4b3f-936d-dc20e30fca06"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 11:21:20 crc kubenswrapper[4881]: I0121 11:21:20.229370 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/85f05121-bd30-4b3f-936d-dc20e30fca06-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "85f05121-bd30-4b3f-936d-dc20e30fca06" (UID: "85f05121-bd30-4b3f-936d-dc20e30fca06"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:21:20 crc kubenswrapper[4881]: I0121 11:21:20.242391 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/85f05121-bd30-4b3f-936d-dc20e30fca06-kube-api-access-rsnsj" (OuterVolumeSpecName: "kube-api-access-rsnsj") pod "85f05121-bd30-4b3f-936d-dc20e30fca06" (UID: "85f05121-bd30-4b3f-936d-dc20e30fca06"). InnerVolumeSpecName "kube-api-access-rsnsj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:21:20 crc kubenswrapper[4881]: I0121 11:21:20.280288 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/85f05121-bd30-4b3f-936d-dc20e30fca06-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "85f05121-bd30-4b3f-936d-dc20e30fca06" (UID: "85f05121-bd30-4b3f-936d-dc20e30fca06"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:21:20 crc kubenswrapper[4881]: I0121 11:21:20.308731 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/85f05121-bd30-4b3f-936d-dc20e30fca06-config-data" (OuterVolumeSpecName: "config-data") pod "85f05121-bd30-4b3f-936d-dc20e30fca06" (UID: "85f05121-bd30-4b3f-936d-dc20e30fca06"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:21:20 crc kubenswrapper[4881]: I0121 11:21:20.323584 4881 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/85f05121-bd30-4b3f-936d-dc20e30fca06-logs\") on node \"crc\" DevicePath \"\"" Jan 21 11:21:20 crc kubenswrapper[4881]: I0121 11:21:20.323622 4881 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85f05121-bd30-4b3f-936d-dc20e30fca06-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 11:21:20 crc kubenswrapper[4881]: I0121 11:21:20.323635 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rsnsj\" (UniqueName: \"kubernetes.io/projected/85f05121-bd30-4b3f-936d-dc20e30fca06-kube-api-access-rsnsj\") on node \"crc\" DevicePath \"\"" Jan 21 11:21:20 crc kubenswrapper[4881]: I0121 11:21:20.323646 4881 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/85f05121-bd30-4b3f-936d-dc20e30fca06-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 11:21:20 crc kubenswrapper[4881]: I0121 11:21:20.323669 4881 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/85f05121-bd30-4b3f-936d-dc20e30fca06-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 21 11:21:20 crc kubenswrapper[4881]: I0121 11:21:20.932647 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"86045f5e-defd-4c68-a582-c51c9c26e5c7","Type":"ContainerStarted","Data":"f179a38b8e729fdba1d50653424c543fe9ebf0803e8ecb14e1eaa90d4edb87bf"} Jan 21 11:21:20 crc kubenswrapper[4881]: I0121 11:21:20.935525 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6cbb6fc6b6-tlfhj" event={"ID":"85f05121-bd30-4b3f-936d-dc20e30fca06","Type":"ContainerDied","Data":"7876bc29105eec2a39d493ced73df7df6c703880a81ffba5229cbe6f92400377"} Jan 21 11:21:20 crc kubenswrapper[4881]: I0121 11:21:20.935586 4881 scope.go:117] "RemoveContainer" containerID="791785eb6fe44e62deb830a72f9b0fb2d75b8a52cfe9209138c6ef5d0b47ed74" Jan 21 11:21:20 crc kubenswrapper[4881]: I0121 11:21:20.935638 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-6cbb6fc6b6-tlfhj" Jan 21 11:21:20 crc kubenswrapper[4881]: I0121 11:21:20.945209 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"48d1a17c-f3f7-4da9-bab3-d60bf8acf261","Type":"ContainerStarted","Data":"dc4b5bf988baf1bd0e9bfba02297e594c49b64cccb9777011ae90d555a839fe9"} Jan 21 11:21:20 crc kubenswrapper[4881]: I0121 11:21:20.945244 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="48d1a17c-f3f7-4da9-bab3-d60bf8acf261" containerName="cinder-api-log" containerID="cri-o://fb1a9c603817e07a4207e571ae05892f26734f9d1dd1a9aa64d1de0e3a66cf96" gracePeriod=30 Jan 21 11:21:20 crc kubenswrapper[4881]: I0121 11:21:20.945353 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="48d1a17c-f3f7-4da9-bab3-d60bf8acf261" containerName="cinder-api" containerID="cri-o://dc4b5bf988baf1bd0e9bfba02297e594c49b64cccb9777011ae90d555a839fe9" gracePeriod=30 Jan 21 11:21:20 crc kubenswrapper[4881]: I0121 11:21:20.945647 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Jan 21 11:21:20 crc kubenswrapper[4881]: I0121 11:21:20.946842 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-ncbfx" podUID="6a8083e9-c68d-40ca-bde9-b84e43b65ab8" containerName="registry-server" containerID="cri-o://bb51d30f717ade21f99893a221476158fedbab913c5592a0655c1dfba33d69c7" gracePeriod=2 Jan 21 11:21:20 crc kubenswrapper[4881]: I0121 11:21:20.962813 4881 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-69f96db49f-qzf9p" podUID="d2ecfd63-c654-42e9-b324-22c02d21b506" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.175:5353: i/o timeout" Jan 21 11:21:20 crc kubenswrapper[4881]: I0121 11:21:20.972186 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=12.972167001 podStartE2EDuration="12.972167001s" podCreationTimestamp="2026-01-21 11:21:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 11:21:20.971199977 +0000 UTC m=+1468.231156456" watchObservedRunningTime="2026-01-21 11:21:20.972167001 +0000 UTC m=+1468.232123470" Jan 21 11:21:21 crc kubenswrapper[4881]: I0121 11:21:21.019631 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-857c5cc966-ggkc4" Jan 21 11:21:21 crc kubenswrapper[4881]: I0121 11:21:21.030942 4881 scope.go:117] "RemoveContainer" containerID="9af42ead045471788f06fad27bb79fcdf735280d710e2b7eaa693c5e2301f9f2" Jan 21 11:21:21 crc kubenswrapper[4881]: I0121 11:21:21.077763 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-6cbb6fc6b6-tlfhj"] Jan 21 11:21:21 crc kubenswrapper[4881]: I0121 11:21:21.116547 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-6cbb6fc6b6-tlfhj"] Jan 21 11:21:21 crc kubenswrapper[4881]: I0121 11:21:21.325621 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="85f05121-bd30-4b3f-936d-dc20e30fca06" path="/var/lib/kubelet/pods/85f05121-bd30-4b3f-936d-dc20e30fca06/volumes" Jan 21 11:21:21 crc kubenswrapper[4881]: I0121 11:21:21.376352 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/neutron-796dd99876-gb7nt" Jan 21 11:21:21 crc kubenswrapper[4881]: I0121 11:21:21.635483 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ncbfx" Jan 21 11:21:21 crc kubenswrapper[4881]: I0121 11:21:21.760602 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-79rxx\" (UniqueName: \"kubernetes.io/projected/6a8083e9-c68d-40ca-bde9-b84e43b65ab8-kube-api-access-79rxx\") pod \"6a8083e9-c68d-40ca-bde9-b84e43b65ab8\" (UID: \"6a8083e9-c68d-40ca-bde9-b84e43b65ab8\") " Jan 21 11:21:21 crc kubenswrapper[4881]: I0121 11:21:21.761005 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6a8083e9-c68d-40ca-bde9-b84e43b65ab8-catalog-content\") pod \"6a8083e9-c68d-40ca-bde9-b84e43b65ab8\" (UID: \"6a8083e9-c68d-40ca-bde9-b84e43b65ab8\") " Jan 21 11:21:21 crc kubenswrapper[4881]: I0121 11:21:21.761047 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6a8083e9-c68d-40ca-bde9-b84e43b65ab8-utilities\") pod \"6a8083e9-c68d-40ca-bde9-b84e43b65ab8\" (UID: \"6a8083e9-c68d-40ca-bde9-b84e43b65ab8\") " Jan 21 11:21:21 crc kubenswrapper[4881]: I0121 11:21:21.764157 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6a8083e9-c68d-40ca-bde9-b84e43b65ab8-utilities" (OuterVolumeSpecName: "utilities") pod "6a8083e9-c68d-40ca-bde9-b84e43b65ab8" (UID: "6a8083e9-c68d-40ca-bde9-b84e43b65ab8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 11:21:21 crc kubenswrapper[4881]: I0121 11:21:21.770093 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a8083e9-c68d-40ca-bde9-b84e43b65ab8-kube-api-access-79rxx" (OuterVolumeSpecName: "kube-api-access-79rxx") pod "6a8083e9-c68d-40ca-bde9-b84e43b65ab8" (UID: "6a8083e9-c68d-40ca-bde9-b84e43b65ab8"). InnerVolumeSpecName "kube-api-access-79rxx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:21:21 crc kubenswrapper[4881]: I0121 11:21:21.852950 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 21 11:21:21 crc kubenswrapper[4881]: I0121 11:21:21.866635 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-79rxx\" (UniqueName: \"kubernetes.io/projected/6a8083e9-c68d-40ca-bde9-b84e43b65ab8-kube-api-access-79rxx\") on node \"crc\" DevicePath \"\"" Jan 21 11:21:21 crc kubenswrapper[4881]: I0121 11:21:21.866693 4881 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6a8083e9-c68d-40ca-bde9-b84e43b65ab8-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 11:21:21 crc kubenswrapper[4881]: I0121 11:21:21.901613 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6a8083e9-c68d-40ca-bde9-b84e43b65ab8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6a8083e9-c68d-40ca-bde9-b84e43b65ab8" (UID: "6a8083e9-c68d-40ca-bde9-b84e43b65ab8"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 11:21:21 crc kubenswrapper[4881]: I0121 11:21:21.963775 4881 generic.go:334] "Generic (PLEG): container finished" podID="6a8083e9-c68d-40ca-bde9-b84e43b65ab8" containerID="bb51d30f717ade21f99893a221476158fedbab913c5592a0655c1dfba33d69c7" exitCode=0 Jan 21 11:21:21 crc kubenswrapper[4881]: I0121 11:21:21.963860 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ncbfx" event={"ID":"6a8083e9-c68d-40ca-bde9-b84e43b65ab8","Type":"ContainerDied","Data":"bb51d30f717ade21f99893a221476158fedbab913c5592a0655c1dfba33d69c7"} Jan 21 11:21:21 crc kubenswrapper[4881]: I0121 11:21:21.963888 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ncbfx" event={"ID":"6a8083e9-c68d-40ca-bde9-b84e43b65ab8","Type":"ContainerDied","Data":"a06c31c201ce60f211d95724861d78b4cdd096d87a4ed5b0a3ede7c018cd2b3c"} Jan 21 11:21:21 crc kubenswrapper[4881]: I0121 11:21:21.963906 4881 scope.go:117] "RemoveContainer" containerID="bb51d30f717ade21f99893a221476158fedbab913c5592a0655c1dfba33d69c7" Jan 21 11:21:21 crc kubenswrapper[4881]: I0121 11:21:21.964013 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ncbfx" Jan 21 11:21:21 crc kubenswrapper[4881]: I0121 11:21:21.969253 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wwkdn\" (UniqueName: \"kubernetes.io/projected/48d1a17c-f3f7-4da9-bab3-d60bf8acf261-kube-api-access-wwkdn\") pod \"48d1a17c-f3f7-4da9-bab3-d60bf8acf261\" (UID: \"48d1a17c-f3f7-4da9-bab3-d60bf8acf261\") " Jan 21 11:21:21 crc kubenswrapper[4881]: I0121 11:21:21.969353 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/48d1a17c-f3f7-4da9-bab3-d60bf8acf261-etc-machine-id\") pod \"48d1a17c-f3f7-4da9-bab3-d60bf8acf261\" (UID: \"48d1a17c-f3f7-4da9-bab3-d60bf8acf261\") " Jan 21 11:21:21 crc kubenswrapper[4881]: I0121 11:21:21.969395 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/48d1a17c-f3f7-4da9-bab3-d60bf8acf261-config-data\") pod \"48d1a17c-f3f7-4da9-bab3-d60bf8acf261\" (UID: \"48d1a17c-f3f7-4da9-bab3-d60bf8acf261\") " Jan 21 11:21:21 crc kubenswrapper[4881]: I0121 11:21:21.969465 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/48d1a17c-f3f7-4da9-bab3-d60bf8acf261-logs\") pod \"48d1a17c-f3f7-4da9-bab3-d60bf8acf261\" (UID: \"48d1a17c-f3f7-4da9-bab3-d60bf8acf261\") " Jan 21 11:21:21 crc kubenswrapper[4881]: I0121 11:21:21.969521 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/48d1a17c-f3f7-4da9-bab3-d60bf8acf261-config-data-custom\") pod \"48d1a17c-f3f7-4da9-bab3-d60bf8acf261\" (UID: \"48d1a17c-f3f7-4da9-bab3-d60bf8acf261\") " Jan 21 11:21:21 crc kubenswrapper[4881]: I0121 11:21:21.969586 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48d1a17c-f3f7-4da9-bab3-d60bf8acf261-combined-ca-bundle\") pod \"48d1a17c-f3f7-4da9-bab3-d60bf8acf261\" (UID: \"48d1a17c-f3f7-4da9-bab3-d60bf8acf261\") " Jan 21 11:21:21 crc kubenswrapper[4881]: I0121 11:21:21.969631 4881 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/48d1a17c-f3f7-4da9-bab3-d60bf8acf261-scripts\") pod \"48d1a17c-f3f7-4da9-bab3-d60bf8acf261\" (UID: \"48d1a17c-f3f7-4da9-bab3-d60bf8acf261\") " Jan 21 11:21:21 crc kubenswrapper[4881]: I0121 11:21:21.970148 4881 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6a8083e9-c68d-40ca-bde9-b84e43b65ab8-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 11:21:21 crc kubenswrapper[4881]: I0121 11:21:21.974871 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/48d1a17c-f3f7-4da9-bab3-d60bf8acf261-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "48d1a17c-f3f7-4da9-bab3-d60bf8acf261" (UID: "48d1a17c-f3f7-4da9-bab3-d60bf8acf261"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 11:21:21 crc kubenswrapper[4881]: I0121 11:21:21.974961 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48d1a17c-f3f7-4da9-bab3-d60bf8acf261-scripts" (OuterVolumeSpecName: "scripts") pod "48d1a17c-f3f7-4da9-bab3-d60bf8acf261" (UID: "48d1a17c-f3f7-4da9-bab3-d60bf8acf261"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:21:21 crc kubenswrapper[4881]: I0121 11:21:21.979079 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48d1a17c-f3f7-4da9-bab3-d60bf8acf261-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "48d1a17c-f3f7-4da9-bab3-d60bf8acf261" (UID: "48d1a17c-f3f7-4da9-bab3-d60bf8acf261"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:21:21 crc kubenswrapper[4881]: I0121 11:21:21.982125 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"86045f5e-defd-4c68-a582-c51c9c26e5c7","Type":"ContainerStarted","Data":"d776082f3aaee81cc1f230c5cf4abdaa34059f0c862a2df0c93b102e79762938"} Jan 21 11:21:21 crc kubenswrapper[4881]: I0121 11:21:21.987249 4881 generic.go:334] "Generic (PLEG): container finished" podID="349e8898-8b7c-414a-8357-d431c8b81bf4" containerID="c648692c811ad6f54f474e55240cf83d10bccce020989330faa953f52c62836c" exitCode=0 Jan 21 11:21:21 crc kubenswrapper[4881]: I0121 11:21:21.987340 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-mxb97" event={"ID":"349e8898-8b7c-414a-8357-d431c8b81bf4","Type":"ContainerDied","Data":"c648692c811ad6f54f474e55240cf83d10bccce020989330faa953f52c62836c"} Jan 21 11:21:21 crc kubenswrapper[4881]: I0121 11:21:21.990051 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/48d1a17c-f3f7-4da9-bab3-d60bf8acf261-logs" (OuterVolumeSpecName: "logs") pod "48d1a17c-f3f7-4da9-bab3-d60bf8acf261" (UID: "48d1a17c-f3f7-4da9-bab3-d60bf8acf261"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 11:21:21 crc kubenswrapper[4881]: I0121 11:21:21.993453 4881 generic.go:334] "Generic (PLEG): container finished" podID="48d1a17c-f3f7-4da9-bab3-d60bf8acf261" containerID="dc4b5bf988baf1bd0e9bfba02297e594c49b64cccb9777011ae90d555a839fe9" exitCode=0 Jan 21 11:21:21 crc kubenswrapper[4881]: I0121 11:21:21.993512 4881 generic.go:334] "Generic (PLEG): container finished" podID="48d1a17c-f3f7-4da9-bab3-d60bf8acf261" containerID="fb1a9c603817e07a4207e571ae05892f26734f9d1dd1a9aa64d1de0e3a66cf96" exitCode=143 Jan 21 11:21:21 crc kubenswrapper[4881]: I0121 11:21:21.993923 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"48d1a17c-f3f7-4da9-bab3-d60bf8acf261","Type":"ContainerDied","Data":"dc4b5bf988baf1bd0e9bfba02297e594c49b64cccb9777011ae90d555a839fe9"} Jan 21 11:21:21 crc kubenswrapper[4881]: I0121 11:21:21.993965 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 21 11:21:21 crc kubenswrapper[4881]: I0121 11:21:21.993973 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"48d1a17c-f3f7-4da9-bab3-d60bf8acf261","Type":"ContainerDied","Data":"fb1a9c603817e07a4207e571ae05892f26734f9d1dd1a9aa64d1de0e3a66cf96"} Jan 21 11:21:21 crc kubenswrapper[4881]: I0121 11:21:21.994090 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"48d1a17c-f3f7-4da9-bab3-d60bf8acf261","Type":"ContainerDied","Data":"c0535afd842bb7eb500134a0de7821f679adee9739e8044161792e4e82bff780"} Jan 21 11:21:21 crc kubenswrapper[4881]: I0121 11:21:21.994178 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/48d1a17c-f3f7-4da9-bab3-d60bf8acf261-kube-api-access-wwkdn" (OuterVolumeSpecName: "kube-api-access-wwkdn") pod "48d1a17c-f3f7-4da9-bab3-d60bf8acf261" (UID: "48d1a17c-f3f7-4da9-bab3-d60bf8acf261"). InnerVolumeSpecName "kube-api-access-wwkdn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:21:22 crc kubenswrapper[4881]: I0121 11:21:22.011707 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48d1a17c-f3f7-4da9-bab3-d60bf8acf261-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "48d1a17c-f3f7-4da9-bab3-d60bf8acf261" (UID: "48d1a17c-f3f7-4da9-bab3-d60bf8acf261"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:21:22 crc kubenswrapper[4881]: I0121 11:21:22.013933 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=11.330477309 podStartE2EDuration="14.013912492s" podCreationTimestamp="2026-01-21 11:21:08 +0000 UTC" firstStartedPulling="2026-01-21 11:21:15.860681272 +0000 UTC m=+1463.120637741" lastFinishedPulling="2026-01-21 11:21:18.544116455 +0000 UTC m=+1465.804072924" observedRunningTime="2026-01-21 11:21:22.008166778 +0000 UTC m=+1469.268123257" watchObservedRunningTime="2026-01-21 11:21:22.013912492 +0000 UTC m=+1469.273868961" Jan 21 11:21:22 crc kubenswrapper[4881]: I0121 11:21:22.048317 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48d1a17c-f3f7-4da9-bab3-d60bf8acf261-config-data" (OuterVolumeSpecName: "config-data") pod "48d1a17c-f3f7-4da9-bab3-d60bf8acf261" (UID: "48d1a17c-f3f7-4da9-bab3-d60bf8acf261"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:21:22 crc kubenswrapper[4881]: I0121 11:21:22.072524 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wwkdn\" (UniqueName: \"kubernetes.io/projected/48d1a17c-f3f7-4da9-bab3-d60bf8acf261-kube-api-access-wwkdn\") on node \"crc\" DevicePath \"\"" Jan 21 11:21:22 crc kubenswrapper[4881]: I0121 11:21:22.072565 4881 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/48d1a17c-f3f7-4da9-bab3-d60bf8acf261-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 21 11:21:22 crc kubenswrapper[4881]: I0121 11:21:22.072575 4881 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/48d1a17c-f3f7-4da9-bab3-d60bf8acf261-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 11:21:22 crc kubenswrapper[4881]: I0121 11:21:22.072585 4881 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/48d1a17c-f3f7-4da9-bab3-d60bf8acf261-logs\") on node \"crc\" DevicePath \"\"" Jan 21 11:21:22 crc kubenswrapper[4881]: I0121 11:21:22.072595 4881 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/48d1a17c-f3f7-4da9-bab3-d60bf8acf261-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 21 11:21:22 crc kubenswrapper[4881]: I0121 11:21:22.072603 4881 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48d1a17c-f3f7-4da9-bab3-d60bf8acf261-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 11:21:22 crc kubenswrapper[4881]: I0121 11:21:22.072612 4881 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/48d1a17c-f3f7-4da9-bab3-d60bf8acf261-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 11:21:22 crc kubenswrapper[4881]: I0121 11:21:22.146872 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-ncbfx"] Jan 21 11:21:22 crc kubenswrapper[4881]: I0121 11:21:22.152149 4881 scope.go:117] "RemoveContainer" containerID="c80c8a89877e92046c31c2139dd4330c1447f9d23ecebf26a3928b9515ff61af" Jan 21 11:21:22 crc kubenswrapper[4881]: I0121 11:21:22.156174 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-ncbfx"] Jan 21 11:21:22 crc kubenswrapper[4881]: I0121 11:21:22.181440 4881 scope.go:117] "RemoveContainer" containerID="932fbf80100df4b5aa3c652842e044641d2f0a31589d5beff4fb8c850ca3a5fe" Jan 21 11:21:22 crc kubenswrapper[4881]: I0121 11:21:22.227319 4881 scope.go:117] "RemoveContainer" containerID="bb51d30f717ade21f99893a221476158fedbab913c5592a0655c1dfba33d69c7" Jan 21 11:21:22 crc kubenswrapper[4881]: E0121 11:21:22.230408 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bb51d30f717ade21f99893a221476158fedbab913c5592a0655c1dfba33d69c7\": container with ID starting with bb51d30f717ade21f99893a221476158fedbab913c5592a0655c1dfba33d69c7 not found: ID does not exist" containerID="bb51d30f717ade21f99893a221476158fedbab913c5592a0655c1dfba33d69c7" Jan 21 11:21:22 crc kubenswrapper[4881]: I0121 11:21:22.230460 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bb51d30f717ade21f99893a221476158fedbab913c5592a0655c1dfba33d69c7"} err="failed to get container status 
\"bb51d30f717ade21f99893a221476158fedbab913c5592a0655c1dfba33d69c7\": rpc error: code = NotFound desc = could not find container \"bb51d30f717ade21f99893a221476158fedbab913c5592a0655c1dfba33d69c7\": container with ID starting with bb51d30f717ade21f99893a221476158fedbab913c5592a0655c1dfba33d69c7 not found: ID does not exist" Jan 21 11:21:22 crc kubenswrapper[4881]: I0121 11:21:22.230497 4881 scope.go:117] "RemoveContainer" containerID="c80c8a89877e92046c31c2139dd4330c1447f9d23ecebf26a3928b9515ff61af" Jan 21 11:21:22 crc kubenswrapper[4881]: E0121 11:21:22.233973 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c80c8a89877e92046c31c2139dd4330c1447f9d23ecebf26a3928b9515ff61af\": container with ID starting with c80c8a89877e92046c31c2139dd4330c1447f9d23ecebf26a3928b9515ff61af not found: ID does not exist" containerID="c80c8a89877e92046c31c2139dd4330c1447f9d23ecebf26a3928b9515ff61af" Jan 21 11:21:22 crc kubenswrapper[4881]: I0121 11:21:22.234021 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c80c8a89877e92046c31c2139dd4330c1447f9d23ecebf26a3928b9515ff61af"} err="failed to get container status \"c80c8a89877e92046c31c2139dd4330c1447f9d23ecebf26a3928b9515ff61af\": rpc error: code = NotFound desc = could not find container \"c80c8a89877e92046c31c2139dd4330c1447f9d23ecebf26a3928b9515ff61af\": container with ID starting with c80c8a89877e92046c31c2139dd4330c1447f9d23ecebf26a3928b9515ff61af not found: ID does not exist" Jan 21 11:21:22 crc kubenswrapper[4881]: I0121 11:21:22.234049 4881 scope.go:117] "RemoveContainer" containerID="932fbf80100df4b5aa3c652842e044641d2f0a31589d5beff4fb8c850ca3a5fe" Jan 21 11:21:22 crc kubenswrapper[4881]: E0121 11:21:22.243964 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"932fbf80100df4b5aa3c652842e044641d2f0a31589d5beff4fb8c850ca3a5fe\": container with ID starting with 932fbf80100df4b5aa3c652842e044641d2f0a31589d5beff4fb8c850ca3a5fe not found: ID does not exist" containerID="932fbf80100df4b5aa3c652842e044641d2f0a31589d5beff4fb8c850ca3a5fe" Jan 21 11:21:22 crc kubenswrapper[4881]: I0121 11:21:22.244020 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"932fbf80100df4b5aa3c652842e044641d2f0a31589d5beff4fb8c850ca3a5fe"} err="failed to get container status \"932fbf80100df4b5aa3c652842e044641d2f0a31589d5beff4fb8c850ca3a5fe\": rpc error: code = NotFound desc = could not find container \"932fbf80100df4b5aa3c652842e044641d2f0a31589d5beff4fb8c850ca3a5fe\": container with ID starting with 932fbf80100df4b5aa3c652842e044641d2f0a31589d5beff4fb8c850ca3a5fe not found: ID does not exist" Jan 21 11:21:22 crc kubenswrapper[4881]: I0121 11:21:22.244058 4881 scope.go:117] "RemoveContainer" containerID="dc4b5bf988baf1bd0e9bfba02297e594c49b64cccb9777011ae90d555a839fe9" Jan 21 11:21:22 crc kubenswrapper[4881]: I0121 11:21:22.333347 4881 scope.go:117] "RemoveContainer" containerID="fb1a9c603817e07a4207e571ae05892f26734f9d1dd1a9aa64d1de0e3a66cf96" Jan 21 11:21:22 crc kubenswrapper[4881]: I0121 11:21:22.390089 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 21 11:21:22 crc kubenswrapper[4881]: I0121 11:21:22.394376 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Jan 21 11:21:22 crc kubenswrapper[4881]: I0121 11:21:22.399031 4881 scope.go:117] "RemoveContainer" 
containerID="dc4b5bf988baf1bd0e9bfba02297e594c49b64cccb9777011ae90d555a839fe9" Jan 21 11:21:22 crc kubenswrapper[4881]: E0121 11:21:22.401528 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dc4b5bf988baf1bd0e9bfba02297e594c49b64cccb9777011ae90d555a839fe9\": container with ID starting with dc4b5bf988baf1bd0e9bfba02297e594c49b64cccb9777011ae90d555a839fe9 not found: ID does not exist" containerID="dc4b5bf988baf1bd0e9bfba02297e594c49b64cccb9777011ae90d555a839fe9" Jan 21 11:21:22 crc kubenswrapper[4881]: I0121 11:21:22.401581 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dc4b5bf988baf1bd0e9bfba02297e594c49b64cccb9777011ae90d555a839fe9"} err="failed to get container status \"dc4b5bf988baf1bd0e9bfba02297e594c49b64cccb9777011ae90d555a839fe9\": rpc error: code = NotFound desc = could not find container \"dc4b5bf988baf1bd0e9bfba02297e594c49b64cccb9777011ae90d555a839fe9\": container with ID starting with dc4b5bf988baf1bd0e9bfba02297e594c49b64cccb9777011ae90d555a839fe9 not found: ID does not exist" Jan 21 11:21:22 crc kubenswrapper[4881]: I0121 11:21:22.401613 4881 scope.go:117] "RemoveContainer" containerID="fb1a9c603817e07a4207e571ae05892f26734f9d1dd1a9aa64d1de0e3a66cf96" Jan 21 11:21:22 crc kubenswrapper[4881]: E0121 11:21:22.408636 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fb1a9c603817e07a4207e571ae05892f26734f9d1dd1a9aa64d1de0e3a66cf96\": container with ID starting with fb1a9c603817e07a4207e571ae05892f26734f9d1dd1a9aa64d1de0e3a66cf96 not found: ID does not exist" containerID="fb1a9c603817e07a4207e571ae05892f26734f9d1dd1a9aa64d1de0e3a66cf96" Jan 21 11:21:22 crc kubenswrapper[4881]: I0121 11:21:22.408705 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fb1a9c603817e07a4207e571ae05892f26734f9d1dd1a9aa64d1de0e3a66cf96"} err="failed to get container status \"fb1a9c603817e07a4207e571ae05892f26734f9d1dd1a9aa64d1de0e3a66cf96\": rpc error: code = NotFound desc = could not find container \"fb1a9c603817e07a4207e571ae05892f26734f9d1dd1a9aa64d1de0e3a66cf96\": container with ID starting with fb1a9c603817e07a4207e571ae05892f26734f9d1dd1a9aa64d1de0e3a66cf96 not found: ID does not exist" Jan 21 11:21:22 crc kubenswrapper[4881]: I0121 11:21:22.408745 4881 scope.go:117] "RemoveContainer" containerID="dc4b5bf988baf1bd0e9bfba02297e594c49b64cccb9777011ae90d555a839fe9" Jan 21 11:21:22 crc kubenswrapper[4881]: I0121 11:21:22.412901 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dc4b5bf988baf1bd0e9bfba02297e594c49b64cccb9777011ae90d555a839fe9"} err="failed to get container status \"dc4b5bf988baf1bd0e9bfba02297e594c49b64cccb9777011ae90d555a839fe9\": rpc error: code = NotFound desc = could not find container \"dc4b5bf988baf1bd0e9bfba02297e594c49b64cccb9777011ae90d555a839fe9\": container with ID starting with dc4b5bf988baf1bd0e9bfba02297e594c49b64cccb9777011ae90d555a839fe9 not found: ID does not exist" Jan 21 11:21:22 crc kubenswrapper[4881]: I0121 11:21:22.412953 4881 scope.go:117] "RemoveContainer" containerID="fb1a9c603817e07a4207e571ae05892f26734f9d1dd1a9aa64d1de0e3a66cf96" Jan 21 11:21:22 crc kubenswrapper[4881]: I0121 11:21:22.419642 4881 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"fb1a9c603817e07a4207e571ae05892f26734f9d1dd1a9aa64d1de0e3a66cf96"} err="failed to get container status \"fb1a9c603817e07a4207e571ae05892f26734f9d1dd1a9aa64d1de0e3a66cf96\": rpc error: code = NotFound desc = could not find container \"fb1a9c603817e07a4207e571ae05892f26734f9d1dd1a9aa64d1de0e3a66cf96\": container with ID starting with fb1a9c603817e07a4207e571ae05892f26734f9d1dd1a9aa64d1de0e3a66cf96 not found: ID does not exist" Jan 21 11:21:22 crc kubenswrapper[4881]: I0121 11:21:22.426123 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 21 11:21:22 crc kubenswrapper[4881]: E0121 11:21:22.430037 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85f05121-bd30-4b3f-936d-dc20e30fca06" containerName="barbican-api" Jan 21 11:21:22 crc kubenswrapper[4881]: I0121 11:21:22.430090 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="85f05121-bd30-4b3f-936d-dc20e30fca06" containerName="barbican-api" Jan 21 11:21:22 crc kubenswrapper[4881]: E0121 11:21:22.430136 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2ecfd63-c654-42e9-b324-22c02d21b506" containerName="init" Jan 21 11:21:22 crc kubenswrapper[4881]: I0121 11:21:22.430147 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2ecfd63-c654-42e9-b324-22c02d21b506" containerName="init" Jan 21 11:21:22 crc kubenswrapper[4881]: E0121 11:21:22.430186 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a8083e9-c68d-40ca-bde9-b84e43b65ab8" containerName="registry-server" Jan 21 11:21:22 crc kubenswrapper[4881]: I0121 11:21:22.430201 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a8083e9-c68d-40ca-bde9-b84e43b65ab8" containerName="registry-server" Jan 21 11:21:22 crc kubenswrapper[4881]: E0121 11:21:22.430251 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a8083e9-c68d-40ca-bde9-b84e43b65ab8" containerName="extract-utilities" Jan 21 11:21:22 crc kubenswrapper[4881]: I0121 11:21:22.430261 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a8083e9-c68d-40ca-bde9-b84e43b65ab8" containerName="extract-utilities" Jan 21 11:21:22 crc kubenswrapper[4881]: E0121 11:21:22.430278 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48d1a17c-f3f7-4da9-bab3-d60bf8acf261" containerName="cinder-api-log" Jan 21 11:21:22 crc kubenswrapper[4881]: I0121 11:21:22.430287 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="48d1a17c-f3f7-4da9-bab3-d60bf8acf261" containerName="cinder-api-log" Jan 21 11:21:22 crc kubenswrapper[4881]: E0121 11:21:22.430309 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85f05121-bd30-4b3f-936d-dc20e30fca06" containerName="barbican-api-log" Jan 21 11:21:22 crc kubenswrapper[4881]: I0121 11:21:22.430318 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="85f05121-bd30-4b3f-936d-dc20e30fca06" containerName="barbican-api-log" Jan 21 11:21:22 crc kubenswrapper[4881]: E0121 11:21:22.430330 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48d1a17c-f3f7-4da9-bab3-d60bf8acf261" containerName="cinder-api" Jan 21 11:21:22 crc kubenswrapper[4881]: I0121 11:21:22.430339 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="48d1a17c-f3f7-4da9-bab3-d60bf8acf261" containerName="cinder-api" Jan 21 11:21:22 crc kubenswrapper[4881]: E0121 11:21:22.430354 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2ecfd63-c654-42e9-b324-22c02d21b506" containerName="dnsmasq-dns" 
Jan 21 11:21:22 crc kubenswrapper[4881]: I0121 11:21:22.430362 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2ecfd63-c654-42e9-b324-22c02d21b506" containerName="dnsmasq-dns" Jan 21 11:21:22 crc kubenswrapper[4881]: E0121 11:21:22.430380 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a8083e9-c68d-40ca-bde9-b84e43b65ab8" containerName="extract-content" Jan 21 11:21:22 crc kubenswrapper[4881]: I0121 11:21:22.430391 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a8083e9-c68d-40ca-bde9-b84e43b65ab8" containerName="extract-content" Jan 21 11:21:22 crc kubenswrapper[4881]: I0121 11:21:22.430859 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="6a8083e9-c68d-40ca-bde9-b84e43b65ab8" containerName="registry-server" Jan 21 11:21:22 crc kubenswrapper[4881]: I0121 11:21:22.430884 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="48d1a17c-f3f7-4da9-bab3-d60bf8acf261" containerName="cinder-api" Jan 21 11:21:22 crc kubenswrapper[4881]: I0121 11:21:22.430902 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="48d1a17c-f3f7-4da9-bab3-d60bf8acf261" containerName="cinder-api-log" Jan 21 11:21:22 crc kubenswrapper[4881]: I0121 11:21:22.430926 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="d2ecfd63-c654-42e9-b324-22c02d21b506" containerName="dnsmasq-dns" Jan 21 11:21:22 crc kubenswrapper[4881]: I0121 11:21:22.430944 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="85f05121-bd30-4b3f-936d-dc20e30fca06" containerName="barbican-api" Jan 21 11:21:22 crc kubenswrapper[4881]: I0121 11:21:22.430962 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="85f05121-bd30-4b3f-936d-dc20e30fca06" containerName="barbican-api-log" Jan 21 11:21:22 crc kubenswrapper[4881]: I0121 11:21:22.439773 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 21 11:21:22 crc kubenswrapper[4881]: I0121 11:21:22.440025 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 21 11:21:22 crc kubenswrapper[4881]: I0121 11:21:22.443469 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 21 11:21:22 crc kubenswrapper[4881]: I0121 11:21:22.443642 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Jan 21 11:21:22 crc kubenswrapper[4881]: I0121 11:21:22.443760 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Jan 21 11:21:22 crc kubenswrapper[4881]: I0121 11:21:22.483637 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ae53e440-5bd5-41e3-8339-57eebaef03d2-scripts\") pod \"cinder-api-0\" (UID: \"ae53e440-5bd5-41e3-8339-57eebaef03d2\") " pod="openstack/cinder-api-0" Jan 21 11:21:22 crc kubenswrapper[4881]: I0121 11:21:22.483829 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ae53e440-5bd5-41e3-8339-57eebaef03d2-logs\") pod \"cinder-api-0\" (UID: \"ae53e440-5bd5-41e3-8339-57eebaef03d2\") " pod="openstack/cinder-api-0" Jan 21 11:21:22 crc kubenswrapper[4881]: I0121 11:21:22.483912 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ae53e440-5bd5-41e3-8339-57eebaef03d2-etc-machine-id\") pod \"cinder-api-0\" (UID: \"ae53e440-5bd5-41e3-8339-57eebaef03d2\") " pod="openstack/cinder-api-0" Jan 21 11:21:22 crc kubenswrapper[4881]: I0121 11:21:22.483981 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ae53e440-5bd5-41e3-8339-57eebaef03d2-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"ae53e440-5bd5-41e3-8339-57eebaef03d2\") " pod="openstack/cinder-api-0" Jan 21 11:21:22 crc kubenswrapper[4881]: I0121 11:21:22.484079 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ae53e440-5bd5-41e3-8339-57eebaef03d2-config-data-custom\") pod \"cinder-api-0\" (UID: \"ae53e440-5bd5-41e3-8339-57eebaef03d2\") " pod="openstack/cinder-api-0" Jan 21 11:21:22 crc kubenswrapper[4881]: I0121 11:21:22.484179 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rbk4d\" (UniqueName: \"kubernetes.io/projected/ae53e440-5bd5-41e3-8339-57eebaef03d2-kube-api-access-rbk4d\") pod \"cinder-api-0\" (UID: \"ae53e440-5bd5-41e3-8339-57eebaef03d2\") " pod="openstack/cinder-api-0" Jan 21 11:21:22 crc kubenswrapper[4881]: I0121 11:21:22.484311 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae53e440-5bd5-41e3-8339-57eebaef03d2-config-data\") pod \"cinder-api-0\" (UID: \"ae53e440-5bd5-41e3-8339-57eebaef03d2\") " pod="openstack/cinder-api-0" Jan 21 11:21:22 crc kubenswrapper[4881]: I0121 11:21:22.484475 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ae53e440-5bd5-41e3-8339-57eebaef03d2-public-tls-certs\") pod \"cinder-api-0\" (UID: \"ae53e440-5bd5-41e3-8339-57eebaef03d2\") " pod="openstack/cinder-api-0" Jan 21 
11:21:22 crc kubenswrapper[4881]: I0121 11:21:22.484563 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae53e440-5bd5-41e3-8339-57eebaef03d2-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"ae53e440-5bd5-41e3-8339-57eebaef03d2\") " pod="openstack/cinder-api-0" Jan 21 11:21:22 crc kubenswrapper[4881]: I0121 11:21:22.586291 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rbk4d\" (UniqueName: \"kubernetes.io/projected/ae53e440-5bd5-41e3-8339-57eebaef03d2-kube-api-access-rbk4d\") pod \"cinder-api-0\" (UID: \"ae53e440-5bd5-41e3-8339-57eebaef03d2\") " pod="openstack/cinder-api-0" Jan 21 11:21:22 crc kubenswrapper[4881]: I0121 11:21:22.586369 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae53e440-5bd5-41e3-8339-57eebaef03d2-config-data\") pod \"cinder-api-0\" (UID: \"ae53e440-5bd5-41e3-8339-57eebaef03d2\") " pod="openstack/cinder-api-0" Jan 21 11:21:22 crc kubenswrapper[4881]: I0121 11:21:22.586443 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ae53e440-5bd5-41e3-8339-57eebaef03d2-public-tls-certs\") pod \"cinder-api-0\" (UID: \"ae53e440-5bd5-41e3-8339-57eebaef03d2\") " pod="openstack/cinder-api-0" Jan 21 11:21:22 crc kubenswrapper[4881]: I0121 11:21:22.586491 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae53e440-5bd5-41e3-8339-57eebaef03d2-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"ae53e440-5bd5-41e3-8339-57eebaef03d2\") " pod="openstack/cinder-api-0" Jan 21 11:21:22 crc kubenswrapper[4881]: I0121 11:21:22.586540 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ae53e440-5bd5-41e3-8339-57eebaef03d2-scripts\") pod \"cinder-api-0\" (UID: \"ae53e440-5bd5-41e3-8339-57eebaef03d2\") " pod="openstack/cinder-api-0" Jan 21 11:21:22 crc kubenswrapper[4881]: I0121 11:21:22.586577 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ae53e440-5bd5-41e3-8339-57eebaef03d2-logs\") pod \"cinder-api-0\" (UID: \"ae53e440-5bd5-41e3-8339-57eebaef03d2\") " pod="openstack/cinder-api-0" Jan 21 11:21:22 crc kubenswrapper[4881]: I0121 11:21:22.586765 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ae53e440-5bd5-41e3-8339-57eebaef03d2-etc-machine-id\") pod \"cinder-api-0\" (UID: \"ae53e440-5bd5-41e3-8339-57eebaef03d2\") " pod="openstack/cinder-api-0" Jan 21 11:21:22 crc kubenswrapper[4881]: I0121 11:21:22.587106 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ae53e440-5bd5-41e3-8339-57eebaef03d2-logs\") pod \"cinder-api-0\" (UID: \"ae53e440-5bd5-41e3-8339-57eebaef03d2\") " pod="openstack/cinder-api-0" Jan 21 11:21:22 crc kubenswrapper[4881]: I0121 11:21:22.587288 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ae53e440-5bd5-41e3-8339-57eebaef03d2-etc-machine-id\") pod \"cinder-api-0\" (UID: \"ae53e440-5bd5-41e3-8339-57eebaef03d2\") " pod="openstack/cinder-api-0" Jan 21 11:21:22 crc 
kubenswrapper[4881]: I0121 11:21:22.587349 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ae53e440-5bd5-41e3-8339-57eebaef03d2-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"ae53e440-5bd5-41e3-8339-57eebaef03d2\") " pod="openstack/cinder-api-0" Jan 21 11:21:22 crc kubenswrapper[4881]: I0121 11:21:22.587425 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ae53e440-5bd5-41e3-8339-57eebaef03d2-config-data-custom\") pod \"cinder-api-0\" (UID: \"ae53e440-5bd5-41e3-8339-57eebaef03d2\") " pod="openstack/cinder-api-0" Jan 21 11:21:22 crc kubenswrapper[4881]: I0121 11:21:22.593054 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae53e440-5bd5-41e3-8339-57eebaef03d2-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"ae53e440-5bd5-41e3-8339-57eebaef03d2\") " pod="openstack/cinder-api-0" Jan 21 11:21:22 crc kubenswrapper[4881]: I0121 11:21:22.593198 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ae53e440-5bd5-41e3-8339-57eebaef03d2-config-data-custom\") pod \"cinder-api-0\" (UID: \"ae53e440-5bd5-41e3-8339-57eebaef03d2\") " pod="openstack/cinder-api-0" Jan 21 11:21:22 crc kubenswrapper[4881]: I0121 11:21:22.595223 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ae53e440-5bd5-41e3-8339-57eebaef03d2-public-tls-certs\") pod \"cinder-api-0\" (UID: \"ae53e440-5bd5-41e3-8339-57eebaef03d2\") " pod="openstack/cinder-api-0" Jan 21 11:21:22 crc kubenswrapper[4881]: I0121 11:21:22.602005 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ae53e440-5bd5-41e3-8339-57eebaef03d2-scripts\") pod \"cinder-api-0\" (UID: \"ae53e440-5bd5-41e3-8339-57eebaef03d2\") " pod="openstack/cinder-api-0" Jan 21 11:21:22 crc kubenswrapper[4881]: I0121 11:21:22.602633 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ae53e440-5bd5-41e3-8339-57eebaef03d2-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"ae53e440-5bd5-41e3-8339-57eebaef03d2\") " pod="openstack/cinder-api-0" Jan 21 11:21:22 crc kubenswrapper[4881]: I0121 11:21:22.603234 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae53e440-5bd5-41e3-8339-57eebaef03d2-config-data\") pod \"cinder-api-0\" (UID: \"ae53e440-5bd5-41e3-8339-57eebaef03d2\") " pod="openstack/cinder-api-0" Jan 21 11:21:22 crc kubenswrapper[4881]: I0121 11:21:22.607483 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rbk4d\" (UniqueName: \"kubernetes.io/projected/ae53e440-5bd5-41e3-8339-57eebaef03d2-kube-api-access-rbk4d\") pod \"cinder-api-0\" (UID: \"ae53e440-5bd5-41e3-8339-57eebaef03d2\") " pod="openstack/cinder-api-0" Jan 21 11:21:22 crc kubenswrapper[4881]: I0121 11:21:22.767354 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 21 11:21:23 crc kubenswrapper[4881]: I0121 11:21:23.037390 4881 generic.go:334] "Generic (PLEG): container finished" podID="ee4e7116-c2cd-43d5-af6b-9f30b5053e0e" containerID="5ccae223d32b8d30267f4d247c29e77d1942427c122a26bc75e9b00b89fa3bc0" exitCode=1 Jan 21 11:21:23 crc kubenswrapper[4881]: I0121 11:21:23.037622 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"ee4e7116-c2cd-43d5-af6b-9f30b5053e0e","Type":"ContainerDied","Data":"5ccae223d32b8d30267f4d247c29e77d1942427c122a26bc75e9b00b89fa3bc0"} Jan 21 11:21:23 crc kubenswrapper[4881]: I0121 11:21:23.038177 4881 scope.go:117] "RemoveContainer" containerID="61f6b4008e5afe3c84bc4dbf116ba996728224955a2729f3dc2de6c1a2eeb445" Jan 21 11:21:23 crc kubenswrapper[4881]: I0121 11:21:23.039050 4881 scope.go:117] "RemoveContainer" containerID="5ccae223d32b8d30267f4d247c29e77d1942427c122a26bc75e9b00b89fa3bc0" Jan 21 11:21:23 crc kubenswrapper[4881]: E0121 11:21:23.039340 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 20s restarting failed container=watcher-decision-engine pod=watcher-decision-engine-0_openstack(ee4e7116-c2cd-43d5-af6b-9f30b5053e0e)\"" pod="openstack/watcher-decision-engine-0" podUID="ee4e7116-c2cd-43d5-af6b-9f30b5053e0e" Jan 21 11:21:23 crc kubenswrapper[4881]: I0121 11:21:23.264881 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 21 11:21:23 crc kubenswrapper[4881]: W0121 11:21:23.283939 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podae53e440_5bd5_41e3_8339_57eebaef03d2.slice/crio-c2c4191f74bf553a8a2dca661f23628aae4dc5fb419e29786f6ea024fe83ab3c WatchSource:0}: Error finding container c2c4191f74bf553a8a2dca661f23628aae4dc5fb419e29786f6ea024fe83ab3c: Status 404 returned error can't find the container with id c2c4191f74bf553a8a2dca661f23628aae4dc5fb419e29786f6ea024fe83ab3c Jan 21 11:21:23 crc kubenswrapper[4881]: I0121 11:21:23.329669 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="48d1a17c-f3f7-4da9-bab3-d60bf8acf261" path="/var/lib/kubelet/pods/48d1a17c-f3f7-4da9-bab3-d60bf8acf261/volumes" Jan 21 11:21:23 crc kubenswrapper[4881]: I0121 11:21:23.330933 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6a8083e9-c68d-40ca-bde9-b84e43b65ab8" path="/var/lib/kubelet/pods/6a8083e9-c68d-40ca-bde9-b84e43b65ab8/volumes" Jan 21 11:21:23 crc kubenswrapper[4881]: I0121 11:21:23.610665 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-mxb97" Jan 21 11:21:23 crc kubenswrapper[4881]: I0121 11:21:23.713668 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/349e8898-8b7c-414a-8357-d431c8b81bf4-db-sync-config-data\") pod \"349e8898-8b7c-414a-8357-d431c8b81bf4\" (UID: \"349e8898-8b7c-414a-8357-d431c8b81bf4\") " Jan 21 11:21:23 crc kubenswrapper[4881]: I0121 11:21:23.713795 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/349e8898-8b7c-414a-8357-d431c8b81bf4-combined-ca-bundle\") pod \"349e8898-8b7c-414a-8357-d431c8b81bf4\" (UID: \"349e8898-8b7c-414a-8357-d431c8b81bf4\") " Jan 21 11:21:23 crc kubenswrapper[4881]: I0121 11:21:23.713821 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gvn9r\" (UniqueName: \"kubernetes.io/projected/349e8898-8b7c-414a-8357-d431c8b81bf4-kube-api-access-gvn9r\") pod \"349e8898-8b7c-414a-8357-d431c8b81bf4\" (UID: \"349e8898-8b7c-414a-8357-d431c8b81bf4\") " Jan 21 11:21:23 crc kubenswrapper[4881]: I0121 11:21:23.713843 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/349e8898-8b7c-414a-8357-d431c8b81bf4-config-data\") pod \"349e8898-8b7c-414a-8357-d431c8b81bf4\" (UID: \"349e8898-8b7c-414a-8357-d431c8b81bf4\") " Jan 21 11:21:23 crc kubenswrapper[4881]: I0121 11:21:23.720923 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/349e8898-8b7c-414a-8357-d431c8b81bf4-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "349e8898-8b7c-414a-8357-d431c8b81bf4" (UID: "349e8898-8b7c-414a-8357-d431c8b81bf4"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:21:23 crc kubenswrapper[4881]: I0121 11:21:23.727099 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/349e8898-8b7c-414a-8357-d431c8b81bf4-kube-api-access-gvn9r" (OuterVolumeSpecName: "kube-api-access-gvn9r") pod "349e8898-8b7c-414a-8357-d431c8b81bf4" (UID: "349e8898-8b7c-414a-8357-d431c8b81bf4"). InnerVolumeSpecName "kube-api-access-gvn9r". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:21:23 crc kubenswrapper[4881]: I0121 11:21:23.764747 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/349e8898-8b7c-414a-8357-d431c8b81bf4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "349e8898-8b7c-414a-8357-d431c8b81bf4" (UID: "349e8898-8b7c-414a-8357-d431c8b81bf4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:21:23 crc kubenswrapper[4881]: I0121 11:21:23.784767 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-667d9dbbbc-pcbhd" Jan 21 11:21:23 crc kubenswrapper[4881]: I0121 11:21:23.807049 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/349e8898-8b7c-414a-8357-d431c8b81bf4-config-data" (OuterVolumeSpecName: "config-data") pod "349e8898-8b7c-414a-8357-d431c8b81bf4" (UID: "349e8898-8b7c-414a-8357-d431c8b81bf4"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:21:23 crc kubenswrapper[4881]: I0121 11:21:23.823972 4881 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/349e8898-8b7c-414a-8357-d431c8b81bf4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 11:21:23 crc kubenswrapper[4881]: I0121 11:21:23.830603 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gvn9r\" (UniqueName: \"kubernetes.io/projected/349e8898-8b7c-414a-8357-d431c8b81bf4-kube-api-access-gvn9r\") on node \"crc\" DevicePath \"\"" Jan 21 11:21:23 crc kubenswrapper[4881]: I0121 11:21:23.831011 4881 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/349e8898-8b7c-414a-8357-d431c8b81bf4-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 11:21:23 crc kubenswrapper[4881]: I0121 11:21:23.831096 4881 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/349e8898-8b7c-414a-8357-d431c8b81bf4-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 11:21:23 crc kubenswrapper[4881]: I0121 11:21:23.849554 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 21 11:21:23 crc kubenswrapper[4881]: I0121 11:21:23.862139 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-796dd99876-gb7nt"] Jan 21 11:21:23 crc kubenswrapper[4881]: I0121 11:21:23.862383 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-796dd99876-gb7nt" podUID="f51f915e-f553-4130-a16b-9e6af68a5a15" containerName="neutron-api" containerID="cri-o://3a9e17862c5ff2f64ddcb7cb3eb9d73424fbbcd62c695e9a6f00fe4f1a20f86b" gracePeriod=30 Jan 21 11:21:23 crc kubenswrapper[4881]: I0121 11:21:23.862461 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-796dd99876-gb7nt" podUID="f51f915e-f553-4130-a16b-9e6af68a5a15" containerName="neutron-httpd" containerID="cri-o://d69bb72f9eba472479b5b854a392dd678dcf12a1e5ab100dffbf954eda114573" gracePeriod=30 Jan 21 11:21:23 crc kubenswrapper[4881]: I0121 11:21:23.988043 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-77b944d67-mw2nq" Jan 21 11:21:24 crc kubenswrapper[4881]: I0121 11:21:24.087860 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-bb8f8b9c9-cwqc2"] Jan 21 11:21:24 crc kubenswrapper[4881]: I0121 11:21:24.092192 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-bb8f8b9c9-cwqc2" podUID="a93d8b8c-58bf-47f4-b880-2fa5fb8fdf6f" containerName="dnsmasq-dns" containerID="cri-o://3c2fbfa61210bf849e04651287e22b6c198d4c12ea96a2312edd5e9f291c7879" gracePeriod=10 Jan 21 11:21:24 crc kubenswrapper[4881]: I0121 11:21:24.132475 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"ae53e440-5bd5-41e3-8339-57eebaef03d2","Type":"ContainerStarted","Data":"c2c4191f74bf553a8a2dca661f23628aae4dc5fb419e29786f6ea024fe83ab3c"} Jan 21 11:21:24 crc kubenswrapper[4881]: I0121 11:21:24.153920 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-mxb97" Jan 21 11:21:24 crc kubenswrapper[4881]: I0121 11:21:24.157178 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-mxb97" event={"ID":"349e8898-8b7c-414a-8357-d431c8b81bf4","Type":"ContainerDied","Data":"cd824796b06380fe0748d0a1334aa26a3fd0a19fab70225e560d35cfb754e2b4"} Jan 21 11:21:24 crc kubenswrapper[4881]: I0121 11:21:24.157224 4881 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cd824796b06380fe0748d0a1334aa26a3fd0a19fab70225e560d35cfb754e2b4" Jan 21 11:21:24 crc kubenswrapper[4881]: I0121 11:21:24.545729 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-c849cf559-fjllv"] Jan 21 11:21:24 crc kubenswrapper[4881]: E0121 11:21:24.546539 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="349e8898-8b7c-414a-8357-d431c8b81bf4" containerName="glance-db-sync" Jan 21 11:21:24 crc kubenswrapper[4881]: I0121 11:21:24.546551 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="349e8898-8b7c-414a-8357-d431c8b81bf4" containerName="glance-db-sync" Jan 21 11:21:24 crc kubenswrapper[4881]: I0121 11:21:24.546725 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="349e8898-8b7c-414a-8357-d431c8b81bf4" containerName="glance-db-sync" Jan 21 11:21:24 crc kubenswrapper[4881]: I0121 11:21:24.551306 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-c849cf559-fjllv" Jan 21 11:21:24 crc kubenswrapper[4881]: I0121 11:21:24.569443 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cd8b7\" (UniqueName: \"kubernetes.io/projected/4a89a9d0-4859-41cb-896d-f1a91e854d7b-kube-api-access-cd8b7\") pod \"dnsmasq-dns-c849cf559-fjllv\" (UID: \"4a89a9d0-4859-41cb-896d-f1a91e854d7b\") " pod="openstack/dnsmasq-dns-c849cf559-fjllv" Jan 21 11:21:24 crc kubenswrapper[4881]: I0121 11:21:24.569495 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4a89a9d0-4859-41cb-896d-f1a91e854d7b-ovsdbserver-sb\") pod \"dnsmasq-dns-c849cf559-fjllv\" (UID: \"4a89a9d0-4859-41cb-896d-f1a91e854d7b\") " pod="openstack/dnsmasq-dns-c849cf559-fjllv" Jan 21 11:21:24 crc kubenswrapper[4881]: I0121 11:21:24.569517 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4a89a9d0-4859-41cb-896d-f1a91e854d7b-dns-svc\") pod \"dnsmasq-dns-c849cf559-fjllv\" (UID: \"4a89a9d0-4859-41cb-896d-f1a91e854d7b\") " pod="openstack/dnsmasq-dns-c849cf559-fjllv" Jan 21 11:21:24 crc kubenswrapper[4881]: I0121 11:21:24.569681 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4a89a9d0-4859-41cb-896d-f1a91e854d7b-dns-swift-storage-0\") pod \"dnsmasq-dns-c849cf559-fjllv\" (UID: \"4a89a9d0-4859-41cb-896d-f1a91e854d7b\") " pod="openstack/dnsmasq-dns-c849cf559-fjllv" Jan 21 11:21:24 crc kubenswrapper[4881]: I0121 11:21:24.569703 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4a89a9d0-4859-41cb-896d-f1a91e854d7b-ovsdbserver-nb\") pod \"dnsmasq-dns-c849cf559-fjllv\" (UID: \"4a89a9d0-4859-41cb-896d-f1a91e854d7b\") " 
pod="openstack/dnsmasq-dns-c849cf559-fjllv" Jan 21 11:21:24 crc kubenswrapper[4881]: I0121 11:21:24.569748 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4a89a9d0-4859-41cb-896d-f1a91e854d7b-config\") pod \"dnsmasq-dns-c849cf559-fjllv\" (UID: \"4a89a9d0-4859-41cb-896d-f1a91e854d7b\") " pod="openstack/dnsmasq-dns-c849cf559-fjllv" Jan 21 11:21:24 crc kubenswrapper[4881]: I0121 11:21:24.586412 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-c849cf559-fjllv"] Jan 21 11:21:24 crc kubenswrapper[4881]: I0121 11:21:24.673473 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cd8b7\" (UniqueName: \"kubernetes.io/projected/4a89a9d0-4859-41cb-896d-f1a91e854d7b-kube-api-access-cd8b7\") pod \"dnsmasq-dns-c849cf559-fjllv\" (UID: \"4a89a9d0-4859-41cb-896d-f1a91e854d7b\") " pod="openstack/dnsmasq-dns-c849cf559-fjllv" Jan 21 11:21:24 crc kubenswrapper[4881]: I0121 11:21:24.673520 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4a89a9d0-4859-41cb-896d-f1a91e854d7b-ovsdbserver-sb\") pod \"dnsmasq-dns-c849cf559-fjllv\" (UID: \"4a89a9d0-4859-41cb-896d-f1a91e854d7b\") " pod="openstack/dnsmasq-dns-c849cf559-fjllv" Jan 21 11:21:24 crc kubenswrapper[4881]: I0121 11:21:24.673545 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4a89a9d0-4859-41cb-896d-f1a91e854d7b-dns-svc\") pod \"dnsmasq-dns-c849cf559-fjllv\" (UID: \"4a89a9d0-4859-41cb-896d-f1a91e854d7b\") " pod="openstack/dnsmasq-dns-c849cf559-fjllv" Jan 21 11:21:24 crc kubenswrapper[4881]: I0121 11:21:24.673661 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4a89a9d0-4859-41cb-896d-f1a91e854d7b-dns-swift-storage-0\") pod \"dnsmasq-dns-c849cf559-fjllv\" (UID: \"4a89a9d0-4859-41cb-896d-f1a91e854d7b\") " pod="openstack/dnsmasq-dns-c849cf559-fjllv" Jan 21 11:21:24 crc kubenswrapper[4881]: I0121 11:21:24.673677 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4a89a9d0-4859-41cb-896d-f1a91e854d7b-ovsdbserver-nb\") pod \"dnsmasq-dns-c849cf559-fjllv\" (UID: \"4a89a9d0-4859-41cb-896d-f1a91e854d7b\") " pod="openstack/dnsmasq-dns-c849cf559-fjllv" Jan 21 11:21:24 crc kubenswrapper[4881]: I0121 11:21:24.673737 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4a89a9d0-4859-41cb-896d-f1a91e854d7b-config\") pod \"dnsmasq-dns-c849cf559-fjllv\" (UID: \"4a89a9d0-4859-41cb-896d-f1a91e854d7b\") " pod="openstack/dnsmasq-dns-c849cf559-fjllv" Jan 21 11:21:24 crc kubenswrapper[4881]: I0121 11:21:24.674715 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4a89a9d0-4859-41cb-896d-f1a91e854d7b-config\") pod \"dnsmasq-dns-c849cf559-fjllv\" (UID: \"4a89a9d0-4859-41cb-896d-f1a91e854d7b\") " pod="openstack/dnsmasq-dns-c849cf559-fjllv" Jan 21 11:21:24 crc kubenswrapper[4881]: I0121 11:21:24.675383 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4a89a9d0-4859-41cb-896d-f1a91e854d7b-dns-swift-storage-0\") pod 
\"dnsmasq-dns-c849cf559-fjllv\" (UID: \"4a89a9d0-4859-41cb-896d-f1a91e854d7b\") " pod="openstack/dnsmasq-dns-c849cf559-fjllv" Jan 21 11:21:24 crc kubenswrapper[4881]: I0121 11:21:24.675380 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4a89a9d0-4859-41cb-896d-f1a91e854d7b-dns-svc\") pod \"dnsmasq-dns-c849cf559-fjllv\" (UID: \"4a89a9d0-4859-41cb-896d-f1a91e854d7b\") " pod="openstack/dnsmasq-dns-c849cf559-fjllv" Jan 21 11:21:24 crc kubenswrapper[4881]: I0121 11:21:24.676238 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4a89a9d0-4859-41cb-896d-f1a91e854d7b-ovsdbserver-sb\") pod \"dnsmasq-dns-c849cf559-fjllv\" (UID: \"4a89a9d0-4859-41cb-896d-f1a91e854d7b\") " pod="openstack/dnsmasq-dns-c849cf559-fjllv" Jan 21 11:21:24 crc kubenswrapper[4881]: I0121 11:21:24.684209 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4a89a9d0-4859-41cb-896d-f1a91e854d7b-ovsdbserver-nb\") pod \"dnsmasq-dns-c849cf559-fjllv\" (UID: \"4a89a9d0-4859-41cb-896d-f1a91e854d7b\") " pod="openstack/dnsmasq-dns-c849cf559-fjllv" Jan 21 11:21:24 crc kubenswrapper[4881]: I0121 11:21:24.710593 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cd8b7\" (UniqueName: \"kubernetes.io/projected/4a89a9d0-4859-41cb-896d-f1a91e854d7b-kube-api-access-cd8b7\") pod \"dnsmasq-dns-c849cf559-fjllv\" (UID: \"4a89a9d0-4859-41cb-896d-f1a91e854d7b\") " pod="openstack/dnsmasq-dns-c849cf559-fjllv" Jan 21 11:21:24 crc kubenswrapper[4881]: I0121 11:21:24.722876 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Jan 21 11:21:24 crc kubenswrapper[4881]: I0121 11:21:24.724283 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 21 11:21:24 crc kubenswrapper[4881]: I0121 11:21:24.731111 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Jan 21 11:21:24 crc kubenswrapper[4881]: I0121 11:21:24.731297 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Jan 21 11:21:24 crc kubenswrapper[4881]: I0121 11:21:24.737326 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-hk8hq" Jan 21 11:21:24 crc kubenswrapper[4881]: I0121 11:21:24.768091 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 21 11:21:24 crc kubenswrapper[4881]: I0121 11:21:24.783430 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m6sr9\" (UniqueName: \"kubernetes.io/projected/b0b6ce2c-5ae8-496f-9374-d3069bc3d149-kube-api-access-m6sr9\") pod \"openstackclient\" (UID: \"b0b6ce2c-5ae8-496f-9374-d3069bc3d149\") " pod="openstack/openstackclient" Jan 21 11:21:24 crc kubenswrapper[4881]: I0121 11:21:24.783528 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/b0b6ce2c-5ae8-496f-9374-d3069bc3d149-openstack-config-secret\") pod \"openstackclient\" (UID: \"b0b6ce2c-5ae8-496f-9374-d3069bc3d149\") " pod="openstack/openstackclient" Jan 21 11:21:24 crc kubenswrapper[4881]: I0121 11:21:24.783889 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/b0b6ce2c-5ae8-496f-9374-d3069bc3d149-openstack-config\") pod \"openstackclient\" (UID: \"b0b6ce2c-5ae8-496f-9374-d3069bc3d149\") " pod="openstack/openstackclient" Jan 21 11:21:24 crc kubenswrapper[4881]: I0121 11:21:24.783989 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0b6ce2c-5ae8-496f-9374-d3069bc3d149-combined-ca-bundle\") pod \"openstackclient\" (UID: \"b0b6ce2c-5ae8-496f-9374-d3069bc3d149\") " pod="openstack/openstackclient" Jan 21 11:21:24 crc kubenswrapper[4881]: I0121 11:21:24.884981 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/b0b6ce2c-5ae8-496f-9374-d3069bc3d149-openstack-config\") pod \"openstackclient\" (UID: \"b0b6ce2c-5ae8-496f-9374-d3069bc3d149\") " pod="openstack/openstackclient" Jan 21 11:21:24 crc kubenswrapper[4881]: I0121 11:21:24.885052 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0b6ce2c-5ae8-496f-9374-d3069bc3d149-combined-ca-bundle\") pod \"openstackclient\" (UID: \"b0b6ce2c-5ae8-496f-9374-d3069bc3d149\") " pod="openstack/openstackclient" Jan 21 11:21:24 crc kubenswrapper[4881]: I0121 11:21:24.885154 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m6sr9\" (UniqueName: \"kubernetes.io/projected/b0b6ce2c-5ae8-496f-9374-d3069bc3d149-kube-api-access-m6sr9\") pod \"openstackclient\" (UID: \"b0b6ce2c-5ae8-496f-9374-d3069bc3d149\") " pod="openstack/openstackclient" Jan 21 11:21:24 crc kubenswrapper[4881]: I0121 11:21:24.885199 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/b0b6ce2c-5ae8-496f-9374-d3069bc3d149-openstack-config-secret\") pod \"openstackclient\" (UID: \"b0b6ce2c-5ae8-496f-9374-d3069bc3d149\") " pod="openstack/openstackclient" Jan 21 11:21:24 crc kubenswrapper[4881]: I0121 11:21:24.886822 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/b0b6ce2c-5ae8-496f-9374-d3069bc3d149-openstack-config\") pod \"openstackclient\" (UID: \"b0b6ce2c-5ae8-496f-9374-d3069bc3d149\") " pod="openstack/openstackclient" Jan 21 11:21:24 crc kubenswrapper[4881]: I0121 11:21:24.893570 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/b0b6ce2c-5ae8-496f-9374-d3069bc3d149-openstack-config-secret\") pod \"openstackclient\" (UID: \"b0b6ce2c-5ae8-496f-9374-d3069bc3d149\") " pod="openstack/openstackclient" Jan 21 11:21:24 crc kubenswrapper[4881]: I0121 11:21:24.897474 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0b6ce2c-5ae8-496f-9374-d3069bc3d149-combined-ca-bundle\") pod \"openstackclient\" (UID: \"b0b6ce2c-5ae8-496f-9374-d3069bc3d149\") " pod="openstack/openstackclient" Jan 21 11:21:24 crc kubenswrapper[4881]: I0121 11:21:24.911451 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-c849cf559-fjllv" Jan 21 11:21:24 crc kubenswrapper[4881]: I0121 11:21:24.931193 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m6sr9\" (UniqueName: \"kubernetes.io/projected/b0b6ce2c-5ae8-496f-9374-d3069bc3d149-kube-api-access-m6sr9\") pod \"openstackclient\" (UID: \"b0b6ce2c-5ae8-496f-9374-d3069bc3d149\") " pod="openstack/openstackclient" Jan 21 11:21:25 crc kubenswrapper[4881]: I0121 11:21:25.129732 4881 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-69c96776fd-k2z88" podUID="2f516fb6-322b-4eee-9d4d-a10176959bbb" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.160:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.160:8443: connect: connection refused" Jan 21 11:21:25 crc kubenswrapper[4881]: I0121 11:21:25.130120 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-69c96776fd-k2z88" Jan 21 11:21:25 crc kubenswrapper[4881]: I0121 11:21:25.256377 4881 generic.go:334] "Generic (PLEG): container finished" podID="a93d8b8c-58bf-47f4-b880-2fa5fb8fdf6f" containerID="3c2fbfa61210bf849e04651287e22b6c198d4c12ea96a2312edd5e9f291c7879" exitCode=0 Jan 21 11:21:25 crc kubenswrapper[4881]: I0121 11:21:25.256697 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bb8f8b9c9-cwqc2" event={"ID":"a93d8b8c-58bf-47f4-b880-2fa5fb8fdf6f","Type":"ContainerDied","Data":"3c2fbfa61210bf849e04651287e22b6c198d4c12ea96a2312edd5e9f291c7879"} Jan 21 11:21:25 crc kubenswrapper[4881]: I0121 11:21:25.267841 4881 generic.go:334] "Generic (PLEG): container finished" podID="f51f915e-f553-4130-a16b-9e6af68a5a15" containerID="d69bb72f9eba472479b5b854a392dd678dcf12a1e5ab100dffbf954eda114573" exitCode=0 Jan 21 11:21:25 crc kubenswrapper[4881]: I0121 11:21:25.267927 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-796dd99876-gb7nt" 
event={"ID":"f51f915e-f553-4130-a16b-9e6af68a5a15","Type":"ContainerDied","Data":"d69bb72f9eba472479b5b854a392dd678dcf12a1e5ab100dffbf954eda114573"} Jan 21 11:21:25 crc kubenswrapper[4881]: I0121 11:21:25.275240 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"ae53e440-5bd5-41e3-8339-57eebaef03d2","Type":"ContainerStarted","Data":"9855bd2a68e38d3c6ab91049f119372b94a26d3db8127fad0eb05eb3d93712a7"} Jan 21 11:21:25 crc kubenswrapper[4881]: I0121 11:21:25.280924 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 21 11:21:25 crc kubenswrapper[4881]: I0121 11:21:25.309845 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bb8f8b9c9-cwqc2" Jan 21 11:21:25 crc kubenswrapper[4881]: I0121 11:21:25.411411 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 21 11:21:25 crc kubenswrapper[4881]: E0121 11:21:25.412653 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a93d8b8c-58bf-47f4-b880-2fa5fb8fdf6f" containerName="dnsmasq-dns" Jan 21 11:21:25 crc kubenswrapper[4881]: I0121 11:21:25.412676 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="a93d8b8c-58bf-47f4-b880-2fa5fb8fdf6f" containerName="dnsmasq-dns" Jan 21 11:21:25 crc kubenswrapper[4881]: E0121 11:21:25.412714 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a93d8b8c-58bf-47f4-b880-2fa5fb8fdf6f" containerName="init" Jan 21 11:21:25 crc kubenswrapper[4881]: I0121 11:21:25.412727 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="a93d8b8c-58bf-47f4-b880-2fa5fb8fdf6f" containerName="init" Jan 21 11:21:25 crc kubenswrapper[4881]: I0121 11:21:25.412953 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="a93d8b8c-58bf-47f4-b880-2fa5fb8fdf6f" containerName="dnsmasq-dns" Jan 21 11:21:25 crc kubenswrapper[4881]: I0121 11:21:25.414421 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 21 11:21:25 crc kubenswrapper[4881]: I0121 11:21:25.414772 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9gpsz\" (UniqueName: \"kubernetes.io/projected/a93d8b8c-58bf-47f4-b880-2fa5fb8fdf6f-kube-api-access-9gpsz\") pod \"a93d8b8c-58bf-47f4-b880-2fa5fb8fdf6f\" (UID: \"a93d8b8c-58bf-47f4-b880-2fa5fb8fdf6f\") " Jan 21 11:21:25 crc kubenswrapper[4881]: I0121 11:21:25.414907 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a93d8b8c-58bf-47f4-b880-2fa5fb8fdf6f-ovsdbserver-sb\") pod \"a93d8b8c-58bf-47f4-b880-2fa5fb8fdf6f\" (UID: \"a93d8b8c-58bf-47f4-b880-2fa5fb8fdf6f\") " Jan 21 11:21:25 crc kubenswrapper[4881]: I0121 11:21:25.415125 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a93d8b8c-58bf-47f4-b880-2fa5fb8fdf6f-dns-swift-storage-0\") pod \"a93d8b8c-58bf-47f4-b880-2fa5fb8fdf6f\" (UID: \"a93d8b8c-58bf-47f4-b880-2fa5fb8fdf6f\") " Jan 21 11:21:25 crc kubenswrapper[4881]: I0121 11:21:25.415161 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a93d8b8c-58bf-47f4-b880-2fa5fb8fdf6f-ovsdbserver-nb\") pod \"a93d8b8c-58bf-47f4-b880-2fa5fb8fdf6f\" (UID: \"a93d8b8c-58bf-47f4-b880-2fa5fb8fdf6f\") " Jan 21 11:21:25 crc kubenswrapper[4881]: I0121 11:21:25.415199 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a93d8b8c-58bf-47f4-b880-2fa5fb8fdf6f-dns-svc\") pod \"a93d8b8c-58bf-47f4-b880-2fa5fb8fdf6f\" (UID: \"a93d8b8c-58bf-47f4-b880-2fa5fb8fdf6f\") " Jan 21 11:21:25 crc kubenswrapper[4881]: I0121 11:21:25.415280 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a93d8b8c-58bf-47f4-b880-2fa5fb8fdf6f-config\") pod \"a93d8b8c-58bf-47f4-b880-2fa5fb8fdf6f\" (UID: \"a93d8b8c-58bf-47f4-b880-2fa5fb8fdf6f\") " Jan 21 11:21:25 crc kubenswrapper[4881]: I0121 11:21:25.422317 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 21 11:21:25 crc kubenswrapper[4881]: I0121 11:21:25.422877 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 21 11:21:25 crc kubenswrapper[4881]: I0121 11:21:25.423185 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Jan 21 11:21:25 crc kubenswrapper[4881]: I0121 11:21:25.423316 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-f8snw" Jan 21 11:21:25 crc kubenswrapper[4881]: I0121 11:21:25.436745 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a93d8b8c-58bf-47f4-b880-2fa5fb8fdf6f-kube-api-access-9gpsz" (OuterVolumeSpecName: "kube-api-access-9gpsz") pod "a93d8b8c-58bf-47f4-b880-2fa5fb8fdf6f" (UID: "a93d8b8c-58bf-47f4-b880-2fa5fb8fdf6f"). InnerVolumeSpecName "kube-api-access-9gpsz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:21:25 crc kubenswrapper[4881]: I0121 11:21:25.520151 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8ac2a63-dc28-4695-a77c-e82af400f4b9-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"b8ac2a63-dc28-4695-a77c-e82af400f4b9\") " pod="openstack/glance-default-external-api-0" Jan 21 11:21:25 crc kubenswrapper[4881]: I0121 11:21:25.520230 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b8ac2a63-dc28-4695-a77c-e82af400f4b9-logs\") pod \"glance-default-external-api-0\" (UID: \"b8ac2a63-dc28-4695-a77c-e82af400f4b9\") " pod="openstack/glance-default-external-api-0" Jan 21 11:21:25 crc kubenswrapper[4881]: I0121 11:21:25.520273 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-external-api-0\" (UID: \"b8ac2a63-dc28-4695-a77c-e82af400f4b9\") " pod="openstack/glance-default-external-api-0" Jan 21 11:21:25 crc kubenswrapper[4881]: I0121 11:21:25.520289 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b8ac2a63-dc28-4695-a77c-e82af400f4b9-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"b8ac2a63-dc28-4695-a77c-e82af400f4b9\") " pod="openstack/glance-default-external-api-0" Jan 21 11:21:25 crc kubenswrapper[4881]: I0121 11:21:25.520308 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b8ac2a63-dc28-4695-a77c-e82af400f4b9-scripts\") pod \"glance-default-external-api-0\" (UID: \"b8ac2a63-dc28-4695-a77c-e82af400f4b9\") " pod="openstack/glance-default-external-api-0" Jan 21 11:21:25 crc kubenswrapper[4881]: I0121 11:21:25.520356 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b8ac2a63-dc28-4695-a77c-e82af400f4b9-config-data\") pod \"glance-default-external-api-0\" (UID: \"b8ac2a63-dc28-4695-a77c-e82af400f4b9\") " pod="openstack/glance-default-external-api-0" Jan 21 11:21:25 crc kubenswrapper[4881]: I0121 11:21:25.520418 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m7p9c\" (UniqueName: \"kubernetes.io/projected/b8ac2a63-dc28-4695-a77c-e82af400f4b9-kube-api-access-m7p9c\") pod \"glance-default-external-api-0\" (UID: \"b8ac2a63-dc28-4695-a77c-e82af400f4b9\") " pod="openstack/glance-default-external-api-0" Jan 21 11:21:25 crc kubenswrapper[4881]: I0121 11:21:25.520537 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9gpsz\" (UniqueName: \"kubernetes.io/projected/a93d8b8c-58bf-47f4-b880-2fa5fb8fdf6f-kube-api-access-9gpsz\") on node \"crc\" DevicePath \"\"" Jan 21 11:21:25 crc kubenswrapper[4881]: I0121 11:21:25.527013 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a93d8b8c-58bf-47f4-b880-2fa5fb8fdf6f-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "a93d8b8c-58bf-47f4-b880-2fa5fb8fdf6f" (UID: "a93d8b8c-58bf-47f4-b880-2fa5fb8fdf6f"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:21:25 crc kubenswrapper[4881]: I0121 11:21:25.535917 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a93d8b8c-58bf-47f4-b880-2fa5fb8fdf6f-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a93d8b8c-58bf-47f4-b880-2fa5fb8fdf6f" (UID: "a93d8b8c-58bf-47f4-b880-2fa5fb8fdf6f"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:21:25 crc kubenswrapper[4881]: I0121 11:21:25.555628 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a93d8b8c-58bf-47f4-b880-2fa5fb8fdf6f-config" (OuterVolumeSpecName: "config") pod "a93d8b8c-58bf-47f4-b880-2fa5fb8fdf6f" (UID: "a93d8b8c-58bf-47f4-b880-2fa5fb8fdf6f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:21:25 crc kubenswrapper[4881]: I0121 11:21:25.574355 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a93d8b8c-58bf-47f4-b880-2fa5fb8fdf6f-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "a93d8b8c-58bf-47f4-b880-2fa5fb8fdf6f" (UID: "a93d8b8c-58bf-47f4-b880-2fa5fb8fdf6f"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:21:25 crc kubenswrapper[4881]: I0121 11:21:25.588993 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a93d8b8c-58bf-47f4-b880-2fa5fb8fdf6f-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "a93d8b8c-58bf-47f4-b880-2fa5fb8fdf6f" (UID: "a93d8b8c-58bf-47f4-b880-2fa5fb8fdf6f"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:21:25 crc kubenswrapper[4881]: I0121 11:21:25.623177 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b8ac2a63-dc28-4695-a77c-e82af400f4b9-config-data\") pod \"glance-default-external-api-0\" (UID: \"b8ac2a63-dc28-4695-a77c-e82af400f4b9\") " pod="openstack/glance-default-external-api-0" Jan 21 11:21:25 crc kubenswrapper[4881]: I0121 11:21:25.623277 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m7p9c\" (UniqueName: \"kubernetes.io/projected/b8ac2a63-dc28-4695-a77c-e82af400f4b9-kube-api-access-m7p9c\") pod \"glance-default-external-api-0\" (UID: \"b8ac2a63-dc28-4695-a77c-e82af400f4b9\") " pod="openstack/glance-default-external-api-0" Jan 21 11:21:25 crc kubenswrapper[4881]: I0121 11:21:25.623336 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8ac2a63-dc28-4695-a77c-e82af400f4b9-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"b8ac2a63-dc28-4695-a77c-e82af400f4b9\") " pod="openstack/glance-default-external-api-0" Jan 21 11:21:25 crc kubenswrapper[4881]: I0121 11:21:25.623374 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b8ac2a63-dc28-4695-a77c-e82af400f4b9-logs\") pod \"glance-default-external-api-0\" (UID: \"b8ac2a63-dc28-4695-a77c-e82af400f4b9\") " pod="openstack/glance-default-external-api-0" Jan 21 11:21:25 crc kubenswrapper[4881]: I0121 11:21:25.623412 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-external-api-0\" (UID: \"b8ac2a63-dc28-4695-a77c-e82af400f4b9\") " pod="openstack/glance-default-external-api-0" Jan 21 11:21:25 crc kubenswrapper[4881]: I0121 11:21:25.623430 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b8ac2a63-dc28-4695-a77c-e82af400f4b9-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"b8ac2a63-dc28-4695-a77c-e82af400f4b9\") " pod="openstack/glance-default-external-api-0" Jan 21 11:21:25 crc kubenswrapper[4881]: I0121 11:21:25.623446 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b8ac2a63-dc28-4695-a77c-e82af400f4b9-scripts\") pod \"glance-default-external-api-0\" (UID: \"b8ac2a63-dc28-4695-a77c-e82af400f4b9\") " pod="openstack/glance-default-external-api-0" Jan 21 11:21:25 crc kubenswrapper[4881]: I0121 11:21:25.623509 4881 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a93d8b8c-58bf-47f4-b880-2fa5fb8fdf6f-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 21 11:21:25 crc kubenswrapper[4881]: I0121 11:21:25.623519 4881 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a93d8b8c-58bf-47f4-b880-2fa5fb8fdf6f-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 21 11:21:25 crc kubenswrapper[4881]: I0121 11:21:25.623528 4881 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a93d8b8c-58bf-47f4-b880-2fa5fb8fdf6f-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 21 11:21:25 crc kubenswrapper[4881]: I0121 11:21:25.623538 4881 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a93d8b8c-58bf-47f4-b880-2fa5fb8fdf6f-config\") on node \"crc\" DevicePath \"\"" Jan 21 11:21:25 crc kubenswrapper[4881]: I0121 11:21:25.623547 4881 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a93d8b8c-58bf-47f4-b880-2fa5fb8fdf6f-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 21 11:21:25 crc kubenswrapper[4881]: I0121 11:21:25.624086 4881 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-external-api-0\" (UID: \"b8ac2a63-dc28-4695-a77c-e82af400f4b9\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/glance-default-external-api-0" Jan 21 11:21:25 crc kubenswrapper[4881]: I0121 11:21:25.624918 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b8ac2a63-dc28-4695-a77c-e82af400f4b9-logs\") pod \"glance-default-external-api-0\" (UID: \"b8ac2a63-dc28-4695-a77c-e82af400f4b9\") " pod="openstack/glance-default-external-api-0" Jan 21 11:21:25 crc kubenswrapper[4881]: I0121 11:21:25.625220 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b8ac2a63-dc28-4695-a77c-e82af400f4b9-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"b8ac2a63-dc28-4695-a77c-e82af400f4b9\") " pod="openstack/glance-default-external-api-0" Jan 21 11:21:25 crc kubenswrapper[4881]: I0121 11:21:25.631126 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/dnsmasq-dns-c849cf559-fjllv"] Jan 21 11:21:25 crc kubenswrapper[4881]: I0121 11:21:25.634192 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8ac2a63-dc28-4695-a77c-e82af400f4b9-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"b8ac2a63-dc28-4695-a77c-e82af400f4b9\") " pod="openstack/glance-default-external-api-0" Jan 21 11:21:25 crc kubenswrapper[4881]: I0121 11:21:25.640939 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b8ac2a63-dc28-4695-a77c-e82af400f4b9-config-data\") pod \"glance-default-external-api-0\" (UID: \"b8ac2a63-dc28-4695-a77c-e82af400f4b9\") " pod="openstack/glance-default-external-api-0" Jan 21 11:21:25 crc kubenswrapper[4881]: I0121 11:21:25.656453 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b8ac2a63-dc28-4695-a77c-e82af400f4b9-scripts\") pod \"glance-default-external-api-0\" (UID: \"b8ac2a63-dc28-4695-a77c-e82af400f4b9\") " pod="openstack/glance-default-external-api-0" Jan 21 11:21:25 crc kubenswrapper[4881]: I0121 11:21:25.674686 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-external-api-0\" (UID: \"b8ac2a63-dc28-4695-a77c-e82af400f4b9\") " pod="openstack/glance-default-external-api-0" Jan 21 11:21:25 crc kubenswrapper[4881]: I0121 11:21:25.694448 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m7p9c\" (UniqueName: \"kubernetes.io/projected/b8ac2a63-dc28-4695-a77c-e82af400f4b9-kube-api-access-m7p9c\") pod \"glance-default-external-api-0\" (UID: \"b8ac2a63-dc28-4695-a77c-e82af400f4b9\") " pod="openstack/glance-default-external-api-0" Jan 21 11:21:25 crc kubenswrapper[4881]: W0121 11:21:25.725422 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4a89a9d0_4859_41cb_896d_f1a91e854d7b.slice/crio-7d5f5a0fecb347a3031d8e9d038b27129aa5ce2b2e49dd11bb8a2bb4f461cdbf WatchSource:0}: Error finding container 7d5f5a0fecb347a3031d8e9d038b27129aa5ce2b2e49dd11bb8a2bb4f461cdbf: Status 404 returned error can't find the container with id 7d5f5a0fecb347a3031d8e9d038b27129aa5ce2b2e49dd11bb8a2bb4f461cdbf Jan 21 11:21:25 crc kubenswrapper[4881]: I0121 11:21:25.756848 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 21 11:21:26 crc kubenswrapper[4881]: I0121 11:21:26.000340 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 21 11:21:26 crc kubenswrapper[4881]: I0121 11:21:26.002702 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 21 11:21:26 crc kubenswrapper[4881]: I0121 11:21:26.006770 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 21 11:21:26 crc kubenswrapper[4881]: I0121 11:21:26.026478 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 21 11:21:26 crc kubenswrapper[4881]: I0121 11:21:26.048924 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 21 11:21:26 crc kubenswrapper[4881]: I0121 11:21:26.142365 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b6314462-e91a-47e2-8c76-27d6045e4fd5-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"b6314462-e91a-47e2-8c76-27d6045e4fd5\") " pod="openstack/glance-default-internal-api-0" Jan 21 11:21:26 crc kubenswrapper[4881]: I0121 11:21:26.143805 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b6314462-e91a-47e2-8c76-27d6045e4fd5-logs\") pod \"glance-default-internal-api-0\" (UID: \"b6314462-e91a-47e2-8c76-27d6045e4fd5\") " pod="openstack/glance-default-internal-api-0" Jan 21 11:21:26 crc kubenswrapper[4881]: I0121 11:21:26.143958 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6314462-e91a-47e2-8c76-27d6045e4fd5-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"b6314462-e91a-47e2-8c76-27d6045e4fd5\") " pod="openstack/glance-default-internal-api-0" Jan 21 11:21:26 crc kubenswrapper[4881]: I0121 11:21:26.144092 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"b6314462-e91a-47e2-8c76-27d6045e4fd5\") " pod="openstack/glance-default-internal-api-0" Jan 21 11:21:26 crc kubenswrapper[4881]: I0121 11:21:26.144206 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b6314462-e91a-47e2-8c76-27d6045e4fd5-scripts\") pod \"glance-default-internal-api-0\" (UID: \"b6314462-e91a-47e2-8c76-27d6045e4fd5\") " pod="openstack/glance-default-internal-api-0" Jan 21 11:21:26 crc kubenswrapper[4881]: I0121 11:21:26.144450 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c96n8\" (UniqueName: \"kubernetes.io/projected/b6314462-e91a-47e2-8c76-27d6045e4fd5-kube-api-access-c96n8\") pod \"glance-default-internal-api-0\" (UID: \"b6314462-e91a-47e2-8c76-27d6045e4fd5\") " pod="openstack/glance-default-internal-api-0" Jan 21 11:21:26 crc kubenswrapper[4881]: I0121 11:21:26.144559 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b6314462-e91a-47e2-8c76-27d6045e4fd5-config-data\") pod \"glance-default-internal-api-0\" (UID: \"b6314462-e91a-47e2-8c76-27d6045e4fd5\") " pod="openstack/glance-default-internal-api-0" Jan 21 11:21:26 crc kubenswrapper[4881]: I0121 11:21:26.253440 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/b6314462-e91a-47e2-8c76-27d6045e4fd5-logs\") pod \"glance-default-internal-api-0\" (UID: \"b6314462-e91a-47e2-8c76-27d6045e4fd5\") " pod="openstack/glance-default-internal-api-0" Jan 21 11:21:26 crc kubenswrapper[4881]: I0121 11:21:26.253629 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6314462-e91a-47e2-8c76-27d6045e4fd5-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"b6314462-e91a-47e2-8c76-27d6045e4fd5\") " pod="openstack/glance-default-internal-api-0" Jan 21 11:21:26 crc kubenswrapper[4881]: I0121 11:21:26.253724 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"b6314462-e91a-47e2-8c76-27d6045e4fd5\") " pod="openstack/glance-default-internal-api-0" Jan 21 11:21:26 crc kubenswrapper[4881]: I0121 11:21:26.253824 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b6314462-e91a-47e2-8c76-27d6045e4fd5-scripts\") pod \"glance-default-internal-api-0\" (UID: \"b6314462-e91a-47e2-8c76-27d6045e4fd5\") " pod="openstack/glance-default-internal-api-0" Jan 21 11:21:26 crc kubenswrapper[4881]: I0121 11:21:26.253996 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c96n8\" (UniqueName: \"kubernetes.io/projected/b6314462-e91a-47e2-8c76-27d6045e4fd5-kube-api-access-c96n8\") pod \"glance-default-internal-api-0\" (UID: \"b6314462-e91a-47e2-8c76-27d6045e4fd5\") " pod="openstack/glance-default-internal-api-0" Jan 21 11:21:26 crc kubenswrapper[4881]: I0121 11:21:26.254108 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b6314462-e91a-47e2-8c76-27d6045e4fd5-config-data\") pod \"glance-default-internal-api-0\" (UID: \"b6314462-e91a-47e2-8c76-27d6045e4fd5\") " pod="openstack/glance-default-internal-api-0" Jan 21 11:21:26 crc kubenswrapper[4881]: I0121 11:21:26.254267 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b6314462-e91a-47e2-8c76-27d6045e4fd5-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"b6314462-e91a-47e2-8c76-27d6045e4fd5\") " pod="openstack/glance-default-internal-api-0" Jan 21 11:21:26 crc kubenswrapper[4881]: I0121 11:21:26.254687 4881 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"b6314462-e91a-47e2-8c76-27d6045e4fd5\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/glance-default-internal-api-0" Jan 21 11:21:26 crc kubenswrapper[4881]: I0121 11:21:26.255044 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b6314462-e91a-47e2-8c76-27d6045e4fd5-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"b6314462-e91a-47e2-8c76-27d6045e4fd5\") " pod="openstack/glance-default-internal-api-0" Jan 21 11:21:26 crc kubenswrapper[4881]: I0121 11:21:26.261184 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b6314462-e91a-47e2-8c76-27d6045e4fd5-logs\") pod 
\"glance-default-internal-api-0\" (UID: \"b6314462-e91a-47e2-8c76-27d6045e4fd5\") " pod="openstack/glance-default-internal-api-0" Jan 21 11:21:26 crc kubenswrapper[4881]: I0121 11:21:26.268903 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b6314462-e91a-47e2-8c76-27d6045e4fd5-config-data\") pod \"glance-default-internal-api-0\" (UID: \"b6314462-e91a-47e2-8c76-27d6045e4fd5\") " pod="openstack/glance-default-internal-api-0" Jan 21 11:21:26 crc kubenswrapper[4881]: I0121 11:21:26.283755 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b6314462-e91a-47e2-8c76-27d6045e4fd5-scripts\") pod \"glance-default-internal-api-0\" (UID: \"b6314462-e91a-47e2-8c76-27d6045e4fd5\") " pod="openstack/glance-default-internal-api-0" Jan 21 11:21:26 crc kubenswrapper[4881]: I0121 11:21:26.284568 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6314462-e91a-47e2-8c76-27d6045e4fd5-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"b6314462-e91a-47e2-8c76-27d6045e4fd5\") " pod="openstack/glance-default-internal-api-0" Jan 21 11:21:26 crc kubenswrapper[4881]: I0121 11:21:26.294265 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c96n8\" (UniqueName: \"kubernetes.io/projected/b6314462-e91a-47e2-8c76-27d6045e4fd5-kube-api-access-c96n8\") pod \"glance-default-internal-api-0\" (UID: \"b6314462-e91a-47e2-8c76-27d6045e4fd5\") " pod="openstack/glance-default-internal-api-0" Jan 21 11:21:26 crc kubenswrapper[4881]: I0121 11:21:26.322570 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"b0b6ce2c-5ae8-496f-9374-d3069bc3d149","Type":"ContainerStarted","Data":"ac59164ee2feec470301d1408d5d445d2eb400ca2673ab9a5db218be6b952cfd"} Jan 21 11:21:26 crc kubenswrapper[4881]: I0121 11:21:26.331953 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"b6314462-e91a-47e2-8c76-27d6045e4fd5\") " pod="openstack/glance-default-internal-api-0" Jan 21 11:21:26 crc kubenswrapper[4881]: I0121 11:21:26.347518 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bb8f8b9c9-cwqc2" event={"ID":"a93d8b8c-58bf-47f4-b880-2fa5fb8fdf6f","Type":"ContainerDied","Data":"89b83a73d98285f1ad5dfbcb846ef4a7cc6a0027b6f7fbb5d7b8bc7a7b615ee8"} Jan 21 11:21:26 crc kubenswrapper[4881]: I0121 11:21:26.347625 4881 scope.go:117] "RemoveContainer" containerID="3c2fbfa61210bf849e04651287e22b6c198d4c12ea96a2312edd5e9f291c7879" Jan 21 11:21:26 crc kubenswrapper[4881]: I0121 11:21:26.348032 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-bb8f8b9c9-cwqc2" Jan 21 11:21:26 crc kubenswrapper[4881]: I0121 11:21:26.361124 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-c849cf559-fjllv" event={"ID":"4a89a9d0-4859-41cb-896d-f1a91e854d7b","Type":"ContainerStarted","Data":"7d5f5a0fecb347a3031d8e9d038b27129aa5ce2b2e49dd11bb8a2bb4f461cdbf"} Jan 21 11:21:26 crc kubenswrapper[4881]: I0121 11:21:26.430669 4881 scope.go:117] "RemoveContainer" containerID="ab477504b6174b1df2cba532dc993abe653a33a827965c0d26c8c5abcd35974f" Jan 21 11:21:26 crc kubenswrapper[4881]: I0121 11:21:26.495263 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 21 11:21:26 crc kubenswrapper[4881]: I0121 11:21:26.520756 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-bb8f8b9c9-cwqc2"] Jan 21 11:21:26 crc kubenswrapper[4881]: I0121 11:21:26.544322 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-bb8f8b9c9-cwqc2"] Jan 21 11:21:26 crc kubenswrapper[4881]: I0121 11:21:26.555887 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 21 11:21:26 crc kubenswrapper[4881]: W0121 11:21:26.577277 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb8ac2a63_dc28_4695_a77c_e82af400f4b9.slice/crio-878fece72c860d769e8ee83651c9b53fe9a4d183577d57ce467d36c383c7548b WatchSource:0}: Error finding container 878fece72c860d769e8ee83651c9b53fe9a4d183577d57ce467d36c383c7548b: Status 404 returned error can't find the container with id 878fece72c860d769e8ee83651c9b53fe9a4d183577d57ce467d36c383c7548b Jan 21 11:21:27 crc kubenswrapper[4881]: I0121 11:21:27.196522 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 21 11:21:27 crc kubenswrapper[4881]: W0121 11:21:27.243577 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb6314462_e91a_47e2_8c76_27d6045e4fd5.slice/crio-1b45bab75ec786490c31073f33d23492c5ef48b13f2754d5543dd412a6220954 WatchSource:0}: Error finding container 1b45bab75ec786490c31073f33d23492c5ef48b13f2754d5543dd412a6220954: Status 404 returned error can't find the container with id 1b45bab75ec786490c31073f33d23492c5ef48b13f2754d5543dd412a6220954 Jan 21 11:21:27 crc kubenswrapper[4881]: I0121 11:21:27.334369 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a93d8b8c-58bf-47f4-b880-2fa5fb8fdf6f" path="/var/lib/kubelet/pods/a93d8b8c-58bf-47f4-b880-2fa5fb8fdf6f/volumes" Jan 21 11:21:27 crc kubenswrapper[4881]: I0121 11:21:27.389436 4881 generic.go:334] "Generic (PLEG): container finished" podID="4a89a9d0-4859-41cb-896d-f1a91e854d7b" containerID="e80fa73fd255dd2a9302a2ee6b75f7b4cf8767d543328dc915247c69166c0c25" exitCode=0 Jan 21 11:21:27 crc kubenswrapper[4881]: I0121 11:21:27.389539 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-c849cf559-fjllv" event={"ID":"4a89a9d0-4859-41cb-896d-f1a91e854d7b","Type":"ContainerDied","Data":"e80fa73fd255dd2a9302a2ee6b75f7b4cf8767d543328dc915247c69166c0c25"} Jan 21 11:21:27 crc kubenswrapper[4881]: I0121 11:21:27.406335 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
event={"ID":"b6314462-e91a-47e2-8c76-27d6045e4fd5","Type":"ContainerStarted","Data":"1b45bab75ec786490c31073f33d23492c5ef48b13f2754d5543dd412a6220954"} Jan 21 11:21:27 crc kubenswrapper[4881]: I0121 11:21:27.430486 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"ae53e440-5bd5-41e3-8339-57eebaef03d2","Type":"ContainerStarted","Data":"8ac2bfc0ffb0d46d00cee4b790d5413d7436c14a608c0d6d0e310a86377c6f2b"} Jan 21 11:21:27 crc kubenswrapper[4881]: I0121 11:21:27.431724 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Jan 21 11:21:27 crc kubenswrapper[4881]: I0121 11:21:27.435976 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"b8ac2a63-dc28-4695-a77c-e82af400f4b9","Type":"ContainerStarted","Data":"878fece72c860d769e8ee83651c9b53fe9a4d183577d57ce467d36c383c7548b"} Jan 21 11:21:27 crc kubenswrapper[4881]: I0121 11:21:27.466672 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=5.466640165 podStartE2EDuration="5.466640165s" podCreationTimestamp="2026-01-21 11:21:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 11:21:27.455632181 +0000 UTC m=+1474.715588650" watchObservedRunningTime="2026-01-21 11:21:27.466640165 +0000 UTC m=+1474.726596634" Jan 21 11:21:28 crc kubenswrapper[4881]: I0121 11:21:28.485739 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-c849cf559-fjllv" event={"ID":"4a89a9d0-4859-41cb-896d-f1a91e854d7b","Type":"ContainerStarted","Data":"520ec1cfcb7fa94d0057499475a0936b202225668f29de849ba69f710c127ead"} Jan 21 11:21:28 crc kubenswrapper[4881]: I0121 11:21:28.493054 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-c849cf559-fjllv" Jan 21 11:21:28 crc kubenswrapper[4881]: I0121 11:21:28.507406 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"b6314462-e91a-47e2-8c76-27d6045e4fd5","Type":"ContainerStarted","Data":"434985857d701704a9774fd3d3052cd59ddf177800adf78d0e559512e010a9cb"} Jan 21 11:21:28 crc kubenswrapper[4881]: I0121 11:21:28.525045 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"b8ac2a63-dc28-4695-a77c-e82af400f4b9","Type":"ContainerStarted","Data":"243391ce37046a98efbd843bc1e6f28fda173bffe3ce05b733b63f613224e766"} Jan 21 11:21:28 crc kubenswrapper[4881]: I0121 11:21:28.528554 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-c849cf559-fjllv" podStartSLOduration=4.528528266 podStartE2EDuration="4.528528266s" podCreationTimestamp="2026-01-21 11:21:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 11:21:28.519767218 +0000 UTC m=+1475.779723697" watchObservedRunningTime="2026-01-21 11:21:28.528528266 +0000 UTC m=+1475.788484735" Jan 21 11:21:29 crc kubenswrapper[4881]: I0121 11:21:29.146437 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 21 11:21:29 crc kubenswrapper[4881]: I0121 11:21:29.224312 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 21 11:21:29 crc kubenswrapper[4881]: I0121 
11:21:29.497882 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-decision-engine-0" Jan 21 11:21:29 crc kubenswrapper[4881]: I0121 11:21:29.498241 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Jan 21 11:21:29 crc kubenswrapper[4881]: I0121 11:21:29.498253 4881 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/watcher-decision-engine-0" Jan 21 11:21:29 crc kubenswrapper[4881]: I0121 11:21:29.499185 4881 scope.go:117] "RemoveContainer" containerID="5ccae223d32b8d30267f4d247c29e77d1942427c122a26bc75e9b00b89fa3bc0" Jan 21 11:21:29 crc kubenswrapper[4881]: E0121 11:21:29.499599 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 20s restarting failed container=watcher-decision-engine pod=watcher-decision-engine-0_openstack(ee4e7116-c2cd-43d5-af6b-9f30b5053e0e)\"" pod="openstack/watcher-decision-engine-0" podUID="ee4e7116-c2cd-43d5-af6b-9f30b5053e0e" Jan 21 11:21:29 crc kubenswrapper[4881]: I0121 11:21:29.546776 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 21 11:21:29 crc kubenswrapper[4881]: I0121 11:21:29.551394 4881 generic.go:334] "Generic (PLEG): container finished" podID="f51f915e-f553-4130-a16b-9e6af68a5a15" containerID="3a9e17862c5ff2f64ddcb7cb3eb9d73424fbbcd62c695e9a6f00fe4f1a20f86b" exitCode=0 Jan 21 11:21:29 crc kubenswrapper[4881]: I0121 11:21:29.551527 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-796dd99876-gb7nt" event={"ID":"f51f915e-f553-4130-a16b-9e6af68a5a15","Type":"ContainerDied","Data":"3a9e17862c5ff2f64ddcb7cb3eb9d73424fbbcd62c695e9a6f00fe4f1a20f86b"} Jan 21 11:21:29 crc kubenswrapper[4881]: I0121 11:21:29.555676 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"b6314462-e91a-47e2-8c76-27d6045e4fd5","Type":"ContainerStarted","Data":"ab4aa207fe182d319e9733e2ab8db20e08b52da274839b54590750e50f1e0aa2"} Jan 21 11:21:29 crc kubenswrapper[4881]: I0121 11:21:29.563526 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"b8ac2a63-dc28-4695-a77c-e82af400f4b9","Type":"ContainerStarted","Data":"c7d5411076516ac1067feb6fa2326814efce9d04ded39d593fa3f53c461d73dc"} Jan 21 11:21:29 crc kubenswrapper[4881]: I0121 11:21:29.563688 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="86045f5e-defd-4c68-a582-c51c9c26e5c7" containerName="cinder-scheduler" containerID="cri-o://f179a38b8e729fdba1d50653424c543fe9ebf0803e8ecb14e1eaa90d4edb87bf" gracePeriod=30 Jan 21 11:21:29 crc kubenswrapper[4881]: I0121 11:21:29.563985 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="86045f5e-defd-4c68-a582-c51c9c26e5c7" containerName="probe" containerID="cri-o://d776082f3aaee81cc1f230c5cf4abdaa34059f0c862a2df0c93b102e79762938" gracePeriod=30 Jan 21 11:21:29 crc kubenswrapper[4881]: I0121 11:21:29.595927 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=5.595902334 podStartE2EDuration="5.595902334s" podCreationTimestamp="2026-01-21 11:21:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 11:21:29.581478245 +0000 UTC m=+1476.841434724" watchObservedRunningTime="2026-01-21 11:21:29.595902334 +0000 UTC m=+1476.855858803" Jan 21 11:21:29 crc kubenswrapper[4881]: I0121 11:21:29.604375 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=5.604355345 podStartE2EDuration="5.604355345s" podCreationTimestamp="2026-01-21 11:21:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 11:21:29.603645817 +0000 UTC m=+1476.863602296" watchObservedRunningTime="2026-01-21 11:21:29.604355345 +0000 UTC m=+1476.864311814" Jan 21 11:21:29 crc kubenswrapper[4881]: I0121 11:21:29.654639 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 21 11:21:30 crc kubenswrapper[4881]: I0121 11:21:30.293950 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-796dd99876-gb7nt" Jan 21 11:21:30 crc kubenswrapper[4881]: I0121 11:21:30.461593 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/f51f915e-f553-4130-a16b-9e6af68a5a15-httpd-config\") pod \"f51f915e-f553-4130-a16b-9e6af68a5a15\" (UID: \"f51f915e-f553-4130-a16b-9e6af68a5a15\") " Jan 21 11:21:30 crc kubenswrapper[4881]: I0121 11:21:30.461710 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f51f915e-f553-4130-a16b-9e6af68a5a15-combined-ca-bundle\") pod \"f51f915e-f553-4130-a16b-9e6af68a5a15\" (UID: \"f51f915e-f553-4130-a16b-9e6af68a5a15\") " Jan 21 11:21:30 crc kubenswrapper[4881]: I0121 11:21:30.461841 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/f51f915e-f553-4130-a16b-9e6af68a5a15-config\") pod \"f51f915e-f553-4130-a16b-9e6af68a5a15\" (UID: \"f51f915e-f553-4130-a16b-9e6af68a5a15\") " Jan 21 11:21:30 crc kubenswrapper[4881]: I0121 11:21:30.461909 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lgwv9\" (UniqueName: \"kubernetes.io/projected/f51f915e-f553-4130-a16b-9e6af68a5a15-kube-api-access-lgwv9\") pod \"f51f915e-f553-4130-a16b-9e6af68a5a15\" (UID: \"f51f915e-f553-4130-a16b-9e6af68a5a15\") " Jan 21 11:21:30 crc kubenswrapper[4881]: I0121 11:21:30.461943 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/f51f915e-f553-4130-a16b-9e6af68a5a15-ovndb-tls-certs\") pod \"f51f915e-f553-4130-a16b-9e6af68a5a15\" (UID: \"f51f915e-f553-4130-a16b-9e6af68a5a15\") " Jan 21 11:21:30 crc kubenswrapper[4881]: I0121 11:21:30.478426 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f51f915e-f553-4130-a16b-9e6af68a5a15-kube-api-access-lgwv9" (OuterVolumeSpecName: "kube-api-access-lgwv9") pod "f51f915e-f553-4130-a16b-9e6af68a5a15" (UID: "f51f915e-f553-4130-a16b-9e6af68a5a15"). InnerVolumeSpecName "kube-api-access-lgwv9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:21:30 crc kubenswrapper[4881]: I0121 11:21:30.493041 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f51f915e-f553-4130-a16b-9e6af68a5a15-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "f51f915e-f553-4130-a16b-9e6af68a5a15" (UID: "f51f915e-f553-4130-a16b-9e6af68a5a15"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:21:30 crc kubenswrapper[4881]: I0121 11:21:30.529692 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f51f915e-f553-4130-a16b-9e6af68a5a15-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f51f915e-f553-4130-a16b-9e6af68a5a15" (UID: "f51f915e-f553-4130-a16b-9e6af68a5a15"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:21:30 crc kubenswrapper[4881]: I0121 11:21:30.536366 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f51f915e-f553-4130-a16b-9e6af68a5a15-config" (OuterVolumeSpecName: "config") pod "f51f915e-f553-4130-a16b-9e6af68a5a15" (UID: "f51f915e-f553-4130-a16b-9e6af68a5a15"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:21:30 crc kubenswrapper[4881]: I0121 11:21:30.564504 4881 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/f51f915e-f553-4130-a16b-9e6af68a5a15-config\") on node \"crc\" DevicePath \"\"" Jan 21 11:21:30 crc kubenswrapper[4881]: I0121 11:21:30.564556 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lgwv9\" (UniqueName: \"kubernetes.io/projected/f51f915e-f553-4130-a16b-9e6af68a5a15-kube-api-access-lgwv9\") on node \"crc\" DevicePath \"\"" Jan 21 11:21:30 crc kubenswrapper[4881]: I0121 11:21:30.564573 4881 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/f51f915e-f553-4130-a16b-9e6af68a5a15-httpd-config\") on node \"crc\" DevicePath \"\"" Jan 21 11:21:30 crc kubenswrapper[4881]: I0121 11:21:30.564586 4881 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f51f915e-f553-4130-a16b-9e6af68a5a15-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 11:21:30 crc kubenswrapper[4881]: I0121 11:21:30.575991 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f51f915e-f553-4130-a16b-9e6af68a5a15-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "f51f915e-f553-4130-a16b-9e6af68a5a15" (UID: "f51f915e-f553-4130-a16b-9e6af68a5a15"). InnerVolumeSpecName "ovndb-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:21:30 crc kubenswrapper[4881]: I0121 11:21:30.582608 4881 generic.go:334] "Generic (PLEG): container finished" podID="86045f5e-defd-4c68-a582-c51c9c26e5c7" containerID="d776082f3aaee81cc1f230c5cf4abdaa34059f0c862a2df0c93b102e79762938" exitCode=0 Jan 21 11:21:30 crc kubenswrapper[4881]: I0121 11:21:30.582677 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"86045f5e-defd-4c68-a582-c51c9c26e5c7","Type":"ContainerDied","Data":"d776082f3aaee81cc1f230c5cf4abdaa34059f0c862a2df0c93b102e79762938"} Jan 21 11:21:30 crc kubenswrapper[4881]: I0121 11:21:30.585451 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-796dd99876-gb7nt" event={"ID":"f51f915e-f553-4130-a16b-9e6af68a5a15","Type":"ContainerDied","Data":"2e4be17fa483a6184f2eda034f9fc33ec23230c3292d5bb3f6f80cd50bfff6e9"} Jan 21 11:21:30 crc kubenswrapper[4881]: I0121 11:21:30.585494 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-796dd99876-gb7nt" Jan 21 11:21:30 crc kubenswrapper[4881]: I0121 11:21:30.585524 4881 scope.go:117] "RemoveContainer" containerID="d69bb72f9eba472479b5b854a392dd678dcf12a1e5ab100dffbf954eda114573" Jan 21 11:21:30 crc kubenswrapper[4881]: I0121 11:21:30.585953 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="b8ac2a63-dc28-4695-a77c-e82af400f4b9" containerName="glance-log" containerID="cri-o://243391ce37046a98efbd843bc1e6f28fda173bffe3ce05b733b63f613224e766" gracePeriod=30 Jan 21 11:21:30 crc kubenswrapper[4881]: I0121 11:21:30.586293 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="b8ac2a63-dc28-4695-a77c-e82af400f4b9" containerName="glance-httpd" containerID="cri-o://c7d5411076516ac1067feb6fa2326814efce9d04ded39d593fa3f53c461d73dc" gracePeriod=30 Jan 21 11:21:30 crc kubenswrapper[4881]: I0121 11:21:30.669494 4881 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/f51f915e-f553-4130-a16b-9e6af68a5a15-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 11:21:30 crc kubenswrapper[4881]: I0121 11:21:30.693631 4881 scope.go:117] "RemoveContainer" containerID="3a9e17862c5ff2f64ddcb7cb3eb9d73424fbbcd62c695e9a6f00fe4f1a20f86b" Jan 21 11:21:30 crc kubenswrapper[4881]: I0121 11:21:30.700099 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-796dd99876-gb7nt"] Jan 21 11:21:30 crc kubenswrapper[4881]: I0121 11:21:30.709237 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-796dd99876-gb7nt"] Jan 21 11:21:31 crc kubenswrapper[4881]: I0121 11:21:31.326864 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f51f915e-f553-4130-a16b-9e6af68a5a15" path="/var/lib/kubelet/pods/f51f915e-f553-4130-a16b-9e6af68a5a15/volumes" Jan 21 11:21:31 crc kubenswrapper[4881]: I0121 11:21:31.619247 4881 generic.go:334] "Generic (PLEG): container finished" podID="2f516fb6-322b-4eee-9d4d-a10176959bbb" containerID="c37cb0dabfc7bd198de45353bd7d592c9381160bf0f186350e93353fe2ea4470" exitCode=137 Jan 21 11:21:31 crc kubenswrapper[4881]: I0121 11:21:31.619358 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-69c96776fd-k2z88" 
event={"ID":"2f516fb6-322b-4eee-9d4d-a10176959bbb","Type":"ContainerDied","Data":"c37cb0dabfc7bd198de45353bd7d592c9381160bf0f186350e93353fe2ea4470"} Jan 21 11:21:31 crc kubenswrapper[4881]: I0121 11:21:31.640113 4881 generic.go:334] "Generic (PLEG): container finished" podID="b8ac2a63-dc28-4695-a77c-e82af400f4b9" containerID="c7d5411076516ac1067feb6fa2326814efce9d04ded39d593fa3f53c461d73dc" exitCode=0 Jan 21 11:21:31 crc kubenswrapper[4881]: I0121 11:21:31.640158 4881 generic.go:334] "Generic (PLEG): container finished" podID="b8ac2a63-dc28-4695-a77c-e82af400f4b9" containerID="243391ce37046a98efbd843bc1e6f28fda173bffe3ce05b733b63f613224e766" exitCode=143 Jan 21 11:21:31 crc kubenswrapper[4881]: I0121 11:21:31.640400 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="b6314462-e91a-47e2-8c76-27d6045e4fd5" containerName="glance-log" containerID="cri-o://434985857d701704a9774fd3d3052cd59ddf177800adf78d0e559512e010a9cb" gracePeriod=30 Jan 21 11:21:31 crc kubenswrapper[4881]: I0121 11:21:31.640603 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"b8ac2a63-dc28-4695-a77c-e82af400f4b9","Type":"ContainerDied","Data":"c7d5411076516ac1067feb6fa2326814efce9d04ded39d593fa3f53c461d73dc"} Jan 21 11:21:31 crc kubenswrapper[4881]: I0121 11:21:31.640673 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"b8ac2a63-dc28-4695-a77c-e82af400f4b9","Type":"ContainerDied","Data":"243391ce37046a98efbd843bc1e6f28fda173bffe3ce05b733b63f613224e766"} Jan 21 11:21:31 crc kubenswrapper[4881]: I0121 11:21:31.640684 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"b8ac2a63-dc28-4695-a77c-e82af400f4b9","Type":"ContainerDied","Data":"878fece72c860d769e8ee83651c9b53fe9a4d183577d57ce467d36c383c7548b"} Jan 21 11:21:31 crc kubenswrapper[4881]: I0121 11:21:31.640701 4881 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="878fece72c860d769e8ee83651c9b53fe9a4d183577d57ce467d36c383c7548b" Jan 21 11:21:31 crc kubenswrapper[4881]: I0121 11:21:31.640722 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="b6314462-e91a-47e2-8c76-27d6045e4fd5" containerName="glance-httpd" containerID="cri-o://ab4aa207fe182d319e9733e2ab8db20e08b52da274839b54590750e50f1e0aa2" gracePeriod=30 Jan 21 11:21:31 crc kubenswrapper[4881]: I0121 11:21:31.657349 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 21 11:21:31 crc kubenswrapper[4881]: I0121 11:21:31.825229 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b8ac2a63-dc28-4695-a77c-e82af400f4b9-logs\") pod \"b8ac2a63-dc28-4695-a77c-e82af400f4b9\" (UID: \"b8ac2a63-dc28-4695-a77c-e82af400f4b9\") " Jan 21 11:21:31 crc kubenswrapper[4881]: I0121 11:21:31.825794 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b8ac2a63-dc28-4695-a77c-e82af400f4b9-logs" (OuterVolumeSpecName: "logs") pod "b8ac2a63-dc28-4695-a77c-e82af400f4b9" (UID: "b8ac2a63-dc28-4695-a77c-e82af400f4b9"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 11:21:31 crc kubenswrapper[4881]: I0121 11:21:31.825957 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b8ac2a63-dc28-4695-a77c-e82af400f4b9-httpd-run\") pod \"b8ac2a63-dc28-4695-a77c-e82af400f4b9\" (UID: \"b8ac2a63-dc28-4695-a77c-e82af400f4b9\") " Jan 21 11:21:31 crc kubenswrapper[4881]: I0121 11:21:31.826175 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b8ac2a63-dc28-4695-a77c-e82af400f4b9-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "b8ac2a63-dc28-4695-a77c-e82af400f4b9" (UID: "b8ac2a63-dc28-4695-a77c-e82af400f4b9"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 11:21:31 crc kubenswrapper[4881]: I0121 11:21:31.826314 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b8ac2a63-dc28-4695-a77c-e82af400f4b9-config-data\") pod \"b8ac2a63-dc28-4695-a77c-e82af400f4b9\" (UID: \"b8ac2a63-dc28-4695-a77c-e82af400f4b9\") " Jan 21 11:21:31 crc kubenswrapper[4881]: I0121 11:21:31.827068 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b8ac2a63-dc28-4695-a77c-e82af400f4b9-scripts\") pod \"b8ac2a63-dc28-4695-a77c-e82af400f4b9\" (UID: \"b8ac2a63-dc28-4695-a77c-e82af400f4b9\") " Jan 21 11:21:31 crc kubenswrapper[4881]: I0121 11:21:31.827187 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8ac2a63-dc28-4695-a77c-e82af400f4b9-combined-ca-bundle\") pod \"b8ac2a63-dc28-4695-a77c-e82af400f4b9\" (UID: \"b8ac2a63-dc28-4695-a77c-e82af400f4b9\") " Jan 21 11:21:31 crc kubenswrapper[4881]: I0121 11:21:31.827229 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m7p9c\" (UniqueName: \"kubernetes.io/projected/b8ac2a63-dc28-4695-a77c-e82af400f4b9-kube-api-access-m7p9c\") pod \"b8ac2a63-dc28-4695-a77c-e82af400f4b9\" (UID: \"b8ac2a63-dc28-4695-a77c-e82af400f4b9\") " Jan 21 11:21:31 crc kubenswrapper[4881]: I0121 11:21:31.827416 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"b8ac2a63-dc28-4695-a77c-e82af400f4b9\" (UID: \"b8ac2a63-dc28-4695-a77c-e82af400f4b9\") " Jan 21 11:21:31 crc kubenswrapper[4881]: I0121 11:21:31.828309 4881 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b8ac2a63-dc28-4695-a77c-e82af400f4b9-logs\") on node \"crc\" DevicePath \"\"" Jan 21 11:21:31 crc kubenswrapper[4881]: I0121 11:21:31.828329 4881 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b8ac2a63-dc28-4695-a77c-e82af400f4b9-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 21 11:21:31 crc kubenswrapper[4881]: I0121 11:21:31.837425 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b8ac2a63-dc28-4695-a77c-e82af400f4b9-kube-api-access-m7p9c" (OuterVolumeSpecName: "kube-api-access-m7p9c") pod "b8ac2a63-dc28-4695-a77c-e82af400f4b9" (UID: "b8ac2a63-dc28-4695-a77c-e82af400f4b9"). InnerVolumeSpecName "kube-api-access-m7p9c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:21:31 crc kubenswrapper[4881]: I0121 11:21:31.837963 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b8ac2a63-dc28-4695-a77c-e82af400f4b9-scripts" (OuterVolumeSpecName: "scripts") pod "b8ac2a63-dc28-4695-a77c-e82af400f4b9" (UID: "b8ac2a63-dc28-4695-a77c-e82af400f4b9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:21:31 crc kubenswrapper[4881]: I0121 11:21:31.838140 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage12-crc" (OuterVolumeSpecName: "glance") pod "b8ac2a63-dc28-4695-a77c-e82af400f4b9" (UID: "b8ac2a63-dc28-4695-a77c-e82af400f4b9"). InnerVolumeSpecName "local-storage12-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 21 11:21:31 crc kubenswrapper[4881]: I0121 11:21:31.867338 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b8ac2a63-dc28-4695-a77c-e82af400f4b9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b8ac2a63-dc28-4695-a77c-e82af400f4b9" (UID: "b8ac2a63-dc28-4695-a77c-e82af400f4b9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:21:31 crc kubenswrapper[4881]: I0121 11:21:31.897030 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b8ac2a63-dc28-4695-a77c-e82af400f4b9-config-data" (OuterVolumeSpecName: "config-data") pod "b8ac2a63-dc28-4695-a77c-e82af400f4b9" (UID: "b8ac2a63-dc28-4695-a77c-e82af400f4b9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:21:31 crc kubenswrapper[4881]: I0121 11:21:31.934924 4881 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b8ac2a63-dc28-4695-a77c-e82af400f4b9-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 11:21:31 crc kubenswrapper[4881]: I0121 11:21:31.934964 4881 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b8ac2a63-dc28-4695-a77c-e82af400f4b9-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 11:21:31 crc kubenswrapper[4881]: I0121 11:21:31.934976 4881 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8ac2a63-dc28-4695-a77c-e82af400f4b9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 11:21:31 crc kubenswrapper[4881]: I0121 11:21:31.934988 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m7p9c\" (UniqueName: \"kubernetes.io/projected/b8ac2a63-dc28-4695-a77c-e82af400f4b9-kube-api-access-m7p9c\") on node \"crc\" DevicePath \"\"" Jan 21 11:21:31 crc kubenswrapper[4881]: I0121 11:21:31.935019 4881 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" " Jan 21 11:21:31 crc kubenswrapper[4881]: I0121 11:21:31.974991 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-69c96776fd-k2z88" Jan 21 11:21:31 crc kubenswrapper[4881]: I0121 11:21:31.979531 4881 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage12-crc" (UniqueName: "kubernetes.io/local-volume/local-storage12-crc") on node "crc" Jan 21 11:21:32 crc kubenswrapper[4881]: I0121 11:21:32.037336 4881 reconciler_common.go:293] "Volume detached for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" DevicePath \"\"" Jan 21 11:21:32 crc kubenswrapper[4881]: I0121 11:21:32.140491 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2f516fb6-322b-4eee-9d4d-a10176959bbb-config-data\") pod \"2f516fb6-322b-4eee-9d4d-a10176959bbb\" (UID: \"2f516fb6-322b-4eee-9d4d-a10176959bbb\") " Jan 21 11:21:32 crc kubenswrapper[4881]: I0121 11:21:32.140552 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f516fb6-322b-4eee-9d4d-a10176959bbb-combined-ca-bundle\") pod \"2f516fb6-322b-4eee-9d4d-a10176959bbb\" (UID: \"2f516fb6-322b-4eee-9d4d-a10176959bbb\") " Jan 21 11:21:32 crc kubenswrapper[4881]: I0121 11:21:32.140838 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2f516fb6-322b-4eee-9d4d-a10176959bbb-logs\") pod \"2f516fb6-322b-4eee-9d4d-a10176959bbb\" (UID: \"2f516fb6-322b-4eee-9d4d-a10176959bbb\") " Jan 21 11:21:32 crc kubenswrapper[4881]: I0121 11:21:32.141017 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/2f516fb6-322b-4eee-9d4d-a10176959bbb-horizon-tls-certs\") pod \"2f516fb6-322b-4eee-9d4d-a10176959bbb\" (UID: \"2f516fb6-322b-4eee-9d4d-a10176959bbb\") " Jan 21 11:21:32 crc kubenswrapper[4881]: I0121 11:21:32.141096 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2lfrt\" (UniqueName: \"kubernetes.io/projected/2f516fb6-322b-4eee-9d4d-a10176959bbb-kube-api-access-2lfrt\") pod \"2f516fb6-322b-4eee-9d4d-a10176959bbb\" (UID: \"2f516fb6-322b-4eee-9d4d-a10176959bbb\") " Jan 21 11:21:32 crc kubenswrapper[4881]: I0121 11:21:32.141225 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2f516fb6-322b-4eee-9d4d-a10176959bbb-scripts\") pod \"2f516fb6-322b-4eee-9d4d-a10176959bbb\" (UID: \"2f516fb6-322b-4eee-9d4d-a10176959bbb\") " Jan 21 11:21:32 crc kubenswrapper[4881]: I0121 11:21:32.141314 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/2f516fb6-322b-4eee-9d4d-a10176959bbb-horizon-secret-key\") pod \"2f516fb6-322b-4eee-9d4d-a10176959bbb\" (UID: \"2f516fb6-322b-4eee-9d4d-a10176959bbb\") " Jan 21 11:21:32 crc kubenswrapper[4881]: I0121 11:21:32.142470 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2f516fb6-322b-4eee-9d4d-a10176959bbb-logs" (OuterVolumeSpecName: "logs") pod "2f516fb6-322b-4eee-9d4d-a10176959bbb" (UID: "2f516fb6-322b-4eee-9d4d-a10176959bbb"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 11:21:32 crc kubenswrapper[4881]: I0121 11:21:32.148925 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f516fb6-322b-4eee-9d4d-a10176959bbb-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "2f516fb6-322b-4eee-9d4d-a10176959bbb" (UID: "2f516fb6-322b-4eee-9d4d-a10176959bbb"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:21:32 crc kubenswrapper[4881]: I0121 11:21:32.149138 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2f516fb6-322b-4eee-9d4d-a10176959bbb-kube-api-access-2lfrt" (OuterVolumeSpecName: "kube-api-access-2lfrt") pod "2f516fb6-322b-4eee-9d4d-a10176959bbb" (UID: "2f516fb6-322b-4eee-9d4d-a10176959bbb"). InnerVolumeSpecName "kube-api-access-2lfrt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:21:32 crc kubenswrapper[4881]: I0121 11:21:32.175638 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2f516fb6-322b-4eee-9d4d-a10176959bbb-scripts" (OuterVolumeSpecName: "scripts") pod "2f516fb6-322b-4eee-9d4d-a10176959bbb" (UID: "2f516fb6-322b-4eee-9d4d-a10176959bbb"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:21:32 crc kubenswrapper[4881]: I0121 11:21:32.179971 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2f516fb6-322b-4eee-9d4d-a10176959bbb-config-data" (OuterVolumeSpecName: "config-data") pod "2f516fb6-322b-4eee-9d4d-a10176959bbb" (UID: "2f516fb6-322b-4eee-9d4d-a10176959bbb"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:21:32 crc kubenswrapper[4881]: I0121 11:21:32.201296 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f516fb6-322b-4eee-9d4d-a10176959bbb-horizon-tls-certs" (OuterVolumeSpecName: "horizon-tls-certs") pod "2f516fb6-322b-4eee-9d4d-a10176959bbb" (UID: "2f516fb6-322b-4eee-9d4d-a10176959bbb"). InnerVolumeSpecName "horizon-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:21:32 crc kubenswrapper[4881]: I0121 11:21:32.205556 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f516fb6-322b-4eee-9d4d-a10176959bbb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2f516fb6-322b-4eee-9d4d-a10176959bbb" (UID: "2f516fb6-322b-4eee-9d4d-a10176959bbb"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:21:32 crc kubenswrapper[4881]: I0121 11:21:32.249472 4881 reconciler_common.go:293] "Volume detached for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/2f516fb6-322b-4eee-9d4d-a10176959bbb-horizon-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 11:21:32 crc kubenswrapper[4881]: I0121 11:21:32.249511 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2lfrt\" (UniqueName: \"kubernetes.io/projected/2f516fb6-322b-4eee-9d4d-a10176959bbb-kube-api-access-2lfrt\") on node \"crc\" DevicePath \"\"" Jan 21 11:21:32 crc kubenswrapper[4881]: I0121 11:21:32.249524 4881 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2f516fb6-322b-4eee-9d4d-a10176959bbb-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 11:21:32 crc kubenswrapper[4881]: I0121 11:21:32.249533 4881 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/2f516fb6-322b-4eee-9d4d-a10176959bbb-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 21 11:21:32 crc kubenswrapper[4881]: I0121 11:21:32.249542 4881 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2f516fb6-322b-4eee-9d4d-a10176959bbb-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 11:21:32 crc kubenswrapper[4881]: I0121 11:21:32.249551 4881 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f516fb6-322b-4eee-9d4d-a10176959bbb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 11:21:32 crc kubenswrapper[4881]: I0121 11:21:32.249558 4881 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2f516fb6-322b-4eee-9d4d-a10176959bbb-logs\") on node \"crc\" DevicePath \"\"" Jan 21 11:21:32 crc kubenswrapper[4881]: I0121 11:21:32.374853 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 21 11:21:32 crc kubenswrapper[4881]: I0121 11:21:32.454162 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b6314462-e91a-47e2-8c76-27d6045e4fd5-scripts\") pod \"b6314462-e91a-47e2-8c76-27d6045e4fd5\" (UID: \"b6314462-e91a-47e2-8c76-27d6045e4fd5\") " Jan 21 11:21:32 crc kubenswrapper[4881]: I0121 11:21:32.454911 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b6314462-e91a-47e2-8c76-27d6045e4fd5-config-data\") pod \"b6314462-e91a-47e2-8c76-27d6045e4fd5\" (UID: \"b6314462-e91a-47e2-8c76-27d6045e4fd5\") " Jan 21 11:21:32 crc kubenswrapper[4881]: I0121 11:21:32.454999 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"b6314462-e91a-47e2-8c76-27d6045e4fd5\" (UID: \"b6314462-e91a-47e2-8c76-27d6045e4fd5\") " Jan 21 11:21:32 crc kubenswrapper[4881]: I0121 11:21:32.455059 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6314462-e91a-47e2-8c76-27d6045e4fd5-combined-ca-bundle\") pod \"b6314462-e91a-47e2-8c76-27d6045e4fd5\" (UID: \"b6314462-e91a-47e2-8c76-27d6045e4fd5\") " Jan 21 11:21:32 crc kubenswrapper[4881]: I0121 11:21:32.455227 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b6314462-e91a-47e2-8c76-27d6045e4fd5-httpd-run\") pod \"b6314462-e91a-47e2-8c76-27d6045e4fd5\" (UID: \"b6314462-e91a-47e2-8c76-27d6045e4fd5\") " Jan 21 11:21:32 crc kubenswrapper[4881]: I0121 11:21:32.455292 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c96n8\" (UniqueName: \"kubernetes.io/projected/b6314462-e91a-47e2-8c76-27d6045e4fd5-kube-api-access-c96n8\") pod \"b6314462-e91a-47e2-8c76-27d6045e4fd5\" (UID: \"b6314462-e91a-47e2-8c76-27d6045e4fd5\") " Jan 21 11:21:32 crc kubenswrapper[4881]: I0121 11:21:32.455372 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b6314462-e91a-47e2-8c76-27d6045e4fd5-logs\") pod \"b6314462-e91a-47e2-8c76-27d6045e4fd5\" (UID: \"b6314462-e91a-47e2-8c76-27d6045e4fd5\") " Jan 21 11:21:32 crc kubenswrapper[4881]: I0121 11:21:32.456993 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b6314462-e91a-47e2-8c76-27d6045e4fd5-logs" (OuterVolumeSpecName: "logs") pod "b6314462-e91a-47e2-8c76-27d6045e4fd5" (UID: "b6314462-e91a-47e2-8c76-27d6045e4fd5"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 11:21:32 crc kubenswrapper[4881]: I0121 11:21:32.457207 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b6314462-e91a-47e2-8c76-27d6045e4fd5-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "b6314462-e91a-47e2-8c76-27d6045e4fd5" (UID: "b6314462-e91a-47e2-8c76-27d6045e4fd5"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 11:21:32 crc kubenswrapper[4881]: I0121 11:21:32.460985 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6314462-e91a-47e2-8c76-27d6045e4fd5-kube-api-access-c96n8" (OuterVolumeSpecName: "kube-api-access-c96n8") pod "b6314462-e91a-47e2-8c76-27d6045e4fd5" (UID: "b6314462-e91a-47e2-8c76-27d6045e4fd5"). InnerVolumeSpecName "kube-api-access-c96n8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:21:32 crc kubenswrapper[4881]: I0121 11:21:32.464224 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage11-crc" (OuterVolumeSpecName: "glance") pod "b6314462-e91a-47e2-8c76-27d6045e4fd5" (UID: "b6314462-e91a-47e2-8c76-27d6045e4fd5"). InnerVolumeSpecName "local-storage11-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 21 11:21:32 crc kubenswrapper[4881]: I0121 11:21:32.467197 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6314462-e91a-47e2-8c76-27d6045e4fd5-scripts" (OuterVolumeSpecName: "scripts") pod "b6314462-e91a-47e2-8c76-27d6045e4fd5" (UID: "b6314462-e91a-47e2-8c76-27d6045e4fd5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:21:32 crc kubenswrapper[4881]: I0121 11:21:32.487775 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6314462-e91a-47e2-8c76-27d6045e4fd5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b6314462-e91a-47e2-8c76-27d6045e4fd5" (UID: "b6314462-e91a-47e2-8c76-27d6045e4fd5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:21:32 crc kubenswrapper[4881]: I0121 11:21:32.544013 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6314462-e91a-47e2-8c76-27d6045e4fd5-config-data" (OuterVolumeSpecName: "config-data") pod "b6314462-e91a-47e2-8c76-27d6045e4fd5" (UID: "b6314462-e91a-47e2-8c76-27d6045e4fd5"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:21:32 crc kubenswrapper[4881]: I0121 11:21:32.558115 4881 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" " Jan 21 11:21:32 crc kubenswrapper[4881]: I0121 11:21:32.558162 4881 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6314462-e91a-47e2-8c76-27d6045e4fd5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 11:21:32 crc kubenswrapper[4881]: I0121 11:21:32.558211 4881 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b6314462-e91a-47e2-8c76-27d6045e4fd5-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 21 11:21:32 crc kubenswrapper[4881]: I0121 11:21:32.558233 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c96n8\" (UniqueName: \"kubernetes.io/projected/b6314462-e91a-47e2-8c76-27d6045e4fd5-kube-api-access-c96n8\") on node \"crc\" DevicePath \"\"" Jan 21 11:21:32 crc kubenswrapper[4881]: I0121 11:21:32.558248 4881 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b6314462-e91a-47e2-8c76-27d6045e4fd5-logs\") on node \"crc\" DevicePath \"\"" Jan 21 11:21:32 crc kubenswrapper[4881]: I0121 11:21:32.558258 4881 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b6314462-e91a-47e2-8c76-27d6045e4fd5-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 11:21:32 crc kubenswrapper[4881]: I0121 11:21:32.558298 4881 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b6314462-e91a-47e2-8c76-27d6045e4fd5-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 11:21:32 crc kubenswrapper[4881]: I0121 11:21:32.588093 4881 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage11-crc" (UniqueName: "kubernetes.io/local-volume/local-storage11-crc") on node "crc" Jan 21 11:21:32 crc kubenswrapper[4881]: I0121 11:21:32.654236 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-69c96776fd-k2z88" event={"ID":"2f516fb6-322b-4eee-9d4d-a10176959bbb","Type":"ContainerDied","Data":"1c1c6837f2242fbd603bbb32074adc55de9c3121097b94c5088bc30db69ba787"} Jan 21 11:21:32 crc kubenswrapper[4881]: I0121 11:21:32.654333 4881 scope.go:117] "RemoveContainer" containerID="20e9501e200b98586a1c9e7d12e2adf41d01903bd2505ab83e7f8f0fc5404f52" Jan 21 11:21:32 crc kubenswrapper[4881]: I0121 11:21:32.654268 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-69c96776fd-k2z88" Jan 21 11:21:32 crc kubenswrapper[4881]: I0121 11:21:32.659247 4881 generic.go:334] "Generic (PLEG): container finished" podID="b6314462-e91a-47e2-8c76-27d6045e4fd5" containerID="ab4aa207fe182d319e9733e2ab8db20e08b52da274839b54590750e50f1e0aa2" exitCode=0 Jan 21 11:21:32 crc kubenswrapper[4881]: I0121 11:21:32.659284 4881 generic.go:334] "Generic (PLEG): container finished" podID="b6314462-e91a-47e2-8c76-27d6045e4fd5" containerID="434985857d701704a9774fd3d3052cd59ddf177800adf78d0e559512e010a9cb" exitCode=143 Jan 21 11:21:32 crc kubenswrapper[4881]: I0121 11:21:32.659357 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 21 11:21:32 crc kubenswrapper[4881]: I0121 11:21:32.659922 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 21 11:21:32 crc kubenswrapper[4881]: I0121 11:21:32.660044 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"b6314462-e91a-47e2-8c76-27d6045e4fd5","Type":"ContainerDied","Data":"ab4aa207fe182d319e9733e2ab8db20e08b52da274839b54590750e50f1e0aa2"} Jan 21 11:21:32 crc kubenswrapper[4881]: I0121 11:21:32.660072 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"b6314462-e91a-47e2-8c76-27d6045e4fd5","Type":"ContainerDied","Data":"434985857d701704a9774fd3d3052cd59ddf177800adf78d0e559512e010a9cb"} Jan 21 11:21:32 crc kubenswrapper[4881]: I0121 11:21:32.660086 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"b6314462-e91a-47e2-8c76-27d6045e4fd5","Type":"ContainerDied","Data":"1b45bab75ec786490c31073f33d23492c5ef48b13f2754d5543dd412a6220954"} Jan 21 11:21:32 crc kubenswrapper[4881]: I0121 11:21:32.660898 4881 reconciler_common.go:293] "Volume detached for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" DevicePath \"\"" Jan 21 11:21:32 crc kubenswrapper[4881]: I0121 11:21:32.736391 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-69c96776fd-k2z88"] Jan 21 11:21:32 crc kubenswrapper[4881]: I0121 11:21:32.773873 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-69c96776fd-k2z88"] Jan 21 11:21:32 crc kubenswrapper[4881]: I0121 11:21:32.824297 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 21 11:21:32 crc kubenswrapper[4881]: I0121 11:21:32.848644 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 21 11:21:32 crc kubenswrapper[4881]: I0121 11:21:32.862270 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 21 11:21:32 crc kubenswrapper[4881]: I0121 11:21:32.878838 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 21 11:21:32 crc kubenswrapper[4881]: E0121 11:21:32.879447 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f516fb6-322b-4eee-9d4d-a10176959bbb" containerName="horizon-log" Jan 21 11:21:32 crc kubenswrapper[4881]: I0121 11:21:32.879470 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f516fb6-322b-4eee-9d4d-a10176959bbb" containerName="horizon-log" Jan 21 11:21:32 crc kubenswrapper[4881]: E0121 11:21:32.879487 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f51f915e-f553-4130-a16b-9e6af68a5a15" containerName="neutron-api" Jan 21 11:21:32 crc kubenswrapper[4881]: I0121 11:21:32.879495 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="f51f915e-f553-4130-a16b-9e6af68a5a15" containerName="neutron-api" Jan 21 11:21:32 crc kubenswrapper[4881]: E0121 11:21:32.879504 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f516fb6-322b-4eee-9d4d-a10176959bbb" containerName="horizon" Jan 21 11:21:32 crc kubenswrapper[4881]: I0121 11:21:32.879512 4881 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="2f516fb6-322b-4eee-9d4d-a10176959bbb" containerName="horizon" Jan 21 11:21:32 crc kubenswrapper[4881]: E0121 11:21:32.879531 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8ac2a63-dc28-4695-a77c-e82af400f4b9" containerName="glance-log" Jan 21 11:21:32 crc kubenswrapper[4881]: I0121 11:21:32.879538 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8ac2a63-dc28-4695-a77c-e82af400f4b9" containerName="glance-log" Jan 21 11:21:32 crc kubenswrapper[4881]: E0121 11:21:32.879549 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f51f915e-f553-4130-a16b-9e6af68a5a15" containerName="neutron-httpd" Jan 21 11:21:32 crc kubenswrapper[4881]: I0121 11:21:32.879556 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="f51f915e-f553-4130-a16b-9e6af68a5a15" containerName="neutron-httpd" Jan 21 11:21:32 crc kubenswrapper[4881]: E0121 11:21:32.879571 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b6314462-e91a-47e2-8c76-27d6045e4fd5" containerName="glance-httpd" Jan 21 11:21:32 crc kubenswrapper[4881]: I0121 11:21:32.879577 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6314462-e91a-47e2-8c76-27d6045e4fd5" containerName="glance-httpd" Jan 21 11:21:32 crc kubenswrapper[4881]: E0121 11:21:32.879606 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8ac2a63-dc28-4695-a77c-e82af400f4b9" containerName="glance-httpd" Jan 21 11:21:32 crc kubenswrapper[4881]: I0121 11:21:32.879614 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8ac2a63-dc28-4695-a77c-e82af400f4b9" containerName="glance-httpd" Jan 21 11:21:32 crc kubenswrapper[4881]: E0121 11:21:32.879632 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b6314462-e91a-47e2-8c76-27d6045e4fd5" containerName="glance-log" Jan 21 11:21:32 crc kubenswrapper[4881]: I0121 11:21:32.879639 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6314462-e91a-47e2-8c76-27d6045e4fd5" containerName="glance-log" Jan 21 11:21:32 crc kubenswrapper[4881]: I0121 11:21:32.879902 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f516fb6-322b-4eee-9d4d-a10176959bbb" containerName="horizon-log" Jan 21 11:21:32 crc kubenswrapper[4881]: I0121 11:21:32.879922 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f516fb6-322b-4eee-9d4d-a10176959bbb" containerName="horizon" Jan 21 11:21:32 crc kubenswrapper[4881]: I0121 11:21:32.879931 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="f51f915e-f553-4130-a16b-9e6af68a5a15" containerName="neutron-api" Jan 21 11:21:32 crc kubenswrapper[4881]: I0121 11:21:32.879949 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="b8ac2a63-dc28-4695-a77c-e82af400f4b9" containerName="glance-log" Jan 21 11:21:32 crc kubenswrapper[4881]: I0121 11:21:32.879961 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="b6314462-e91a-47e2-8c76-27d6045e4fd5" containerName="glance-log" Jan 21 11:21:32 crc kubenswrapper[4881]: I0121 11:21:32.879974 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="b8ac2a63-dc28-4695-a77c-e82af400f4b9" containerName="glance-httpd" Jan 21 11:21:32 crc kubenswrapper[4881]: I0121 11:21:32.879981 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="b6314462-e91a-47e2-8c76-27d6045e4fd5" containerName="glance-httpd" Jan 21 11:21:32 crc kubenswrapper[4881]: I0121 11:21:32.879994 4881 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="f51f915e-f553-4130-a16b-9e6af68a5a15" containerName="neutron-httpd" Jan 21 11:21:32 crc kubenswrapper[4881]: I0121 11:21:32.881122 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 21 11:21:32 crc kubenswrapper[4881]: I0121 11:21:32.883364 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Jan 21 11:21:32 crc kubenswrapper[4881]: I0121 11:21:32.883969 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 21 11:21:32 crc kubenswrapper[4881]: I0121 11:21:32.884033 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-f8snw" Jan 21 11:21:32 crc kubenswrapper[4881]: I0121 11:21:32.884136 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 21 11:21:32 crc kubenswrapper[4881]: I0121 11:21:32.887103 4881 scope.go:117] "RemoveContainer" containerID="c37cb0dabfc7bd198de45353bd7d592c9381160bf0f186350e93353fe2ea4470" Jan 21 11:21:32 crc kubenswrapper[4881]: I0121 11:21:32.888064 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 21 11:21:32 crc kubenswrapper[4881]: I0121 11:21:32.901216 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 21 11:21:32 crc kubenswrapper[4881]: I0121 11:21:32.920092 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 21 11:21:32 crc kubenswrapper[4881]: I0121 11:21:32.925195 4881 scope.go:117] "RemoveContainer" containerID="ab4aa207fe182d319e9733e2ab8db20e08b52da274839b54590750e50f1e0aa2" Jan 21 11:21:32 crc kubenswrapper[4881]: I0121 11:21:32.940902 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 21 11:21:32 crc kubenswrapper[4881]: I0121 11:21:32.941629 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 21 11:21:32 crc kubenswrapper[4881]: I0121 11:21:32.946132 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 21 11:21:32 crc kubenswrapper[4881]: I0121 11:21:32.946322 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 21 11:21:32 crc kubenswrapper[4881]: I0121 11:21:32.969196 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/86debe8b-5d02-4f2e-a311-6106609aeb1c-scripts\") pod \"glance-default-internal-api-0\" (UID: \"86debe8b-5d02-4f2e-a311-6106609aeb1c\") " pod="openstack/glance-default-internal-api-0" Jan 21 11:21:32 crc kubenswrapper[4881]: I0121 11:21:32.969255 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"86debe8b-5d02-4f2e-a311-6106609aeb1c\") " pod="openstack/glance-default-internal-api-0" Jan 21 11:21:32 crc kubenswrapper[4881]: I0121 11:21:32.969312 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v6fqw\" (UniqueName: \"kubernetes.io/projected/86debe8b-5d02-4f2e-a311-6106609aeb1c-kube-api-access-v6fqw\") pod \"glance-default-internal-api-0\" (UID: \"86debe8b-5d02-4f2e-a311-6106609aeb1c\") " pod="openstack/glance-default-internal-api-0" Jan 21 11:21:32 crc kubenswrapper[4881]: I0121 11:21:32.969345 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/86debe8b-5d02-4f2e-a311-6106609aeb1c-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"86debe8b-5d02-4f2e-a311-6106609aeb1c\") " pod="openstack/glance-default-internal-api-0" Jan 21 11:21:32 crc kubenswrapper[4881]: I0121 11:21:32.969365 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/86debe8b-5d02-4f2e-a311-6106609aeb1c-config-data\") pod \"glance-default-internal-api-0\" (UID: \"86debe8b-5d02-4f2e-a311-6106609aeb1c\") " pod="openstack/glance-default-internal-api-0" Jan 21 11:21:32 crc kubenswrapper[4881]: I0121 11:21:32.969400 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/86debe8b-5d02-4f2e-a311-6106609aeb1c-logs\") pod \"glance-default-internal-api-0\" (UID: \"86debe8b-5d02-4f2e-a311-6106609aeb1c\") " pod="openstack/glance-default-internal-api-0" Jan 21 11:21:32 crc kubenswrapper[4881]: I0121 11:21:32.969425 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/86debe8b-5d02-4f2e-a311-6106609aeb1c-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"86debe8b-5d02-4f2e-a311-6106609aeb1c\") " pod="openstack/glance-default-internal-api-0" Jan 21 11:21:32 crc kubenswrapper[4881]: I0121 11:21:32.969456 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86debe8b-5d02-4f2e-a311-6106609aeb1c-combined-ca-bundle\") pod \"glance-default-internal-api-0\" 
(UID: \"86debe8b-5d02-4f2e-a311-6106609aeb1c\") " pod="openstack/glance-default-internal-api-0" Jan 21 11:21:32 crc kubenswrapper[4881]: I0121 11:21:32.982703 4881 scope.go:117] "RemoveContainer" containerID="434985857d701704a9774fd3d3052cd59ddf177800adf78d0e559512e010a9cb" Jan 21 11:21:33 crc kubenswrapper[4881]: E0121 11:21:33.002855 4881 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2f516fb6_322b_4eee_9d4d_a10176959bbb.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb8ac2a63_dc28_4695_a77c_e82af400f4b9.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb6314462_e91a_47e2_8c76_27d6045e4fd5.slice\": RecentStats: unable to find data in memory cache]" Jan 21 11:21:33 crc kubenswrapper[4881]: I0121 11:21:33.060615 4881 scope.go:117] "RemoveContainer" containerID="ab4aa207fe182d319e9733e2ab8db20e08b52da274839b54590750e50f1e0aa2" Jan 21 11:21:33 crc kubenswrapper[4881]: E0121 11:21:33.064507 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ab4aa207fe182d319e9733e2ab8db20e08b52da274839b54590750e50f1e0aa2\": container with ID starting with ab4aa207fe182d319e9733e2ab8db20e08b52da274839b54590750e50f1e0aa2 not found: ID does not exist" containerID="ab4aa207fe182d319e9733e2ab8db20e08b52da274839b54590750e50f1e0aa2" Jan 21 11:21:33 crc kubenswrapper[4881]: I0121 11:21:33.064547 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ab4aa207fe182d319e9733e2ab8db20e08b52da274839b54590750e50f1e0aa2"} err="failed to get container status \"ab4aa207fe182d319e9733e2ab8db20e08b52da274839b54590750e50f1e0aa2\": rpc error: code = NotFound desc = could not find container \"ab4aa207fe182d319e9733e2ab8db20e08b52da274839b54590750e50f1e0aa2\": container with ID starting with ab4aa207fe182d319e9733e2ab8db20e08b52da274839b54590750e50f1e0aa2 not found: ID does not exist" Jan 21 11:21:33 crc kubenswrapper[4881]: I0121 11:21:33.064572 4881 scope.go:117] "RemoveContainer" containerID="434985857d701704a9774fd3d3052cd59ddf177800adf78d0e559512e010a9cb" Jan 21 11:21:33 crc kubenswrapper[4881]: E0121 11:21:33.065190 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"434985857d701704a9774fd3d3052cd59ddf177800adf78d0e559512e010a9cb\": container with ID starting with 434985857d701704a9774fd3d3052cd59ddf177800adf78d0e559512e010a9cb not found: ID does not exist" containerID="434985857d701704a9774fd3d3052cd59ddf177800adf78d0e559512e010a9cb" Jan 21 11:21:33 crc kubenswrapper[4881]: I0121 11:21:33.065224 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"434985857d701704a9774fd3d3052cd59ddf177800adf78d0e559512e010a9cb"} err="failed to get container status \"434985857d701704a9774fd3d3052cd59ddf177800adf78d0e559512e010a9cb\": rpc error: code = NotFound desc = could not find container \"434985857d701704a9774fd3d3052cd59ddf177800adf78d0e559512e010a9cb\": container with ID starting with 434985857d701704a9774fd3d3052cd59ddf177800adf78d0e559512e010a9cb not found: ID does not exist" Jan 21 11:21:33 crc kubenswrapper[4881]: I0121 11:21:33.065266 4881 scope.go:117] "RemoveContainer" 
containerID="ab4aa207fe182d319e9733e2ab8db20e08b52da274839b54590750e50f1e0aa2" Jan 21 11:21:33 crc kubenswrapper[4881]: I0121 11:21:33.065844 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ab4aa207fe182d319e9733e2ab8db20e08b52da274839b54590750e50f1e0aa2"} err="failed to get container status \"ab4aa207fe182d319e9733e2ab8db20e08b52da274839b54590750e50f1e0aa2\": rpc error: code = NotFound desc = could not find container \"ab4aa207fe182d319e9733e2ab8db20e08b52da274839b54590750e50f1e0aa2\": container with ID starting with ab4aa207fe182d319e9733e2ab8db20e08b52da274839b54590750e50f1e0aa2 not found: ID does not exist" Jan 21 11:21:33 crc kubenswrapper[4881]: I0121 11:21:33.065872 4881 scope.go:117] "RemoveContainer" containerID="434985857d701704a9774fd3d3052cd59ddf177800adf78d0e559512e010a9cb" Jan 21 11:21:33 crc kubenswrapper[4881]: I0121 11:21:33.066368 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"434985857d701704a9774fd3d3052cd59ddf177800adf78d0e559512e010a9cb"} err="failed to get container status \"434985857d701704a9774fd3d3052cd59ddf177800adf78d0e559512e010a9cb\": rpc error: code = NotFound desc = could not find container \"434985857d701704a9774fd3d3052cd59ddf177800adf78d0e559512e010a9cb\": container with ID starting with 434985857d701704a9774fd3d3052cd59ddf177800adf78d0e559512e010a9cb not found: ID does not exist" Jan 21 11:21:33 crc kubenswrapper[4881]: I0121 11:21:33.071840 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a22f004-7d84-4edc-86f7-d58adb131a45-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"5a22f004-7d84-4edc-86f7-d58adb131a45\") " pod="openstack/glance-default-external-api-0" Jan 21 11:21:33 crc kubenswrapper[4881]: I0121 11:21:33.071919 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v6fqw\" (UniqueName: \"kubernetes.io/projected/86debe8b-5d02-4f2e-a311-6106609aeb1c-kube-api-access-v6fqw\") pod \"glance-default-internal-api-0\" (UID: \"86debe8b-5d02-4f2e-a311-6106609aeb1c\") " pod="openstack/glance-default-internal-api-0" Jan 21 11:21:33 crc kubenswrapper[4881]: I0121 11:21:33.071966 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5a22f004-7d84-4edc-86f7-d58adb131a45-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"5a22f004-7d84-4edc-86f7-d58adb131a45\") " pod="openstack/glance-default-external-api-0" Jan 21 11:21:33 crc kubenswrapper[4881]: I0121 11:21:33.071986 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/86debe8b-5d02-4f2e-a311-6106609aeb1c-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"86debe8b-5d02-4f2e-a311-6106609aeb1c\") " pod="openstack/glance-default-internal-api-0" Jan 21 11:21:33 crc kubenswrapper[4881]: I0121 11:21:33.072009 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/86debe8b-5d02-4f2e-a311-6106609aeb1c-config-data\") pod \"glance-default-internal-api-0\" (UID: \"86debe8b-5d02-4f2e-a311-6106609aeb1c\") " pod="openstack/glance-default-internal-api-0" Jan 21 11:21:33 crc kubenswrapper[4881]: I0121 11:21:33.072045 4881 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/86debe8b-5d02-4f2e-a311-6106609aeb1c-logs\") pod \"glance-default-internal-api-0\" (UID: \"86debe8b-5d02-4f2e-a311-6106609aeb1c\") " pod="openstack/glance-default-internal-api-0" Jan 21 11:21:33 crc kubenswrapper[4881]: I0121 11:21:33.072071 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5a22f004-7d84-4edc-86f7-d58adb131a45-logs\") pod \"glance-default-external-api-0\" (UID: \"5a22f004-7d84-4edc-86f7-d58adb131a45\") " pod="openstack/glance-default-external-api-0" Jan 21 11:21:33 crc kubenswrapper[4881]: I0121 11:21:33.072088 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/86debe8b-5d02-4f2e-a311-6106609aeb1c-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"86debe8b-5d02-4f2e-a311-6106609aeb1c\") " pod="openstack/glance-default-internal-api-0" Jan 21 11:21:33 crc kubenswrapper[4881]: I0121 11:21:33.072109 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a22f004-7d84-4edc-86f7-d58adb131a45-config-data\") pod \"glance-default-external-api-0\" (UID: \"5a22f004-7d84-4edc-86f7-d58adb131a45\") " pod="openstack/glance-default-external-api-0" Jan 21 11:21:33 crc kubenswrapper[4881]: I0121 11:21:33.072138 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86debe8b-5d02-4f2e-a311-6106609aeb1c-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"86debe8b-5d02-4f2e-a311-6106609aeb1c\") " pod="openstack/glance-default-internal-api-0" Jan 21 11:21:33 crc kubenswrapper[4881]: I0121 11:21:33.072157 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xh585\" (UniqueName: \"kubernetes.io/projected/5a22f004-7d84-4edc-86f7-d58adb131a45-kube-api-access-xh585\") pod \"glance-default-external-api-0\" (UID: \"5a22f004-7d84-4edc-86f7-d58adb131a45\") " pod="openstack/glance-default-external-api-0" Jan 21 11:21:33 crc kubenswrapper[4881]: I0121 11:21:33.072197 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-external-api-0\" (UID: \"5a22f004-7d84-4edc-86f7-d58adb131a45\") " pod="openstack/glance-default-external-api-0" Jan 21 11:21:33 crc kubenswrapper[4881]: I0121 11:21:33.072264 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/86debe8b-5d02-4f2e-a311-6106609aeb1c-scripts\") pod \"glance-default-internal-api-0\" (UID: \"86debe8b-5d02-4f2e-a311-6106609aeb1c\") " pod="openstack/glance-default-internal-api-0" Jan 21 11:21:33 crc kubenswrapper[4881]: I0121 11:21:33.072292 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5a22f004-7d84-4edc-86f7-d58adb131a45-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"5a22f004-7d84-4edc-86f7-d58adb131a45\") " pod="openstack/glance-default-external-api-0" Jan 21 11:21:33 crc kubenswrapper[4881]: I0121 11:21:33.072311 4881 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5a22f004-7d84-4edc-86f7-d58adb131a45-scripts\") pod \"glance-default-external-api-0\" (UID: \"5a22f004-7d84-4edc-86f7-d58adb131a45\") " pod="openstack/glance-default-external-api-0" Jan 21 11:21:33 crc kubenswrapper[4881]: I0121 11:21:33.072334 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"86debe8b-5d02-4f2e-a311-6106609aeb1c\") " pod="openstack/glance-default-internal-api-0" Jan 21 11:21:33 crc kubenswrapper[4881]: I0121 11:21:33.072626 4881 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"86debe8b-5d02-4f2e-a311-6106609aeb1c\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/glance-default-internal-api-0" Jan 21 11:21:33 crc kubenswrapper[4881]: I0121 11:21:33.073560 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/86debe8b-5d02-4f2e-a311-6106609aeb1c-logs\") pod \"glance-default-internal-api-0\" (UID: \"86debe8b-5d02-4f2e-a311-6106609aeb1c\") " pod="openstack/glance-default-internal-api-0" Jan 21 11:21:33 crc kubenswrapper[4881]: I0121 11:21:33.073654 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/86debe8b-5d02-4f2e-a311-6106609aeb1c-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"86debe8b-5d02-4f2e-a311-6106609aeb1c\") " pod="openstack/glance-default-internal-api-0" Jan 21 11:21:33 crc kubenswrapper[4881]: I0121 11:21:33.078602 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/86debe8b-5d02-4f2e-a311-6106609aeb1c-scripts\") pod \"glance-default-internal-api-0\" (UID: \"86debe8b-5d02-4f2e-a311-6106609aeb1c\") " pod="openstack/glance-default-internal-api-0" Jan 21 11:21:33 crc kubenswrapper[4881]: I0121 11:21:33.081076 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/86debe8b-5d02-4f2e-a311-6106609aeb1c-config-data\") pod \"glance-default-internal-api-0\" (UID: \"86debe8b-5d02-4f2e-a311-6106609aeb1c\") " pod="openstack/glance-default-internal-api-0" Jan 21 11:21:33 crc kubenswrapper[4881]: I0121 11:21:33.090331 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86debe8b-5d02-4f2e-a311-6106609aeb1c-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"86debe8b-5d02-4f2e-a311-6106609aeb1c\") " pod="openstack/glance-default-internal-api-0" Jan 21 11:21:33 crc kubenswrapper[4881]: I0121 11:21:33.091079 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/86debe8b-5d02-4f2e-a311-6106609aeb1c-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"86debe8b-5d02-4f2e-a311-6106609aeb1c\") " pod="openstack/glance-default-internal-api-0" Jan 21 11:21:33 crc kubenswrapper[4881]: I0121 11:21:33.093577 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v6fqw\" (UniqueName: 
\"kubernetes.io/projected/86debe8b-5d02-4f2e-a311-6106609aeb1c-kube-api-access-v6fqw\") pod \"glance-default-internal-api-0\" (UID: \"86debe8b-5d02-4f2e-a311-6106609aeb1c\") " pod="openstack/glance-default-internal-api-0" Jan 21 11:21:33 crc kubenswrapper[4881]: I0121 11:21:33.122055 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"86debe8b-5d02-4f2e-a311-6106609aeb1c\") " pod="openstack/glance-default-internal-api-0" Jan 21 11:21:33 crc kubenswrapper[4881]: I0121 11:21:33.174311 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a22f004-7d84-4edc-86f7-d58adb131a45-config-data\") pod \"glance-default-external-api-0\" (UID: \"5a22f004-7d84-4edc-86f7-d58adb131a45\") " pod="openstack/glance-default-external-api-0" Jan 21 11:21:33 crc kubenswrapper[4881]: I0121 11:21:33.174629 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xh585\" (UniqueName: \"kubernetes.io/projected/5a22f004-7d84-4edc-86f7-d58adb131a45-kube-api-access-xh585\") pod \"glance-default-external-api-0\" (UID: \"5a22f004-7d84-4edc-86f7-d58adb131a45\") " pod="openstack/glance-default-external-api-0" Jan 21 11:21:33 crc kubenswrapper[4881]: I0121 11:21:33.174656 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-external-api-0\" (UID: \"5a22f004-7d84-4edc-86f7-d58adb131a45\") " pod="openstack/glance-default-external-api-0" Jan 21 11:21:33 crc kubenswrapper[4881]: I0121 11:21:33.174739 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5a22f004-7d84-4edc-86f7-d58adb131a45-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"5a22f004-7d84-4edc-86f7-d58adb131a45\") " pod="openstack/glance-default-external-api-0" Jan 21 11:21:33 crc kubenswrapper[4881]: I0121 11:21:33.174756 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5a22f004-7d84-4edc-86f7-d58adb131a45-scripts\") pod \"glance-default-external-api-0\" (UID: \"5a22f004-7d84-4edc-86f7-d58adb131a45\") " pod="openstack/glance-default-external-api-0" Jan 21 11:21:33 crc kubenswrapper[4881]: I0121 11:21:33.174809 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a22f004-7d84-4edc-86f7-d58adb131a45-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"5a22f004-7d84-4edc-86f7-d58adb131a45\") " pod="openstack/glance-default-external-api-0" Jan 21 11:21:33 crc kubenswrapper[4881]: I0121 11:21:33.174847 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5a22f004-7d84-4edc-86f7-d58adb131a45-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"5a22f004-7d84-4edc-86f7-d58adb131a45\") " pod="openstack/glance-default-external-api-0" Jan 21 11:21:33 crc kubenswrapper[4881]: I0121 11:21:33.174893 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5a22f004-7d84-4edc-86f7-d58adb131a45-logs\") pod \"glance-default-external-api-0\" 
(UID: \"5a22f004-7d84-4edc-86f7-d58adb131a45\") " pod="openstack/glance-default-external-api-0" Jan 21 11:21:33 crc kubenswrapper[4881]: I0121 11:21:33.175231 4881 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-external-api-0\" (UID: \"5a22f004-7d84-4edc-86f7-d58adb131a45\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/glance-default-external-api-0" Jan 21 11:21:33 crc kubenswrapper[4881]: I0121 11:21:33.175425 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5a22f004-7d84-4edc-86f7-d58adb131a45-logs\") pod \"glance-default-external-api-0\" (UID: \"5a22f004-7d84-4edc-86f7-d58adb131a45\") " pod="openstack/glance-default-external-api-0" Jan 21 11:21:33 crc kubenswrapper[4881]: I0121 11:21:33.175470 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5a22f004-7d84-4edc-86f7-d58adb131a45-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"5a22f004-7d84-4edc-86f7-d58adb131a45\") " pod="openstack/glance-default-external-api-0" Jan 21 11:21:33 crc kubenswrapper[4881]: I0121 11:21:33.183966 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a22f004-7d84-4edc-86f7-d58adb131a45-config-data\") pod \"glance-default-external-api-0\" (UID: \"5a22f004-7d84-4edc-86f7-d58adb131a45\") " pod="openstack/glance-default-external-api-0" Jan 21 11:21:33 crc kubenswrapper[4881]: I0121 11:21:33.185289 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5a22f004-7d84-4edc-86f7-d58adb131a45-scripts\") pod \"glance-default-external-api-0\" (UID: \"5a22f004-7d84-4edc-86f7-d58adb131a45\") " pod="openstack/glance-default-external-api-0" Jan 21 11:21:33 crc kubenswrapper[4881]: I0121 11:21:33.194642 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5a22f004-7d84-4edc-86f7-d58adb131a45-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"5a22f004-7d84-4edc-86f7-d58adb131a45\") " pod="openstack/glance-default-external-api-0" Jan 21 11:21:33 crc kubenswrapper[4881]: I0121 11:21:33.194761 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a22f004-7d84-4edc-86f7-d58adb131a45-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"5a22f004-7d84-4edc-86f7-d58adb131a45\") " pod="openstack/glance-default-external-api-0" Jan 21 11:21:33 crc kubenswrapper[4881]: I0121 11:21:33.198636 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xh585\" (UniqueName: \"kubernetes.io/projected/5a22f004-7d84-4edc-86f7-d58adb131a45-kube-api-access-xh585\") pod \"glance-default-external-api-0\" (UID: \"5a22f004-7d84-4edc-86f7-d58adb131a45\") " pod="openstack/glance-default-external-api-0" Jan 21 11:21:33 crc kubenswrapper[4881]: I0121 11:21:33.208957 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 21 11:21:33 crc kubenswrapper[4881]: I0121 11:21:33.238280 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-external-api-0\" (UID: \"5a22f004-7d84-4edc-86f7-d58adb131a45\") " pod="openstack/glance-default-external-api-0" Jan 21 11:21:33 crc kubenswrapper[4881]: I0121 11:21:33.301438 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 21 11:21:33 crc kubenswrapper[4881]: I0121 11:21:33.329208 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2f516fb6-322b-4eee-9d4d-a10176959bbb" path="/var/lib/kubelet/pods/2f516fb6-322b-4eee-9d4d-a10176959bbb/volumes" Jan 21 11:21:33 crc kubenswrapper[4881]: I0121 11:21:33.330097 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6314462-e91a-47e2-8c76-27d6045e4fd5" path="/var/lib/kubelet/pods/b6314462-e91a-47e2-8c76-27d6045e4fd5/volumes" Jan 21 11:21:33 crc kubenswrapper[4881]: I0121 11:21:33.331240 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b8ac2a63-dc28-4695-a77c-e82af400f4b9" path="/var/lib/kubelet/pods/b8ac2a63-dc28-4695-a77c-e82af400f4b9/volumes" Jan 21 11:21:33 crc kubenswrapper[4881]: I0121 11:21:33.939567 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 21 11:21:34 crc kubenswrapper[4881]: I0121 11:21:34.108768 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 21 11:21:34 crc kubenswrapper[4881]: W0121 11:21:34.130656 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5a22f004_7d84_4edc_86f7_d58adb131a45.slice/crio-c118bf221673b7075db16b12d92f917f44d316d1edbfb63816381a8a7fe9bfa7 WatchSource:0}: Error finding container c118bf221673b7075db16b12d92f917f44d316d1edbfb63816381a8a7fe9bfa7: Status 404 returned error can't find the container with id c118bf221673b7075db16b12d92f917f44d316d1edbfb63816381a8a7fe9bfa7 Jan 21 11:21:34 crc kubenswrapper[4881]: I0121 11:21:34.696143 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"5a22f004-7d84-4edc-86f7-d58adb131a45","Type":"ContainerStarted","Data":"c118bf221673b7075db16b12d92f917f44d316d1edbfb63816381a8a7fe9bfa7"} Jan 21 11:21:34 crc kubenswrapper[4881]: I0121 11:21:34.698002 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"86debe8b-5d02-4f2e-a311-6106609aeb1c","Type":"ContainerStarted","Data":"d67de62ed844d45b06b45329375dde0d59a63d15e298263c3618894b7576c1ba"} Jan 21 11:21:34 crc kubenswrapper[4881]: I0121 11:21:34.852105 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 21 11:21:34 crc kubenswrapper[4881]: I0121 11:21:34.912924 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-c849cf559-fjllv" Jan 21 11:21:35 crc kubenswrapper[4881]: I0121 11:21:35.045091 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-77b944d67-mw2nq"] Jan 21 11:21:35 crc kubenswrapper[4881]: I0121 11:21:35.045327 4881 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/dnsmasq-dns-77b944d67-mw2nq" podUID="b0326de6-1c1a-4e21-9592-ae86b46d7a3f" containerName="dnsmasq-dns" containerID="cri-o://74a966ab9ba8420c744ac8e1932e9ad473ca91de2100fd5d2f1bf2544fd837be" gracePeriod=10 Jan 21 11:21:35 crc kubenswrapper[4881]: I0121 11:21:35.362387 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-7564f958f5-jmdx2"] Jan 21 11:21:35 crc kubenswrapper[4881]: I0121 11:21:35.366424 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-7564f958f5-jmdx2"] Jan 21 11:21:35 crc kubenswrapper[4881]: I0121 11:21:35.380611 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-7564f958f5-jmdx2" Jan 21 11:21:35 crc kubenswrapper[4881]: I0121 11:21:35.388244 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Jan 21 11:21:35 crc kubenswrapper[4881]: I0121 11:21:35.388617 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Jan 21 11:21:35 crc kubenswrapper[4881]: I0121 11:21:35.389072 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Jan 21 11:21:35 crc kubenswrapper[4881]: I0121 11:21:35.425900 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-56bl5\" (UniqueName: \"kubernetes.io/projected/86a11f48-404e-4c5e-8ff4-5033a6411956-kube-api-access-56bl5\") pod \"swift-proxy-7564f958f5-jmdx2\" (UID: \"86a11f48-404e-4c5e-8ff4-5033a6411956\") " pod="openstack/swift-proxy-7564f958f5-jmdx2" Jan 21 11:21:35 crc kubenswrapper[4881]: I0121 11:21:35.426008 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/86a11f48-404e-4c5e-8ff4-5033a6411956-config-data\") pod \"swift-proxy-7564f958f5-jmdx2\" (UID: \"86a11f48-404e-4c5e-8ff4-5033a6411956\") " pod="openstack/swift-proxy-7564f958f5-jmdx2" Jan 21 11:21:35 crc kubenswrapper[4881]: I0121 11:21:35.426084 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86a11f48-404e-4c5e-8ff4-5033a6411956-combined-ca-bundle\") pod \"swift-proxy-7564f958f5-jmdx2\" (UID: \"86a11f48-404e-4c5e-8ff4-5033a6411956\") " pod="openstack/swift-proxy-7564f958f5-jmdx2" Jan 21 11:21:35 crc kubenswrapper[4881]: I0121 11:21:35.426189 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/86a11f48-404e-4c5e-8ff4-5033a6411956-run-httpd\") pod \"swift-proxy-7564f958f5-jmdx2\" (UID: \"86a11f48-404e-4c5e-8ff4-5033a6411956\") " pod="openstack/swift-proxy-7564f958f5-jmdx2" Jan 21 11:21:35 crc kubenswrapper[4881]: I0121 11:21:35.426210 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/86a11f48-404e-4c5e-8ff4-5033a6411956-internal-tls-certs\") pod \"swift-proxy-7564f958f5-jmdx2\" (UID: \"86a11f48-404e-4c5e-8ff4-5033a6411956\") " pod="openstack/swift-proxy-7564f958f5-jmdx2" Jan 21 11:21:35 crc kubenswrapper[4881]: I0121 11:21:35.426232 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/86a11f48-404e-4c5e-8ff4-5033a6411956-etc-swift\") pod 
\"swift-proxy-7564f958f5-jmdx2\" (UID: \"86a11f48-404e-4c5e-8ff4-5033a6411956\") " pod="openstack/swift-proxy-7564f958f5-jmdx2" Jan 21 11:21:35 crc kubenswrapper[4881]: I0121 11:21:35.426258 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/86a11f48-404e-4c5e-8ff4-5033a6411956-log-httpd\") pod \"swift-proxy-7564f958f5-jmdx2\" (UID: \"86a11f48-404e-4c5e-8ff4-5033a6411956\") " pod="openstack/swift-proxy-7564f958f5-jmdx2" Jan 21 11:21:35 crc kubenswrapper[4881]: I0121 11:21:35.426441 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/86a11f48-404e-4c5e-8ff4-5033a6411956-public-tls-certs\") pod \"swift-proxy-7564f958f5-jmdx2\" (UID: \"86a11f48-404e-4c5e-8ff4-5033a6411956\") " pod="openstack/swift-proxy-7564f958f5-jmdx2" Jan 21 11:21:35 crc kubenswrapper[4881]: I0121 11:21:35.460821 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 21 11:21:35 crc kubenswrapper[4881]: I0121 11:21:35.527911 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/86045f5e-defd-4c68-a582-c51c9c26e5c7-scripts\") pod \"86045f5e-defd-4c68-a582-c51c9c26e5c7\" (UID: \"86045f5e-defd-4c68-a582-c51c9c26e5c7\") " Jan 21 11:21:35 crc kubenswrapper[4881]: I0121 11:21:35.528009 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/86045f5e-defd-4c68-a582-c51c9c26e5c7-config-data-custom\") pod \"86045f5e-defd-4c68-a582-c51c9c26e5c7\" (UID: \"86045f5e-defd-4c68-a582-c51c9c26e5c7\") " Jan 21 11:21:35 crc kubenswrapper[4881]: I0121 11:21:35.528059 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/86045f5e-defd-4c68-a582-c51c9c26e5c7-etc-machine-id\") pod \"86045f5e-defd-4c68-a582-c51c9c26e5c7\" (UID: \"86045f5e-defd-4c68-a582-c51c9c26e5c7\") " Jan 21 11:21:35 crc kubenswrapper[4881]: I0121 11:21:35.528131 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h42sc\" (UniqueName: \"kubernetes.io/projected/86045f5e-defd-4c68-a582-c51c9c26e5c7-kube-api-access-h42sc\") pod \"86045f5e-defd-4c68-a582-c51c9c26e5c7\" (UID: \"86045f5e-defd-4c68-a582-c51c9c26e5c7\") " Jan 21 11:21:35 crc kubenswrapper[4881]: I0121 11:21:35.528170 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/86045f5e-defd-4c68-a582-c51c9c26e5c7-config-data\") pod \"86045f5e-defd-4c68-a582-c51c9c26e5c7\" (UID: \"86045f5e-defd-4c68-a582-c51c9c26e5c7\") " Jan 21 11:21:35 crc kubenswrapper[4881]: I0121 11:21:35.528201 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86045f5e-defd-4c68-a582-c51c9c26e5c7-combined-ca-bundle\") pod \"86045f5e-defd-4c68-a582-c51c9c26e5c7\" (UID: \"86045f5e-defd-4c68-a582-c51c9c26e5c7\") " Jan 21 11:21:35 crc kubenswrapper[4881]: I0121 11:21:35.528582 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/86a11f48-404e-4c5e-8ff4-5033a6411956-public-tls-certs\") pod \"swift-proxy-7564f958f5-jmdx2\" (UID: 
\"86a11f48-404e-4c5e-8ff4-5033a6411956\") " pod="openstack/swift-proxy-7564f958f5-jmdx2" Jan 21 11:21:35 crc kubenswrapper[4881]: I0121 11:21:35.528681 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-56bl5\" (UniqueName: \"kubernetes.io/projected/86a11f48-404e-4c5e-8ff4-5033a6411956-kube-api-access-56bl5\") pod \"swift-proxy-7564f958f5-jmdx2\" (UID: \"86a11f48-404e-4c5e-8ff4-5033a6411956\") " pod="openstack/swift-proxy-7564f958f5-jmdx2" Jan 21 11:21:35 crc kubenswrapper[4881]: I0121 11:21:35.528713 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/86a11f48-404e-4c5e-8ff4-5033a6411956-config-data\") pod \"swift-proxy-7564f958f5-jmdx2\" (UID: \"86a11f48-404e-4c5e-8ff4-5033a6411956\") " pod="openstack/swift-proxy-7564f958f5-jmdx2" Jan 21 11:21:35 crc kubenswrapper[4881]: I0121 11:21:35.528744 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86a11f48-404e-4c5e-8ff4-5033a6411956-combined-ca-bundle\") pod \"swift-proxy-7564f958f5-jmdx2\" (UID: \"86a11f48-404e-4c5e-8ff4-5033a6411956\") " pod="openstack/swift-proxy-7564f958f5-jmdx2" Jan 21 11:21:35 crc kubenswrapper[4881]: I0121 11:21:35.528780 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/86a11f48-404e-4c5e-8ff4-5033a6411956-run-httpd\") pod \"swift-proxy-7564f958f5-jmdx2\" (UID: \"86a11f48-404e-4c5e-8ff4-5033a6411956\") " pod="openstack/swift-proxy-7564f958f5-jmdx2" Jan 21 11:21:35 crc kubenswrapper[4881]: I0121 11:21:35.528870 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/86a11f48-404e-4c5e-8ff4-5033a6411956-internal-tls-certs\") pod \"swift-proxy-7564f958f5-jmdx2\" (UID: \"86a11f48-404e-4c5e-8ff4-5033a6411956\") " pod="openstack/swift-proxy-7564f958f5-jmdx2" Jan 21 11:21:35 crc kubenswrapper[4881]: I0121 11:21:35.528886 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/86a11f48-404e-4c5e-8ff4-5033a6411956-etc-swift\") pod \"swift-proxy-7564f958f5-jmdx2\" (UID: \"86a11f48-404e-4c5e-8ff4-5033a6411956\") " pod="openstack/swift-proxy-7564f958f5-jmdx2" Jan 21 11:21:35 crc kubenswrapper[4881]: I0121 11:21:35.528903 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/86a11f48-404e-4c5e-8ff4-5033a6411956-log-httpd\") pod \"swift-proxy-7564f958f5-jmdx2\" (UID: \"86a11f48-404e-4c5e-8ff4-5033a6411956\") " pod="openstack/swift-proxy-7564f958f5-jmdx2" Jan 21 11:21:35 crc kubenswrapper[4881]: I0121 11:21:35.529968 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/86a11f48-404e-4c5e-8ff4-5033a6411956-log-httpd\") pod \"swift-proxy-7564f958f5-jmdx2\" (UID: \"86a11f48-404e-4c5e-8ff4-5033a6411956\") " pod="openstack/swift-proxy-7564f958f5-jmdx2" Jan 21 11:21:35 crc kubenswrapper[4881]: I0121 11:21:35.530361 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/86a11f48-404e-4c5e-8ff4-5033a6411956-run-httpd\") pod \"swift-proxy-7564f958f5-jmdx2\" (UID: \"86a11f48-404e-4c5e-8ff4-5033a6411956\") " pod="openstack/swift-proxy-7564f958f5-jmdx2" Jan 21 11:21:35 
crc kubenswrapper[4881]: I0121 11:21:35.534651 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/86045f5e-defd-4c68-a582-c51c9c26e5c7-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "86045f5e-defd-4c68-a582-c51c9c26e5c7" (UID: "86045f5e-defd-4c68-a582-c51c9c26e5c7"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 11:21:35 crc kubenswrapper[4881]: I0121 11:21:35.537873 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/86a11f48-404e-4c5e-8ff4-5033a6411956-etc-swift\") pod \"swift-proxy-7564f958f5-jmdx2\" (UID: \"86a11f48-404e-4c5e-8ff4-5033a6411956\") " pod="openstack/swift-proxy-7564f958f5-jmdx2" Jan 21 11:21:35 crc kubenswrapper[4881]: I0121 11:21:35.538465 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/86045f5e-defd-4c68-a582-c51c9c26e5c7-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "86045f5e-defd-4c68-a582-c51c9c26e5c7" (UID: "86045f5e-defd-4c68-a582-c51c9c26e5c7"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:21:35 crc kubenswrapper[4881]: I0121 11:21:35.552889 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/86045f5e-defd-4c68-a582-c51c9c26e5c7-kube-api-access-h42sc" (OuterVolumeSpecName: "kube-api-access-h42sc") pod "86045f5e-defd-4c68-a582-c51c9c26e5c7" (UID: "86045f5e-defd-4c68-a582-c51c9c26e5c7"). InnerVolumeSpecName "kube-api-access-h42sc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:21:35 crc kubenswrapper[4881]: I0121 11:21:35.553275 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86a11f48-404e-4c5e-8ff4-5033a6411956-combined-ca-bundle\") pod \"swift-proxy-7564f958f5-jmdx2\" (UID: \"86a11f48-404e-4c5e-8ff4-5033a6411956\") " pod="openstack/swift-proxy-7564f958f5-jmdx2" Jan 21 11:21:35 crc kubenswrapper[4881]: I0121 11:21:35.553400 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/86a11f48-404e-4c5e-8ff4-5033a6411956-internal-tls-certs\") pod \"swift-proxy-7564f958f5-jmdx2\" (UID: \"86a11f48-404e-4c5e-8ff4-5033a6411956\") " pod="openstack/swift-proxy-7564f958f5-jmdx2" Jan 21 11:21:35 crc kubenswrapper[4881]: I0121 11:21:35.587197 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/86a11f48-404e-4c5e-8ff4-5033a6411956-config-data\") pod \"swift-proxy-7564f958f5-jmdx2\" (UID: \"86a11f48-404e-4c5e-8ff4-5033a6411956\") " pod="openstack/swift-proxy-7564f958f5-jmdx2" Jan 21 11:21:35 crc kubenswrapper[4881]: I0121 11:21:35.587250 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/86a11f48-404e-4c5e-8ff4-5033a6411956-public-tls-certs\") pod \"swift-proxy-7564f958f5-jmdx2\" (UID: \"86a11f48-404e-4c5e-8ff4-5033a6411956\") " pod="openstack/swift-proxy-7564f958f5-jmdx2" Jan 21 11:21:35 crc kubenswrapper[4881]: I0121 11:21:35.591441 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-56bl5\" (UniqueName: \"kubernetes.io/projected/86a11f48-404e-4c5e-8ff4-5033a6411956-kube-api-access-56bl5\") pod \"swift-proxy-7564f958f5-jmdx2\" (UID: 
\"86a11f48-404e-4c5e-8ff4-5033a6411956\") " pod="openstack/swift-proxy-7564f958f5-jmdx2" Jan 21 11:21:35 crc kubenswrapper[4881]: I0121 11:21:35.600355 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/86045f5e-defd-4c68-a582-c51c9c26e5c7-scripts" (OuterVolumeSpecName: "scripts") pod "86045f5e-defd-4c68-a582-c51c9c26e5c7" (UID: "86045f5e-defd-4c68-a582-c51c9c26e5c7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:21:35 crc kubenswrapper[4881]: I0121 11:21:35.606466 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 21 11:21:35 crc kubenswrapper[4881]: I0121 11:21:35.635070 4881 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/86045f5e-defd-4c68-a582-c51c9c26e5c7-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 21 11:21:35 crc kubenswrapper[4881]: I0121 11:21:35.635097 4881 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/86045f5e-defd-4c68-a582-c51c9c26e5c7-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 21 11:21:35 crc kubenswrapper[4881]: I0121 11:21:35.635108 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h42sc\" (UniqueName: \"kubernetes.io/projected/86045f5e-defd-4c68-a582-c51c9c26e5c7-kube-api-access-h42sc\") on node \"crc\" DevicePath \"\"" Jan 21 11:21:35 crc kubenswrapper[4881]: I0121 11:21:35.635119 4881 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/86045f5e-defd-4c68-a582-c51c9c26e5c7-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 11:21:35 crc kubenswrapper[4881]: I0121 11:21:35.662545 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/86045f5e-defd-4c68-a582-c51c9c26e5c7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "86045f5e-defd-4c68-a582-c51c9c26e5c7" (UID: "86045f5e-defd-4c68-a582-c51c9c26e5c7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:21:35 crc kubenswrapper[4881]: I0121 11:21:35.733323 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/86045f5e-defd-4c68-a582-c51c9c26e5c7-config-data" (OuterVolumeSpecName: "config-data") pod "86045f5e-defd-4c68-a582-c51c9c26e5c7" (UID: "86045f5e-defd-4c68-a582-c51c9c26e5c7"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:21:35 crc kubenswrapper[4881]: I0121 11:21:35.737219 4881 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/86045f5e-defd-4c68-a582-c51c9c26e5c7-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 11:21:35 crc kubenswrapper[4881]: I0121 11:21:35.737373 4881 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86045f5e-defd-4c68-a582-c51c9c26e5c7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 11:21:35 crc kubenswrapper[4881]: I0121 11:21:35.760622 4881 generic.go:334] "Generic (PLEG): container finished" podID="b0326de6-1c1a-4e21-9592-ae86b46d7a3f" containerID="74a966ab9ba8420c744ac8e1932e9ad473ca91de2100fd5d2f1bf2544fd837be" exitCode=0 Jan 21 11:21:35 crc kubenswrapper[4881]: I0121 11:21:35.760731 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77b944d67-mw2nq" event={"ID":"b0326de6-1c1a-4e21-9592-ae86b46d7a3f","Type":"ContainerDied","Data":"74a966ab9ba8420c744ac8e1932e9ad473ca91de2100fd5d2f1bf2544fd837be"} Jan 21 11:21:35 crc kubenswrapper[4881]: I0121 11:21:35.778082 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"5a22f004-7d84-4edc-86f7-d58adb131a45","Type":"ContainerStarted","Data":"9286d3d52dfda503e9a39d6bc904388c1d8fb7d48591cc6a081eaedbcac3451b"} Jan 21 11:21:35 crc kubenswrapper[4881]: I0121 11:21:35.783883 4881 generic.go:334] "Generic (PLEG): container finished" podID="86045f5e-defd-4c68-a582-c51c9c26e5c7" containerID="f179a38b8e729fdba1d50653424c543fe9ebf0803e8ecb14e1eaa90d4edb87bf" exitCode=0 Jan 21 11:21:35 crc kubenswrapper[4881]: I0121 11:21:35.783943 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"86045f5e-defd-4c68-a582-c51c9c26e5c7","Type":"ContainerDied","Data":"f179a38b8e729fdba1d50653424c543fe9ebf0803e8ecb14e1eaa90d4edb87bf"} Jan 21 11:21:35 crc kubenswrapper[4881]: I0121 11:21:35.783967 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"86045f5e-defd-4c68-a582-c51c9c26e5c7","Type":"ContainerDied","Data":"37f117f350f4a5bb6279fc8d328dfd979286450f9c150553b8cff2ebf1ef387c"} Jan 21 11:21:35 crc kubenswrapper[4881]: I0121 11:21:35.783984 4881 scope.go:117] "RemoveContainer" containerID="d776082f3aaee81cc1f230c5cf4abdaa34059f0c862a2df0c93b102e79762938" Jan 21 11:21:35 crc kubenswrapper[4881]: I0121 11:21:35.784078 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 21 11:21:35 crc kubenswrapper[4881]: I0121 11:21:35.801181 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"86debe8b-5d02-4f2e-a311-6106609aeb1c","Type":"ContainerStarted","Data":"f3bc5d7bc188f1c4ac565e1d75e559e4a8e17c15c9ed4b157de750543aaa6b37"} Jan 21 11:21:35 crc kubenswrapper[4881]: I0121 11:21:35.801475 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="75119e97-b896-4b71-ab1f-28db45a4606d" containerName="ceilometer-central-agent" containerID="cri-o://bc7224d9bf84f344828f19a13fb8096ac19d517cb3bb70d8fce495b5aa46625b" gracePeriod=30 Jan 21 11:21:35 crc kubenswrapper[4881]: I0121 11:21:35.801686 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="75119e97-b896-4b71-ab1f-28db45a4606d" containerName="proxy-httpd" containerID="cri-o://80eb788c6d10eab27f68e4afaa093b8aa3a02ead209347f52848e0e84c80db9f" gracePeriod=30 Jan 21 11:21:35 crc kubenswrapper[4881]: I0121 11:21:35.801814 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="75119e97-b896-4b71-ab1f-28db45a4606d" containerName="ceilometer-notification-agent" containerID="cri-o://53e2fe665bdaeb7b9eb972106db909c519d01d1c08141b3cb40de82bd0536330" gracePeriod=30 Jan 21 11:21:35 crc kubenswrapper[4881]: I0121 11:21:35.801920 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="75119e97-b896-4b71-ab1f-28db45a4606d" containerName="sg-core" containerID="cri-o://899f70ee131f6e530963ca573a67921fd95a35fbdae76709308568e8f0b66d06" gracePeriod=30 Jan 21 11:21:35 crc kubenswrapper[4881]: I0121 11:21:35.821873 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-7564f958f5-jmdx2" Jan 21 11:21:36 crc kubenswrapper[4881]: I0121 11:21:35.988463 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-77b944d67-mw2nq" Jan 21 11:21:36 crc kubenswrapper[4881]: I0121 11:21:36.025519 4881 scope.go:117] "RemoveContainer" containerID="f179a38b8e729fdba1d50653424c543fe9ebf0803e8ecb14e1eaa90d4edb87bf" Jan 21 11:21:36 crc kubenswrapper[4881]: I0121 11:21:36.052622 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b0326de6-1c1a-4e21-9592-ae86b46d7a3f-dns-svc\") pod \"b0326de6-1c1a-4e21-9592-ae86b46d7a3f\" (UID: \"b0326de6-1c1a-4e21-9592-ae86b46d7a3f\") " Jan 21 11:21:36 crc kubenswrapper[4881]: I0121 11:21:36.052684 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h8kc6\" (UniqueName: \"kubernetes.io/projected/b0326de6-1c1a-4e21-9592-ae86b46d7a3f-kube-api-access-h8kc6\") pod \"b0326de6-1c1a-4e21-9592-ae86b46d7a3f\" (UID: \"b0326de6-1c1a-4e21-9592-ae86b46d7a3f\") " Jan 21 11:21:36 crc kubenswrapper[4881]: I0121 11:21:36.052717 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b0326de6-1c1a-4e21-9592-ae86b46d7a3f-ovsdbserver-nb\") pod \"b0326de6-1c1a-4e21-9592-ae86b46d7a3f\" (UID: \"b0326de6-1c1a-4e21-9592-ae86b46d7a3f\") " Jan 21 11:21:36 crc kubenswrapper[4881]: I0121 11:21:36.052772 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b0326de6-1c1a-4e21-9592-ae86b46d7a3f-ovsdbserver-sb\") pod \"b0326de6-1c1a-4e21-9592-ae86b46d7a3f\" (UID: \"b0326de6-1c1a-4e21-9592-ae86b46d7a3f\") " Jan 21 11:21:36 crc kubenswrapper[4881]: I0121 11:21:36.053383 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b0326de6-1c1a-4e21-9592-ae86b46d7a3f-config\") pod \"b0326de6-1c1a-4e21-9592-ae86b46d7a3f\" (UID: \"b0326de6-1c1a-4e21-9592-ae86b46d7a3f\") " Jan 21 11:21:36 crc kubenswrapper[4881]: I0121 11:21:36.053472 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b0326de6-1c1a-4e21-9592-ae86b46d7a3f-dns-swift-storage-0\") pod \"b0326de6-1c1a-4e21-9592-ae86b46d7a3f\" (UID: \"b0326de6-1c1a-4e21-9592-ae86b46d7a3f\") " Jan 21 11:21:36 crc kubenswrapper[4881]: I0121 11:21:36.064757 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b0326de6-1c1a-4e21-9592-ae86b46d7a3f-kube-api-access-h8kc6" (OuterVolumeSpecName: "kube-api-access-h8kc6") pod "b0326de6-1c1a-4e21-9592-ae86b46d7a3f" (UID: "b0326de6-1c1a-4e21-9592-ae86b46d7a3f"). InnerVolumeSpecName "kube-api-access-h8kc6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:21:36 crc kubenswrapper[4881]: I0121 11:21:36.074849 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 21 11:21:36 crc kubenswrapper[4881]: I0121 11:21:36.099023 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 21 11:21:36 crc kubenswrapper[4881]: I0121 11:21:36.120052 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Jan 21 11:21:36 crc kubenswrapper[4881]: E0121 11:21:36.120676 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86045f5e-defd-4c68-a582-c51c9c26e5c7" containerName="probe" Jan 21 11:21:36 crc kubenswrapper[4881]: I0121 11:21:36.120699 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="86045f5e-defd-4c68-a582-c51c9c26e5c7" containerName="probe" Jan 21 11:21:36 crc kubenswrapper[4881]: E0121 11:21:36.120723 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0326de6-1c1a-4e21-9592-ae86b46d7a3f" containerName="dnsmasq-dns" Jan 21 11:21:36 crc kubenswrapper[4881]: I0121 11:21:36.120734 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0326de6-1c1a-4e21-9592-ae86b46d7a3f" containerName="dnsmasq-dns" Jan 21 11:21:36 crc kubenswrapper[4881]: E0121 11:21:36.120742 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86045f5e-defd-4c68-a582-c51c9c26e5c7" containerName="cinder-scheduler" Jan 21 11:21:36 crc kubenswrapper[4881]: I0121 11:21:36.120749 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="86045f5e-defd-4c68-a582-c51c9c26e5c7" containerName="cinder-scheduler" Jan 21 11:21:36 crc kubenswrapper[4881]: E0121 11:21:36.120763 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0326de6-1c1a-4e21-9592-ae86b46d7a3f" containerName="init" Jan 21 11:21:36 crc kubenswrapper[4881]: I0121 11:21:36.120770 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0326de6-1c1a-4e21-9592-ae86b46d7a3f" containerName="init" Jan 21 11:21:36 crc kubenswrapper[4881]: I0121 11:21:36.121060 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="b0326de6-1c1a-4e21-9592-ae86b46d7a3f" containerName="dnsmasq-dns" Jan 21 11:21:36 crc kubenswrapper[4881]: I0121 11:21:36.121079 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="86045f5e-defd-4c68-a582-c51c9c26e5c7" containerName="cinder-scheduler" Jan 21 11:21:36 crc kubenswrapper[4881]: I0121 11:21:36.121096 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="86045f5e-defd-4c68-a582-c51c9c26e5c7" containerName="probe" Jan 21 11:21:36 crc kubenswrapper[4881]: I0121 11:21:36.124988 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 21 11:21:36 crc kubenswrapper[4881]: I0121 11:21:36.136808 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 21 11:21:36 crc kubenswrapper[4881]: I0121 11:21:36.147364 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Jan 21 11:21:36 crc kubenswrapper[4881]: I0121 11:21:36.154755 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b0326de6-1c1a-4e21-9592-ae86b46d7a3f-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "b0326de6-1c1a-4e21-9592-ae86b46d7a3f" (UID: "b0326de6-1c1a-4e21-9592-ae86b46d7a3f"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:21:36 crc kubenswrapper[4881]: I0121 11:21:36.156491 4881 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b0326de6-1c1a-4e21-9592-ae86b46d7a3f-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 21 11:21:36 crc kubenswrapper[4881]: I0121 11:21:36.156522 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h8kc6\" (UniqueName: \"kubernetes.io/projected/b0326de6-1c1a-4e21-9592-ae86b46d7a3f-kube-api-access-h8kc6\") on node \"crc\" DevicePath \"\"" Jan 21 11:21:36 crc kubenswrapper[4881]: I0121 11:21:36.186189 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b0326de6-1c1a-4e21-9592-ae86b46d7a3f-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "b0326de6-1c1a-4e21-9592-ae86b46d7a3f" (UID: "b0326de6-1c1a-4e21-9592-ae86b46d7a3f"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:21:36 crc kubenswrapper[4881]: I0121 11:21:36.216519 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b0326de6-1c1a-4e21-9592-ae86b46d7a3f-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "b0326de6-1c1a-4e21-9592-ae86b46d7a3f" (UID: "b0326de6-1c1a-4e21-9592-ae86b46d7a3f"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:21:36 crc kubenswrapper[4881]: I0121 11:21:36.220488 4881 scope.go:117] "RemoveContainer" containerID="d776082f3aaee81cc1f230c5cf4abdaa34059f0c862a2df0c93b102e79762938" Jan 21 11:21:36 crc kubenswrapper[4881]: E0121 11:21:36.221342 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d776082f3aaee81cc1f230c5cf4abdaa34059f0c862a2df0c93b102e79762938\": container with ID starting with d776082f3aaee81cc1f230c5cf4abdaa34059f0c862a2df0c93b102e79762938 not found: ID does not exist" containerID="d776082f3aaee81cc1f230c5cf4abdaa34059f0c862a2df0c93b102e79762938" Jan 21 11:21:36 crc kubenswrapper[4881]: I0121 11:21:36.221384 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d776082f3aaee81cc1f230c5cf4abdaa34059f0c862a2df0c93b102e79762938"} err="failed to get container status \"d776082f3aaee81cc1f230c5cf4abdaa34059f0c862a2df0c93b102e79762938\": rpc error: code = NotFound desc = could not find container \"d776082f3aaee81cc1f230c5cf4abdaa34059f0c862a2df0c93b102e79762938\": container with ID starting with d776082f3aaee81cc1f230c5cf4abdaa34059f0c862a2df0c93b102e79762938 not found: ID does not exist" Jan 21 11:21:36 crc kubenswrapper[4881]: I0121 11:21:36.221412 4881 scope.go:117] "RemoveContainer" containerID="f179a38b8e729fdba1d50653424c543fe9ebf0803e8ecb14e1eaa90d4edb87bf" Jan 21 11:21:36 crc kubenswrapper[4881]: I0121 11:21:36.225716 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b0326de6-1c1a-4e21-9592-ae86b46d7a3f-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "b0326de6-1c1a-4e21-9592-ae86b46d7a3f" (UID: "b0326de6-1c1a-4e21-9592-ae86b46d7a3f"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:21:36 crc kubenswrapper[4881]: E0121 11:21:36.228156 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f179a38b8e729fdba1d50653424c543fe9ebf0803e8ecb14e1eaa90d4edb87bf\": container with ID starting with f179a38b8e729fdba1d50653424c543fe9ebf0803e8ecb14e1eaa90d4edb87bf not found: ID does not exist" containerID="f179a38b8e729fdba1d50653424c543fe9ebf0803e8ecb14e1eaa90d4edb87bf" Jan 21 11:21:36 crc kubenswrapper[4881]: I0121 11:21:36.228205 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f179a38b8e729fdba1d50653424c543fe9ebf0803e8ecb14e1eaa90d4edb87bf"} err="failed to get container status \"f179a38b8e729fdba1d50653424c543fe9ebf0803e8ecb14e1eaa90d4edb87bf\": rpc error: code = NotFound desc = could not find container \"f179a38b8e729fdba1d50653424c543fe9ebf0803e8ecb14e1eaa90d4edb87bf\": container with ID starting with f179a38b8e729fdba1d50653424c543fe9ebf0803e8ecb14e1eaa90d4edb87bf not found: ID does not exist" Jan 21 11:21:36 crc kubenswrapper[4881]: I0121 11:21:36.259040 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ab676e77-1ab3-4cab-9960-a00babfe74fb-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"ab676e77-1ab3-4cab-9960-a00babfe74fb\") " pod="openstack/cinder-scheduler-0" Jan 21 11:21:36 crc kubenswrapper[4881]: I0121 11:21:36.259100 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ab676e77-1ab3-4cab-9960-a00babfe74fb-scripts\") pod \"cinder-scheduler-0\" (UID: \"ab676e77-1ab3-4cab-9960-a00babfe74fb\") " pod="openstack/cinder-scheduler-0" Jan 21 11:21:36 crc kubenswrapper[4881]: I0121 11:21:36.259138 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab676e77-1ab3-4cab-9960-a00babfe74fb-config-data\") pod \"cinder-scheduler-0\" (UID: \"ab676e77-1ab3-4cab-9960-a00babfe74fb\") " pod="openstack/cinder-scheduler-0" Jan 21 11:21:36 crc kubenswrapper[4881]: I0121 11:21:36.259177 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xkc2q\" (UniqueName: \"kubernetes.io/projected/ab676e77-1ab3-4cab-9960-a00babfe74fb-kube-api-access-xkc2q\") pod \"cinder-scheduler-0\" (UID: \"ab676e77-1ab3-4cab-9960-a00babfe74fb\") " pod="openstack/cinder-scheduler-0" Jan 21 11:21:36 crc kubenswrapper[4881]: I0121 11:21:36.259230 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab676e77-1ab3-4cab-9960-a00babfe74fb-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"ab676e77-1ab3-4cab-9960-a00babfe74fb\") " pod="openstack/cinder-scheduler-0" Jan 21 11:21:36 crc kubenswrapper[4881]: I0121 11:21:36.259257 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ab676e77-1ab3-4cab-9960-a00babfe74fb-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"ab676e77-1ab3-4cab-9960-a00babfe74fb\") " pod="openstack/cinder-scheduler-0" Jan 21 11:21:36 crc kubenswrapper[4881]: I0121 11:21:36.259421 4881 reconciler_common.go:293] "Volume detached for 
volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b0326de6-1c1a-4e21-9592-ae86b46d7a3f-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 21 11:21:36 crc kubenswrapper[4881]: I0121 11:21:36.259437 4881 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b0326de6-1c1a-4e21-9592-ae86b46d7a3f-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 21 11:21:36 crc kubenswrapper[4881]: I0121 11:21:36.259475 4881 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b0326de6-1c1a-4e21-9592-ae86b46d7a3f-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 21 11:21:36 crc kubenswrapper[4881]: I0121 11:21:36.300683 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b0326de6-1c1a-4e21-9592-ae86b46d7a3f-config" (OuterVolumeSpecName: "config") pod "b0326de6-1c1a-4e21-9592-ae86b46d7a3f" (UID: "b0326de6-1c1a-4e21-9592-ae86b46d7a3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:21:36 crc kubenswrapper[4881]: I0121 11:21:36.363014 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xkc2q\" (UniqueName: \"kubernetes.io/projected/ab676e77-1ab3-4cab-9960-a00babfe74fb-kube-api-access-xkc2q\") pod \"cinder-scheduler-0\" (UID: \"ab676e77-1ab3-4cab-9960-a00babfe74fb\") " pod="openstack/cinder-scheduler-0" Jan 21 11:21:36 crc kubenswrapper[4881]: I0121 11:21:36.363121 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab676e77-1ab3-4cab-9960-a00babfe74fb-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"ab676e77-1ab3-4cab-9960-a00babfe74fb\") " pod="openstack/cinder-scheduler-0" Jan 21 11:21:36 crc kubenswrapper[4881]: I0121 11:21:36.363168 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ab676e77-1ab3-4cab-9960-a00babfe74fb-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"ab676e77-1ab3-4cab-9960-a00babfe74fb\") " pod="openstack/cinder-scheduler-0" Jan 21 11:21:36 crc kubenswrapper[4881]: I0121 11:21:36.363254 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ab676e77-1ab3-4cab-9960-a00babfe74fb-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"ab676e77-1ab3-4cab-9960-a00babfe74fb\") " pod="openstack/cinder-scheduler-0" Jan 21 11:21:36 crc kubenswrapper[4881]: I0121 11:21:36.363301 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ab676e77-1ab3-4cab-9960-a00babfe74fb-scripts\") pod \"cinder-scheduler-0\" (UID: \"ab676e77-1ab3-4cab-9960-a00babfe74fb\") " pod="openstack/cinder-scheduler-0" Jan 21 11:21:36 crc kubenswrapper[4881]: I0121 11:21:36.363346 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab676e77-1ab3-4cab-9960-a00babfe74fb-config-data\") pod \"cinder-scheduler-0\" (UID: \"ab676e77-1ab3-4cab-9960-a00babfe74fb\") " pod="openstack/cinder-scheduler-0" Jan 21 11:21:36 crc kubenswrapper[4881]: I0121 11:21:36.363399 4881 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b0326de6-1c1a-4e21-9592-ae86b46d7a3f-config\") on node \"crc\" DevicePath 
\"\"" Jan 21 11:21:36 crc kubenswrapper[4881]: I0121 11:21:36.364825 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ab676e77-1ab3-4cab-9960-a00babfe74fb-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"ab676e77-1ab3-4cab-9960-a00babfe74fb\") " pod="openstack/cinder-scheduler-0" Jan 21 11:21:36 crc kubenswrapper[4881]: I0121 11:21:36.372855 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ab676e77-1ab3-4cab-9960-a00babfe74fb-scripts\") pod \"cinder-scheduler-0\" (UID: \"ab676e77-1ab3-4cab-9960-a00babfe74fb\") " pod="openstack/cinder-scheduler-0" Jan 21 11:21:36 crc kubenswrapper[4881]: I0121 11:21:36.376069 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ab676e77-1ab3-4cab-9960-a00babfe74fb-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"ab676e77-1ab3-4cab-9960-a00babfe74fb\") " pod="openstack/cinder-scheduler-0" Jan 21 11:21:36 crc kubenswrapper[4881]: I0121 11:21:36.377461 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab676e77-1ab3-4cab-9960-a00babfe74fb-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"ab676e77-1ab3-4cab-9960-a00babfe74fb\") " pod="openstack/cinder-scheduler-0" Jan 21 11:21:36 crc kubenswrapper[4881]: I0121 11:21:36.382087 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xkc2q\" (UniqueName: \"kubernetes.io/projected/ab676e77-1ab3-4cab-9960-a00babfe74fb-kube-api-access-xkc2q\") pod \"cinder-scheduler-0\" (UID: \"ab676e77-1ab3-4cab-9960-a00babfe74fb\") " pod="openstack/cinder-scheduler-0" Jan 21 11:21:36 crc kubenswrapper[4881]: I0121 11:21:36.386657 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab676e77-1ab3-4cab-9960-a00babfe74fb-config-data\") pod \"cinder-scheduler-0\" (UID: \"ab676e77-1ab3-4cab-9960-a00babfe74fb\") " pod="openstack/cinder-scheduler-0" Jan 21 11:21:36 crc kubenswrapper[4881]: I0121 11:21:36.524991 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 21 11:21:36 crc kubenswrapper[4881]: I0121 11:21:36.599719 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Jan 21 11:21:36 crc kubenswrapper[4881]: I0121 11:21:36.615209 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-7564f958f5-jmdx2"] Jan 21 11:21:36 crc kubenswrapper[4881]: I0121 11:21:36.889107 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"86debe8b-5d02-4f2e-a311-6106609aeb1c","Type":"ContainerStarted","Data":"2fa6aa1996c6f4201fc93d5c8a39f33293aba78e3cc280dea3665101a00cd065"} Jan 21 11:21:36 crc kubenswrapper[4881]: I0121 11:21:36.895412 4881 generic.go:334] "Generic (PLEG): container finished" podID="75119e97-b896-4b71-ab1f-28db45a4606d" containerID="80eb788c6d10eab27f68e4afaa093b8aa3a02ead209347f52848e0e84c80db9f" exitCode=0 Jan 21 11:21:36 crc kubenswrapper[4881]: I0121 11:21:36.895434 4881 generic.go:334] "Generic (PLEG): container finished" podID="75119e97-b896-4b71-ab1f-28db45a4606d" containerID="899f70ee131f6e530963ca573a67921fd95a35fbdae76709308568e8f0b66d06" exitCode=2 Jan 21 11:21:36 crc kubenswrapper[4881]: I0121 11:21:36.895442 4881 generic.go:334] "Generic (PLEG): container finished" podID="75119e97-b896-4b71-ab1f-28db45a4606d" containerID="bc7224d9bf84f344828f19a13fb8096ac19d517cb3bb70d8fce495b5aa46625b" exitCode=0 Jan 21 11:21:36 crc kubenswrapper[4881]: I0121 11:21:36.895468 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"75119e97-b896-4b71-ab1f-28db45a4606d","Type":"ContainerDied","Data":"80eb788c6d10eab27f68e4afaa093b8aa3a02ead209347f52848e0e84c80db9f"} Jan 21 11:21:36 crc kubenswrapper[4881]: I0121 11:21:36.895484 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"75119e97-b896-4b71-ab1f-28db45a4606d","Type":"ContainerDied","Data":"899f70ee131f6e530963ca573a67921fd95a35fbdae76709308568e8f0b66d06"} Jan 21 11:21:36 crc kubenswrapper[4881]: I0121 11:21:36.895494 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"75119e97-b896-4b71-ab1f-28db45a4606d","Type":"ContainerDied","Data":"bc7224d9bf84f344828f19a13fb8096ac19d517cb3bb70d8fce495b5aa46625b"} Jan 21 11:21:36 crc kubenswrapper[4881]: I0121 11:21:36.910320 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-7564f958f5-jmdx2" event={"ID":"86a11f48-404e-4c5e-8ff4-5033a6411956","Type":"ContainerStarted","Data":"e66306d0119128d45b02df2c6c9e9269ad3c75d2a1f457ad3a5b6b7da2f4d4bf"} Jan 21 11:21:36 crc kubenswrapper[4881]: I0121 11:21:36.934311 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77b944d67-mw2nq" event={"ID":"b0326de6-1c1a-4e21-9592-ae86b46d7a3f","Type":"ContainerDied","Data":"74a53a8b6fc2a23210eccd53e198b676934ec49275b7b25077e7e841617ab615"} Jan 21 11:21:36 crc kubenswrapper[4881]: I0121 11:21:36.934376 4881 scope.go:117] "RemoveContainer" containerID="74a966ab9ba8420c744ac8e1932e9ad473ca91de2100fd5d2f1bf2544fd837be" Jan 21 11:21:36 crc kubenswrapper[4881]: I0121 11:21:36.934626 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-77b944d67-mw2nq" Jan 21 11:21:37 crc kubenswrapper[4881]: I0121 11:21:37.059045 4881 scope.go:117] "RemoveContainer" containerID="da41cb40adea77808d3ff28a4531a5534241d5f62e3dd8c6c92475b8c399e085" Jan 21 11:21:37 crc kubenswrapper[4881]: I0121 11:21:37.132134 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-77b944d67-mw2nq"] Jan 21 11:21:37 crc kubenswrapper[4881]: I0121 11:21:37.140969 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-77b944d67-mw2nq"] Jan 21 11:21:37 crc kubenswrapper[4881]: I0121 11:21:37.333077 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="86045f5e-defd-4c68-a582-c51c9c26e5c7" path="/var/lib/kubelet/pods/86045f5e-defd-4c68-a582-c51c9c26e5c7/volumes" Jan 21 11:21:37 crc kubenswrapper[4881]: I0121 11:21:37.334657 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b0326de6-1c1a-4e21-9592-ae86b46d7a3f" path="/var/lib/kubelet/pods/b0326de6-1c1a-4e21-9592-ae86b46d7a3f/volumes" Jan 21 11:21:37 crc kubenswrapper[4881]: I0121 11:21:37.400470 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 21 11:21:37 crc kubenswrapper[4881]: I0121 11:21:37.989072 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-7564f958f5-jmdx2" event={"ID":"86a11f48-404e-4c5e-8ff4-5033a6411956","Type":"ContainerStarted","Data":"616b0a41fd8a2c2ee5e28c950cc2732d336ea85ed0279baddd3033e5e8047a29"} Jan 21 11:21:38 crc kubenswrapper[4881]: I0121 11:21:38.009194 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"5a22f004-7d84-4edc-86f7-d58adb131a45","Type":"ContainerStarted","Data":"9208d05b46bed633028f2197d2ac1411d6db48aa25317dd65e06acc08bb66328"} Jan 21 11:21:38 crc kubenswrapper[4881]: I0121 11:21:38.014932 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"ab676e77-1ab3-4cab-9960-a00babfe74fb","Type":"ContainerStarted","Data":"ec697af1abb76944c05edd307cb15b0a7d14c5932e05640765d6f6ebaadd7de2"} Jan 21 11:21:38 crc kubenswrapper[4881]: I0121 11:21:38.041610 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=6.041571866 podStartE2EDuration="6.041571866s" podCreationTimestamp="2026-01-21 11:21:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 11:21:38.035404551 +0000 UTC m=+1485.295361020" watchObservedRunningTime="2026-01-21 11:21:38.041571866 +0000 UTC m=+1485.301528335" Jan 21 11:21:38 crc kubenswrapper[4881]: I0121 11:21:38.079536 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=6.07950211 podStartE2EDuration="6.07950211s" podCreationTimestamp="2026-01-21 11:21:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 11:21:38.063346017 +0000 UTC m=+1485.323302486" watchObservedRunningTime="2026-01-21 11:21:38.07950211 +0000 UTC m=+1485.339458579" Jan 21 11:21:38 crc kubenswrapper[4881]: I0121 11:21:38.941277 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 21 11:21:39 crc kubenswrapper[4881]: I0121 11:21:39.058105 4881 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"ab676e77-1ab3-4cab-9960-a00babfe74fb","Type":"ContainerStarted","Data":"d62273d5cdeb4b121af08c0292482795d31525d1f2baaa55aa351bbc86862520"} Jan 21 11:21:39 crc kubenswrapper[4881]: I0121 11:21:39.096834 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-7564f958f5-jmdx2" event={"ID":"86a11f48-404e-4c5e-8ff4-5033a6411956","Type":"ContainerStarted","Data":"7ad067f868610aee4ea7f627e59a4b3c0b472fe4011f02001c57d175d9919418"} Jan 21 11:21:39 crc kubenswrapper[4881]: I0121 11:21:39.133568 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-7564f958f5-jmdx2" podStartSLOduration=4.133545866 podStartE2EDuration="4.133545866s" podCreationTimestamp="2026-01-21 11:21:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 11:21:39.127736881 +0000 UTC m=+1486.387693360" watchObservedRunningTime="2026-01-21 11:21:39.133545866 +0000 UTC m=+1486.393502335" Jan 21 11:21:40 crc kubenswrapper[4881]: I0121 11:21:40.113886 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"ab676e77-1ab3-4cab-9960-a00babfe74fb","Type":"ContainerStarted","Data":"7f3bced5d39c83f298bf37457f234485d7ed500eb6155a08dcf21e5e09d9c064"} Jan 21 11:21:40 crc kubenswrapper[4881]: I0121 11:21:40.114290 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-7564f958f5-jmdx2" Jan 21 11:21:40 crc kubenswrapper[4881]: I0121 11:21:40.114307 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-7564f958f5-jmdx2" Jan 21 11:21:40 crc kubenswrapper[4881]: I0121 11:21:40.114040 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="5a22f004-7d84-4edc-86f7-d58adb131a45" containerName="glance-log" containerID="cri-o://9286d3d52dfda503e9a39d6bc904388c1d8fb7d48591cc6a081eaedbcac3451b" gracePeriod=30 Jan 21 11:21:40 crc kubenswrapper[4881]: I0121 11:21:40.114072 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="5a22f004-7d84-4edc-86f7-d58adb131a45" containerName="glance-httpd" containerID="cri-o://9208d05b46bed633028f2197d2ac1411d6db48aa25317dd65e06acc08bb66328" gracePeriod=30 Jan 21 11:21:40 crc kubenswrapper[4881]: I0121 11:21:40.152858 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=4.152829357 podStartE2EDuration="4.152829357s" podCreationTimestamp="2026-01-21 11:21:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 11:21:40.1369128 +0000 UTC m=+1487.396869279" watchObservedRunningTime="2026-01-21 11:21:40.152829357 +0000 UTC m=+1487.412785826" Jan 21 11:21:41 crc kubenswrapper[4881]: I0121 11:21:41.129879 4881 generic.go:334] "Generic (PLEG): container finished" podID="5a22f004-7d84-4edc-86f7-d58adb131a45" containerID="9286d3d52dfda503e9a39d6bc904388c1d8fb7d48591cc6a081eaedbcac3451b" exitCode=143 Jan 21 11:21:41 crc kubenswrapper[4881]: I0121 11:21:41.129991 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" 
event={"ID":"5a22f004-7d84-4edc-86f7-d58adb131a45","Type":"ContainerDied","Data":"9286d3d52dfda503e9a39d6bc904388c1d8fb7d48591cc6a081eaedbcac3451b"} Jan 21 11:21:41 crc kubenswrapper[4881]: I0121 11:21:41.526368 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 21 11:21:42 crc kubenswrapper[4881]: I0121 11:21:42.009130 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 21 11:21:42 crc kubenswrapper[4881]: I0121 11:21:42.009673 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="86debe8b-5d02-4f2e-a311-6106609aeb1c" containerName="glance-log" containerID="cri-o://f3bc5d7bc188f1c4ac565e1d75e559e4a8e17c15c9ed4b157de750543aaa6b37" gracePeriod=30 Jan 21 11:21:42 crc kubenswrapper[4881]: I0121 11:21:42.009903 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="86debe8b-5d02-4f2e-a311-6106609aeb1c" containerName="glance-httpd" containerID="cri-o://2fa6aa1996c6f4201fc93d5c8a39f33293aba78e3cc280dea3665101a00cd065" gracePeriod=30 Jan 21 11:21:42 crc kubenswrapper[4881]: I0121 11:21:42.170072 4881 generic.go:334] "Generic (PLEG): container finished" podID="5a22f004-7d84-4edc-86f7-d58adb131a45" containerID="9208d05b46bed633028f2197d2ac1411d6db48aa25317dd65e06acc08bb66328" exitCode=0 Jan 21 11:21:42 crc kubenswrapper[4881]: I0121 11:21:42.170162 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"5a22f004-7d84-4edc-86f7-d58adb131a45","Type":"ContainerDied","Data":"9208d05b46bed633028f2197d2ac1411d6db48aa25317dd65e06acc08bb66328"} Jan 21 11:21:42 crc kubenswrapper[4881]: I0121 11:21:42.175187 4881 generic.go:334] "Generic (PLEG): container finished" podID="86debe8b-5d02-4f2e-a311-6106609aeb1c" containerID="f3bc5d7bc188f1c4ac565e1d75e559e4a8e17c15c9ed4b157de750543aaa6b37" exitCode=143 Jan 21 11:21:42 crc kubenswrapper[4881]: I0121 11:21:42.175259 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"86debe8b-5d02-4f2e-a311-6106609aeb1c","Type":"ContainerDied","Data":"f3bc5d7bc188f1c4ac565e1d75e559e4a8e17c15c9ed4b157de750543aaa6b37"} Jan 21 11:21:42 crc kubenswrapper[4881]: I0121 11:21:42.178985 4881 generic.go:334] "Generic (PLEG): container finished" podID="75119e97-b896-4b71-ab1f-28db45a4606d" containerID="53e2fe665bdaeb7b9eb972106db909c519d01d1c08141b3cb40de82bd0536330" exitCode=0 Jan 21 11:21:42 crc kubenswrapper[4881]: I0121 11:21:42.179103 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"75119e97-b896-4b71-ab1f-28db45a4606d","Type":"ContainerDied","Data":"53e2fe665bdaeb7b9eb972106db909c519d01d1c08141b3cb40de82bd0536330"} Jan 21 11:21:43 crc kubenswrapper[4881]: I0121 11:21:43.191531 4881 generic.go:334] "Generic (PLEG): container finished" podID="86debe8b-5d02-4f2e-a311-6106609aeb1c" containerID="2fa6aa1996c6f4201fc93d5c8a39f33293aba78e3cc280dea3665101a00cd065" exitCode=0 Jan 21 11:21:43 crc kubenswrapper[4881]: I0121 11:21:43.191631 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"86debe8b-5d02-4f2e-a311-6106609aeb1c","Type":"ContainerDied","Data":"2fa6aa1996c6f4201fc93d5c8a39f33293aba78e3cc280dea3665101a00cd065"} Jan 21 11:21:44 crc kubenswrapper[4881]: I0121 11:21:44.311028 4881 scope.go:117] 
"RemoveContainer" containerID="5ccae223d32b8d30267f4d247c29e77d1942427c122a26bc75e9b00b89fa3bc0" Jan 21 11:21:45 crc kubenswrapper[4881]: I0121 11:21:45.835368 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-7564f958f5-jmdx2" Jan 21 11:21:45 crc kubenswrapper[4881]: I0121 11:21:45.846914 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-7564f958f5-jmdx2" Jan 21 11:21:46 crc kubenswrapper[4881]: I0121 11:21:46.708698 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 21 11:21:46 crc kubenswrapper[4881]: I0121 11:21:46.893640 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 21 11:21:46 crc kubenswrapper[4881]: I0121 11:21:46.895672 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/75119e97-b896-4b71-ab1f-28db45a4606d-log-httpd\") pod \"75119e97-b896-4b71-ab1f-28db45a4606d\" (UID: \"75119e97-b896-4b71-ab1f-28db45a4606d\") " Jan 21 11:21:46 crc kubenswrapper[4881]: I0121 11:21:46.895743 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75119e97-b896-4b71-ab1f-28db45a4606d-config-data\") pod \"75119e97-b896-4b71-ab1f-28db45a4606d\" (UID: \"75119e97-b896-4b71-ab1f-28db45a4606d\") " Jan 21 11:21:46 crc kubenswrapper[4881]: I0121 11:21:46.895829 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/75119e97-b896-4b71-ab1f-28db45a4606d-sg-core-conf-yaml\") pod \"75119e97-b896-4b71-ab1f-28db45a4606d\" (UID: \"75119e97-b896-4b71-ab1f-28db45a4606d\") " Jan 21 11:21:46 crc kubenswrapper[4881]: I0121 11:21:46.895914 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2cmwf\" (UniqueName: \"kubernetes.io/projected/75119e97-b896-4b71-ab1f-28db45a4606d-kube-api-access-2cmwf\") pod \"75119e97-b896-4b71-ab1f-28db45a4606d\" (UID: \"75119e97-b896-4b71-ab1f-28db45a4606d\") " Jan 21 11:21:46 crc kubenswrapper[4881]: I0121 11:21:46.895952 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/75119e97-b896-4b71-ab1f-28db45a4606d-scripts\") pod \"75119e97-b896-4b71-ab1f-28db45a4606d\" (UID: \"75119e97-b896-4b71-ab1f-28db45a4606d\") " Jan 21 11:21:46 crc kubenswrapper[4881]: I0121 11:21:46.896040 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75119e97-b896-4b71-ab1f-28db45a4606d-combined-ca-bundle\") pod \"75119e97-b896-4b71-ab1f-28db45a4606d\" (UID: \"75119e97-b896-4b71-ab1f-28db45a4606d\") " Jan 21 11:21:46 crc kubenswrapper[4881]: I0121 11:21:46.896164 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/75119e97-b896-4b71-ab1f-28db45a4606d-run-httpd\") pod \"75119e97-b896-4b71-ab1f-28db45a4606d\" (UID: \"75119e97-b896-4b71-ab1f-28db45a4606d\") " Jan 21 11:21:46 crc kubenswrapper[4881]: I0121 11:21:46.896957 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/75119e97-b896-4b71-ab1f-28db45a4606d-run-httpd" (OuterVolumeSpecName: "run-httpd") pod 
"75119e97-b896-4b71-ab1f-28db45a4606d" (UID: "75119e97-b896-4b71-ab1f-28db45a4606d"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 11:21:46 crc kubenswrapper[4881]: I0121 11:21:46.897203 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/75119e97-b896-4b71-ab1f-28db45a4606d-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "75119e97-b896-4b71-ab1f-28db45a4606d" (UID: "75119e97-b896-4b71-ab1f-28db45a4606d"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 11:21:46 crc kubenswrapper[4881]: I0121 11:21:46.900962 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 21 11:21:46 crc kubenswrapper[4881]: I0121 11:21:46.918145 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/75119e97-b896-4b71-ab1f-28db45a4606d-kube-api-access-2cmwf" (OuterVolumeSpecName: "kube-api-access-2cmwf") pod "75119e97-b896-4b71-ab1f-28db45a4606d" (UID: "75119e97-b896-4b71-ab1f-28db45a4606d"). InnerVolumeSpecName "kube-api-access-2cmwf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:21:46 crc kubenswrapper[4881]: I0121 11:21:46.934125 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75119e97-b896-4b71-ab1f-28db45a4606d-scripts" (OuterVolumeSpecName: "scripts") pod "75119e97-b896-4b71-ab1f-28db45a4606d" (UID: "75119e97-b896-4b71-ab1f-28db45a4606d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.000940 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75119e97-b896-4b71-ab1f-28db45a4606d-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "75119e97-b896-4b71-ab1f-28db45a4606d" (UID: "75119e97-b896-4b71-ab1f-28db45a4606d"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.005937 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5a22f004-7d84-4edc-86f7-d58adb131a45-public-tls-certs\") pod \"5a22f004-7d84-4edc-86f7-d58adb131a45\" (UID: \"5a22f004-7d84-4edc-86f7-d58adb131a45\") " Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.006000 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/86debe8b-5d02-4f2e-a311-6106609aeb1c-config-data\") pod \"86debe8b-5d02-4f2e-a311-6106609aeb1c\" (UID: \"86debe8b-5d02-4f2e-a311-6106609aeb1c\") " Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.006042 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"86debe8b-5d02-4f2e-a311-6106609aeb1c\" (UID: \"86debe8b-5d02-4f2e-a311-6106609aeb1c\") " Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.006063 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5a22f004-7d84-4edc-86f7-d58adb131a45-httpd-run\") pod \"5a22f004-7d84-4edc-86f7-d58adb131a45\" (UID: \"5a22f004-7d84-4edc-86f7-d58adb131a45\") " Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.006095 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5a22f004-7d84-4edc-86f7-d58adb131a45-scripts\") pod \"5a22f004-7d84-4edc-86f7-d58adb131a45\" (UID: \"5a22f004-7d84-4edc-86f7-d58adb131a45\") " Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.006123 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v6fqw\" (UniqueName: \"kubernetes.io/projected/86debe8b-5d02-4f2e-a311-6106609aeb1c-kube-api-access-v6fqw\") pod \"86debe8b-5d02-4f2e-a311-6106609aeb1c\" (UID: \"86debe8b-5d02-4f2e-a311-6106609aeb1c\") " Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.006215 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a22f004-7d84-4edc-86f7-d58adb131a45-config-data\") pod \"5a22f004-7d84-4edc-86f7-d58adb131a45\" (UID: \"5a22f004-7d84-4edc-86f7-d58adb131a45\") " Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.006234 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/86debe8b-5d02-4f2e-a311-6106609aeb1c-internal-tls-certs\") pod \"86debe8b-5d02-4f2e-a311-6106609aeb1c\" (UID: \"86debe8b-5d02-4f2e-a311-6106609aeb1c\") " Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.006269 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86debe8b-5d02-4f2e-a311-6106609aeb1c-combined-ca-bundle\") pod \"86debe8b-5d02-4f2e-a311-6106609aeb1c\" (UID: \"86debe8b-5d02-4f2e-a311-6106609aeb1c\") " Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.006300 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xh585\" (UniqueName: \"kubernetes.io/projected/5a22f004-7d84-4edc-86f7-d58adb131a45-kube-api-access-xh585\") pod \"5a22f004-7d84-4edc-86f7-d58adb131a45\" (UID: \"5a22f004-7d84-4edc-86f7-d58adb131a45\") " Jan 
21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.006324 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5a22f004-7d84-4edc-86f7-d58adb131a45-logs\") pod \"5a22f004-7d84-4edc-86f7-d58adb131a45\" (UID: \"5a22f004-7d84-4edc-86f7-d58adb131a45\") " Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.006403 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a22f004-7d84-4edc-86f7-d58adb131a45-combined-ca-bundle\") pod \"5a22f004-7d84-4edc-86f7-d58adb131a45\" (UID: \"5a22f004-7d84-4edc-86f7-d58adb131a45\") " Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.006443 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/86debe8b-5d02-4f2e-a311-6106609aeb1c-logs\") pod \"86debe8b-5d02-4f2e-a311-6106609aeb1c\" (UID: \"86debe8b-5d02-4f2e-a311-6106609aeb1c\") " Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.006550 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/86debe8b-5d02-4f2e-a311-6106609aeb1c-scripts\") pod \"86debe8b-5d02-4f2e-a311-6106609aeb1c\" (UID: \"86debe8b-5d02-4f2e-a311-6106609aeb1c\") " Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.006583 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"5a22f004-7d84-4edc-86f7-d58adb131a45\" (UID: \"5a22f004-7d84-4edc-86f7-d58adb131a45\") " Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.006609 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/86debe8b-5d02-4f2e-a311-6106609aeb1c-httpd-run\") pod \"86debe8b-5d02-4f2e-a311-6106609aeb1c\" (UID: \"86debe8b-5d02-4f2e-a311-6106609aeb1c\") " Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.007089 4881 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/75119e97-b896-4b71-ab1f-28db45a4606d-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.007114 4881 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/75119e97-b896-4b71-ab1f-28db45a4606d-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.007123 4881 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/75119e97-b896-4b71-ab1f-28db45a4606d-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.007133 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2cmwf\" (UniqueName: \"kubernetes.io/projected/75119e97-b896-4b71-ab1f-28db45a4606d-kube-api-access-2cmwf\") on node \"crc\" DevicePath \"\"" Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.007143 4881 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/75119e97-b896-4b71-ab1f-28db45a4606d-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.021037 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/86debe8b-5d02-4f2e-a311-6106609aeb1c-httpd-run" 
(OuterVolumeSpecName: "httpd-run") pod "86debe8b-5d02-4f2e-a311-6106609aeb1c" (UID: "86debe8b-5d02-4f2e-a311-6106609aeb1c"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.021347 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/86debe8b-5d02-4f2e-a311-6106609aeb1c-logs" (OuterVolumeSpecName: "logs") pod "86debe8b-5d02-4f2e-a311-6106609aeb1c" (UID: "86debe8b-5d02-4f2e-a311-6106609aeb1c"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.022046 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5a22f004-7d84-4edc-86f7-d58adb131a45-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "5a22f004-7d84-4edc-86f7-d58adb131a45" (UID: "5a22f004-7d84-4edc-86f7-d58adb131a45"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.022236 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5a22f004-7d84-4edc-86f7-d58adb131a45-logs" (OuterVolumeSpecName: "logs") pod "5a22f004-7d84-4edc-86f7-d58adb131a45" (UID: "5a22f004-7d84-4edc-86f7-d58adb131a45"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.054807 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage12-crc" (OuterVolumeSpecName: "glance") pod "5a22f004-7d84-4edc-86f7-d58adb131a45" (UID: "5a22f004-7d84-4edc-86f7-d58adb131a45"). InnerVolumeSpecName "local-storage12-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.055561 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/86debe8b-5d02-4f2e-a311-6106609aeb1c-scripts" (OuterVolumeSpecName: "scripts") pod "86debe8b-5d02-4f2e-a311-6106609aeb1c" (UID: "86debe8b-5d02-4f2e-a311-6106609aeb1c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.057240 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/86debe8b-5d02-4f2e-a311-6106609aeb1c-kube-api-access-v6fqw" (OuterVolumeSpecName: "kube-api-access-v6fqw") pod "86debe8b-5d02-4f2e-a311-6106609aeb1c" (UID: "86debe8b-5d02-4f2e-a311-6106609aeb1c"). InnerVolumeSpecName "kube-api-access-v6fqw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.058025 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a22f004-7d84-4edc-86f7-d58adb131a45-kube-api-access-xh585" (OuterVolumeSpecName: "kube-api-access-xh585") pod "5a22f004-7d84-4edc-86f7-d58adb131a45" (UID: "5a22f004-7d84-4edc-86f7-d58adb131a45"). InnerVolumeSpecName "kube-api-access-xh585". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.068228 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a22f004-7d84-4edc-86f7-d58adb131a45-scripts" (OuterVolumeSpecName: "scripts") pod "5a22f004-7d84-4edc-86f7-d58adb131a45" (UID: "5a22f004-7d84-4edc-86f7-d58adb131a45"). 
InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.069082 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage11-crc" (OuterVolumeSpecName: "glance") pod "86debe8b-5d02-4f2e-a311-6106609aeb1c" (UID: "86debe8b-5d02-4f2e-a311-6106609aeb1c"). InnerVolumeSpecName "local-storage11-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.088999 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75119e97-b896-4b71-ab1f-28db45a4606d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "75119e97-b896-4b71-ab1f-28db45a4606d" (UID: "75119e97-b896-4b71-ab1f-28db45a4606d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.121326 4881 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/86debe8b-5d02-4f2e-a311-6106609aeb1c-logs\") on node \"crc\" DevicePath \"\"" Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.121373 4881 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/86debe8b-5d02-4f2e-a311-6106609aeb1c-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.121400 4881 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" " Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.121415 4881 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/86debe8b-5d02-4f2e-a311-6106609aeb1c-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.121427 4881 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75119e97-b896-4b71-ab1f-28db45a4606d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.121451 4881 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" " Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.121463 4881 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5a22f004-7d84-4edc-86f7-d58adb131a45-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.121474 4881 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5a22f004-7d84-4edc-86f7-d58adb131a45-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.121485 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v6fqw\" (UniqueName: \"kubernetes.io/projected/86debe8b-5d02-4f2e-a311-6106609aeb1c-kube-api-access-v6fqw\") on node \"crc\" DevicePath \"\"" Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.121497 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xh585\" (UniqueName: \"kubernetes.io/projected/5a22f004-7d84-4edc-86f7-d58adb131a45-kube-api-access-xh585\") on node \"crc\" DevicePath \"\"" 
Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.121507 4881 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5a22f004-7d84-4edc-86f7-d58adb131a45-logs\") on node \"crc\" DevicePath \"\"" Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.154511 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/86debe8b-5d02-4f2e-a311-6106609aeb1c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "86debe8b-5d02-4f2e-a311-6106609aeb1c" (UID: "86debe8b-5d02-4f2e-a311-6106609aeb1c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.180102 4881 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage11-crc" (UniqueName: "kubernetes.io/local-volume/local-storage11-crc") on node "crc" Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.197699 4881 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage12-crc" (UniqueName: "kubernetes.io/local-volume/local-storage12-crc") on node "crc" Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.228367 4881 reconciler_common.go:293] "Volume detached for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" DevicePath \"\"" Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.228405 4881 reconciler_common.go:293] "Volume detached for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" DevicePath \"\"" Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.228415 4881 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86debe8b-5d02-4f2e-a311-6106609aeb1c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.235317 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a22f004-7d84-4edc-86f7-d58adb131a45-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5a22f004-7d84-4edc-86f7-d58adb131a45" (UID: "5a22f004-7d84-4edc-86f7-d58adb131a45"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.303589 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.306350 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a22f004-7d84-4edc-86f7-d58adb131a45-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "5a22f004-7d84-4edc-86f7-d58adb131a45" (UID: "5a22f004-7d84-4edc-86f7-d58adb131a45"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.315609 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/86debe8b-5d02-4f2e-a311-6106609aeb1c-config-data" (OuterVolumeSpecName: "config-data") pod "86debe8b-5d02-4f2e-a311-6106609aeb1c" (UID: "86debe8b-5d02-4f2e-a311-6106609aeb1c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.317011 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.327383 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.329868 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a22f004-7d84-4edc-86f7-d58adb131a45-config-data" (OuterVolumeSpecName: "config-data") pod "5a22f004-7d84-4edc-86f7-d58adb131a45" (UID: "5a22f004-7d84-4edc-86f7-d58adb131a45"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.329946 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a22f004-7d84-4edc-86f7-d58adb131a45-config-data\") pod \"5a22f004-7d84-4edc-86f7-d58adb131a45\" (UID: \"5a22f004-7d84-4edc-86f7-d58adb131a45\") " Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.330632 4881 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a22f004-7d84-4edc-86f7-d58adb131a45-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.330654 4881 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5a22f004-7d84-4edc-86f7-d58adb131a45-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.330668 4881 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/86debe8b-5d02-4f2e-a311-6106609aeb1c-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 11:21:47 crc kubenswrapper[4881]: W0121 11:21:47.330800 4881 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/5a22f004-7d84-4edc-86f7-d58adb131a45/volumes/kubernetes.io~secret/config-data Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.330817 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a22f004-7d84-4edc-86f7-d58adb131a45-config-data" (OuterVolumeSpecName: "config-data") pod "5a22f004-7d84-4edc-86f7-d58adb131a45" (UID: "5a22f004-7d84-4edc-86f7-d58adb131a45"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.337211 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.343086 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"75119e97-b896-4b71-ab1f-28db45a4606d","Type":"ContainerDied","Data":"9b7298fa3a3fcd477e8d84c1587f761e32e00a24d488249df9cca1ca349c7bc0"} Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.343153 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"5a22f004-7d84-4edc-86f7-d58adb131a45","Type":"ContainerDied","Data":"c118bf221673b7075db16b12d92f917f44d316d1edbfb63816381a8a7fe9bfa7"} Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.343174 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"86debe8b-5d02-4f2e-a311-6106609aeb1c","Type":"ContainerDied","Data":"d67de62ed844d45b06b45329375dde0d59a63d15e298263c3618894b7576c1ba"} Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.343202 4881 scope.go:117] "RemoveContainer" containerID="80eb788c6d10eab27f68e4afaa093b8aa3a02ead209347f52848e0e84c80db9f" Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.350415 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/86debe8b-5d02-4f2e-a311-6106609aeb1c-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "86debe8b-5d02-4f2e-a311-6106609aeb1c" (UID: "86debe8b-5d02-4f2e-a311-6106609aeb1c"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.357531 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"ee4e7116-c2cd-43d5-af6b-9f30b5053e0e","Type":"ContainerStarted","Data":"4ba0181030ceb68e7fdb5249d09391d40feea2fca13e45d6b4d9c7f3ba56c71d"} Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.360479 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"b0b6ce2c-5ae8-496f-9374-d3069bc3d149","Type":"ContainerStarted","Data":"68d1d3fbf220c6872fbb3ed3d2d8517f6217ec6ebfb2a0e3e14a3c8a97c0baab"} Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.375370 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75119e97-b896-4b71-ab1f-28db45a4606d-config-data" (OuterVolumeSpecName: "config-data") pod "75119e97-b896-4b71-ab1f-28db45a4606d" (UID: "75119e97-b896-4b71-ab1f-28db45a4606d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.392670 4881 scope.go:117] "RemoveContainer" containerID="899f70ee131f6e530963ca573a67921fd95a35fbdae76709308568e8f0b66d06" Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.423054 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=3.527259183 podStartE2EDuration="23.423031059s" podCreationTimestamp="2026-01-21 11:21:24 +0000 UTC" firstStartedPulling="2026-01-21 11:21:26.162955815 +0000 UTC m=+1473.422912274" lastFinishedPulling="2026-01-21 11:21:46.058727681 +0000 UTC m=+1493.318684150" observedRunningTime="2026-01-21 11:21:47.405721209 +0000 UTC m=+1494.665677688" watchObservedRunningTime="2026-01-21 11:21:47.423031059 +0000 UTC m=+1494.682987528" Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.431272 4881 scope.go:117] "RemoveContainer" containerID="53e2fe665bdaeb7b9eb972106db909c519d01d1c08141b3cb40de82bd0536330" Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.439170 4881 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75119e97-b896-4b71-ab1f-28db45a4606d-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.439476 4881 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a22f004-7d84-4edc-86f7-d58adb131a45-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.439675 4881 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/86debe8b-5d02-4f2e-a311-6106609aeb1c-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.458881 4881 scope.go:117] "RemoveContainer" containerID="bc7224d9bf84f344828f19a13fb8096ac19d517cb3bb70d8fce495b5aa46625b" Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.492013 4881 scope.go:117] "RemoveContainer" containerID="9208d05b46bed633028f2197d2ac1411d6db48aa25317dd65e06acc08bb66328" Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.524415 4881 scope.go:117] "RemoveContainer" containerID="9286d3d52dfda503e9a39d6bc904388c1d8fb7d48591cc6a081eaedbcac3451b" Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.551079 4881 scope.go:117] "RemoveContainer" containerID="2fa6aa1996c6f4201fc93d5c8a39f33293aba78e3cc280dea3665101a00cd065" Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.580883 4881 scope.go:117] "RemoveContainer" containerID="f3bc5d7bc188f1c4ac565e1d75e559e4a8e17c15c9ed4b157de750543aaa6b37" Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.663664 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.682775 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.702685 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.719668 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.728829 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 21 11:21:47 crc kubenswrapper[4881]: E0121 11:21:47.729370 4881 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86debe8b-5d02-4f2e-a311-6106609aeb1c" containerName="glance-httpd" Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.729392 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="86debe8b-5d02-4f2e-a311-6106609aeb1c" containerName="glance-httpd" Jan 21 11:21:47 crc kubenswrapper[4881]: E0121 11:21:47.729404 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a22f004-7d84-4edc-86f7-d58adb131a45" containerName="glance-log" Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.729411 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a22f004-7d84-4edc-86f7-d58adb131a45" containerName="glance-log" Jan 21 11:21:47 crc kubenswrapper[4881]: E0121 11:21:47.729424 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75119e97-b896-4b71-ab1f-28db45a4606d" containerName="ceilometer-central-agent" Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.729433 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="75119e97-b896-4b71-ab1f-28db45a4606d" containerName="ceilometer-central-agent" Jan 21 11:21:47 crc kubenswrapper[4881]: E0121 11:21:47.729445 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75119e97-b896-4b71-ab1f-28db45a4606d" containerName="proxy-httpd" Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.729453 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="75119e97-b896-4b71-ab1f-28db45a4606d" containerName="proxy-httpd" Jan 21 11:21:47 crc kubenswrapper[4881]: E0121 11:21:47.729476 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75119e97-b896-4b71-ab1f-28db45a4606d" containerName="ceilometer-notification-agent" Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.729483 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="75119e97-b896-4b71-ab1f-28db45a4606d" containerName="ceilometer-notification-agent" Jan 21 11:21:47 crc kubenswrapper[4881]: E0121 11:21:47.729497 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75119e97-b896-4b71-ab1f-28db45a4606d" containerName="sg-core" Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.729505 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="75119e97-b896-4b71-ab1f-28db45a4606d" containerName="sg-core" Jan 21 11:21:47 crc kubenswrapper[4881]: E0121 11:21:47.729519 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86debe8b-5d02-4f2e-a311-6106609aeb1c" containerName="glance-log" Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.729524 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="86debe8b-5d02-4f2e-a311-6106609aeb1c" containerName="glance-log" Jan 21 11:21:47 crc kubenswrapper[4881]: E0121 11:21:47.729544 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a22f004-7d84-4edc-86f7-d58adb131a45" containerName="glance-httpd" Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.729550 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a22f004-7d84-4edc-86f7-d58adb131a45" containerName="glance-httpd" Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.729773 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="86debe8b-5d02-4f2e-a311-6106609aeb1c" containerName="glance-log" Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.729813 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="75119e97-b896-4b71-ab1f-28db45a4606d" containerName="proxy-httpd" Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 
11:21:47.729829 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="5a22f004-7d84-4edc-86f7-d58adb131a45" containerName="glance-log" Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.729843 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="86debe8b-5d02-4f2e-a311-6106609aeb1c" containerName="glance-httpd" Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.729857 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="75119e97-b896-4b71-ab1f-28db45a4606d" containerName="sg-core" Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.729872 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="75119e97-b896-4b71-ab1f-28db45a4606d" containerName="ceilometer-notification-agent" Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.729884 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="5a22f004-7d84-4edc-86f7-d58adb131a45" containerName="glance-httpd" Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.729894 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="75119e97-b896-4b71-ab1f-28db45a4606d" containerName="ceilometer-central-agent" Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.732243 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.742392 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.742615 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.743337 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.758585 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.771536 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.780559 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.782501 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.793337 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.805609 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.807409 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.807612 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.808448 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.808521 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.814014 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.814318 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.819227 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-f8snw" Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.832930 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.855247 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hr9jz\" (UniqueName: \"kubernetes.io/projected/d84ba548-9d82-44b7-bae5-bf8cf84ecc79-kube-api-access-hr9jz\") pod \"ceilometer-0\" (UID: \"d84ba548-9d82-44b7-bae5-bf8cf84ecc79\") " pod="openstack/ceilometer-0" Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.855647 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d84ba548-9d82-44b7-bae5-bf8cf84ecc79-run-httpd\") pod \"ceilometer-0\" (UID: \"d84ba548-9d82-44b7-bae5-bf8cf84ecc79\") " pod="openstack/ceilometer-0" Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.855824 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d84ba548-9d82-44b7-bae5-bf8cf84ecc79-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d84ba548-9d82-44b7-bae5-bf8cf84ecc79\") " pod="openstack/ceilometer-0" Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.856022 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d84ba548-9d82-44b7-bae5-bf8cf84ecc79-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d84ba548-9d82-44b7-bae5-bf8cf84ecc79\") " pod="openstack/ceilometer-0" Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.856173 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d84ba548-9d82-44b7-bae5-bf8cf84ecc79-log-httpd\") pod \"ceilometer-0\" (UID: \"d84ba548-9d82-44b7-bae5-bf8cf84ecc79\") " pod="openstack/ceilometer-0" Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.856314 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d84ba548-9d82-44b7-bae5-bf8cf84ecc79-scripts\") pod \"ceilometer-0\" (UID: \"d84ba548-9d82-44b7-bae5-bf8cf84ecc79\") " pod="openstack/ceilometer-0" Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.856435 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/d84ba548-9d82-44b7-bae5-bf8cf84ecc79-config-data\") pod \"ceilometer-0\" (UID: \"d84ba548-9d82-44b7-bae5-bf8cf84ecc79\") " pod="openstack/ceilometer-0" Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.958807 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d84ba548-9d82-44b7-bae5-bf8cf84ecc79-config-data\") pod \"ceilometer-0\" (UID: \"d84ba548-9d82-44b7-bae5-bf8cf84ecc79\") " pod="openstack/ceilometer-0" Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.958874 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec8e0779-1552-4ebb-88d7-95a49e734b55-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"ec8e0779-1552-4ebb-88d7-95a49e734b55\") " pod="openstack/glance-default-internal-api-0" Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.958908 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e7b52fc-b295-475c-bef6-074b1cb2a2f5-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"3e7b52fc-b295-475c-bef6-074b1cb2a2f5\") " pod="openstack/glance-default-external-api-0" Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.958973 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3e7b52fc-b295-475c-bef6-074b1cb2a2f5-scripts\") pod \"glance-default-external-api-0\" (UID: \"3e7b52fc-b295-475c-bef6-074b1cb2a2f5\") " pod="openstack/glance-default-external-api-0" Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.959001 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-njpb8\" (UniqueName: \"kubernetes.io/projected/ec8e0779-1552-4ebb-88d7-95a49e734b55-kube-api-access-njpb8\") pod \"glance-default-internal-api-0\" (UID: \"ec8e0779-1552-4ebb-88d7-95a49e734b55\") " pod="openstack/glance-default-internal-api-0" Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.959030 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ec8e0779-1552-4ebb-88d7-95a49e734b55-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"ec8e0779-1552-4ebb-88d7-95a49e734b55\") " pod="openstack/glance-default-internal-api-0" Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.959052 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec8e0779-1552-4ebb-88d7-95a49e734b55-config-data\") pod \"glance-default-internal-api-0\" (UID: \"ec8e0779-1552-4ebb-88d7-95a49e734b55\") " pod="openstack/glance-default-internal-api-0" Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.959072 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6d665\" (UniqueName: \"kubernetes.io/projected/3e7b52fc-b295-475c-bef6-074b1cb2a2f5-kube-api-access-6d665\") pod \"glance-default-external-api-0\" (UID: \"3e7b52fc-b295-475c-bef6-074b1cb2a2f5\") " pod="openstack/glance-default-external-api-0" Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.959095 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ec8e0779-1552-4ebb-88d7-95a49e734b55-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"ec8e0779-1552-4ebb-88d7-95a49e734b55\") " pod="openstack/glance-default-internal-api-0" Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.959132 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hr9jz\" (UniqueName: \"kubernetes.io/projected/d84ba548-9d82-44b7-bae5-bf8cf84ecc79-kube-api-access-hr9jz\") pod \"ceilometer-0\" (UID: \"d84ba548-9d82-44b7-bae5-bf8cf84ecc79\") " pod="openstack/ceilometer-0" Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.959154 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3e7b52fc-b295-475c-bef6-074b1cb2a2f5-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"3e7b52fc-b295-475c-bef6-074b1cb2a2f5\") " pod="openstack/glance-default-external-api-0" Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.959184 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ec8e0779-1552-4ebb-88d7-95a49e734b55-logs\") pod \"glance-default-internal-api-0\" (UID: \"ec8e0779-1552-4ebb-88d7-95a49e734b55\") " pod="openstack/glance-default-internal-api-0" Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.959248 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-external-api-0\" (UID: \"3e7b52fc-b295-475c-bef6-074b1cb2a2f5\") " pod="openstack/glance-default-external-api-0" Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.959273 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3e7b52fc-b295-475c-bef6-074b1cb2a2f5-logs\") pod \"glance-default-external-api-0\" (UID: \"3e7b52fc-b295-475c-bef6-074b1cb2a2f5\") " pod="openstack/glance-default-external-api-0" Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.959293 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ec8e0779-1552-4ebb-88d7-95a49e734b55-scripts\") pod \"glance-default-internal-api-0\" (UID: \"ec8e0779-1552-4ebb-88d7-95a49e734b55\") " pod="openstack/glance-default-internal-api-0" Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.959315 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d84ba548-9d82-44b7-bae5-bf8cf84ecc79-run-httpd\") pod \"ceilometer-0\" (UID: \"d84ba548-9d82-44b7-bae5-bf8cf84ecc79\") " pod="openstack/ceilometer-0" Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.959338 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d84ba548-9d82-44b7-bae5-bf8cf84ecc79-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d84ba548-9d82-44b7-bae5-bf8cf84ecc79\") " pod="openstack/ceilometer-0" Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.959359 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/3e7b52fc-b295-475c-bef6-074b1cb2a2f5-config-data\") pod \"glance-default-external-api-0\" (UID: \"3e7b52fc-b295-475c-bef6-074b1cb2a2f5\") " pod="openstack/glance-default-external-api-0" Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.959409 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3e7b52fc-b295-475c-bef6-074b1cb2a2f5-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"3e7b52fc-b295-475c-bef6-074b1cb2a2f5\") " pod="openstack/glance-default-external-api-0" Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.959449 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d84ba548-9d82-44b7-bae5-bf8cf84ecc79-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d84ba548-9d82-44b7-bae5-bf8cf84ecc79\") " pod="openstack/ceilometer-0" Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.959511 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"ec8e0779-1552-4ebb-88d7-95a49e734b55\") " pod="openstack/glance-default-internal-api-0" Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.959533 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d84ba548-9d82-44b7-bae5-bf8cf84ecc79-log-httpd\") pod \"ceilometer-0\" (UID: \"d84ba548-9d82-44b7-bae5-bf8cf84ecc79\") " pod="openstack/ceilometer-0" Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.959584 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d84ba548-9d82-44b7-bae5-bf8cf84ecc79-scripts\") pod \"ceilometer-0\" (UID: \"d84ba548-9d82-44b7-bae5-bf8cf84ecc79\") " pod="openstack/ceilometer-0" Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.965510 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d84ba548-9d82-44b7-bae5-bf8cf84ecc79-log-httpd\") pod \"ceilometer-0\" (UID: \"d84ba548-9d82-44b7-bae5-bf8cf84ecc79\") " pod="openstack/ceilometer-0" Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.967008 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d84ba548-9d82-44b7-bae5-bf8cf84ecc79-run-httpd\") pod \"ceilometer-0\" (UID: \"d84ba548-9d82-44b7-bae5-bf8cf84ecc79\") " pod="openstack/ceilometer-0" Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.972508 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d84ba548-9d82-44b7-bae5-bf8cf84ecc79-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d84ba548-9d82-44b7-bae5-bf8cf84ecc79\") " pod="openstack/ceilometer-0" Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.973511 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d84ba548-9d82-44b7-bae5-bf8cf84ecc79-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d84ba548-9d82-44b7-bae5-bf8cf84ecc79\") " pod="openstack/ceilometer-0" Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.974740 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d84ba548-9d82-44b7-bae5-bf8cf84ecc79-config-data\") pod \"ceilometer-0\" (UID: \"d84ba548-9d82-44b7-bae5-bf8cf84ecc79\") " pod="openstack/ceilometer-0" Jan 21 11:21:47 crc kubenswrapper[4881]: I0121 11:21:47.998847 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d84ba548-9d82-44b7-bae5-bf8cf84ecc79-scripts\") pod \"ceilometer-0\" (UID: \"d84ba548-9d82-44b7-bae5-bf8cf84ecc79\") " pod="openstack/ceilometer-0" Jan 21 11:21:48 crc kubenswrapper[4881]: I0121 11:21:48.008239 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hr9jz\" (UniqueName: \"kubernetes.io/projected/d84ba548-9d82-44b7-bae5-bf8cf84ecc79-kube-api-access-hr9jz\") pod \"ceilometer-0\" (UID: \"d84ba548-9d82-44b7-bae5-bf8cf84ecc79\") " pod="openstack/ceilometer-0" Jan 21 11:21:48 crc kubenswrapper[4881]: I0121 11:21:48.061069 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec8e0779-1552-4ebb-88d7-95a49e734b55-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"ec8e0779-1552-4ebb-88d7-95a49e734b55\") " pod="openstack/glance-default-internal-api-0" Jan 21 11:21:48 crc kubenswrapper[4881]: I0121 11:21:48.061135 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e7b52fc-b295-475c-bef6-074b1cb2a2f5-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"3e7b52fc-b295-475c-bef6-074b1cb2a2f5\") " pod="openstack/glance-default-external-api-0" Jan 21 11:21:48 crc kubenswrapper[4881]: I0121 11:21:48.061181 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3e7b52fc-b295-475c-bef6-074b1cb2a2f5-scripts\") pod \"glance-default-external-api-0\" (UID: \"3e7b52fc-b295-475c-bef6-074b1cb2a2f5\") " pod="openstack/glance-default-external-api-0" Jan 21 11:21:48 crc kubenswrapper[4881]: I0121 11:21:48.061206 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-njpb8\" (UniqueName: \"kubernetes.io/projected/ec8e0779-1552-4ebb-88d7-95a49e734b55-kube-api-access-njpb8\") pod \"glance-default-internal-api-0\" (UID: \"ec8e0779-1552-4ebb-88d7-95a49e734b55\") " pod="openstack/glance-default-internal-api-0" Jan 21 11:21:48 crc kubenswrapper[4881]: I0121 11:21:48.061234 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec8e0779-1552-4ebb-88d7-95a49e734b55-config-data\") pod \"glance-default-internal-api-0\" (UID: \"ec8e0779-1552-4ebb-88d7-95a49e734b55\") " pod="openstack/glance-default-internal-api-0" Jan 21 11:21:48 crc kubenswrapper[4881]: I0121 11:21:48.061255 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ec8e0779-1552-4ebb-88d7-95a49e734b55-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"ec8e0779-1552-4ebb-88d7-95a49e734b55\") " pod="openstack/glance-default-internal-api-0" Jan 21 11:21:48 crc kubenswrapper[4881]: I0121 11:21:48.061278 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6d665\" (UniqueName: \"kubernetes.io/projected/3e7b52fc-b295-475c-bef6-074b1cb2a2f5-kube-api-access-6d665\") pod 
\"glance-default-external-api-0\" (UID: \"3e7b52fc-b295-475c-bef6-074b1cb2a2f5\") " pod="openstack/glance-default-external-api-0" Jan 21 11:21:48 crc kubenswrapper[4881]: I0121 11:21:48.061298 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ec8e0779-1552-4ebb-88d7-95a49e734b55-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"ec8e0779-1552-4ebb-88d7-95a49e734b55\") " pod="openstack/glance-default-internal-api-0" Jan 21 11:21:48 crc kubenswrapper[4881]: I0121 11:21:48.061332 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3e7b52fc-b295-475c-bef6-074b1cb2a2f5-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"3e7b52fc-b295-475c-bef6-074b1cb2a2f5\") " pod="openstack/glance-default-external-api-0" Jan 21 11:21:48 crc kubenswrapper[4881]: I0121 11:21:48.061362 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ec8e0779-1552-4ebb-88d7-95a49e734b55-logs\") pod \"glance-default-internal-api-0\" (UID: \"ec8e0779-1552-4ebb-88d7-95a49e734b55\") " pod="openstack/glance-default-internal-api-0" Jan 21 11:21:48 crc kubenswrapper[4881]: I0121 11:21:48.061428 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-external-api-0\" (UID: \"3e7b52fc-b295-475c-bef6-074b1cb2a2f5\") " pod="openstack/glance-default-external-api-0" Jan 21 11:21:48 crc kubenswrapper[4881]: I0121 11:21:48.061453 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3e7b52fc-b295-475c-bef6-074b1cb2a2f5-logs\") pod \"glance-default-external-api-0\" (UID: \"3e7b52fc-b295-475c-bef6-074b1cb2a2f5\") " pod="openstack/glance-default-external-api-0" Jan 21 11:21:48 crc kubenswrapper[4881]: I0121 11:21:48.061474 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ec8e0779-1552-4ebb-88d7-95a49e734b55-scripts\") pod \"glance-default-internal-api-0\" (UID: \"ec8e0779-1552-4ebb-88d7-95a49e734b55\") " pod="openstack/glance-default-internal-api-0" Jan 21 11:21:48 crc kubenswrapper[4881]: I0121 11:21:48.061504 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e7b52fc-b295-475c-bef6-074b1cb2a2f5-config-data\") pod \"glance-default-external-api-0\" (UID: \"3e7b52fc-b295-475c-bef6-074b1cb2a2f5\") " pod="openstack/glance-default-external-api-0" Jan 21 11:21:48 crc kubenswrapper[4881]: I0121 11:21:48.061559 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3e7b52fc-b295-475c-bef6-074b1cb2a2f5-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"3e7b52fc-b295-475c-bef6-074b1cb2a2f5\") " pod="openstack/glance-default-external-api-0" Jan 21 11:21:48 crc kubenswrapper[4881]: I0121 11:21:48.061640 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"ec8e0779-1552-4ebb-88d7-95a49e734b55\") " pod="openstack/glance-default-internal-api-0" Jan 21 11:21:48 crc 
kubenswrapper[4881]: I0121 11:21:48.061919 4881 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"ec8e0779-1552-4ebb-88d7-95a49e734b55\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/glance-default-internal-api-0" Jan 21 11:21:48 crc kubenswrapper[4881]: I0121 11:21:48.062037 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ec8e0779-1552-4ebb-88d7-95a49e734b55-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"ec8e0779-1552-4ebb-88d7-95a49e734b55\") " pod="openstack/glance-default-internal-api-0" Jan 21 11:21:48 crc kubenswrapper[4881]: I0121 11:21:48.062458 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3e7b52fc-b295-475c-bef6-074b1cb2a2f5-logs\") pod \"glance-default-external-api-0\" (UID: \"3e7b52fc-b295-475c-bef6-074b1cb2a2f5\") " pod="openstack/glance-default-external-api-0" Jan 21 11:21:48 crc kubenswrapper[4881]: I0121 11:21:48.062551 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ec8e0779-1552-4ebb-88d7-95a49e734b55-logs\") pod \"glance-default-internal-api-0\" (UID: \"ec8e0779-1552-4ebb-88d7-95a49e734b55\") " pod="openstack/glance-default-internal-api-0" Jan 21 11:21:48 crc kubenswrapper[4881]: I0121 11:21:48.062570 4881 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-external-api-0\" (UID: \"3e7b52fc-b295-475c-bef6-074b1cb2a2f5\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/glance-default-external-api-0" Jan 21 11:21:48 crc kubenswrapper[4881]: I0121 11:21:48.062918 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3e7b52fc-b295-475c-bef6-074b1cb2a2f5-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"3e7b52fc-b295-475c-bef6-074b1cb2a2f5\") " pod="openstack/glance-default-external-api-0" Jan 21 11:21:48 crc kubenswrapper[4881]: I0121 11:21:48.074897 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec8e0779-1552-4ebb-88d7-95a49e734b55-config-data\") pod \"glance-default-internal-api-0\" (UID: \"ec8e0779-1552-4ebb-88d7-95a49e734b55\") " pod="openstack/glance-default-internal-api-0" Jan 21 11:21:48 crc kubenswrapper[4881]: I0121 11:21:48.076714 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e7b52fc-b295-475c-bef6-074b1cb2a2f5-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"3e7b52fc-b295-475c-bef6-074b1cb2a2f5\") " pod="openstack/glance-default-external-api-0" Jan 21 11:21:48 crc kubenswrapper[4881]: I0121 11:21:48.076713 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ec8e0779-1552-4ebb-88d7-95a49e734b55-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"ec8e0779-1552-4ebb-88d7-95a49e734b55\") " pod="openstack/glance-default-internal-api-0" Jan 21 11:21:48 crc kubenswrapper[4881]: I0121 11:21:48.077792 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/3e7b52fc-b295-475c-bef6-074b1cb2a2f5-config-data\") pod \"glance-default-external-api-0\" (UID: \"3e7b52fc-b295-475c-bef6-074b1cb2a2f5\") " pod="openstack/glance-default-external-api-0" Jan 21 11:21:48 crc kubenswrapper[4881]: I0121 11:21:48.078891 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3e7b52fc-b295-475c-bef6-074b1cb2a2f5-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"3e7b52fc-b295-475c-bef6-074b1cb2a2f5\") " pod="openstack/glance-default-external-api-0" Jan 21 11:21:48 crc kubenswrapper[4881]: I0121 11:21:48.079338 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3e7b52fc-b295-475c-bef6-074b1cb2a2f5-scripts\") pod \"glance-default-external-api-0\" (UID: \"3e7b52fc-b295-475c-bef6-074b1cb2a2f5\") " pod="openstack/glance-default-external-api-0" Jan 21 11:21:48 crc kubenswrapper[4881]: I0121 11:21:48.079947 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec8e0779-1552-4ebb-88d7-95a49e734b55-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"ec8e0779-1552-4ebb-88d7-95a49e734b55\") " pod="openstack/glance-default-internal-api-0" Jan 21 11:21:48 crc kubenswrapper[4881]: I0121 11:21:48.081592 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ec8e0779-1552-4ebb-88d7-95a49e734b55-scripts\") pod \"glance-default-internal-api-0\" (UID: \"ec8e0779-1552-4ebb-88d7-95a49e734b55\") " pod="openstack/glance-default-internal-api-0" Jan 21 11:21:48 crc kubenswrapper[4881]: I0121 11:21:48.083739 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-njpb8\" (UniqueName: \"kubernetes.io/projected/ec8e0779-1552-4ebb-88d7-95a49e734b55-kube-api-access-njpb8\") pod \"glance-default-internal-api-0\" (UID: \"ec8e0779-1552-4ebb-88d7-95a49e734b55\") " pod="openstack/glance-default-internal-api-0" Jan 21 11:21:48 crc kubenswrapper[4881]: I0121 11:21:48.086509 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6d665\" (UniqueName: \"kubernetes.io/projected/3e7b52fc-b295-475c-bef6-074b1cb2a2f5-kube-api-access-6d665\") pod \"glance-default-external-api-0\" (UID: \"3e7b52fc-b295-475c-bef6-074b1cb2a2f5\") " pod="openstack/glance-default-external-api-0" Jan 21 11:21:48 crc kubenswrapper[4881]: I0121 11:21:48.100840 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-external-api-0\" (UID: \"3e7b52fc-b295-475c-bef6-074b1cb2a2f5\") " pod="openstack/glance-default-external-api-0" Jan 21 11:21:48 crc kubenswrapper[4881]: I0121 11:21:48.120800 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"ec8e0779-1552-4ebb-88d7-95a49e734b55\") " pod="openstack/glance-default-internal-api-0" Jan 21 11:21:48 crc kubenswrapper[4881]: I0121 11:21:48.137607 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 21 11:21:48 crc kubenswrapper[4881]: I0121 11:21:48.157217 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 21 11:21:48 crc kubenswrapper[4881]: I0121 11:21:48.182359 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 21 11:21:48 crc kubenswrapper[4881]: I0121 11:21:48.785851 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 21 11:21:48 crc kubenswrapper[4881]: I0121 11:21:48.967745 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 21 11:21:48 crc kubenswrapper[4881]: W0121 11:21:48.970653 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podec8e0779_1552_4ebb_88d7_95a49e734b55.slice/crio-39110a92c180d47914e6a9442ccb9e89aabc202538f5a509c78ad2619ec9a5f9 WatchSource:0}: Error finding container 39110a92c180d47914e6a9442ccb9e89aabc202538f5a509c78ad2619ec9a5f9: Status 404 returned error can't find the container with id 39110a92c180d47914e6a9442ccb9e89aabc202538f5a509c78ad2619ec9a5f9 Jan 21 11:21:49 crc kubenswrapper[4881]: I0121 11:21:49.324205 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5a22f004-7d84-4edc-86f7-d58adb131a45" path="/var/lib/kubelet/pods/5a22f004-7d84-4edc-86f7-d58adb131a45/volumes" Jan 21 11:21:49 crc kubenswrapper[4881]: I0121 11:21:49.325393 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="75119e97-b896-4b71-ab1f-28db45a4606d" path="/var/lib/kubelet/pods/75119e97-b896-4b71-ab1f-28db45a4606d/volumes" Jan 21 11:21:49 crc kubenswrapper[4881]: I0121 11:21:49.326766 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="86debe8b-5d02-4f2e-a311-6106609aeb1c" path="/var/lib/kubelet/pods/86debe8b-5d02-4f2e-a311-6106609aeb1c/volumes" Jan 21 11:21:49 crc kubenswrapper[4881]: I0121 11:21:49.413633 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"ec8e0779-1552-4ebb-88d7-95a49e734b55","Type":"ContainerStarted","Data":"39110a92c180d47914e6a9442ccb9e89aabc202538f5a509c78ad2619ec9a5f9"} Jan 21 11:21:49 crc kubenswrapper[4881]: I0121 11:21:49.423601 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d84ba548-9d82-44b7-bae5-bf8cf84ecc79","Type":"ContainerStarted","Data":"5ef74248d816cbba0967845a616d8ff93c71875da1f2537b3583d30494d188a0"} Jan 21 11:21:49 crc kubenswrapper[4881]: I0121 11:21:49.423654 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d84ba548-9d82-44b7-bae5-bf8cf84ecc79","Type":"ContainerStarted","Data":"95c906c4b339a07e39ec45c37bd23642eb30462373347c321f4ca0cc4f7e8653"} Jan 21 11:21:49 crc kubenswrapper[4881]: I0121 11:21:49.498119 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Jan 21 11:21:49 crc kubenswrapper[4881]: I0121 11:21:49.545327 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 21 11:21:49 crc kubenswrapper[4881]: I0121 11:21:49.556863 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-decision-engine-0" Jan 21 11:21:50 crc kubenswrapper[4881]: I0121 11:21:50.440380 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
event={"ID":"ec8e0779-1552-4ebb-88d7-95a49e734b55","Type":"ContainerStarted","Data":"8bcb045bcc62c4f01ca1a6052f969375e7aa0b8011729a55dd9e236ba89e4036"} Jan 21 11:21:50 crc kubenswrapper[4881]: I0121 11:21:50.450178 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d84ba548-9d82-44b7-bae5-bf8cf84ecc79","Type":"ContainerStarted","Data":"53f83f934fef330d755d320c983315d32feeaac6da62dbb78c115b45e16f216a"} Jan 21 11:21:50 crc kubenswrapper[4881]: I0121 11:21:50.452302 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"3e7b52fc-b295-475c-bef6-074b1cb2a2f5","Type":"ContainerStarted","Data":"76bc3f20d39aef05146ba621d24aec9817e955bcea55e3efe174d033160d4c2f"} Jan 21 11:21:50 crc kubenswrapper[4881]: I0121 11:21:50.452568 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-decision-engine-0" Jan 21 11:21:50 crc kubenswrapper[4881]: I0121 11:21:50.517156 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-decision-engine-0" Jan 21 11:21:51 crc kubenswrapper[4881]: I0121 11:21:51.069557 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-decision-engine-0"] Jan 21 11:21:51 crc kubenswrapper[4881]: I0121 11:21:51.479225 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d84ba548-9d82-44b7-bae5-bf8cf84ecc79","Type":"ContainerStarted","Data":"0967e57a0feff48d2185c1e282e0585b131cee338ade45ea85673a62193b1f57"} Jan 21 11:21:51 crc kubenswrapper[4881]: I0121 11:21:51.481798 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"3e7b52fc-b295-475c-bef6-074b1cb2a2f5","Type":"ContainerStarted","Data":"55dc30928b183d510f03cc70c0e25705360cd5d87786d3622db2ad0b70290c03"} Jan 21 11:21:51 crc kubenswrapper[4881]: I0121 11:21:51.491038 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"ec8e0779-1552-4ebb-88d7-95a49e734b55","Type":"ContainerStarted","Data":"b1c6240f599b9b984ffca9fcfd23cfeb7e6e9f84572b199e0dc9b03860eae9e1"} Jan 21 11:21:51 crc kubenswrapper[4881]: I0121 11:21:51.515342 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=4.515319601 podStartE2EDuration="4.515319601s" podCreationTimestamp="2026-01-21 11:21:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 11:21:51.508331617 +0000 UTC m=+1498.768288106" watchObservedRunningTime="2026-01-21 11:21:51.515319601 +0000 UTC m=+1498.775276070" Jan 21 11:21:51 crc kubenswrapper[4881]: I0121 11:21:51.536339 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=4.536314773 podStartE2EDuration="4.536314773s" podCreationTimestamp="2026-01-21 11:21:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 11:21:51.531975855 +0000 UTC m=+1498.791932324" watchObservedRunningTime="2026-01-21 11:21:51.536314773 +0000 UTC m=+1498.796271242" Jan 21 11:21:52 crc kubenswrapper[4881]: I0121 11:21:52.504682 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" 
event={"ID":"3e7b52fc-b295-475c-bef6-074b1cb2a2f5","Type":"ContainerStarted","Data":"a297f0a54d599fae684fb0eb10035eee89893af8546f6e640b1500c94c2b065d"} Jan 21 11:21:52 crc kubenswrapper[4881]: I0121 11:21:52.504944 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/watcher-decision-engine-0" podUID="ee4e7116-c2cd-43d5-af6b-9f30b5053e0e" containerName="watcher-decision-engine" containerID="cri-o://4ba0181030ceb68e7fdb5249d09391d40feea2fca13e45d6b4d9c7f3ba56c71d" gracePeriod=30 Jan 21 11:21:53 crc kubenswrapper[4881]: I0121 11:21:53.040286 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-b85xv"] Jan 21 11:21:53 crc kubenswrapper[4881]: I0121 11:21:53.051493 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-b85xv" Jan 21 11:21:53 crc kubenswrapper[4881]: I0121 11:21:53.054926 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-b85xv"] Jan 21 11:21:53 crc kubenswrapper[4881]: I0121 11:21:53.082218 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2a601b0e-b326-4e55-901e-08a32fe24005-operator-scripts\") pod \"nova-api-db-create-b85xv\" (UID: \"2a601b0e-b326-4e55-901e-08a32fe24005\") " pod="openstack/nova-api-db-create-b85xv" Jan 21 11:21:53 crc kubenswrapper[4881]: I0121 11:21:53.082290 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s4g8z\" (UniqueName: \"kubernetes.io/projected/2a601b0e-b326-4e55-901e-08a32fe24005-kube-api-access-s4g8z\") pod \"nova-api-db-create-b85xv\" (UID: \"2a601b0e-b326-4e55-901e-08a32fe24005\") " pod="openstack/nova-api-db-create-b85xv" Jan 21 11:21:53 crc kubenswrapper[4881]: I0121 11:21:53.160877 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-jdk2x"] Jan 21 11:21:53 crc kubenswrapper[4881]: I0121 11:21:53.162543 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-jdk2x" Jan 21 11:21:53 crc kubenswrapper[4881]: I0121 11:21:53.182949 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-jdk2x"] Jan 21 11:21:53 crc kubenswrapper[4881]: I0121 11:21:53.184973 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/502efce3-0d16-491d-b6fa-1b1d98f76d4b-operator-scripts\") pod \"nova-cell0-db-create-jdk2x\" (UID: \"502efce3-0d16-491d-b6fa-1b1d98f76d4b\") " pod="openstack/nova-cell0-db-create-jdk2x" Jan 21 11:21:53 crc kubenswrapper[4881]: I0121 11:21:53.185076 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2a601b0e-b326-4e55-901e-08a32fe24005-operator-scripts\") pod \"nova-api-db-create-b85xv\" (UID: \"2a601b0e-b326-4e55-901e-08a32fe24005\") " pod="openstack/nova-api-db-create-b85xv" Jan 21 11:21:53 crc kubenswrapper[4881]: I0121 11:21:53.185116 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s4g8z\" (UniqueName: \"kubernetes.io/projected/2a601b0e-b326-4e55-901e-08a32fe24005-kube-api-access-s4g8z\") pod \"nova-api-db-create-b85xv\" (UID: \"2a601b0e-b326-4e55-901e-08a32fe24005\") " pod="openstack/nova-api-db-create-b85xv" Jan 21 11:21:53 crc kubenswrapper[4881]: I0121 11:21:53.185227 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5x9h2\" (UniqueName: \"kubernetes.io/projected/502efce3-0d16-491d-b6fa-1b1d98f76d4b-kube-api-access-5x9h2\") pod \"nova-cell0-db-create-jdk2x\" (UID: \"502efce3-0d16-491d-b6fa-1b1d98f76d4b\") " pod="openstack/nova-cell0-db-create-jdk2x" Jan 21 11:21:53 crc kubenswrapper[4881]: I0121 11:21:53.186287 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2a601b0e-b326-4e55-901e-08a32fe24005-operator-scripts\") pod \"nova-api-db-create-b85xv\" (UID: \"2a601b0e-b326-4e55-901e-08a32fe24005\") " pod="openstack/nova-api-db-create-b85xv" Jan 21 11:21:53 crc kubenswrapper[4881]: I0121 11:21:53.219715 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s4g8z\" (UniqueName: \"kubernetes.io/projected/2a601b0e-b326-4e55-901e-08a32fe24005-kube-api-access-s4g8z\") pod \"nova-api-db-create-b85xv\" (UID: \"2a601b0e-b326-4e55-901e-08a32fe24005\") " pod="openstack/nova-api-db-create-b85xv" Jan 21 11:21:53 crc kubenswrapper[4881]: I0121 11:21:53.287481 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/502efce3-0d16-491d-b6fa-1b1d98f76d4b-operator-scripts\") pod \"nova-cell0-db-create-jdk2x\" (UID: \"502efce3-0d16-491d-b6fa-1b1d98f76d4b\") " pod="openstack/nova-cell0-db-create-jdk2x" Jan 21 11:21:53 crc kubenswrapper[4881]: I0121 11:21:53.287936 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5x9h2\" (UniqueName: \"kubernetes.io/projected/502efce3-0d16-491d-b6fa-1b1d98f76d4b-kube-api-access-5x9h2\") pod \"nova-cell0-db-create-jdk2x\" (UID: \"502efce3-0d16-491d-b6fa-1b1d98f76d4b\") " pod="openstack/nova-cell0-db-create-jdk2x" Jan 21 11:21:53 crc kubenswrapper[4881]: I0121 11:21:53.288806 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" 
(UniqueName: \"kubernetes.io/configmap/502efce3-0d16-491d-b6fa-1b1d98f76d4b-operator-scripts\") pod \"nova-cell0-db-create-jdk2x\" (UID: \"502efce3-0d16-491d-b6fa-1b1d98f76d4b\") " pod="openstack/nova-cell0-db-create-jdk2x" Jan 21 11:21:53 crc kubenswrapper[4881]: I0121 11:21:53.333749 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5x9h2\" (UniqueName: \"kubernetes.io/projected/502efce3-0d16-491d-b6fa-1b1d98f76d4b-kube-api-access-5x9h2\") pod \"nova-cell0-db-create-jdk2x\" (UID: \"502efce3-0d16-491d-b6fa-1b1d98f76d4b\") " pod="openstack/nova-cell0-db-create-jdk2x" Jan 21 11:21:53 crc kubenswrapper[4881]: I0121 11:21:53.377182 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-fb46-account-create-update-xxwmq"] Jan 21 11:21:53 crc kubenswrapper[4881]: I0121 11:21:53.378466 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-fb46-account-create-update-xxwmq" Jan 21 11:21:53 crc kubenswrapper[4881]: I0121 11:21:53.391020 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-b85xv" Jan 21 11:21:53 crc kubenswrapper[4881]: I0121 11:21:53.410162 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6f9n8\" (UniqueName: \"kubernetes.io/projected/29487dae-24e9-4d5b-9819-99516df78630-kube-api-access-6f9n8\") pod \"nova-api-fb46-account-create-update-xxwmq\" (UID: \"29487dae-24e9-4d5b-9819-99516df78630\") " pod="openstack/nova-api-fb46-account-create-update-xxwmq" Jan 21 11:21:53 crc kubenswrapper[4881]: I0121 11:21:53.410594 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/29487dae-24e9-4d5b-9819-99516df78630-operator-scripts\") pod \"nova-api-fb46-account-create-update-xxwmq\" (UID: \"29487dae-24e9-4d5b-9819-99516df78630\") " pod="openstack/nova-api-fb46-account-create-update-xxwmq" Jan 21 11:21:53 crc kubenswrapper[4881]: I0121 11:21:53.424927 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Jan 21 11:21:53 crc kubenswrapper[4881]: I0121 11:21:53.492912 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-f99bl"] Jan 21 11:21:53 crc kubenswrapper[4881]: I0121 11:21:53.516490 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/29487dae-24e9-4d5b-9819-99516df78630-operator-scripts\") pod \"nova-api-fb46-account-create-update-xxwmq\" (UID: \"29487dae-24e9-4d5b-9819-99516df78630\") " pod="openstack/nova-api-fb46-account-create-update-xxwmq" Jan 21 11:21:53 crc kubenswrapper[4881]: I0121 11:21:53.519997 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6f9n8\" (UniqueName: \"kubernetes.io/projected/29487dae-24e9-4d5b-9819-99516df78630-kube-api-access-6f9n8\") pod \"nova-api-fb46-account-create-update-xxwmq\" (UID: \"29487dae-24e9-4d5b-9819-99516df78630\") " pod="openstack/nova-api-fb46-account-create-update-xxwmq" Jan 21 11:21:53 crc kubenswrapper[4881]: I0121 11:21:53.527248 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/29487dae-24e9-4d5b-9819-99516df78630-operator-scripts\") pod \"nova-api-fb46-account-create-update-xxwmq\" (UID: 
\"29487dae-24e9-4d5b-9819-99516df78630\") " pod="openstack/nova-api-fb46-account-create-update-xxwmq" Jan 21 11:21:53 crc kubenswrapper[4881]: I0121 11:21:53.551111 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-fb46-account-create-update-xxwmq"] Jan 21 11:21:53 crc kubenswrapper[4881]: I0121 11:21:53.551248 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-f99bl" Jan 21 11:21:53 crc kubenswrapper[4881]: I0121 11:21:53.612482 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-jdk2x" Jan 21 11:21:53 crc kubenswrapper[4881]: I0121 11:21:53.629733 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6f9n8\" (UniqueName: \"kubernetes.io/projected/29487dae-24e9-4d5b-9819-99516df78630-kube-api-access-6f9n8\") pod \"nova-api-fb46-account-create-update-xxwmq\" (UID: \"29487dae-24e9-4d5b-9819-99516df78630\") " pod="openstack/nova-api-fb46-account-create-update-xxwmq" Jan 21 11:21:53 crc kubenswrapper[4881]: I0121 11:21:53.670284 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d84ba548-9d82-44b7-bae5-bf8cf84ecc79","Type":"ContainerStarted","Data":"786551fea0a0b08ed4797eaa4ac0bd544644fed6b4135ad7593d1cf541bbe884"} Jan 21 11:21:53 crc kubenswrapper[4881]: I0121 11:21:53.670378 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 21 11:21:53 crc kubenswrapper[4881]: I0121 11:21:53.727420 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-fb46-account-create-update-xxwmq" Jan 21 11:21:53 crc kubenswrapper[4881]: I0121 11:21:53.731768 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f2c35a47-0e6e-4760-9026-617ca187b066-operator-scripts\") pod \"nova-cell1-db-create-f99bl\" (UID: \"f2c35a47-0e6e-4760-9026-617ca187b066\") " pod="openstack/nova-cell1-db-create-f99bl" Jan 21 11:21:53 crc kubenswrapper[4881]: I0121 11:21:53.732077 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lhnvm\" (UniqueName: \"kubernetes.io/projected/f2c35a47-0e6e-4760-9026-617ca187b066-kube-api-access-lhnvm\") pod \"nova-cell1-db-create-f99bl\" (UID: \"f2c35a47-0e6e-4760-9026-617ca187b066\") " pod="openstack/nova-cell1-db-create-f99bl" Jan 21 11:21:53 crc kubenswrapper[4881]: I0121 11:21:53.741180 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-f99bl"] Jan 21 11:21:53 crc kubenswrapper[4881]: I0121 11:21:53.834237 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lhnvm\" (UniqueName: \"kubernetes.io/projected/f2c35a47-0e6e-4760-9026-617ca187b066-kube-api-access-lhnvm\") pod \"nova-cell1-db-create-f99bl\" (UID: \"f2c35a47-0e6e-4760-9026-617ca187b066\") " pod="openstack/nova-cell1-db-create-f99bl" Jan 21 11:21:53 crc kubenswrapper[4881]: I0121 11:21:53.834319 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f2c35a47-0e6e-4760-9026-617ca187b066-operator-scripts\") pod \"nova-cell1-db-create-f99bl\" (UID: \"f2c35a47-0e6e-4760-9026-617ca187b066\") " pod="openstack/nova-cell1-db-create-f99bl" Jan 21 11:21:53 crc kubenswrapper[4881]: I0121 
11:21:53.842451 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f2c35a47-0e6e-4760-9026-617ca187b066-operator-scripts\") pod \"nova-cell1-db-create-f99bl\" (UID: \"f2c35a47-0e6e-4760-9026-617ca187b066\") " pod="openstack/nova-cell1-db-create-f99bl" Jan 21 11:21:53 crc kubenswrapper[4881]: I0121 11:21:53.897299 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lhnvm\" (UniqueName: \"kubernetes.io/projected/f2c35a47-0e6e-4760-9026-617ca187b066-kube-api-access-lhnvm\") pod \"nova-cell1-db-create-f99bl\" (UID: \"f2c35a47-0e6e-4760-9026-617ca187b066\") " pod="openstack/nova-cell1-db-create-f99bl" Jan 21 11:21:53 crc kubenswrapper[4881]: I0121 11:21:53.933083 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-5627-account-create-update-mbnwf"] Jan 21 11:21:53 crc kubenswrapper[4881]: I0121 11:21:53.934540 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-5627-account-create-update-mbnwf" Jan 21 11:21:53 crc kubenswrapper[4881]: I0121 11:21:53.939563 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Jan 21 11:21:53 crc kubenswrapper[4881]: I0121 11:21:53.989926 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-5627-account-create-update-mbnwf"] Jan 21 11:21:54 crc kubenswrapper[4881]: I0121 11:21:54.016513 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.556144986 podStartE2EDuration="7.016489137s" podCreationTimestamp="2026-01-21 11:21:47 +0000 UTC" firstStartedPulling="2026-01-21 11:21:48.80186767 +0000 UTC m=+1496.061824129" lastFinishedPulling="2026-01-21 11:21:52.262211811 +0000 UTC m=+1499.522168280" observedRunningTime="2026-01-21 11:21:53.723420122 +0000 UTC m=+1500.983376591" watchObservedRunningTime="2026-01-21 11:21:54.016489137 +0000 UTC m=+1501.276445606" Jan 21 11:21:54 crc kubenswrapper[4881]: I0121 11:21:54.040329 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4dmdm\" (UniqueName: \"kubernetes.io/projected/de50b4a3-643f-4e4a-9853-b794eae5c08c-kube-api-access-4dmdm\") pod \"nova-cell0-5627-account-create-update-mbnwf\" (UID: \"de50b4a3-643f-4e4a-9853-b794eae5c08c\") " pod="openstack/nova-cell0-5627-account-create-update-mbnwf" Jan 21 11:21:54 crc kubenswrapper[4881]: I0121 11:21:54.040553 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/de50b4a3-643f-4e4a-9853-b794eae5c08c-operator-scripts\") pod \"nova-cell0-5627-account-create-update-mbnwf\" (UID: \"de50b4a3-643f-4e4a-9853-b794eae5c08c\") " pod="openstack/nova-cell0-5627-account-create-update-mbnwf" Jan 21 11:21:54 crc kubenswrapper[4881]: I0121 11:21:54.055091 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-b4dc-account-create-update-46bk2"] Jan 21 11:21:54 crc kubenswrapper[4881]: I0121 11:21:54.059246 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-b4dc-account-create-update-46bk2" Jan 21 11:21:54 crc kubenswrapper[4881]: I0121 11:21:54.064562 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-b4dc-account-create-update-46bk2"] Jan 21 11:21:54 crc kubenswrapper[4881]: I0121 11:21:54.064943 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Jan 21 11:21:54 crc kubenswrapper[4881]: I0121 11:21:54.083807 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-f99bl" Jan 21 11:21:54 crc kubenswrapper[4881]: I0121 11:21:54.144279 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/de50b4a3-643f-4e4a-9853-b794eae5c08c-operator-scripts\") pod \"nova-cell0-5627-account-create-update-mbnwf\" (UID: \"de50b4a3-643f-4e4a-9853-b794eae5c08c\") " pod="openstack/nova-cell0-5627-account-create-update-mbnwf" Jan 21 11:21:54 crc kubenswrapper[4881]: I0121 11:21:54.144654 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4d8a04fd-1a86-454f-bd69-64ad270b8357-operator-scripts\") pod \"nova-cell1-b4dc-account-create-update-46bk2\" (UID: \"4d8a04fd-1a86-454f-bd69-64ad270b8357\") " pod="openstack/nova-cell1-b4dc-account-create-update-46bk2" Jan 21 11:21:54 crc kubenswrapper[4881]: I0121 11:21:54.144712 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qzfnm\" (UniqueName: \"kubernetes.io/projected/4d8a04fd-1a86-454f-bd69-64ad270b8357-kube-api-access-qzfnm\") pod \"nova-cell1-b4dc-account-create-update-46bk2\" (UID: \"4d8a04fd-1a86-454f-bd69-64ad270b8357\") " pod="openstack/nova-cell1-b4dc-account-create-update-46bk2" Jan 21 11:21:54 crc kubenswrapper[4881]: I0121 11:21:54.146244 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4dmdm\" (UniqueName: \"kubernetes.io/projected/de50b4a3-643f-4e4a-9853-b794eae5c08c-kube-api-access-4dmdm\") pod \"nova-cell0-5627-account-create-update-mbnwf\" (UID: \"de50b4a3-643f-4e4a-9853-b794eae5c08c\") " pod="openstack/nova-cell0-5627-account-create-update-mbnwf" Jan 21 11:21:54 crc kubenswrapper[4881]: I0121 11:21:54.147336 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/de50b4a3-643f-4e4a-9853-b794eae5c08c-operator-scripts\") pod \"nova-cell0-5627-account-create-update-mbnwf\" (UID: \"de50b4a3-643f-4e4a-9853-b794eae5c08c\") " pod="openstack/nova-cell0-5627-account-create-update-mbnwf" Jan 21 11:21:54 crc kubenswrapper[4881]: I0121 11:21:54.188442 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4dmdm\" (UniqueName: \"kubernetes.io/projected/de50b4a3-643f-4e4a-9853-b794eae5c08c-kube-api-access-4dmdm\") pod \"nova-cell0-5627-account-create-update-mbnwf\" (UID: \"de50b4a3-643f-4e4a-9853-b794eae5c08c\") " pod="openstack/nova-cell0-5627-account-create-update-mbnwf" Jan 21 11:21:54 crc kubenswrapper[4881]: I0121 11:21:54.248191 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4d8a04fd-1a86-454f-bd69-64ad270b8357-operator-scripts\") pod \"nova-cell1-b4dc-account-create-update-46bk2\" (UID: 
\"4d8a04fd-1a86-454f-bd69-64ad270b8357\") " pod="openstack/nova-cell1-b4dc-account-create-update-46bk2" Jan 21 11:21:54 crc kubenswrapper[4881]: I0121 11:21:54.248251 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qzfnm\" (UniqueName: \"kubernetes.io/projected/4d8a04fd-1a86-454f-bd69-64ad270b8357-kube-api-access-qzfnm\") pod \"nova-cell1-b4dc-account-create-update-46bk2\" (UID: \"4d8a04fd-1a86-454f-bd69-64ad270b8357\") " pod="openstack/nova-cell1-b4dc-account-create-update-46bk2" Jan 21 11:21:54 crc kubenswrapper[4881]: I0121 11:21:54.249031 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4d8a04fd-1a86-454f-bd69-64ad270b8357-operator-scripts\") pod \"nova-cell1-b4dc-account-create-update-46bk2\" (UID: \"4d8a04fd-1a86-454f-bd69-64ad270b8357\") " pod="openstack/nova-cell1-b4dc-account-create-update-46bk2" Jan 21 11:21:54 crc kubenswrapper[4881]: I0121 11:21:54.277284 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qzfnm\" (UniqueName: \"kubernetes.io/projected/4d8a04fd-1a86-454f-bd69-64ad270b8357-kube-api-access-qzfnm\") pod \"nova-cell1-b4dc-account-create-update-46bk2\" (UID: \"4d8a04fd-1a86-454f-bd69-64ad270b8357\") " pod="openstack/nova-cell1-b4dc-account-create-update-46bk2" Jan 21 11:21:54 crc kubenswrapper[4881]: I0121 11:21:54.293177 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-5627-account-create-update-mbnwf" Jan 21 11:21:54 crc kubenswrapper[4881]: I0121 11:21:54.468883 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-b4dc-account-create-update-46bk2" Jan 21 11:21:54 crc kubenswrapper[4881]: I0121 11:21:54.502476 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 21 11:21:54 crc kubenswrapper[4881]: I0121 11:21:54.661434 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-b85xv"] Jan 21 11:21:54 crc kubenswrapper[4881]: I0121 11:21:54.809868 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-jdk2x"] Jan 21 11:21:54 crc kubenswrapper[4881]: I0121 11:21:54.825946 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-fb46-account-create-update-xxwmq"] Jan 21 11:21:54 crc kubenswrapper[4881]: W0121 11:21:54.830029 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod502efce3_0d16_491d_b6fa_1b1d98f76d4b.slice/crio-35f860e151295e5ea65fab1c5b7e59d1d8a5061680486380408ebd5dc537484b WatchSource:0}: Error finding container 35f860e151295e5ea65fab1c5b7e59d1d8a5061680486380408ebd5dc537484b: Status 404 returned error can't find the container with id 35f860e151295e5ea65fab1c5b7e59d1d8a5061680486380408ebd5dc537484b Jan 21 11:21:54 crc kubenswrapper[4881]: W0121 11:21:54.843697 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod29487dae_24e9_4d5b_9819_99516df78630.slice/crio-6cb58542cb5769c92ce7a580725af8d619f54b42ee691161a9bc1aa7508fcb9c WatchSource:0}: Error finding container 6cb58542cb5769c92ce7a580725af8d619f54b42ee691161a9bc1aa7508fcb9c: Status 404 returned error can't find the container with id 6cb58542cb5769c92ce7a580725af8d619f54b42ee691161a9bc1aa7508fcb9c Jan 21 11:21:54 crc kubenswrapper[4881]: I0121 
11:21:54.853066 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Jan 21 11:21:55 crc kubenswrapper[4881]: I0121 11:21:55.147343 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-f99bl"] Jan 21 11:21:55 crc kubenswrapper[4881]: W0121 11:21:55.209664 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podde50b4a3_643f_4e4a_9853_b794eae5c08c.slice/crio-f365bdc014f876728f82cd5bd3495274a14cd4e992642927c9b972bc8d3b5964 WatchSource:0}: Error finding container f365bdc014f876728f82cd5bd3495274a14cd4e992642927c9b972bc8d3b5964: Status 404 returned error can't find the container with id f365bdc014f876728f82cd5bd3495274a14cd4e992642927c9b972bc8d3b5964 Jan 21 11:21:55 crc kubenswrapper[4881]: I0121 11:21:55.230991 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-5627-account-create-update-mbnwf"] Jan 21 11:21:55 crc kubenswrapper[4881]: I0121 11:21:55.272664 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-b4dc-account-create-update-46bk2"] Jan 21 11:21:55 crc kubenswrapper[4881]: I0121 11:21:55.716046 4881 generic.go:334] "Generic (PLEG): container finished" podID="502efce3-0d16-491d-b6fa-1b1d98f76d4b" containerID="3e8735972d4959fbfdcc07dada19674d2a9110125d71fdfe160979bcc5be0481" exitCode=0 Jan 21 11:21:55 crc kubenswrapper[4881]: I0121 11:21:55.716162 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-jdk2x" event={"ID":"502efce3-0d16-491d-b6fa-1b1d98f76d4b","Type":"ContainerDied","Data":"3e8735972d4959fbfdcc07dada19674d2a9110125d71fdfe160979bcc5be0481"} Jan 21 11:21:55 crc kubenswrapper[4881]: I0121 11:21:55.716198 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-jdk2x" event={"ID":"502efce3-0d16-491d-b6fa-1b1d98f76d4b","Type":"ContainerStarted","Data":"35f860e151295e5ea65fab1c5b7e59d1d8a5061680486380408ebd5dc537484b"} Jan 21 11:21:55 crc kubenswrapper[4881]: I0121 11:21:55.718795 4881 generic.go:334] "Generic (PLEG): container finished" podID="2a601b0e-b326-4e55-901e-08a32fe24005" containerID="5d3f34869256c4d21e6b17d94ceaa6baf87aefe4c608982c7e1561bfc3b81de2" exitCode=0 Jan 21 11:21:55 crc kubenswrapper[4881]: I0121 11:21:55.718927 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-b85xv" event={"ID":"2a601b0e-b326-4e55-901e-08a32fe24005","Type":"ContainerDied","Data":"5d3f34869256c4d21e6b17d94ceaa6baf87aefe4c608982c7e1561bfc3b81de2"} Jan 21 11:21:55 crc kubenswrapper[4881]: I0121 11:21:55.718949 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-b85xv" event={"ID":"2a601b0e-b326-4e55-901e-08a32fe24005","Type":"ContainerStarted","Data":"a7ef229f2fb104b9e8cc424559b0f8a908033c5487165445292865d3e0cdb0fb"} Jan 21 11:21:55 crc kubenswrapper[4881]: I0121 11:21:55.722270 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-fb46-account-create-update-xxwmq" event={"ID":"29487dae-24e9-4d5b-9819-99516df78630","Type":"ContainerStarted","Data":"dccd9ebbabd2787629df88e189e045b4233f9efdaa17a33f088ad8c951d3530a"} Jan 21 11:21:55 crc kubenswrapper[4881]: I0121 11:21:55.722329 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-fb46-account-create-update-xxwmq" 
event={"ID":"29487dae-24e9-4d5b-9819-99516df78630","Type":"ContainerStarted","Data":"6cb58542cb5769c92ce7a580725af8d619f54b42ee691161a9bc1aa7508fcb9c"} Jan 21 11:21:55 crc kubenswrapper[4881]: I0121 11:21:55.727881 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-b4dc-account-create-update-46bk2" event={"ID":"4d8a04fd-1a86-454f-bd69-64ad270b8357","Type":"ContainerStarted","Data":"27659f5aab69bf4af66ab9aeb1d61a07fd49c77e8daa35d08cb33096b28e9074"} Jan 21 11:21:55 crc kubenswrapper[4881]: I0121 11:21:55.727942 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-b4dc-account-create-update-46bk2" event={"ID":"4d8a04fd-1a86-454f-bd69-64ad270b8357","Type":"ContainerStarted","Data":"5af4b877aa6f4206f95841c9ad3225a13be2d82d1149e72ace1f40c99f028477"} Jan 21 11:21:55 crc kubenswrapper[4881]: I0121 11:21:55.731772 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-f99bl" event={"ID":"f2c35a47-0e6e-4760-9026-617ca187b066","Type":"ContainerStarted","Data":"e072378bb8b79adf91d2701f6ed4a0743a1956ccf92868309d50c74d1a40ff46"} Jan 21 11:21:55 crc kubenswrapper[4881]: I0121 11:21:55.731847 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-f99bl" event={"ID":"f2c35a47-0e6e-4760-9026-617ca187b066","Type":"ContainerStarted","Data":"c609580b7b4676d9f33d5da30b233c4958836e02e51a3088b77cdd78db145b29"} Jan 21 11:21:55 crc kubenswrapper[4881]: I0121 11:21:55.741492 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d84ba548-9d82-44b7-bae5-bf8cf84ecc79" containerName="ceilometer-central-agent" containerID="cri-o://5ef74248d816cbba0967845a616d8ff93c71875da1f2537b3583d30494d188a0" gracePeriod=30 Jan 21 11:21:55 crc kubenswrapper[4881]: I0121 11:21:55.742884 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-5627-account-create-update-mbnwf" event={"ID":"de50b4a3-643f-4e4a-9853-b794eae5c08c","Type":"ContainerStarted","Data":"22038197b765a72901f7e4d04d0bebb17e8d3bca09464adc6dc75e99375c24ab"} Jan 21 11:21:55 crc kubenswrapper[4881]: I0121 11:21:55.742918 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-5627-account-create-update-mbnwf" event={"ID":"de50b4a3-643f-4e4a-9853-b794eae5c08c","Type":"ContainerStarted","Data":"f365bdc014f876728f82cd5bd3495274a14cd4e992642927c9b972bc8d3b5964"} Jan 21 11:21:55 crc kubenswrapper[4881]: I0121 11:21:55.742994 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d84ba548-9d82-44b7-bae5-bf8cf84ecc79" containerName="proxy-httpd" containerID="cri-o://786551fea0a0b08ed4797eaa4ac0bd544644fed6b4135ad7593d1cf541bbe884" gracePeriod=30 Jan 21 11:21:55 crc kubenswrapper[4881]: I0121 11:21:55.743058 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d84ba548-9d82-44b7-bae5-bf8cf84ecc79" containerName="sg-core" containerID="cri-o://0967e57a0feff48d2185c1e282e0585b131cee338ade45ea85673a62193b1f57" gracePeriod=30 Jan 21 11:21:55 crc kubenswrapper[4881]: I0121 11:21:55.743115 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d84ba548-9d82-44b7-bae5-bf8cf84ecc79" containerName="ceilometer-notification-agent" containerID="cri-o://53f83f934fef330d755d320c983315d32feeaac6da62dbb78c115b45e16f216a" gracePeriod=30 Jan 21 11:21:55 crc kubenswrapper[4881]: I0121 
Jan 21 11:21:55 crc kubenswrapper[4881]: I0121 11:21:55.890924 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-db-create-f99bl" podStartSLOduration=2.890902713 podStartE2EDuration="2.890902713s" podCreationTimestamp="2026-01-21 11:21:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 11:21:55.825639648 +0000 UTC m=+1503.085596117" watchObservedRunningTime="2026-01-21 11:21:55.890902713 +0000 UTC m=+1503.150859182"
Jan 21 11:21:55 crc kubenswrapper[4881]: I0121 11:21:55.908106 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-b4dc-account-create-update-46bk2" podStartSLOduration=2.9080798 podStartE2EDuration="2.9080798s" podCreationTimestamp="2026-01-21 11:21:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 11:21:55.85862998 +0000 UTC m=+1503.118586459" watchObservedRunningTime="2026-01-21 11:21:55.9080798 +0000 UTC m=+1503.168036269"
Jan 21 11:21:55 crc kubenswrapper[4881]: I0121 11:21:55.909916 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-fb46-account-create-update-xxwmq" podStartSLOduration=2.909905777 podStartE2EDuration="2.909905777s" podCreationTimestamp="2026-01-21 11:21:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 11:21:55.898714268 +0000 UTC m=+1503.158670737" watchObservedRunningTime="2026-01-21 11:21:55.909905777 +0000 UTC m=+1503.169862246"
Jan 21 11:21:55 crc kubenswrapper[4881]: I0121 11:21:55.939638 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-5627-account-create-update-mbnwf" podStartSLOduration=2.939610916 podStartE2EDuration="2.939610916s" podCreationTimestamp="2026-01-21 11:21:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 11:21:55.929304929 +0000 UTC m=+1503.189261398" watchObservedRunningTime="2026-01-21 11:21:55.939610916 +0000 UTC m=+1503.199567385"
Jan 21 11:21:56 crc kubenswrapper[4881]: I0121 11:21:56.766688 4881 generic.go:334] "Generic (PLEG): container finished" podID="d84ba548-9d82-44b7-bae5-bf8cf84ecc79" containerID="786551fea0a0b08ed4797eaa4ac0bd544644fed6b4135ad7593d1cf541bbe884" exitCode=0
Jan 21 11:21:56 crc kubenswrapper[4881]: I0121 11:21:56.767753 4881 generic.go:334] "Generic (PLEG): container finished" podID="d84ba548-9d82-44b7-bae5-bf8cf84ecc79" containerID="0967e57a0feff48d2185c1e282e0585b131cee338ade45ea85673a62193b1f57" exitCode=2
Jan 21 11:21:56 crc kubenswrapper[4881]: I0121 11:21:56.767895 4881 generic.go:334] "Generic (PLEG): container finished" podID="d84ba548-9d82-44b7-bae5-bf8cf84ecc79" containerID="53f83f934fef330d755d320c983315d32feeaac6da62dbb78c115b45e16f216a" exitCode=0
Jan 21 11:21:56 crc kubenswrapper[4881]: I0121 11:21:56.767924 4881 generic.go:334] "Generic (PLEG): container finished" podID="d84ba548-9d82-44b7-bae5-bf8cf84ecc79" containerID="5ef74248d816cbba0967845a616d8ff93c71875da1f2537b3583d30494d188a0" exitCode=0
Jan 21 11:21:56 crc kubenswrapper[4881]: I0121 11:21:56.766911 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"d84ba548-9d82-44b7-bae5-bf8cf84ecc79","Type":"ContainerDied","Data":"786551fea0a0b08ed4797eaa4ac0bd544644fed6b4135ad7593d1cf541bbe884"} Jan 21 11:21:56 crc kubenswrapper[4881]: I0121 11:21:56.768153 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d84ba548-9d82-44b7-bae5-bf8cf84ecc79","Type":"ContainerDied","Data":"0967e57a0feff48d2185c1e282e0585b131cee338ade45ea85673a62193b1f57"} Jan 21 11:21:56 crc kubenswrapper[4881]: I0121 11:21:56.768185 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d84ba548-9d82-44b7-bae5-bf8cf84ecc79","Type":"ContainerDied","Data":"53f83f934fef330d755d320c983315d32feeaac6da62dbb78c115b45e16f216a"} Jan 21 11:21:56 crc kubenswrapper[4881]: I0121 11:21:56.768197 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d84ba548-9d82-44b7-bae5-bf8cf84ecc79","Type":"ContainerDied","Data":"5ef74248d816cbba0967845a616d8ff93c71875da1f2537b3583d30494d188a0"} Jan 21 11:21:56 crc kubenswrapper[4881]: I0121 11:21:56.777103 4881 generic.go:334] "Generic (PLEG): container finished" podID="29487dae-24e9-4d5b-9819-99516df78630" containerID="dccd9ebbabd2787629df88e189e045b4233f9efdaa17a33f088ad8c951d3530a" exitCode=0 Jan 21 11:21:56 crc kubenswrapper[4881]: I0121 11:21:56.777171 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-fb46-account-create-update-xxwmq" event={"ID":"29487dae-24e9-4d5b-9819-99516df78630","Type":"ContainerDied","Data":"dccd9ebbabd2787629df88e189e045b4233f9efdaa17a33f088ad8c951d3530a"} Jan 21 11:21:56 crc kubenswrapper[4881]: I0121 11:21:56.784314 4881 generic.go:334] "Generic (PLEG): container finished" podID="4d8a04fd-1a86-454f-bd69-64ad270b8357" containerID="27659f5aab69bf4af66ab9aeb1d61a07fd49c77e8daa35d08cb33096b28e9074" exitCode=0 Jan 21 11:21:56 crc kubenswrapper[4881]: I0121 11:21:56.784482 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-b4dc-account-create-update-46bk2" event={"ID":"4d8a04fd-1a86-454f-bd69-64ad270b8357","Type":"ContainerDied","Data":"27659f5aab69bf4af66ab9aeb1d61a07fd49c77e8daa35d08cb33096b28e9074"} Jan 21 11:21:56 crc kubenswrapper[4881]: I0121 11:21:56.788408 4881 generic.go:334] "Generic (PLEG): container finished" podID="f2c35a47-0e6e-4760-9026-617ca187b066" containerID="e072378bb8b79adf91d2701f6ed4a0743a1956ccf92868309d50c74d1a40ff46" exitCode=0 Jan 21 11:21:56 crc kubenswrapper[4881]: I0121 11:21:56.788634 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-f99bl" event={"ID":"f2c35a47-0e6e-4760-9026-617ca187b066","Type":"ContainerDied","Data":"e072378bb8b79adf91d2701f6ed4a0743a1956ccf92868309d50c74d1a40ff46"} Jan 21 11:21:56 crc kubenswrapper[4881]: I0121 11:21:56.790767 4881 generic.go:334] "Generic (PLEG): container finished" podID="de50b4a3-643f-4e4a-9853-b794eae5c08c" containerID="22038197b765a72901f7e4d04d0bebb17e8d3bca09464adc6dc75e99375c24ab" exitCode=0 Jan 21 11:21:56 crc kubenswrapper[4881]: I0121 11:21:56.791099 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-5627-account-create-update-mbnwf" event={"ID":"de50b4a3-643f-4e4a-9853-b794eae5c08c","Type":"ContainerDied","Data":"22038197b765a72901f7e4d04d0bebb17e8d3bca09464adc6dc75e99375c24ab"} Jan 21 11:21:57 crc kubenswrapper[4881]: I0121 11:21:57.160303 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0"
Jan 21 11:21:57 crc kubenswrapper[4881]: I0121 11:21:57.253630 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d84ba548-9d82-44b7-bae5-bf8cf84ecc79-combined-ca-bundle\") pod \"d84ba548-9d82-44b7-bae5-bf8cf84ecc79\" (UID: \"d84ba548-9d82-44b7-bae5-bf8cf84ecc79\") "
Jan 21 11:21:57 crc kubenswrapper[4881]: I0121 11:21:57.253743 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d84ba548-9d82-44b7-bae5-bf8cf84ecc79-sg-core-conf-yaml\") pod \"d84ba548-9d82-44b7-bae5-bf8cf84ecc79\" (UID: \"d84ba548-9d82-44b7-bae5-bf8cf84ecc79\") "
Jan 21 11:21:57 crc kubenswrapper[4881]: I0121 11:21:57.253871 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d84ba548-9d82-44b7-bae5-bf8cf84ecc79-log-httpd\") pod \"d84ba548-9d82-44b7-bae5-bf8cf84ecc79\" (UID: \"d84ba548-9d82-44b7-bae5-bf8cf84ecc79\") "
Jan 21 11:21:57 crc kubenswrapper[4881]: I0121 11:21:57.253925 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d84ba548-9d82-44b7-bae5-bf8cf84ecc79-scripts\") pod \"d84ba548-9d82-44b7-bae5-bf8cf84ecc79\" (UID: \"d84ba548-9d82-44b7-bae5-bf8cf84ecc79\") "
Jan 21 11:21:57 crc kubenswrapper[4881]: I0121 11:21:57.254010 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d84ba548-9d82-44b7-bae5-bf8cf84ecc79-config-data\") pod \"d84ba548-9d82-44b7-bae5-bf8cf84ecc79\" (UID: \"d84ba548-9d82-44b7-bae5-bf8cf84ecc79\") "
Jan 21 11:21:57 crc kubenswrapper[4881]: I0121 11:21:57.254124 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d84ba548-9d82-44b7-bae5-bf8cf84ecc79-run-httpd\") pod \"d84ba548-9d82-44b7-bae5-bf8cf84ecc79\" (UID: \"d84ba548-9d82-44b7-bae5-bf8cf84ecc79\") "
Jan 21 11:21:57 crc kubenswrapper[4881]: I0121 11:21:57.254178 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hr9jz\" (UniqueName: \"kubernetes.io/projected/d84ba548-9d82-44b7-bae5-bf8cf84ecc79-kube-api-access-hr9jz\") pod \"d84ba548-9d82-44b7-bae5-bf8cf84ecc79\" (UID: \"d84ba548-9d82-44b7-bae5-bf8cf84ecc79\") "
Jan 21 11:21:57 crc kubenswrapper[4881]: I0121 11:21:57.259840 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d84ba548-9d82-44b7-bae5-bf8cf84ecc79-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "d84ba548-9d82-44b7-bae5-bf8cf84ecc79" (UID: "d84ba548-9d82-44b7-bae5-bf8cf84ecc79"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
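
[Annotation] A volume leaving a deleted pod passes through three markers in order: "operationExecutor.UnmountVolume started", then "UnmountVolume.TearDown succeeded", and finally the "Volume detached" records that appear just below. A toy Go state machine for following one volume (run-httpd here) through that sequence; the abbreviated lines are stand-ins, not verbatim log text:

package main

import (
	"fmt"
	"strings"
)

// The three teardown markers, in the order they appear for one volume.
var order = []string{
	"operationExecutor.UnmountVolume started",
	"UnmountVolume.TearDown succeeded",
	"Volume detached",
}

// nextState returns the new state index if the line advances this volume.
func nextState(state int, line, volume string) int {
	if state < len(order) && strings.Contains(line, volume) && strings.Contains(line, order[state]) {
		return state + 1
	}
	return state
}

func main() {
	lines := []string{ // abbreviated stand-ins for the entries above and below
		`operationExecutor.UnmountVolume started for volume "run-httpd"`,
		`UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/...-run-httpd" ... "run-httpd"`,
		`Volume detached for volume "run-httpd"`,
	}
	state := 0
	for _, l := range lines {
		state = nextState(state, l, "run-httpd")
	}
	fmt.Println("run-httpd reached state", state, "of", len(order)) // 3 of 3
}
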
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 11:21:57 crc kubenswrapper[4881]: I0121 11:21:57.275168 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d84ba548-9d82-44b7-bae5-bf8cf84ecc79-kube-api-access-hr9jz" (OuterVolumeSpecName: "kube-api-access-hr9jz") pod "d84ba548-9d82-44b7-bae5-bf8cf84ecc79" (UID: "d84ba548-9d82-44b7-bae5-bf8cf84ecc79"). InnerVolumeSpecName "kube-api-access-hr9jz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:21:57 crc kubenswrapper[4881]: I0121 11:21:57.287072 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d84ba548-9d82-44b7-bae5-bf8cf84ecc79-scripts" (OuterVolumeSpecName: "scripts") pod "d84ba548-9d82-44b7-bae5-bf8cf84ecc79" (UID: "d84ba548-9d82-44b7-bae5-bf8cf84ecc79"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:21:57 crc kubenswrapper[4881]: I0121 11:21:57.363137 4881 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d84ba548-9d82-44b7-bae5-bf8cf84ecc79-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 21 11:21:57 crc kubenswrapper[4881]: I0121 11:21:57.363170 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hr9jz\" (UniqueName: \"kubernetes.io/projected/d84ba548-9d82-44b7-bae5-bf8cf84ecc79-kube-api-access-hr9jz\") on node \"crc\" DevicePath \"\"" Jan 21 11:21:57 crc kubenswrapper[4881]: I0121 11:21:57.363196 4881 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d84ba548-9d82-44b7-bae5-bf8cf84ecc79-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 21 11:21:57 crc kubenswrapper[4881]: I0121 11:21:57.363205 4881 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d84ba548-9d82-44b7-bae5-bf8cf84ecc79-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 11:21:57 crc kubenswrapper[4881]: I0121 11:21:57.396275 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d84ba548-9d82-44b7-bae5-bf8cf84ecc79-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "d84ba548-9d82-44b7-bae5-bf8cf84ecc79" (UID: "d84ba548-9d82-44b7-bae5-bf8cf84ecc79"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:21:57 crc kubenswrapper[4881]: I0121 11:21:57.600897 4881 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d84ba548-9d82-44b7-bae5-bf8cf84ecc79-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 21 11:21:57 crc kubenswrapper[4881]: I0121 11:21:57.674691 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d84ba548-9d82-44b7-bae5-bf8cf84ecc79-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d84ba548-9d82-44b7-bae5-bf8cf84ecc79" (UID: "d84ba548-9d82-44b7-bae5-bf8cf84ecc79"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:21:57 crc kubenswrapper[4881]: I0121 11:21:57.704087 4881 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d84ba548-9d82-44b7-bae5-bf8cf84ecc79-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 11:21:57 crc kubenswrapper[4881]: I0121 11:21:57.730485 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d84ba548-9d82-44b7-bae5-bf8cf84ecc79-config-data" (OuterVolumeSpecName: "config-data") pod "d84ba548-9d82-44b7-bae5-bf8cf84ecc79" (UID: "d84ba548-9d82-44b7-bae5-bf8cf84ecc79"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:21:57 crc kubenswrapper[4881]: I0121 11:21:57.845178 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 21 11:21:57 crc kubenswrapper[4881]: I0121 11:21:57.845377 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d84ba548-9d82-44b7-bae5-bf8cf84ecc79","Type":"ContainerDied","Data":"95c906c4b339a07e39ec45c37bd23642eb30462373347c321f4ca0cc4f7e8653"} Jan 21 11:21:57 crc kubenswrapper[4881]: I0121 11:21:57.845452 4881 scope.go:117] "RemoveContainer" containerID="786551fea0a0b08ed4797eaa4ac0bd544644fed6b4135ad7593d1cf541bbe884" Jan 21 11:21:57 crc kubenswrapper[4881]: I0121 11:21:57.893262 4881 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d84ba548-9d82-44b7-bae5-bf8cf84ecc79-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 11:21:57 crc kubenswrapper[4881]: I0121 11:21:57.956428 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-b85xv" Jan 21 11:21:57 crc kubenswrapper[4881]: I0121 11:21:57.967630 4881 scope.go:117] "RemoveContainer" containerID="0967e57a0feff48d2185c1e282e0585b131cee338ade45ea85673a62193b1f57" Jan 21 11:21:57 crc kubenswrapper[4881]: I0121 11:21:57.972859 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-jdk2x" Jan 21 11:21:57 crc kubenswrapper[4881]: I0121 11:21:57.982117 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 21 11:21:58 crc kubenswrapper[4881]: I0121 11:21:58.049963 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 21 11:21:58 crc kubenswrapper[4881]: I0121 11:21:58.076835 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 21 11:21:58 crc kubenswrapper[4881]: E0121 11:21:58.077335 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="502efce3-0d16-491d-b6fa-1b1d98f76d4b" containerName="mariadb-database-create" Jan 21 11:21:58 crc kubenswrapper[4881]: I0121 11:21:58.077355 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="502efce3-0d16-491d-b6fa-1b1d98f76d4b" containerName="mariadb-database-create" Jan 21 11:21:58 crc kubenswrapper[4881]: E0121 11:21:58.077376 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a601b0e-b326-4e55-901e-08a32fe24005" containerName="mariadb-database-create" Jan 21 11:21:58 crc kubenswrapper[4881]: I0121 11:21:58.077383 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a601b0e-b326-4e55-901e-08a32fe24005" containerName="mariadb-database-create" Jan 21 11:21:58 crc kubenswrapper[4881]: E0121 11:21:58.077428 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d84ba548-9d82-44b7-bae5-bf8cf84ecc79" containerName="sg-core" Jan 21 11:21:58 crc kubenswrapper[4881]: I0121 11:21:58.077437 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="d84ba548-9d82-44b7-bae5-bf8cf84ecc79" containerName="sg-core" Jan 21 11:21:58 crc kubenswrapper[4881]: E0121 11:21:58.077447 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d84ba548-9d82-44b7-bae5-bf8cf84ecc79" containerName="ceilometer-notification-agent" Jan 21 11:21:58 crc kubenswrapper[4881]: I0121 11:21:58.077453 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="d84ba548-9d82-44b7-bae5-bf8cf84ecc79" containerName="ceilometer-notification-agent" Jan 21 11:21:58 crc kubenswrapper[4881]: E0121 11:21:58.077463 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d84ba548-9d82-44b7-bae5-bf8cf84ecc79" containerName="ceilometer-central-agent" Jan 21 11:21:58 crc kubenswrapper[4881]: I0121 11:21:58.077469 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="d84ba548-9d82-44b7-bae5-bf8cf84ecc79" containerName="ceilometer-central-agent" Jan 21 11:21:58 crc kubenswrapper[4881]: E0121 11:21:58.077488 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d84ba548-9d82-44b7-bae5-bf8cf84ecc79" containerName="proxy-httpd" Jan 21 11:21:58 crc kubenswrapper[4881]: I0121 11:21:58.077494 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="d84ba548-9d82-44b7-bae5-bf8cf84ecc79" containerName="proxy-httpd" Jan 21 11:21:58 crc kubenswrapper[4881]: I0121 11:21:58.077663 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="502efce3-0d16-491d-b6fa-1b1d98f76d4b" containerName="mariadb-database-create" Jan 21 11:21:58 crc kubenswrapper[4881]: I0121 11:21:58.077674 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="d84ba548-9d82-44b7-bae5-bf8cf84ecc79" containerName="ceilometer-central-agent" Jan 21 11:21:58 crc kubenswrapper[4881]: I0121 11:21:58.077689 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="d84ba548-9d82-44b7-bae5-bf8cf84ecc79" 
containerName="sg-core" Jan 21 11:21:58 crc kubenswrapper[4881]: I0121 11:21:58.077698 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="d84ba548-9d82-44b7-bae5-bf8cf84ecc79" containerName="proxy-httpd" Jan 21 11:21:58 crc kubenswrapper[4881]: I0121 11:21:58.077705 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="2a601b0e-b326-4e55-901e-08a32fe24005" containerName="mariadb-database-create" Jan 21 11:21:58 crc kubenswrapper[4881]: I0121 11:21:58.077714 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="d84ba548-9d82-44b7-bae5-bf8cf84ecc79" containerName="ceilometer-notification-agent" Jan 21 11:21:58 crc kubenswrapper[4881]: I0121 11:21:58.079522 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 21 11:21:58 crc kubenswrapper[4881]: I0121 11:21:58.082623 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 21 11:21:58 crc kubenswrapper[4881]: I0121 11:21:58.082901 4881 scope.go:117] "RemoveContainer" containerID="53f83f934fef330d755d320c983315d32feeaac6da62dbb78c115b45e16f216a" Jan 21 11:21:58 crc kubenswrapper[4881]: I0121 11:21:58.083094 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 21 11:21:58 crc kubenswrapper[4881]: I0121 11:21:58.096755 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5x9h2\" (UniqueName: \"kubernetes.io/projected/502efce3-0d16-491d-b6fa-1b1d98f76d4b-kube-api-access-5x9h2\") pod \"502efce3-0d16-491d-b6fa-1b1d98f76d4b\" (UID: \"502efce3-0d16-491d-b6fa-1b1d98f76d4b\") " Jan 21 11:21:58 crc kubenswrapper[4881]: I0121 11:21:58.096883 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4g8z\" (UniqueName: \"kubernetes.io/projected/2a601b0e-b326-4e55-901e-08a32fe24005-kube-api-access-s4g8z\") pod \"2a601b0e-b326-4e55-901e-08a32fe24005\" (UID: \"2a601b0e-b326-4e55-901e-08a32fe24005\") " Jan 21 11:21:58 crc kubenswrapper[4881]: I0121 11:21:58.097056 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2a601b0e-b326-4e55-901e-08a32fe24005-operator-scripts\") pod \"2a601b0e-b326-4e55-901e-08a32fe24005\" (UID: \"2a601b0e-b326-4e55-901e-08a32fe24005\") " Jan 21 11:21:58 crc kubenswrapper[4881]: I0121 11:21:58.097180 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/502efce3-0d16-491d-b6fa-1b1d98f76d4b-operator-scripts\") pod \"502efce3-0d16-491d-b6fa-1b1d98f76d4b\" (UID: \"502efce3-0d16-491d-b6fa-1b1d98f76d4b\") " Jan 21 11:21:58 crc kubenswrapper[4881]: I0121 11:21:58.098612 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2a601b0e-b326-4e55-901e-08a32fe24005-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2a601b0e-b326-4e55-901e-08a32fe24005" (UID: "2a601b0e-b326-4e55-901e-08a32fe24005"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:21:58 crc kubenswrapper[4881]: I0121 11:21:58.098721 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/502efce3-0d16-491d-b6fa-1b1d98f76d4b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "502efce3-0d16-491d-b6fa-1b1d98f76d4b" (UID: "502efce3-0d16-491d-b6fa-1b1d98f76d4b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:21:58 crc kubenswrapper[4881]: I0121 11:21:58.101254 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 21 11:21:58 crc kubenswrapper[4881]: I0121 11:21:58.104418 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2a601b0e-b326-4e55-901e-08a32fe24005-kube-api-access-s4g8z" (OuterVolumeSpecName: "kube-api-access-s4g8z") pod "2a601b0e-b326-4e55-901e-08a32fe24005" (UID: "2a601b0e-b326-4e55-901e-08a32fe24005"). InnerVolumeSpecName "kube-api-access-s4g8z". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:21:58 crc kubenswrapper[4881]: I0121 11:21:58.114232 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/502efce3-0d16-491d-b6fa-1b1d98f76d4b-kube-api-access-5x9h2" (OuterVolumeSpecName: "kube-api-access-5x9h2") pod "502efce3-0d16-491d-b6fa-1b1d98f76d4b" (UID: "502efce3-0d16-491d-b6fa-1b1d98f76d4b"). InnerVolumeSpecName "kube-api-access-5x9h2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:21:58 crc kubenswrapper[4881]: I0121 11:21:58.144859 4881 scope.go:117] "RemoveContainer" containerID="5ef74248d816cbba0967845a616d8ff93c71875da1f2537b3583d30494d188a0" Jan 21 11:21:58 crc kubenswrapper[4881]: I0121 11:21:58.159219 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 21 11:21:58 crc kubenswrapper[4881]: I0121 11:21:58.159262 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 21 11:21:58 crc kubenswrapper[4881]: I0121 11:21:58.183698 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 21 11:21:58 crc kubenswrapper[4881]: I0121 11:21:58.183874 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 21 11:21:58 crc kubenswrapper[4881]: I0121 11:21:58.200864 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/28ca8213-9b24-4785-9570-d2973570fbdc-log-httpd\") pod \"ceilometer-0\" (UID: \"28ca8213-9b24-4785-9570-d2973570fbdc\") " pod="openstack/ceilometer-0" Jan 21 11:21:58 crc kubenswrapper[4881]: I0121 11:21:58.200940 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28ca8213-9b24-4785-9570-d2973570fbdc-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"28ca8213-9b24-4785-9570-d2973570fbdc\") " pod="openstack/ceilometer-0" Jan 21 11:21:58 crc kubenswrapper[4881]: I0121 11:21:58.201015 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/28ca8213-9b24-4785-9570-d2973570fbdc-scripts\") pod \"ceilometer-0\" (UID: 
\"28ca8213-9b24-4785-9570-d2973570fbdc\") " pod="openstack/ceilometer-0" Jan 21 11:21:58 crc kubenswrapper[4881]: I0121 11:21:58.201045 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/28ca8213-9b24-4785-9570-d2973570fbdc-config-data\") pod \"ceilometer-0\" (UID: \"28ca8213-9b24-4785-9570-d2973570fbdc\") " pod="openstack/ceilometer-0" Jan 21 11:21:58 crc kubenswrapper[4881]: I0121 11:21:58.201091 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gzvfr\" (UniqueName: \"kubernetes.io/projected/28ca8213-9b24-4785-9570-d2973570fbdc-kube-api-access-gzvfr\") pod \"ceilometer-0\" (UID: \"28ca8213-9b24-4785-9570-d2973570fbdc\") " pod="openstack/ceilometer-0" Jan 21 11:21:58 crc kubenswrapper[4881]: I0121 11:21:58.201115 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/28ca8213-9b24-4785-9570-d2973570fbdc-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"28ca8213-9b24-4785-9570-d2973570fbdc\") " pod="openstack/ceilometer-0" Jan 21 11:21:58 crc kubenswrapper[4881]: I0121 11:21:58.201130 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/28ca8213-9b24-4785-9570-d2973570fbdc-run-httpd\") pod \"ceilometer-0\" (UID: \"28ca8213-9b24-4785-9570-d2973570fbdc\") " pod="openstack/ceilometer-0" Jan 21 11:21:58 crc kubenswrapper[4881]: I0121 11:21:58.201226 4881 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2a601b0e-b326-4e55-901e-08a32fe24005-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 11:21:58 crc kubenswrapper[4881]: I0121 11:21:58.201240 4881 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/502efce3-0d16-491d-b6fa-1b1d98f76d4b-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 11:21:58 crc kubenswrapper[4881]: I0121 11:21:58.201252 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5x9h2\" (UniqueName: \"kubernetes.io/projected/502efce3-0d16-491d-b6fa-1b1d98f76d4b-kube-api-access-5x9h2\") on node \"crc\" DevicePath \"\"" Jan 21 11:21:58 crc kubenswrapper[4881]: I0121 11:21:58.201264 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4g8z\" (UniqueName: \"kubernetes.io/projected/2a601b0e-b326-4e55-901e-08a32fe24005-kube-api-access-s4g8z\") on node \"crc\" DevicePath \"\"" Jan 21 11:21:58 crc kubenswrapper[4881]: I0121 11:21:58.212567 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 21 11:21:58 crc kubenswrapper[4881]: I0121 11:21:58.230259 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 21 11:21:58 crc kubenswrapper[4881]: I0121 11:21:58.250621 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 21 11:21:58 crc kubenswrapper[4881]: I0121 11:21:58.263313 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 21 11:21:58 crc kubenswrapper[4881]: I0121 11:21:58.277324 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-b4dc-account-create-update-46bk2" Jan 21 11:21:58 crc kubenswrapper[4881]: I0121 11:21:58.305279 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/28ca8213-9b24-4785-9570-d2973570fbdc-log-httpd\") pod \"ceilometer-0\" (UID: \"28ca8213-9b24-4785-9570-d2973570fbdc\") " pod="openstack/ceilometer-0" Jan 21 11:21:58 crc kubenswrapper[4881]: I0121 11:21:58.305407 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28ca8213-9b24-4785-9570-d2973570fbdc-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"28ca8213-9b24-4785-9570-d2973570fbdc\") " pod="openstack/ceilometer-0" Jan 21 11:21:58 crc kubenswrapper[4881]: I0121 11:21:58.305561 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/28ca8213-9b24-4785-9570-d2973570fbdc-scripts\") pod \"ceilometer-0\" (UID: \"28ca8213-9b24-4785-9570-d2973570fbdc\") " pod="openstack/ceilometer-0" Jan 21 11:21:58 crc kubenswrapper[4881]: I0121 11:21:58.305645 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/28ca8213-9b24-4785-9570-d2973570fbdc-config-data\") pod \"ceilometer-0\" (UID: \"28ca8213-9b24-4785-9570-d2973570fbdc\") " pod="openstack/ceilometer-0" Jan 21 11:21:58 crc kubenswrapper[4881]: I0121 11:21:58.305835 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gzvfr\" (UniqueName: \"kubernetes.io/projected/28ca8213-9b24-4785-9570-d2973570fbdc-kube-api-access-gzvfr\") pod \"ceilometer-0\" (UID: \"28ca8213-9b24-4785-9570-d2973570fbdc\") " pod="openstack/ceilometer-0" Jan 21 11:21:58 crc kubenswrapper[4881]: I0121 11:21:58.305899 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/28ca8213-9b24-4785-9570-d2973570fbdc-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"28ca8213-9b24-4785-9570-d2973570fbdc\") " pod="openstack/ceilometer-0" Jan 21 11:21:58 crc kubenswrapper[4881]: I0121 11:21:58.305918 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/28ca8213-9b24-4785-9570-d2973570fbdc-run-httpd\") pod \"ceilometer-0\" (UID: \"28ca8213-9b24-4785-9570-d2973570fbdc\") " pod="openstack/ceilometer-0" Jan 21 11:21:58 crc kubenswrapper[4881]: I0121 11:21:58.306474 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/28ca8213-9b24-4785-9570-d2973570fbdc-run-httpd\") pod \"ceilometer-0\" (UID: \"28ca8213-9b24-4785-9570-d2973570fbdc\") " pod="openstack/ceilometer-0" Jan 21 11:21:58 crc kubenswrapper[4881]: I0121 11:21:58.312273 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/28ca8213-9b24-4785-9570-d2973570fbdc-log-httpd\") pod \"ceilometer-0\" (UID: \"28ca8213-9b24-4785-9570-d2973570fbdc\") " pod="openstack/ceilometer-0" Jan 21 11:21:58 crc kubenswrapper[4881]: I0121 11:21:58.313807 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/28ca8213-9b24-4785-9570-d2973570fbdc-config-data\") pod \"ceilometer-0\" (UID: \"28ca8213-9b24-4785-9570-d2973570fbdc\") " 
pod="openstack/ceilometer-0" Jan 21 11:21:58 crc kubenswrapper[4881]: I0121 11:21:58.320419 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/28ca8213-9b24-4785-9570-d2973570fbdc-scripts\") pod \"ceilometer-0\" (UID: \"28ca8213-9b24-4785-9570-d2973570fbdc\") " pod="openstack/ceilometer-0" Jan 21 11:21:58 crc kubenswrapper[4881]: I0121 11:21:58.333643 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28ca8213-9b24-4785-9570-d2973570fbdc-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"28ca8213-9b24-4785-9570-d2973570fbdc\") " pod="openstack/ceilometer-0" Jan 21 11:21:58 crc kubenswrapper[4881]: I0121 11:21:58.338666 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gzvfr\" (UniqueName: \"kubernetes.io/projected/28ca8213-9b24-4785-9570-d2973570fbdc-kube-api-access-gzvfr\") pod \"ceilometer-0\" (UID: \"28ca8213-9b24-4785-9570-d2973570fbdc\") " pod="openstack/ceilometer-0" Jan 21 11:21:58 crc kubenswrapper[4881]: I0121 11:21:58.338960 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/28ca8213-9b24-4785-9570-d2973570fbdc-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"28ca8213-9b24-4785-9570-d2973570fbdc\") " pod="openstack/ceilometer-0" Jan 21 11:21:58 crc kubenswrapper[4881]: I0121 11:21:58.409662 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qzfnm\" (UniqueName: \"kubernetes.io/projected/4d8a04fd-1a86-454f-bd69-64ad270b8357-kube-api-access-qzfnm\") pod \"4d8a04fd-1a86-454f-bd69-64ad270b8357\" (UID: \"4d8a04fd-1a86-454f-bd69-64ad270b8357\") " Jan 21 11:21:58 crc kubenswrapper[4881]: I0121 11:21:58.409740 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4d8a04fd-1a86-454f-bd69-64ad270b8357-operator-scripts\") pod \"4d8a04fd-1a86-454f-bd69-64ad270b8357\" (UID: \"4d8a04fd-1a86-454f-bd69-64ad270b8357\") " Jan 21 11:21:58 crc kubenswrapper[4881]: I0121 11:21:58.411684 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4d8a04fd-1a86-454f-bd69-64ad270b8357-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4d8a04fd-1a86-454f-bd69-64ad270b8357" (UID: "4d8a04fd-1a86-454f-bd69-64ad270b8357"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:21:58 crc kubenswrapper[4881]: I0121 11:21:58.413582 4881 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4d8a04fd-1a86-454f-bd69-64ad270b8357-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 11:21:58 crc kubenswrapper[4881]: I0121 11:21:58.415227 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d8a04fd-1a86-454f-bd69-64ad270b8357-kube-api-access-qzfnm" (OuterVolumeSpecName: "kube-api-access-qzfnm") pod "4d8a04fd-1a86-454f-bd69-64ad270b8357" (UID: "4d8a04fd-1a86-454f-bd69-64ad270b8357"). InnerVolumeSpecName "kube-api-access-qzfnm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:21:58 crc kubenswrapper[4881]: I0121 11:21:58.415445 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 21 11:21:58 crc kubenswrapper[4881]: I0121 11:21:58.515531 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qzfnm\" (UniqueName: \"kubernetes.io/projected/4d8a04fd-1a86-454f-bd69-64ad270b8357-kube-api-access-qzfnm\") on node \"crc\" DevicePath \"\"" Jan 21 11:21:58 crc kubenswrapper[4881]: I0121 11:21:58.888313 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-jdk2x" event={"ID":"502efce3-0d16-491d-b6fa-1b1d98f76d4b","Type":"ContainerDied","Data":"35f860e151295e5ea65fab1c5b7e59d1d8a5061680486380408ebd5dc537484b"} Jan 21 11:21:58 crc kubenswrapper[4881]: I0121 11:21:58.888657 4881 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="35f860e151295e5ea65fab1c5b7e59d1d8a5061680486380408ebd5dc537484b" Jan 21 11:21:58 crc kubenswrapper[4881]: I0121 11:21:58.888744 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-jdk2x" Jan 21 11:21:58 crc kubenswrapper[4881]: I0121 11:21:58.894097 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-b85xv" event={"ID":"2a601b0e-b326-4e55-901e-08a32fe24005","Type":"ContainerDied","Data":"a7ef229f2fb104b9e8cc424559b0f8a908033c5487165445292865d3e0cdb0fb"} Jan 21 11:21:58 crc kubenswrapper[4881]: I0121 11:21:58.894141 4881 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a7ef229f2fb104b9e8cc424559b0f8a908033c5487165445292865d3e0cdb0fb" Jan 21 11:21:58 crc kubenswrapper[4881]: I0121 11:21:58.894165 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-b85xv" Jan 21 11:21:58 crc kubenswrapper[4881]: I0121 11:21:58.899736 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-fb46-account-create-update-xxwmq" event={"ID":"29487dae-24e9-4d5b-9819-99516df78630","Type":"ContainerDied","Data":"6cb58542cb5769c92ce7a580725af8d619f54b42ee691161a9bc1aa7508fcb9c"} Jan 21 11:21:58 crc kubenswrapper[4881]: I0121 11:21:58.899798 4881 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6cb58542cb5769c92ce7a580725af8d619f54b42ee691161a9bc1aa7508fcb9c" Jan 21 11:21:58 crc kubenswrapper[4881]: I0121 11:21:58.904230 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-b4dc-account-create-update-46bk2" event={"ID":"4d8a04fd-1a86-454f-bd69-64ad270b8357","Type":"ContainerDied","Data":"5af4b877aa6f4206f95841c9ad3225a13be2d82d1149e72ace1f40c99f028477"} Jan 21 11:21:58 crc kubenswrapper[4881]: I0121 11:21:58.904284 4881 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5af4b877aa6f4206f95841c9ad3225a13be2d82d1149e72ace1f40c99f028477" Jan 21 11:21:58 crc kubenswrapper[4881]: I0121 11:21:58.904363 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-b4dc-account-create-update-46bk2" Jan 21 11:21:58 crc kubenswrapper[4881]: I0121 11:21:58.911466 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-f99bl" event={"ID":"f2c35a47-0e6e-4760-9026-617ca187b066","Type":"ContainerDied","Data":"c609580b7b4676d9f33d5da30b233c4958836e02e51a3088b77cdd78db145b29"} Jan 21 11:21:58 crc kubenswrapper[4881]: I0121 11:21:58.911516 4881 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c609580b7b4676d9f33d5da30b233c4958836e02e51a3088b77cdd78db145b29" Jan 21 11:21:58 crc kubenswrapper[4881]: I0121 11:21:58.917210 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-5627-account-create-update-mbnwf" event={"ID":"de50b4a3-643f-4e4a-9853-b794eae5c08c","Type":"ContainerDied","Data":"f365bdc014f876728f82cd5bd3495274a14cd4e992642927c9b972bc8d3b5964"} Jan 21 11:21:58 crc kubenswrapper[4881]: I0121 11:21:58.917282 4881 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f365bdc014f876728f82cd5bd3495274a14cd4e992642927c9b972bc8d3b5964" Jan 21 11:21:58 crc kubenswrapper[4881]: I0121 11:21:58.917316 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 21 11:21:58 crc kubenswrapper[4881]: I0121 11:21:58.917333 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 21 11:21:58 crc kubenswrapper[4881]: I0121 11:21:58.917521 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 21 11:21:58 crc kubenswrapper[4881]: I0121 11:21:58.917573 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 21 11:21:58 crc kubenswrapper[4881]: I0121 11:21:58.973385 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-fb46-account-create-update-xxwmq" Jan 21 11:21:58 crc kubenswrapper[4881]: I0121 11:21:58.992543 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-5627-account-create-update-mbnwf" Jan 21 11:21:59 crc kubenswrapper[4881]: I0121 11:21:59.005155 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-f99bl" Jan 21 11:21:59 crc kubenswrapper[4881]: I0121 11:21:59.040249 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4dmdm\" (UniqueName: \"kubernetes.io/projected/de50b4a3-643f-4e4a-9853-b794eae5c08c-kube-api-access-4dmdm\") pod \"de50b4a3-643f-4e4a-9853-b794eae5c08c\" (UID: \"de50b4a3-643f-4e4a-9853-b794eae5c08c\") " Jan 21 11:21:59 crc kubenswrapper[4881]: I0121 11:21:59.040675 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/29487dae-24e9-4d5b-9819-99516df78630-operator-scripts\") pod \"29487dae-24e9-4d5b-9819-99516df78630\" (UID: \"29487dae-24e9-4d5b-9819-99516df78630\") " Jan 21 11:21:59 crc kubenswrapper[4881]: I0121 11:21:59.040833 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/de50b4a3-643f-4e4a-9853-b794eae5c08c-operator-scripts\") pod \"de50b4a3-643f-4e4a-9853-b794eae5c08c\" (UID: \"de50b4a3-643f-4e4a-9853-b794eae5c08c\") " Jan 21 11:21:59 crc kubenswrapper[4881]: I0121 11:21:59.041108 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6f9n8\" (UniqueName: \"kubernetes.io/projected/29487dae-24e9-4d5b-9819-99516df78630-kube-api-access-6f9n8\") pod \"29487dae-24e9-4d5b-9819-99516df78630\" (UID: \"29487dae-24e9-4d5b-9819-99516df78630\") " Jan 21 11:21:59 crc kubenswrapper[4881]: I0121 11:21:59.042344 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/29487dae-24e9-4d5b-9819-99516df78630-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "29487dae-24e9-4d5b-9819-99516df78630" (UID: "29487dae-24e9-4d5b-9819-99516df78630"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:21:59 crc kubenswrapper[4881]: I0121 11:21:59.045568 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/de50b4a3-643f-4e4a-9853-b794eae5c08c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "de50b4a3-643f-4e4a-9853-b794eae5c08c" (UID: "de50b4a3-643f-4e4a-9853-b794eae5c08c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:21:59 crc kubenswrapper[4881]: I0121 11:21:59.051270 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/de50b4a3-643f-4e4a-9853-b794eae5c08c-kube-api-access-4dmdm" (OuterVolumeSpecName: "kube-api-access-4dmdm") pod "de50b4a3-643f-4e4a-9853-b794eae5c08c" (UID: "de50b4a3-643f-4e4a-9853-b794eae5c08c"). InnerVolumeSpecName "kube-api-access-4dmdm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:21:59 crc kubenswrapper[4881]: I0121 11:21:59.053990 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/29487dae-24e9-4d5b-9819-99516df78630-kube-api-access-6f9n8" (OuterVolumeSpecName: "kube-api-access-6f9n8") pod "29487dae-24e9-4d5b-9819-99516df78630" (UID: "29487dae-24e9-4d5b-9819-99516df78630"). InnerVolumeSpecName "kube-api-access-6f9n8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:21:59 crc kubenswrapper[4881]: I0121 11:21:59.130620 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 21 11:21:59 crc kubenswrapper[4881]: I0121 11:21:59.147151 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lhnvm\" (UniqueName: \"kubernetes.io/projected/f2c35a47-0e6e-4760-9026-617ca187b066-kube-api-access-lhnvm\") pod \"f2c35a47-0e6e-4760-9026-617ca187b066\" (UID: \"f2c35a47-0e6e-4760-9026-617ca187b066\") " Jan 21 11:21:59 crc kubenswrapper[4881]: I0121 11:21:59.147298 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f2c35a47-0e6e-4760-9026-617ca187b066-operator-scripts\") pod \"f2c35a47-0e6e-4760-9026-617ca187b066\" (UID: \"f2c35a47-0e6e-4760-9026-617ca187b066\") " Jan 21 11:21:59 crc kubenswrapper[4881]: I0121 11:21:59.148091 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4dmdm\" (UniqueName: \"kubernetes.io/projected/de50b4a3-643f-4e4a-9853-b794eae5c08c-kube-api-access-4dmdm\") on node \"crc\" DevicePath \"\"" Jan 21 11:21:59 crc kubenswrapper[4881]: I0121 11:21:59.148124 4881 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/29487dae-24e9-4d5b-9819-99516df78630-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 11:21:59 crc kubenswrapper[4881]: I0121 11:21:59.148138 4881 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/de50b4a3-643f-4e4a-9853-b794eae5c08c-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 11:21:59 crc kubenswrapper[4881]: I0121 11:21:59.148149 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6f9n8\" (UniqueName: \"kubernetes.io/projected/29487dae-24e9-4d5b-9819-99516df78630-kube-api-access-6f9n8\") on node \"crc\" DevicePath \"\"" Jan 21 11:21:59 crc kubenswrapper[4881]: I0121 11:21:59.148648 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f2c35a47-0e6e-4760-9026-617ca187b066-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f2c35a47-0e6e-4760-9026-617ca187b066" (UID: "f2c35a47-0e6e-4760-9026-617ca187b066"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:21:59 crc kubenswrapper[4881]: W0121 11:21:59.148964 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod28ca8213_9b24_4785_9570_d2973570fbdc.slice/crio-84387e9f1eda4be1e2e13f245e7866daad306dd7bc81eda92adfe5267e83ba52 WatchSource:0}: Error finding container 84387e9f1eda4be1e2e13f245e7866daad306dd7bc81eda92adfe5267e83ba52: Status 404 returned error can't find the container with id 84387e9f1eda4be1e2e13f245e7866daad306dd7bc81eda92adfe5267e83ba52 Jan 21 11:21:59 crc kubenswrapper[4881]: I0121 11:21:59.157372 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f2c35a47-0e6e-4760-9026-617ca187b066-kube-api-access-lhnvm" (OuterVolumeSpecName: "kube-api-access-lhnvm") pod "f2c35a47-0e6e-4760-9026-617ca187b066" (UID: "f2c35a47-0e6e-4760-9026-617ca187b066"). InnerVolumeSpecName "kube-api-access-lhnvm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:21:59 crc kubenswrapper[4881]: I0121 11:21:59.251270 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lhnvm\" (UniqueName: \"kubernetes.io/projected/f2c35a47-0e6e-4760-9026-617ca187b066-kube-api-access-lhnvm\") on node \"crc\" DevicePath \"\"" Jan 21 11:21:59 crc kubenswrapper[4881]: I0121 11:21:59.251650 4881 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f2c35a47-0e6e-4760-9026-617ca187b066-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 11:21:59 crc kubenswrapper[4881]: I0121 11:21:59.325608 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d84ba548-9d82-44b7-bae5-bf8cf84ecc79" path="/var/lib/kubelet/pods/d84ba548-9d82-44b7-bae5-bf8cf84ecc79/volumes" Jan 21 11:21:59 crc kubenswrapper[4881]: E0121 11:21:59.501458 4881 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="4ba0181030ceb68e7fdb5249d09391d40feea2fca13e45d6b4d9c7f3ba56c71d" cmd=["/usr/bin/pgrep","-f","-r","DRST","watcher-decision-engine"] Jan 21 11:21:59 crc kubenswrapper[4881]: E0121 11:21:59.503545 4881 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="4ba0181030ceb68e7fdb5249d09391d40feea2fca13e45d6b4d9c7f3ba56c71d" cmd=["/usr/bin/pgrep","-f","-r","DRST","watcher-decision-engine"] Jan 21 11:21:59 crc kubenswrapper[4881]: E0121 11:21:59.504974 4881 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="4ba0181030ceb68e7fdb5249d09391d40feea2fca13e45d6b4d9c7f3ba56c71d" cmd=["/usr/bin/pgrep","-f","-r","DRST","watcher-decision-engine"] Jan 21 11:21:59 crc kubenswrapper[4881]: E0121 11:21:59.505011 4881 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/watcher-decision-engine-0" podUID="ee4e7116-c2cd-43d5-af6b-9f30b5053e0e" containerName="watcher-decision-engine" Jan 21 11:21:59 crc kubenswrapper[4881]: I0121 11:21:59.933217 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-5627-account-create-update-mbnwf" Jan 21 11:21:59 crc kubenswrapper[4881]: I0121 11:21:59.933247 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"28ca8213-9b24-4785-9570-d2973570fbdc","Type":"ContainerStarted","Data":"84387e9f1eda4be1e2e13f245e7866daad306dd7bc81eda92adfe5267e83ba52"} Jan 21 11:21:59 crc kubenswrapper[4881]: I0121 11:21:59.933301 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-f99bl" Jan 21 11:21:59 crc kubenswrapper[4881]: I0121 11:21:59.933404 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-fb46-account-create-update-xxwmq" Jan 21 11:22:00 crc kubenswrapper[4881]: I0121 11:22:00.945876 4881 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 21 11:22:00 crc kubenswrapper[4881]: I0121 11:22:00.946231 4881 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 21 11:22:01 crc kubenswrapper[4881]: I0121 11:22:01.001487 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 21 11:22:02 crc kubenswrapper[4881]: I0121 11:22:02.981259 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"28ca8213-9b24-4785-9570-d2973570fbdc","Type":"ContainerStarted","Data":"461544715f4a3f154544e0f37c4e4bbc147310a0bd62815eae5302504de75f07"} Jan 21 11:22:02 crc kubenswrapper[4881]: I0121 11:22:02.981839 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"28ca8213-9b24-4785-9570-d2973570fbdc","Type":"ContainerStarted","Data":"05eebaca7eead0950dd873a8603c6201a9b2dc1e384271cdb00b8530ee218101"} Jan 21 11:22:03 crc kubenswrapper[4881]: I0121 11:22:03.998682 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"28ca8213-9b24-4785-9570-d2973570fbdc","Type":"ContainerStarted","Data":"bf7c6034e2c42d9e693656ae69979f8a5455f71ca251857c2ffd4e50430c4b59"} Jan 21 11:22:04 crc kubenswrapper[4881]: I0121 11:22:04.166377 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-f7mmp"] Jan 21 11:22:04 crc kubenswrapper[4881]: E0121 11:22:04.166887 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de50b4a3-643f-4e4a-9853-b794eae5c08c" containerName="mariadb-account-create-update" Jan 21 11:22:04 crc kubenswrapper[4881]: I0121 11:22:04.166908 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="de50b4a3-643f-4e4a-9853-b794eae5c08c" containerName="mariadb-account-create-update" Jan 21 11:22:04 crc kubenswrapper[4881]: E0121 11:22:04.166927 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2c35a47-0e6e-4760-9026-617ca187b066" containerName="mariadb-database-create" Jan 21 11:22:04 crc kubenswrapper[4881]: I0121 11:22:04.166934 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2c35a47-0e6e-4760-9026-617ca187b066" containerName="mariadb-database-create" Jan 21 11:22:04 crc kubenswrapper[4881]: E0121 11:22:04.166949 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d8a04fd-1a86-454f-bd69-64ad270b8357" containerName="mariadb-account-create-update" Jan 21 11:22:04 crc kubenswrapper[4881]: I0121 11:22:04.166955 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d8a04fd-1a86-454f-bd69-64ad270b8357" containerName="mariadb-account-create-update" Jan 21 11:22:04 crc kubenswrapper[4881]: E0121 11:22:04.166966 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29487dae-24e9-4d5b-9819-99516df78630" containerName="mariadb-account-create-update" Jan 21 11:22:04 crc kubenswrapper[4881]: I0121 11:22:04.166972 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="29487dae-24e9-4d5b-9819-99516df78630" containerName="mariadb-account-create-update" Jan 21 11:22:04 crc kubenswrapper[4881]: I0121 11:22:04.167154 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="de50b4a3-643f-4e4a-9853-b794eae5c08c" containerName="mariadb-account-create-update" Jan 21 11:22:04 crc kubenswrapper[4881]: I0121 11:22:04.167170 4881 
memory_manager.go:354] "RemoveStaleState removing state" podUID="4d8a04fd-1a86-454f-bd69-64ad270b8357" containerName="mariadb-account-create-update" Jan 21 11:22:04 crc kubenswrapper[4881]: I0121 11:22:04.167191 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="29487dae-24e9-4d5b-9819-99516df78630" containerName="mariadb-account-create-update" Jan 21 11:22:04 crc kubenswrapper[4881]: I0121 11:22:04.167211 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="f2c35a47-0e6e-4760-9026-617ca187b066" containerName="mariadb-database-create" Jan 21 11:22:04 crc kubenswrapper[4881]: I0121 11:22:04.168424 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-f7mmp" Jan 21 11:22:04 crc kubenswrapper[4881]: I0121 11:22:04.174808 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 21 11:22:04 crc kubenswrapper[4881]: I0121 11:22:04.175006 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-fjj24" Jan 21 11:22:04 crc kubenswrapper[4881]: I0121 11:22:04.175223 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Jan 21 11:22:04 crc kubenswrapper[4881]: I0121 11:22:04.198350 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-f7mmp"] Jan 21 11:22:04 crc kubenswrapper[4881]: I0121 11:22:04.326110 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16c22e38-1b3d-44b8-9519-0769200d708b-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-f7mmp\" (UID: \"16c22e38-1b3d-44b8-9519-0769200d708b\") " pod="openstack/nova-cell0-conductor-db-sync-f7mmp" Jan 21 11:22:04 crc kubenswrapper[4881]: I0121 11:22:04.326192 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vfw75\" (UniqueName: \"kubernetes.io/projected/16c22e38-1b3d-44b8-9519-0769200d708b-kube-api-access-vfw75\") pod \"nova-cell0-conductor-db-sync-f7mmp\" (UID: \"16c22e38-1b3d-44b8-9519-0769200d708b\") " pod="openstack/nova-cell0-conductor-db-sync-f7mmp" Jan 21 11:22:04 crc kubenswrapper[4881]: I0121 11:22:04.326266 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/16c22e38-1b3d-44b8-9519-0769200d708b-scripts\") pod \"nova-cell0-conductor-db-sync-f7mmp\" (UID: \"16c22e38-1b3d-44b8-9519-0769200d708b\") " pod="openstack/nova-cell0-conductor-db-sync-f7mmp" Jan 21 11:22:04 crc kubenswrapper[4881]: I0121 11:22:04.326304 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/16c22e38-1b3d-44b8-9519-0769200d708b-config-data\") pod \"nova-cell0-conductor-db-sync-f7mmp\" (UID: \"16c22e38-1b3d-44b8-9519-0769200d708b\") " pod="openstack/nova-cell0-conductor-db-sync-f7mmp" Jan 21 11:22:04 crc kubenswrapper[4881]: I0121 11:22:04.432296 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/16c22e38-1b3d-44b8-9519-0769200d708b-scripts\") pod \"nova-cell0-conductor-db-sync-f7mmp\" (UID: \"16c22e38-1b3d-44b8-9519-0769200d708b\") " pod="openstack/nova-cell0-conductor-db-sync-f7mmp" Jan 21 11:22:04 crc kubenswrapper[4881]: 
I0121 11:22:04.432410 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/16c22e38-1b3d-44b8-9519-0769200d708b-config-data\") pod \"nova-cell0-conductor-db-sync-f7mmp\" (UID: \"16c22e38-1b3d-44b8-9519-0769200d708b\") " pod="openstack/nova-cell0-conductor-db-sync-f7mmp" Jan 21 11:22:04 crc kubenswrapper[4881]: I0121 11:22:04.432578 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16c22e38-1b3d-44b8-9519-0769200d708b-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-f7mmp\" (UID: \"16c22e38-1b3d-44b8-9519-0769200d708b\") " pod="openstack/nova-cell0-conductor-db-sync-f7mmp" Jan 21 11:22:04 crc kubenswrapper[4881]: I0121 11:22:04.432658 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vfw75\" (UniqueName: \"kubernetes.io/projected/16c22e38-1b3d-44b8-9519-0769200d708b-kube-api-access-vfw75\") pod \"nova-cell0-conductor-db-sync-f7mmp\" (UID: \"16c22e38-1b3d-44b8-9519-0769200d708b\") " pod="openstack/nova-cell0-conductor-db-sync-f7mmp" Jan 21 11:22:04 crc kubenswrapper[4881]: I0121 11:22:04.448035 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/16c22e38-1b3d-44b8-9519-0769200d708b-scripts\") pod \"nova-cell0-conductor-db-sync-f7mmp\" (UID: \"16c22e38-1b3d-44b8-9519-0769200d708b\") " pod="openstack/nova-cell0-conductor-db-sync-f7mmp" Jan 21 11:22:04 crc kubenswrapper[4881]: I0121 11:22:04.449457 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16c22e38-1b3d-44b8-9519-0769200d708b-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-f7mmp\" (UID: \"16c22e38-1b3d-44b8-9519-0769200d708b\") " pod="openstack/nova-cell0-conductor-db-sync-f7mmp" Jan 21 11:22:04 crc kubenswrapper[4881]: I0121 11:22:04.459387 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/16c22e38-1b3d-44b8-9519-0769200d708b-config-data\") pod \"nova-cell0-conductor-db-sync-f7mmp\" (UID: \"16c22e38-1b3d-44b8-9519-0769200d708b\") " pod="openstack/nova-cell0-conductor-db-sync-f7mmp" Jan 21 11:22:04 crc kubenswrapper[4881]: I0121 11:22:04.463506 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vfw75\" (UniqueName: \"kubernetes.io/projected/16c22e38-1b3d-44b8-9519-0769200d708b-kube-api-access-vfw75\") pod \"nova-cell0-conductor-db-sync-f7mmp\" (UID: \"16c22e38-1b3d-44b8-9519-0769200d708b\") " pod="openstack/nova-cell0-conductor-db-sync-f7mmp" Jan 21 11:22:04 crc kubenswrapper[4881]: I0121 11:22:04.494088 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-f7mmp" Jan 21 11:22:05 crc kubenswrapper[4881]: I0121 11:22:05.399805 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-f7mmp"] Jan 21 11:22:05 crc kubenswrapper[4881]: W0121 11:22:05.430106 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod16c22e38_1b3d_44b8_9519_0769200d708b.slice/crio-6a75d9ea9e41983b4baba3e71a4e5dcc957acdbd7dcf5242117832a4b32a615c WatchSource:0}: Error finding container 6a75d9ea9e41983b4baba3e71a4e5dcc957acdbd7dcf5242117832a4b32a615c: Status 404 returned error can't find the container with id 6a75d9ea9e41983b4baba3e71a4e5dcc957acdbd7dcf5242117832a4b32a615c Jan 21 11:22:06 crc kubenswrapper[4881]: I0121 11:22:06.502058 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-f7mmp" event={"ID":"16c22e38-1b3d-44b8-9519-0769200d708b","Type":"ContainerStarted","Data":"6a75d9ea9e41983b4baba3e71a4e5dcc957acdbd7dcf5242117832a4b32a615c"} Jan 21 11:22:06 crc kubenswrapper[4881]: I0121 11:22:06.979337 4881 trace.go:236] Trace[768352876]: "Calculate volume metrics of catalog-content for pod openshift-marketplace/community-operators-bn24k" (21-Jan-2026 11:22:05.606) (total time: 1372ms): Jan 21 11:22:06 crc kubenswrapper[4881]: Trace[768352876]: [1.372653766s] [1.372653766s] END Jan 21 11:22:07 crc kubenswrapper[4881]: I0121 11:22:07.720504 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"28ca8213-9b24-4785-9570-d2973570fbdc","Type":"ContainerStarted","Data":"47cb2e9443fbe79dc10dfaee5ff0983a904efe0dfa8880c83f37fe646f71a44c"} Jan 21 11:22:07 crc kubenswrapper[4881]: I0121 11:22:07.720948 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="28ca8213-9b24-4785-9570-d2973570fbdc" containerName="ceilometer-central-agent" containerID="cri-o://05eebaca7eead0950dd873a8603c6201a9b2dc1e384271cdb00b8530ee218101" gracePeriod=30 Jan 21 11:22:07 crc kubenswrapper[4881]: I0121 11:22:07.721229 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 21 11:22:07 crc kubenswrapper[4881]: I0121 11:22:07.721447 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="28ca8213-9b24-4785-9570-d2973570fbdc" containerName="proxy-httpd" containerID="cri-o://47cb2e9443fbe79dc10dfaee5ff0983a904efe0dfa8880c83f37fe646f71a44c" gracePeriod=30 Jan 21 11:22:07 crc kubenswrapper[4881]: I0121 11:22:07.721536 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="28ca8213-9b24-4785-9570-d2973570fbdc" containerName="sg-core" containerID="cri-o://bf7c6034e2c42d9e693656ae69979f8a5455f71ca251857c2ffd4e50430c4b59" gracePeriod=30 Jan 21 11:22:07 crc kubenswrapper[4881]: I0121 11:22:07.721553 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="28ca8213-9b24-4785-9570-d2973570fbdc" containerName="ceilometer-notification-agent" containerID="cri-o://461544715f4a3f154544e0f37c4e4bbc147310a0bd62815eae5302504de75f07" gracePeriod=30 Jan 21 11:22:07 crc kubenswrapper[4881]: I0121 11:22:07.753332 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=5.022889993 podStartE2EDuration="10.75330795s" 
podCreationTimestamp="2026-01-21 11:21:57 +0000 UTC" firstStartedPulling="2026-01-21 11:21:59.162053775 +0000 UTC m=+1506.422010244" lastFinishedPulling="2026-01-21 11:22:04.892471732 +0000 UTC m=+1512.152428201" observedRunningTime="2026-01-21 11:22:07.751433714 +0000 UTC m=+1515.011390183" watchObservedRunningTime="2026-01-21 11:22:07.75330795 +0000 UTC m=+1515.013264419" Jan 21 11:22:08 crc kubenswrapper[4881]: I0121 11:22:08.156197 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 21 11:22:08 crc kubenswrapper[4881]: I0121 11:22:08.156502 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 21 11:22:08 crc kubenswrapper[4881]: I0121 11:22:08.156605 4881 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 21 11:22:08 crc kubenswrapper[4881]: I0121 11:22:08.156676 4881 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 21 11:22:08 crc kubenswrapper[4881]: I0121 11:22:08.159084 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 21 11:22:08 crc kubenswrapper[4881]: I0121 11:22:08.356094 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 21 11:22:08 crc kubenswrapper[4881]: I0121 11:22:08.737529 4881 generic.go:334] "Generic (PLEG): container finished" podID="28ca8213-9b24-4785-9570-d2973570fbdc" containerID="47cb2e9443fbe79dc10dfaee5ff0983a904efe0dfa8880c83f37fe646f71a44c" exitCode=0 Jan 21 11:22:08 crc kubenswrapper[4881]: I0121 11:22:08.737569 4881 generic.go:334] "Generic (PLEG): container finished" podID="28ca8213-9b24-4785-9570-d2973570fbdc" containerID="bf7c6034e2c42d9e693656ae69979f8a5455f71ca251857c2ffd4e50430c4b59" exitCode=2 Jan 21 11:22:08 crc kubenswrapper[4881]: I0121 11:22:08.737579 4881 generic.go:334] "Generic (PLEG): container finished" podID="28ca8213-9b24-4785-9570-d2973570fbdc" containerID="461544715f4a3f154544e0f37c4e4bbc147310a0bd62815eae5302504de75f07" exitCode=0 Jan 21 11:22:08 crc kubenswrapper[4881]: I0121 11:22:08.738937 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"28ca8213-9b24-4785-9570-d2973570fbdc","Type":"ContainerDied","Data":"47cb2e9443fbe79dc10dfaee5ff0983a904efe0dfa8880c83f37fe646f71a44c"} Jan 21 11:22:08 crc kubenswrapper[4881]: I0121 11:22:08.739025 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"28ca8213-9b24-4785-9570-d2973570fbdc","Type":"ContainerDied","Data":"bf7c6034e2c42d9e693656ae69979f8a5455f71ca251857c2ffd4e50430c4b59"} Jan 21 11:22:08 crc kubenswrapper[4881]: I0121 11:22:08.739046 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"28ca8213-9b24-4785-9570-d2973570fbdc","Type":"ContainerDied","Data":"461544715f4a3f154544e0f37c4e4bbc147310a0bd62815eae5302504de75f07"} Jan 21 11:22:10 crc kubenswrapper[4881]: I0121 11:22:10.791937 4881 generic.go:334] "Generic (PLEG): container finished" podID="28ca8213-9b24-4785-9570-d2973570fbdc" containerID="05eebaca7eead0950dd873a8603c6201a9b2dc1e384271cdb00b8530ee218101" exitCode=0 Jan 21 11:22:10 crc kubenswrapper[4881]: I0121 11:22:10.792469 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"28ca8213-9b24-4785-9570-d2973570fbdc","Type":"ContainerDied","Data":"05eebaca7eead0950dd873a8603c6201a9b2dc1e384271cdb00b8530ee218101"} Jan 21 11:22:11 crc kubenswrapper[4881]: I0121 11:22:11.078574 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 21 11:22:11 crc kubenswrapper[4881]: I0121 11:22:11.165873 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28ca8213-9b24-4785-9570-d2973570fbdc-combined-ca-bundle\") pod \"28ca8213-9b24-4785-9570-d2973570fbdc\" (UID: \"28ca8213-9b24-4785-9570-d2973570fbdc\") " Jan 21 11:22:11 crc kubenswrapper[4881]: I0121 11:22:11.166028 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/28ca8213-9b24-4785-9570-d2973570fbdc-config-data\") pod \"28ca8213-9b24-4785-9570-d2973570fbdc\" (UID: \"28ca8213-9b24-4785-9570-d2973570fbdc\") " Jan 21 11:22:11 crc kubenswrapper[4881]: I0121 11:22:11.166060 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/28ca8213-9b24-4785-9570-d2973570fbdc-run-httpd\") pod \"28ca8213-9b24-4785-9570-d2973570fbdc\" (UID: \"28ca8213-9b24-4785-9570-d2973570fbdc\") " Jan 21 11:22:11 crc kubenswrapper[4881]: I0121 11:22:11.166129 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/28ca8213-9b24-4785-9570-d2973570fbdc-scripts\") pod \"28ca8213-9b24-4785-9570-d2973570fbdc\" (UID: \"28ca8213-9b24-4785-9570-d2973570fbdc\") " Jan 21 11:22:11 crc kubenswrapper[4881]: I0121 11:22:11.166164 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gzvfr\" (UniqueName: \"kubernetes.io/projected/28ca8213-9b24-4785-9570-d2973570fbdc-kube-api-access-gzvfr\") pod \"28ca8213-9b24-4785-9570-d2973570fbdc\" (UID: \"28ca8213-9b24-4785-9570-d2973570fbdc\") " Jan 21 11:22:11 crc kubenswrapper[4881]: I0121 11:22:11.166270 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/28ca8213-9b24-4785-9570-d2973570fbdc-log-httpd\") pod \"28ca8213-9b24-4785-9570-d2973570fbdc\" (UID: \"28ca8213-9b24-4785-9570-d2973570fbdc\") " Jan 21 11:22:11 crc kubenswrapper[4881]: I0121 11:22:11.166319 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/28ca8213-9b24-4785-9570-d2973570fbdc-sg-core-conf-yaml\") pod \"28ca8213-9b24-4785-9570-d2973570fbdc\" (UID: \"28ca8213-9b24-4785-9570-d2973570fbdc\") " Jan 21 11:22:11 crc kubenswrapper[4881]: I0121 11:22:11.168040 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/28ca8213-9b24-4785-9570-d2973570fbdc-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "28ca8213-9b24-4785-9570-d2973570fbdc" (UID: "28ca8213-9b24-4785-9570-d2973570fbdc"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 11:22:11 crc kubenswrapper[4881]: I0121 11:22:11.177210 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/28ca8213-9b24-4785-9570-d2973570fbdc-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "28ca8213-9b24-4785-9570-d2973570fbdc" (UID: "28ca8213-9b24-4785-9570-d2973570fbdc"). 
InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 11:22:11 crc kubenswrapper[4881]: I0121 11:22:11.191047 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28ca8213-9b24-4785-9570-d2973570fbdc-scripts" (OuterVolumeSpecName: "scripts") pod "28ca8213-9b24-4785-9570-d2973570fbdc" (UID: "28ca8213-9b24-4785-9570-d2973570fbdc"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:22:11 crc kubenswrapper[4881]: I0121 11:22:11.191288 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/28ca8213-9b24-4785-9570-d2973570fbdc-kube-api-access-gzvfr" (OuterVolumeSpecName: "kube-api-access-gzvfr") pod "28ca8213-9b24-4785-9570-d2973570fbdc" (UID: "28ca8213-9b24-4785-9570-d2973570fbdc"). InnerVolumeSpecName "kube-api-access-gzvfr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:22:11 crc kubenswrapper[4881]: I0121 11:22:11.265823 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28ca8213-9b24-4785-9570-d2973570fbdc-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "28ca8213-9b24-4785-9570-d2973570fbdc" (UID: "28ca8213-9b24-4785-9570-d2973570fbdc"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:22:11 crc kubenswrapper[4881]: I0121 11:22:11.269570 4881 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/28ca8213-9b24-4785-9570-d2973570fbdc-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 11:22:11 crc kubenswrapper[4881]: I0121 11:22:11.269614 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gzvfr\" (UniqueName: \"kubernetes.io/projected/28ca8213-9b24-4785-9570-d2973570fbdc-kube-api-access-gzvfr\") on node \"crc\" DevicePath \"\"" Jan 21 11:22:11 crc kubenswrapper[4881]: I0121 11:22:11.269629 4881 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/28ca8213-9b24-4785-9570-d2973570fbdc-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 21 11:22:11 crc kubenswrapper[4881]: I0121 11:22:11.269644 4881 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/28ca8213-9b24-4785-9570-d2973570fbdc-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 21 11:22:11 crc kubenswrapper[4881]: I0121 11:22:11.269657 4881 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/28ca8213-9b24-4785-9570-d2973570fbdc-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 21 11:22:11 crc kubenswrapper[4881]: I0121 11:22:11.308147 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28ca8213-9b24-4785-9570-d2973570fbdc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "28ca8213-9b24-4785-9570-d2973570fbdc" (UID: "28ca8213-9b24-4785-9570-d2973570fbdc"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:22:11 crc kubenswrapper[4881]: I0121 11:22:11.364406 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28ca8213-9b24-4785-9570-d2973570fbdc-config-data" (OuterVolumeSpecName: "config-data") pod "28ca8213-9b24-4785-9570-d2973570fbdc" (UID: "28ca8213-9b24-4785-9570-d2973570fbdc"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:22:11 crc kubenswrapper[4881]: I0121 11:22:11.372278 4881 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/28ca8213-9b24-4785-9570-d2973570fbdc-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 11:22:11 crc kubenswrapper[4881]: I0121 11:22:11.372312 4881 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28ca8213-9b24-4785-9570-d2973570fbdc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 11:22:11 crc kubenswrapper[4881]: I0121 11:22:11.808530 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"28ca8213-9b24-4785-9570-d2973570fbdc","Type":"ContainerDied","Data":"84387e9f1eda4be1e2e13f245e7866daad306dd7bc81eda92adfe5267e83ba52"} Jan 21 11:22:11 crc kubenswrapper[4881]: I0121 11:22:11.808596 4881 scope.go:117] "RemoveContainer" containerID="47cb2e9443fbe79dc10dfaee5ff0983a904efe0dfa8880c83f37fe646f71a44c" Jan 21 11:22:11 crc kubenswrapper[4881]: I0121 11:22:11.808607 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 21 11:22:11 crc kubenswrapper[4881]: I0121 11:22:11.860849 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 21 11:22:11 crc kubenswrapper[4881]: I0121 11:22:11.878660 4881 scope.go:117] "RemoveContainer" containerID="bf7c6034e2c42d9e693656ae69979f8a5455f71ca251857c2ffd4e50430c4b59" Jan 21 11:22:11 crc kubenswrapper[4881]: I0121 11:22:11.886869 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 21 11:22:11 crc kubenswrapper[4881]: I0121 11:22:11.900639 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 21 11:22:11 crc kubenswrapper[4881]: E0121 11:22:11.901341 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28ca8213-9b24-4785-9570-d2973570fbdc" containerName="ceilometer-notification-agent" Jan 21 11:22:11 crc kubenswrapper[4881]: I0121 11:22:11.901359 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="28ca8213-9b24-4785-9570-d2973570fbdc" containerName="ceilometer-notification-agent" Jan 21 11:22:11 crc kubenswrapper[4881]: E0121 11:22:11.901373 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28ca8213-9b24-4785-9570-d2973570fbdc" containerName="proxy-httpd" Jan 21 11:22:11 crc kubenswrapper[4881]: I0121 11:22:11.901381 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="28ca8213-9b24-4785-9570-d2973570fbdc" containerName="proxy-httpd" Jan 21 11:22:11 crc kubenswrapper[4881]: E0121 11:22:11.901407 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28ca8213-9b24-4785-9570-d2973570fbdc" containerName="sg-core" Jan 21 11:22:11 crc kubenswrapper[4881]: I0121 11:22:11.901415 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="28ca8213-9b24-4785-9570-d2973570fbdc" containerName="sg-core" Jan 21 11:22:11 crc kubenswrapper[4881]: E0121 11:22:11.901439 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28ca8213-9b24-4785-9570-d2973570fbdc" containerName="ceilometer-central-agent" Jan 21 11:22:11 crc kubenswrapper[4881]: I0121 11:22:11.901447 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="28ca8213-9b24-4785-9570-d2973570fbdc" containerName="ceilometer-central-agent" Jan 21 11:22:11 crc kubenswrapper[4881]: I0121 11:22:11.901704 4881 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="28ca8213-9b24-4785-9570-d2973570fbdc" containerName="sg-core" Jan 21 11:22:11 crc kubenswrapper[4881]: I0121 11:22:11.901729 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="28ca8213-9b24-4785-9570-d2973570fbdc" containerName="ceilometer-central-agent" Jan 21 11:22:11 crc kubenswrapper[4881]: I0121 11:22:11.901744 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="28ca8213-9b24-4785-9570-d2973570fbdc" containerName="ceilometer-notification-agent" Jan 21 11:22:11 crc kubenswrapper[4881]: I0121 11:22:11.901753 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="28ca8213-9b24-4785-9570-d2973570fbdc" containerName="proxy-httpd" Jan 21 11:22:11 crc kubenswrapper[4881]: I0121 11:22:11.907006 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 21 11:22:11 crc kubenswrapper[4881]: I0121 11:22:11.910885 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 21 11:22:11 crc kubenswrapper[4881]: I0121 11:22:11.911145 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 21 11:22:11 crc kubenswrapper[4881]: I0121 11:22:11.915058 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 21 11:22:11 crc kubenswrapper[4881]: I0121 11:22:11.922423 4881 scope.go:117] "RemoveContainer" containerID="461544715f4a3f154544e0f37c4e4bbc147310a0bd62815eae5302504de75f07" Jan 21 11:22:11 crc kubenswrapper[4881]: I0121 11:22:11.964662 4881 scope.go:117] "RemoveContainer" containerID="05eebaca7eead0950dd873a8603c6201a9b2dc1e384271cdb00b8530ee218101" Jan 21 11:22:11 crc kubenswrapper[4881]: I0121 11:22:11.989330 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/864daf3b-9b84-4a77-b70d-7574975a1759-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"864daf3b-9b84-4a77-b70d-7574975a1759\") " pod="openstack/ceilometer-0" Jan 21 11:22:11 crc kubenswrapper[4881]: I0121 11:22:11.989609 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/864daf3b-9b84-4a77-b70d-7574975a1759-log-httpd\") pod \"ceilometer-0\" (UID: \"864daf3b-9b84-4a77-b70d-7574975a1759\") " pod="openstack/ceilometer-0" Jan 21 11:22:11 crc kubenswrapper[4881]: I0121 11:22:11.989804 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/864daf3b-9b84-4a77-b70d-7574975a1759-run-httpd\") pod \"ceilometer-0\" (UID: \"864daf3b-9b84-4a77-b70d-7574975a1759\") " pod="openstack/ceilometer-0" Jan 21 11:22:11 crc kubenswrapper[4881]: I0121 11:22:11.989930 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/864daf3b-9b84-4a77-b70d-7574975a1759-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"864daf3b-9b84-4a77-b70d-7574975a1759\") " pod="openstack/ceilometer-0" Jan 21 11:22:11 crc kubenswrapper[4881]: I0121 11:22:11.990207 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/864daf3b-9b84-4a77-b70d-7574975a1759-scripts\") pod \"ceilometer-0\" (UID: 
\"864daf3b-9b84-4a77-b70d-7574975a1759\") " pod="openstack/ceilometer-0" Jan 21 11:22:11 crc kubenswrapper[4881]: I0121 11:22:11.990374 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h8lc5\" (UniqueName: \"kubernetes.io/projected/864daf3b-9b84-4a77-b70d-7574975a1759-kube-api-access-h8lc5\") pod \"ceilometer-0\" (UID: \"864daf3b-9b84-4a77-b70d-7574975a1759\") " pod="openstack/ceilometer-0" Jan 21 11:22:11 crc kubenswrapper[4881]: I0121 11:22:11.990523 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/864daf3b-9b84-4a77-b70d-7574975a1759-config-data\") pod \"ceilometer-0\" (UID: \"864daf3b-9b84-4a77-b70d-7574975a1759\") " pod="openstack/ceilometer-0" Jan 21 11:22:12 crc kubenswrapper[4881]: I0121 11:22:12.093205 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/864daf3b-9b84-4a77-b70d-7574975a1759-log-httpd\") pod \"ceilometer-0\" (UID: \"864daf3b-9b84-4a77-b70d-7574975a1759\") " pod="openstack/ceilometer-0" Jan 21 11:22:12 crc kubenswrapper[4881]: I0121 11:22:12.093287 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/864daf3b-9b84-4a77-b70d-7574975a1759-run-httpd\") pod \"ceilometer-0\" (UID: \"864daf3b-9b84-4a77-b70d-7574975a1759\") " pod="openstack/ceilometer-0" Jan 21 11:22:12 crc kubenswrapper[4881]: I0121 11:22:12.093324 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/864daf3b-9b84-4a77-b70d-7574975a1759-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"864daf3b-9b84-4a77-b70d-7574975a1759\") " pod="openstack/ceilometer-0" Jan 21 11:22:12 crc kubenswrapper[4881]: I0121 11:22:12.093395 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/864daf3b-9b84-4a77-b70d-7574975a1759-scripts\") pod \"ceilometer-0\" (UID: \"864daf3b-9b84-4a77-b70d-7574975a1759\") " pod="openstack/ceilometer-0" Jan 21 11:22:12 crc kubenswrapper[4881]: I0121 11:22:12.093443 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h8lc5\" (UniqueName: \"kubernetes.io/projected/864daf3b-9b84-4a77-b70d-7574975a1759-kube-api-access-h8lc5\") pod \"ceilometer-0\" (UID: \"864daf3b-9b84-4a77-b70d-7574975a1759\") " pod="openstack/ceilometer-0" Jan 21 11:22:12 crc kubenswrapper[4881]: I0121 11:22:12.093535 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/864daf3b-9b84-4a77-b70d-7574975a1759-config-data\") pod \"ceilometer-0\" (UID: \"864daf3b-9b84-4a77-b70d-7574975a1759\") " pod="openstack/ceilometer-0" Jan 21 11:22:12 crc kubenswrapper[4881]: I0121 11:22:12.094001 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/864daf3b-9b84-4a77-b70d-7574975a1759-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"864daf3b-9b84-4a77-b70d-7574975a1759\") " pod="openstack/ceilometer-0" Jan 21 11:22:12 crc kubenswrapper[4881]: I0121 11:22:12.094164 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/864daf3b-9b84-4a77-b70d-7574975a1759-run-httpd\") pod 
\"ceilometer-0\" (UID: \"864daf3b-9b84-4a77-b70d-7574975a1759\") " pod="openstack/ceilometer-0" Jan 21 11:22:12 crc kubenswrapper[4881]: I0121 11:22:12.095476 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/864daf3b-9b84-4a77-b70d-7574975a1759-log-httpd\") pod \"ceilometer-0\" (UID: \"864daf3b-9b84-4a77-b70d-7574975a1759\") " pod="openstack/ceilometer-0" Jan 21 11:22:12 crc kubenswrapper[4881]: I0121 11:22:12.099220 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/864daf3b-9b84-4a77-b70d-7574975a1759-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"864daf3b-9b84-4a77-b70d-7574975a1759\") " pod="openstack/ceilometer-0" Jan 21 11:22:12 crc kubenswrapper[4881]: I0121 11:22:12.099480 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/864daf3b-9b84-4a77-b70d-7574975a1759-scripts\") pod \"ceilometer-0\" (UID: \"864daf3b-9b84-4a77-b70d-7574975a1759\") " pod="openstack/ceilometer-0" Jan 21 11:22:12 crc kubenswrapper[4881]: I0121 11:22:12.100387 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/864daf3b-9b84-4a77-b70d-7574975a1759-config-data\") pod \"ceilometer-0\" (UID: \"864daf3b-9b84-4a77-b70d-7574975a1759\") " pod="openstack/ceilometer-0" Jan 21 11:22:12 crc kubenswrapper[4881]: I0121 11:22:12.112520 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/864daf3b-9b84-4a77-b70d-7574975a1759-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"864daf3b-9b84-4a77-b70d-7574975a1759\") " pod="openstack/ceilometer-0" Jan 21 11:22:12 crc kubenswrapper[4881]: I0121 11:22:12.116163 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h8lc5\" (UniqueName: \"kubernetes.io/projected/864daf3b-9b84-4a77-b70d-7574975a1759-kube-api-access-h8lc5\") pod \"ceilometer-0\" (UID: \"864daf3b-9b84-4a77-b70d-7574975a1759\") " pod="openstack/ceilometer-0" Jan 21 11:22:12 crc kubenswrapper[4881]: I0121 11:22:12.245960 4881 util.go:30] "No sandbox for pod can be found. 
Jan 21 11:22:12 crc kubenswrapper[4881]: I0121 11:22:12.245960 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 21 11:22:13 crc kubenswrapper[4881]: I0121 11:22:13.186885 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 21 11:22:13 crc kubenswrapper[4881]: I0121 11:22:13.346857 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="28ca8213-9b24-4785-9570-d2973570fbdc" path="/var/lib/kubelet/pods/28ca8213-9b24-4785-9570-d2973570fbdc/volumes"
Jan 21 11:22:13 crc kubenswrapper[4881]: I0121 11:22:13.839821 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"864daf3b-9b84-4a77-b70d-7574975a1759","Type":"ContainerStarted","Data":"503a25d56c550049491832816edbc48c05afa818af9138db9e45c13fbbda3c04"}
Jan 21 11:22:14 crc kubenswrapper[4881]: I0121 11:22:14.362271 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 21 11:22:22 crc kubenswrapper[4881]: I0121 11:22:22.049297 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-f7mmp" event={"ID":"16c22e38-1b3d-44b8-9519-0769200d708b","Type":"ContainerStarted","Data":"45d2c9cf95b1e6ab35e425681a61a8e4775263f35ab1c8463912de139e00b535"}
Jan 21 11:22:22 crc kubenswrapper[4881]: I0121 11:22:22.052164 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"864daf3b-9b84-4a77-b70d-7574975a1759","Type":"ContainerStarted","Data":"c587b5f1d4ce6bd63009ab70ac3c2d60e9a361552ad74baf6eee5e9cbaf12b08"}
Jan 21 11:22:22 crc kubenswrapper[4881]: I0121 11:22:22.067192 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-f7mmp" podStartSLOduration=1.874513886 podStartE2EDuration="18.067170307s" podCreationTimestamp="2026-01-21 11:22:04 +0000 UTC" firstStartedPulling="2026-01-21 11:22:05.43555418 +0000 UTC m=+1512.695510649" lastFinishedPulling="2026-01-21 11:22:21.628210601 +0000 UTC m=+1528.888167070" observedRunningTime="2026-01-21 11:22:22.062621634 +0000 UTC m=+1529.322578103" watchObservedRunningTime="2026-01-21 11:22:22.067170307 +0000 UTC m=+1529.327126776"
Jan 21 11:22:23 crc kubenswrapper[4881]: I0121 11:22:23.118174 4881 generic.go:334] "Generic (PLEG): container finished" podID="ee4e7116-c2cd-43d5-af6b-9f30b5053e0e" containerID="4ba0181030ceb68e7fdb5249d09391d40feea2fca13e45d6b4d9c7f3ba56c71d" exitCode=137
Jan 21 11:22:23 crc kubenswrapper[4881]: I0121 11:22:23.118862 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"ee4e7116-c2cd-43d5-af6b-9f30b5053e0e","Type":"ContainerDied","Data":"4ba0181030ceb68e7fdb5249d09391d40feea2fca13e45d6b4d9c7f3ba56c71d"}
Jan 21 11:22:23 crc kubenswrapper[4881]: I0121 11:22:23.118924 4881 scope.go:117] "RemoveContainer" containerID="5ccae223d32b8d30267f4d247c29e77d1942427c122a26bc75e9b00b89fa3bc0"
Jan 21 11:22:23 crc kubenswrapper[4881]: I0121 11:22:23.130885 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"864daf3b-9b84-4a77-b70d-7574975a1759","Type":"ContainerStarted","Data":"9f19d662dd7c7d2e019ff9b54fc69e7ca9f3be17c295e4af48f920e1e9ca9860"}
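Editor's note: the pod_startup_latency_tracker line above is internally consistent and worth decoding. podStartE2EDuration is observed-running minus creation (11:22:22.067170307 - 11:22:04 = 18.067170307s), and podStartSLOduration is that E2E figure minus the image-pull window (lastFinishedPulling - firstStartedPulling = 16.192656421s), giving 1.874513886s. A self-contained Go check of that arithmetic using the log's own timestamps:

package main

import (
	"fmt"
	"time"
)

func mustParse(s string) time.Time {
	// Layout matches the log's "2026-01-21 11:22:04 +0000 UTC" form; Go's
	// parser accepts the optional fractional seconds in the inputs.
	t, err := time.Parse("2006-01-02 15:04:05 -0700 MST", s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	created := mustParse("2026-01-21 11:22:04 +0000 UTC")
	firstPull := mustParse("2026-01-21 11:22:05.43555418 +0000 UTC")
	lastPull := mustParse("2026-01-21 11:22:21.628210601 +0000 UTC")
	observed := mustParse("2026-01-21 11:22:22.067170307 +0000 UTC")

	e2e := observed.Sub(created)
	slo := e2e - lastPull.Sub(firstPull) // E2E minus the image-pull window
	fmt.Println(e2e, slo)                // 18.067170307s 1.874513886s
}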
Jan 21 11:22:23 crc kubenswrapper[4881]: I0121 11:22:23.292092 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-decision-engine-0"
Jan 21 11:22:23 crc kubenswrapper[4881]: I0121 11:22:23.493049 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ee4e7116-c2cd-43d5-af6b-9f30b5053e0e-logs\") pod \"ee4e7116-c2cd-43d5-af6b-9f30b5053e0e\" (UID: \"ee4e7116-c2cd-43d5-af6b-9f30b5053e0e\") "
Jan 21 11:22:23 crc kubenswrapper[4881]: I0121 11:22:23.493456 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wffxr\" (UniqueName: \"kubernetes.io/projected/ee4e7116-c2cd-43d5-af6b-9f30b5053e0e-kube-api-access-wffxr\") pod \"ee4e7116-c2cd-43d5-af6b-9f30b5053e0e\" (UID: \"ee4e7116-c2cd-43d5-af6b-9f30b5053e0e\") "
Jan 21 11:22:23 crc kubenswrapper[4881]: I0121 11:22:23.493523 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/ee4e7116-c2cd-43d5-af6b-9f30b5053e0e-custom-prometheus-ca\") pod \"ee4e7116-c2cd-43d5-af6b-9f30b5053e0e\" (UID: \"ee4e7116-c2cd-43d5-af6b-9f30b5053e0e\") "
Jan 21 11:22:23 crc kubenswrapper[4881]: I0121 11:22:23.493605 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee4e7116-c2cd-43d5-af6b-9f30b5053e0e-combined-ca-bundle\") pod \"ee4e7116-c2cd-43d5-af6b-9f30b5053e0e\" (UID: \"ee4e7116-c2cd-43d5-af6b-9f30b5053e0e\") "
Jan 21 11:22:23 crc kubenswrapper[4881]: I0121 11:22:23.493800 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee4e7116-c2cd-43d5-af6b-9f30b5053e0e-config-data\") pod \"ee4e7116-c2cd-43d5-af6b-9f30b5053e0e\" (UID: \"ee4e7116-c2cd-43d5-af6b-9f30b5053e0e\") "
Jan 21 11:22:23 crc kubenswrapper[4881]: I0121 11:22:23.495074 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ee4e7116-c2cd-43d5-af6b-9f30b5053e0e-logs" (OuterVolumeSpecName: "logs") pod "ee4e7116-c2cd-43d5-af6b-9f30b5053e0e" (UID: "ee4e7116-c2cd-43d5-af6b-9f30b5053e0e"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 11:22:23 crc kubenswrapper[4881]: I0121 11:22:23.502958 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ee4e7116-c2cd-43d5-af6b-9f30b5053e0e-kube-api-access-wffxr" (OuterVolumeSpecName: "kube-api-access-wffxr") pod "ee4e7116-c2cd-43d5-af6b-9f30b5053e0e" (UID: "ee4e7116-c2cd-43d5-af6b-9f30b5053e0e"). InnerVolumeSpecName "kube-api-access-wffxr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 11:22:23 crc kubenswrapper[4881]: I0121 11:22:23.538894 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee4e7116-c2cd-43d5-af6b-9f30b5053e0e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ee4e7116-c2cd-43d5-af6b-9f30b5053e0e" (UID: "ee4e7116-c2cd-43d5-af6b-9f30b5053e0e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
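Editor's note: this unmount burst (continuing below) happens because pod ee4e7116 left the kubelet's desired state while its volumes were still present in the actual state; the volume manager reconciles the difference. A stripped-down sketch of that desired-versus-actual pass, with illustrative names only:

package main

import "fmt"

// reconcile unmounts anything that is mounted (actual state) but no
// longer wanted (desired state), in the spirit of the kubelet volume
// manager's reconciler; not the real implementation.
func reconcile(desired, actual map[string][]string) {
	for podUID, vols := range actual {
		if _, wanted := desired[podUID]; wanted {
			continue
		}
		for _, v := range vols {
			fmt.Printf("operationExecutor.UnmountVolume started for volume %q pod %q\n", v, podUID)
		}
	}
}

func main() {
	actual := map[string][]string{
		"ee4e7116-c2cd-43d5-af6b-9f30b5053e0e": {"logs", "config-data", "combined-ca-bundle"},
	}
	reconcile(map[string][]string{}, actual) // pod deleted: unmount everything it had
}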
InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:22:23 crc kubenswrapper[4881]: I0121 11:22:23.577168 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee4e7116-c2cd-43d5-af6b-9f30b5053e0e-config-data" (OuterVolumeSpecName: "config-data") pod "ee4e7116-c2cd-43d5-af6b-9f30b5053e0e" (UID: "ee4e7116-c2cd-43d5-af6b-9f30b5053e0e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:22:23 crc kubenswrapper[4881]: I0121 11:22:23.599296 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wffxr\" (UniqueName: \"kubernetes.io/projected/ee4e7116-c2cd-43d5-af6b-9f30b5053e0e-kube-api-access-wffxr\") on node \"crc\" DevicePath \"\"" Jan 21 11:22:23 crc kubenswrapper[4881]: I0121 11:22:23.599589 4881 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/ee4e7116-c2cd-43d5-af6b-9f30b5053e0e-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Jan 21 11:22:23 crc kubenswrapper[4881]: I0121 11:22:23.599657 4881 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee4e7116-c2cd-43d5-af6b-9f30b5053e0e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 11:22:23 crc kubenswrapper[4881]: I0121 11:22:23.599725 4881 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee4e7116-c2cd-43d5-af6b-9f30b5053e0e-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 11:22:23 crc kubenswrapper[4881]: I0121 11:22:23.599830 4881 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ee4e7116-c2cd-43d5-af6b-9f30b5053e0e-logs\") on node \"crc\" DevicePath \"\"" Jan 21 11:22:24 crc kubenswrapper[4881]: I0121 11:22:24.144734 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"864daf3b-9b84-4a77-b70d-7574975a1759","Type":"ContainerStarted","Data":"e43c16a8d49069db18e2f00c6f35aa7e319b33e147379724b98cc6a207964853"} Jan 21 11:22:24 crc kubenswrapper[4881]: I0121 11:22:24.146862 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"ee4e7116-c2cd-43d5-af6b-9f30b5053e0e","Type":"ContainerDied","Data":"29d3adbd836eae43fe470435c7cc82a51d0ed6187ef1f30da41d37c41cb401fb"} Jan 21 11:22:24 crc kubenswrapper[4881]: I0121 11:22:24.146918 4881 scope.go:117] "RemoveContainer" containerID="4ba0181030ceb68e7fdb5249d09391d40feea2fca13e45d6b4d9c7f3ba56c71d" Jan 21 11:22:24 crc kubenswrapper[4881]: I0121 11:22:24.147059 4881 util.go:48] "No ready sandbox for pod can be found. 
Jan 21 11:22:24 crc kubenswrapper[4881]: I0121 11:22:24.147059 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-decision-engine-0"
Jan 21 11:22:24 crc kubenswrapper[4881]: I0121 11:22:24.198003 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-decision-engine-0"]
Jan 21 11:22:24 crc kubenswrapper[4881]: I0121 11:22:24.212877 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-decision-engine-0"]
Jan 21 11:22:24 crc kubenswrapper[4881]: I0121 11:22:24.228136 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-decision-engine-0"]
Jan 21 11:22:24 crc kubenswrapper[4881]: E0121 11:22:24.229856 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee4e7116-c2cd-43d5-af6b-9f30b5053e0e" containerName="watcher-decision-engine"
Jan 21 11:22:24 crc kubenswrapper[4881]: I0121 11:22:24.229884 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee4e7116-c2cd-43d5-af6b-9f30b5053e0e" containerName="watcher-decision-engine"
Jan 21 11:22:24 crc kubenswrapper[4881]: E0121 11:22:24.229905 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee4e7116-c2cd-43d5-af6b-9f30b5053e0e" containerName="watcher-decision-engine"
Jan 21 11:22:24 crc kubenswrapper[4881]: I0121 11:22:24.229913 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee4e7116-c2cd-43d5-af6b-9f30b5053e0e" containerName="watcher-decision-engine"
Jan 21 11:22:24 crc kubenswrapper[4881]: E0121 11:22:24.229930 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee4e7116-c2cd-43d5-af6b-9f30b5053e0e" containerName="watcher-decision-engine"
Jan 21 11:22:24 crc kubenswrapper[4881]: I0121 11:22:24.229938 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee4e7116-c2cd-43d5-af6b-9f30b5053e0e" containerName="watcher-decision-engine"
Jan 21 11:22:24 crc kubenswrapper[4881]: I0121 11:22:24.230248 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee4e7116-c2cd-43d5-af6b-9f30b5053e0e" containerName="watcher-decision-engine"
Jan 21 11:22:24 crc kubenswrapper[4881]: I0121 11:22:24.230260 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee4e7116-c2cd-43d5-af6b-9f30b5053e0e" containerName="watcher-decision-engine"
Jan 21 11:22:24 crc kubenswrapper[4881]: I0121 11:22:24.230278 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee4e7116-c2cd-43d5-af6b-9f30b5053e0e" containerName="watcher-decision-engine"
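Editor's note: the DELETE, REMOVE, ADD triple above is the signature of a StatefulSet-style pod being deleted and recreated under the same name with a fresh UID; the kubelet's sync loop sees each as a separate config update from the "api" source. A toy loop multiplexing such updates over a channel (illustrative, not the real kubelet loop):

package main

import "fmt"

type op string

const (
	opAdd    op = "ADD"
	opUpdate op = "UPDATE"
	opDelete op = "DELETE" // graceful deletion requested
	opRemove op = "REMOVE" // object gone from the apiserver
)

type podUpdate struct {
	op     op
	source string
	pod    string
}

// syncLoop drains config updates the way the "SyncLoop ..." lines do.
func syncLoop(updates <-chan podUpdate) {
	for u := range updates {
		fmt.Printf("SyncLoop %s source=%q pods=[%q]\n", u.op, u.source, u.pod)
	}
}

func main() {
	ch := make(chan podUpdate, 3)
	ch <- podUpdate{opDelete, "api", "openstack/watcher-decision-engine-0"}
	ch <- podUpdate{opRemove, "api", "openstack/watcher-decision-engine-0"}
	ch <- podUpdate{opAdd, "api", "openstack/watcher-decision-engine-0"}
	close(ch)
	syncLoop(ch)
}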
Jan 21 11:22:24 crc kubenswrapper[4881]: I0121 11:22:24.231035 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-decision-engine-0"
Jan 21 11:22:24 crc kubenswrapper[4881]: I0121 11:22:24.234626 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-decision-engine-config-data"
Jan 21 11:22:24 crc kubenswrapper[4881]: I0121 11:22:24.248583 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-decision-engine-0"]
Jan 21 11:22:24 crc kubenswrapper[4881]: I0121 11:22:24.317231 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1a227ee4-7a4c-4cb6-991c-d137119a2a6e-logs\") pod \"watcher-decision-engine-0\" (UID: \"1a227ee4-7a4c-4cb6-991c-d137119a2a6e\") " pod="openstack/watcher-decision-engine-0"
Jan 21 11:22:24 crc kubenswrapper[4881]: I0121 11:22:24.318035 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7ns2m\" (UniqueName: \"kubernetes.io/projected/1a227ee4-7a4c-4cb6-991c-d137119a2a6e-kube-api-access-7ns2m\") pod \"watcher-decision-engine-0\" (UID: \"1a227ee4-7a4c-4cb6-991c-d137119a2a6e\") " pod="openstack/watcher-decision-engine-0"
Jan 21 11:22:24 crc kubenswrapper[4881]: I0121 11:22:24.318385 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1a227ee4-7a4c-4cb6-991c-d137119a2a6e-config-data\") pod \"watcher-decision-engine-0\" (UID: \"1a227ee4-7a4c-4cb6-991c-d137119a2a6e\") " pod="openstack/watcher-decision-engine-0"
Jan 21 11:22:24 crc kubenswrapper[4881]: I0121 11:22:24.318545 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/1a227ee4-7a4c-4cb6-991c-d137119a2a6e-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"1a227ee4-7a4c-4cb6-991c-d137119a2a6e\") " pod="openstack/watcher-decision-engine-0"
Jan 21 11:22:24 crc kubenswrapper[4881]: I0121 11:22:24.318704 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a227ee4-7a4c-4cb6-991c-d137119a2a6e-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"1a227ee4-7a4c-4cb6-991c-d137119a2a6e\") " pod="openstack/watcher-decision-engine-0"
Jan 21 11:22:24 crc kubenswrapper[4881]: I0121 11:22:24.421407 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1a227ee4-7a4c-4cb6-991c-d137119a2a6e-logs\") pod \"watcher-decision-engine-0\" (UID: \"1a227ee4-7a4c-4cb6-991c-d137119a2a6e\") " pod="openstack/watcher-decision-engine-0"
Jan 21 11:22:24 crc kubenswrapper[4881]: I0121 11:22:24.421521 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7ns2m\" (UniqueName: \"kubernetes.io/projected/1a227ee4-7a4c-4cb6-991c-d137119a2a6e-kube-api-access-7ns2m\") pod \"watcher-decision-engine-0\" (UID: \"1a227ee4-7a4c-4cb6-991c-d137119a2a6e\") " pod="openstack/watcher-decision-engine-0"
Jan 21 11:22:24 crc kubenswrapper[4881]: I0121 11:22:24.421622 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1a227ee4-7a4c-4cb6-991c-d137119a2a6e-config-data\") pod \"watcher-decision-engine-0\" (UID: \"1a227ee4-7a4c-4cb6-991c-d137119a2a6e\") " pod="openstack/watcher-decision-engine-0"
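Editor's note: each new pod UID gets its own kube-api-access-* projected volume (7ns2m here, wffxr for the previous UID), which bundles the service-account token, the cluster CA, and the namespace. By convention it is surfaced inside the container at /var/run/secrets/kubernetes.io/serviceaccount; that path is the standard convention, not something taken from this log. A sketch of a workload reading it:

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// Conventional in-cluster mount point of the kube-api-access-* volume.
const saDir = "/var/run/secrets/kubernetes.io/serviceaccount"

func main() {
	for _, f := range []string{"token", "ca.crt", "namespace"} {
		b, err := os.ReadFile(filepath.Join(saDir, f))
		if err != nil {
			fmt.Println(f, "not available:", err) // e.g. when run outside a pod
			continue
		}
		fmt.Printf("%s: %d bytes\n", f, len(b))
	}
}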
Jan 21 11:22:24 crc kubenswrapper[4881]: I0121 11:22:24.421648 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/1a227ee4-7a4c-4cb6-991c-d137119a2a6e-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"1a227ee4-7a4c-4cb6-991c-d137119a2a6e\") " pod="openstack/watcher-decision-engine-0"
Jan 21 11:22:24 crc kubenswrapper[4881]: I0121 11:22:24.421676 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a227ee4-7a4c-4cb6-991c-d137119a2a6e-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"1a227ee4-7a4c-4cb6-991c-d137119a2a6e\") " pod="openstack/watcher-decision-engine-0"
Jan 21 11:22:24 crc kubenswrapper[4881]: I0121 11:22:24.425504 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1a227ee4-7a4c-4cb6-991c-d137119a2a6e-logs\") pod \"watcher-decision-engine-0\" (UID: \"1a227ee4-7a4c-4cb6-991c-d137119a2a6e\") " pod="openstack/watcher-decision-engine-0"
Jan 21 11:22:24 crc kubenswrapper[4881]: I0121 11:22:24.438430 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/1a227ee4-7a4c-4cb6-991c-d137119a2a6e-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"1a227ee4-7a4c-4cb6-991c-d137119a2a6e\") " pod="openstack/watcher-decision-engine-0"
Jan 21 11:22:24 crc kubenswrapper[4881]: I0121 11:22:24.438603 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a227ee4-7a4c-4cb6-991c-d137119a2a6e-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"1a227ee4-7a4c-4cb6-991c-d137119a2a6e\") " pod="openstack/watcher-decision-engine-0"
Jan 21 11:22:24 crc kubenswrapper[4881]: I0121 11:22:24.439249 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1a227ee4-7a4c-4cb6-991c-d137119a2a6e-config-data\") pod \"watcher-decision-engine-0\" (UID: \"1a227ee4-7a4c-4cb6-991c-d137119a2a6e\") " pod="openstack/watcher-decision-engine-0"
Jan 21 11:22:24 crc kubenswrapper[4881]: I0121 11:22:24.444843 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7ns2m\" (UniqueName: \"kubernetes.io/projected/1a227ee4-7a4c-4cb6-991c-d137119a2a6e-kube-api-access-7ns2m\") pod \"watcher-decision-engine-0\" (UID: \"1a227ee4-7a4c-4cb6-991c-d137119a2a6e\") " pod="openstack/watcher-decision-engine-0"
Jan 21 11:22:24 crc kubenswrapper[4881]: I0121 11:22:24.553898 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-decision-engine-0"
Jan 21 11:22:25 crc kubenswrapper[4881]: I0121 11:22:25.152615 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-decision-engine-0"]
Jan 21 11:22:26 crc kubenswrapper[4881]: I0121 11:22:26.034939 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ee4e7116-c2cd-43d5-af6b-9f30b5053e0e" path="/var/lib/kubelet/pods/ee4e7116-c2cd-43d5-af6b-9f30b5053e0e/volumes"
Jan 21 11:22:26 crc kubenswrapper[4881]: I0121 11:22:26.178729 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"1a227ee4-7a4c-4cb6-991c-d137119a2a6e","Type":"ContainerStarted","Data":"6fad4b4fe9a8836c203f47f9b07542d89a464d477f7736896f152c617459d659"}
Jan 21 11:22:27 crc kubenswrapper[4881]: I0121 11:22:27.190735 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"1a227ee4-7a4c-4cb6-991c-d137119a2a6e","Type":"ContainerStarted","Data":"856f738a7852caad106da5e207aa3fbda01bc189067e48decf62dedbc4c6c6c1"}
Jan 21 11:22:27 crc kubenswrapper[4881]: I0121 11:22:27.194178 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"864daf3b-9b84-4a77-b70d-7574975a1759","Type":"ContainerStarted","Data":"c4d1c7b32460d66f1d454b2f673559cd15c9520eb920941f2f0afa5d440392f4"}
Jan 21 11:22:27 crc kubenswrapper[4881]: I0121 11:22:27.194379 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="864daf3b-9b84-4a77-b70d-7574975a1759" containerName="proxy-httpd" containerID="cri-o://c4d1c7b32460d66f1d454b2f673559cd15c9520eb920941f2f0afa5d440392f4" gracePeriod=30
Jan 21 11:22:27 crc kubenswrapper[4881]: I0121 11:22:27.194404 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="864daf3b-9b84-4a77-b70d-7574975a1759" containerName="ceilometer-notification-agent" containerID="cri-o://9f19d662dd7c7d2e019ff9b54fc69e7ca9f3be17c295e4af48f920e1e9ca9860" gracePeriod=30
Jan 21 11:22:27 crc kubenswrapper[4881]: I0121 11:22:27.194509 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="864daf3b-9b84-4a77-b70d-7574975a1759" containerName="sg-core" containerID="cri-o://e43c16a8d49069db18e2f00c6f35aa7e319b33e147379724b98cc6a207964853" gracePeriod=30
Jan 21 11:22:27 crc kubenswrapper[4881]: I0121 11:22:27.194595 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Jan 21 11:22:27 crc kubenswrapper[4881]: I0121 11:22:27.194324 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="864daf3b-9b84-4a77-b70d-7574975a1759" containerName="ceilometer-central-agent" containerID="cri-o://c587b5f1d4ce6bd63009ab70ac3c2d60e9a361552ad74baf6eee5e9cbaf12b08" gracePeriod=30
Jan 21 11:22:27 crc kubenswrapper[4881]: I0121 11:22:27.234600 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-decision-engine-0" podStartSLOduration=3.234578709 podStartE2EDuration="3.234578709s" podCreationTimestamp="2026-01-21 11:22:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 11:22:27.227958854 +0000 UTC m=+1534.487915323" watchObservedRunningTime="2026-01-21 11:22:27.234578709 +0000 UTC m=+1534.494535198"
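Editor's note: "Killing container with a grace period ... gracePeriod=30" is the SIGTERM-then-SIGKILL sequence, and it connects to the exit codes nearby: the watcher container's earlier exitCode=137 is the 128+signal encoding of SIGKILL (9), while sg-core's exitCode=2 below is an ordinary process exit status. A Linux-only Go sketch of the sequence; the shell child and the 2s stand-in deadline are illustrative:

package main

import (
	"fmt"
	"os/exec"
	"syscall"
	"time"
)

func main() {
	// The child ignores SIGTERM so the SIGKILL path is actually exercised.
	cmd := exec.Command("sh", "-c", `trap "" TERM; sleep 300`)
	if err := cmd.Start(); err != nil {
		panic(err)
	}
	cmd.Process.Signal(syscall.SIGTERM) // polite request first

	done := make(chan error, 1)
	go func() { done <- cmd.Wait() }()
	select {
	case <-done:
	case <-time.After(2 * time.Second): // stand-in for the 30s grace period
		cmd.Process.Kill() // SIGKILL once the grace period lapses
		<-done
	}

	ws := cmd.ProcessState.Sys().(syscall.WaitStatus)
	if ws.Signaled() {
		// Container runtimes report 128+signal: SIGKILL(9) -> 137,
		// matching the watcher-decision-engine exitCode=137 above.
		fmt.Println(128 + int(ws.Signal()))
	}
}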
Jan 21 11:22:27 crc kubenswrapper[4881]: I0121 11:22:27.287554 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=4.7941580550000005 podStartE2EDuration="16.287534467s" podCreationTimestamp="2026-01-21 11:22:11 +0000 UTC" firstStartedPulling="2026-01-21 11:22:13.196539078 +0000 UTC m=+1520.456495547" lastFinishedPulling="2026-01-21 11:22:24.68991549 +0000 UTC m=+1531.949871959" observedRunningTime="2026-01-21 11:22:27.283023205 +0000 UTC m=+1534.542979664" watchObservedRunningTime="2026-01-21 11:22:27.287534467 +0000 UTC m=+1534.547490936"
Jan 21 11:22:28 crc kubenswrapper[4881]: I0121 11:22:28.207838 4881 generic.go:334] "Generic (PLEG): container finished" podID="864daf3b-9b84-4a77-b70d-7574975a1759" containerID="c4d1c7b32460d66f1d454b2f673559cd15c9520eb920941f2f0afa5d440392f4" exitCode=0
Jan 21 11:22:28 crc kubenswrapper[4881]: I0121 11:22:28.208132 4881 generic.go:334] "Generic (PLEG): container finished" podID="864daf3b-9b84-4a77-b70d-7574975a1759" containerID="e43c16a8d49069db18e2f00c6f35aa7e319b33e147379724b98cc6a207964853" exitCode=2
Jan 21 11:22:28 crc kubenswrapper[4881]: I0121 11:22:28.208141 4881 generic.go:334] "Generic (PLEG): container finished" podID="864daf3b-9b84-4a77-b70d-7574975a1759" containerID="9f19d662dd7c7d2e019ff9b54fc69e7ca9f3be17c295e4af48f920e1e9ca9860" exitCode=0
Jan 21 11:22:28 crc kubenswrapper[4881]: I0121 11:22:28.208054 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"864daf3b-9b84-4a77-b70d-7574975a1759","Type":"ContainerDied","Data":"c4d1c7b32460d66f1d454b2f673559cd15c9520eb920941f2f0afa5d440392f4"}
Jan 21 11:22:28 crc kubenswrapper[4881]: I0121 11:22:28.208241 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"864daf3b-9b84-4a77-b70d-7574975a1759","Type":"ContainerDied","Data":"e43c16a8d49069db18e2f00c6f35aa7e319b33e147379724b98cc6a207964853"}
Jan 21 11:22:28 crc kubenswrapper[4881]: I0121 11:22:28.208267 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"864daf3b-9b84-4a77-b70d-7574975a1759","Type":"ContainerDied","Data":"9f19d662dd7c7d2e019ff9b54fc69e7ca9f3be17c295e4af48f920e1e9ca9860"}
Jan 21 11:22:34 crc kubenswrapper[4881]: I0121 11:22:34.555148 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0"
Jan 21 11:22:34 crc kubenswrapper[4881]: I0121 11:22:34.586360 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-decision-engine-0"
Jan 21 11:22:35 crc kubenswrapper[4881]: I0121 11:22:35.352366 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-decision-engine-0"
Jan 21 11:22:35 crc kubenswrapper[4881]: I0121 11:22:35.382363 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-decision-engine-0"
Jan 21 11:22:36 crc kubenswrapper[4881]: I0121 11:22:36.365729 4881 generic.go:334] "Generic (PLEG): container finished" podID="864daf3b-9b84-4a77-b70d-7574975a1759" containerID="c587b5f1d4ce6bd63009ab70ac3c2d60e9a361552ad74baf6eee5e9cbaf12b08" exitCode=0
Jan 21 11:22:36 crc kubenswrapper[4881]: I0121 11:22:36.365963 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"864daf3b-9b84-4a77-b70d-7574975a1759","Type":"ContainerDied","Data":"c587b5f1d4ce6bd63009ab70ac3c2d60e9a361552ad74baf6eee5e9cbaf12b08"}
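Editor's note: the probe lines above show the watcher pod's startup probe flipping from "unhealthy" to "started" before readiness goes from "" (no verdict yet) to "ready"; readiness results are not acted on until the startup probe has succeeded once. A toy model of that gating, with invented names:

package main

import "fmt"

type podProbes struct {
	started bool // startup probe has succeeded at least once
	ready   bool
}

// observe applies one probe result and returns a log-style status line.
func (p *podProbes) observe(probe, result string) string {
	switch probe {
	case "startup":
		if result == "success" {
			p.started = true
			return `probe="startup" status="started"`
		}
		return `probe="startup" status="unhealthy"`
	case "readiness":
		if !p.started {
			return `probe="readiness" status=""` // gated until startup succeeds
		}
		p.ready = result == "success"
		if p.ready {
			return `probe="readiness" status="ready"`
		}
		return `probe="readiness" status="not ready"`
	}
	return ""
}

func main() {
	var p podProbes
	for _, step := range [][2]string{
		{"startup", "failure"}, {"readiness", "success"},
		{"startup", "success"}, {"readiness", "success"},
	} {
		fmt.Println(p.observe(step[0], step[1])) // unhealthy, "", started, ready
	}
}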
Jan 21 11:22:37 crc kubenswrapper[4881]: I0121 11:22:37.074857 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 21 11:22:37 crc kubenswrapper[4881]: I0121 11:22:37.134738 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/864daf3b-9b84-4a77-b70d-7574975a1759-run-httpd\") pod \"864daf3b-9b84-4a77-b70d-7574975a1759\" (UID: \"864daf3b-9b84-4a77-b70d-7574975a1759\") "
Jan 21 11:22:37 crc kubenswrapper[4881]: I0121 11:22:37.134824 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/864daf3b-9b84-4a77-b70d-7574975a1759-log-httpd\") pod \"864daf3b-9b84-4a77-b70d-7574975a1759\" (UID: \"864daf3b-9b84-4a77-b70d-7574975a1759\") "
Jan 21 11:22:37 crc kubenswrapper[4881]: I0121 11:22:37.134905 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/864daf3b-9b84-4a77-b70d-7574975a1759-sg-core-conf-yaml\") pod \"864daf3b-9b84-4a77-b70d-7574975a1759\" (UID: \"864daf3b-9b84-4a77-b70d-7574975a1759\") "
Jan 21 11:22:37 crc kubenswrapper[4881]: I0121 11:22:37.134997 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/864daf3b-9b84-4a77-b70d-7574975a1759-combined-ca-bundle\") pod \"864daf3b-9b84-4a77-b70d-7574975a1759\" (UID: \"864daf3b-9b84-4a77-b70d-7574975a1759\") "
Jan 21 11:22:37 crc kubenswrapper[4881]: I0121 11:22:37.135088 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/864daf3b-9b84-4a77-b70d-7574975a1759-scripts\") pod \"864daf3b-9b84-4a77-b70d-7574975a1759\" (UID: \"864daf3b-9b84-4a77-b70d-7574975a1759\") "
Jan 21 11:22:37 crc kubenswrapper[4881]: I0121 11:22:37.135288 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/864daf3b-9b84-4a77-b70d-7574975a1759-config-data\") pod \"864daf3b-9b84-4a77-b70d-7574975a1759\" (UID: \"864daf3b-9b84-4a77-b70d-7574975a1759\") "
Jan 21 11:22:37 crc kubenswrapper[4881]: I0121 11:22:37.135322 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h8lc5\" (UniqueName: \"kubernetes.io/projected/864daf3b-9b84-4a77-b70d-7574975a1759-kube-api-access-h8lc5\") pod \"864daf3b-9b84-4a77-b70d-7574975a1759\" (UID: \"864daf3b-9b84-4a77-b70d-7574975a1759\") "
Jan 21 11:22:37 crc kubenswrapper[4881]: I0121 11:22:37.139525 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/864daf3b-9b84-4a77-b70d-7574975a1759-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "864daf3b-9b84-4a77-b70d-7574975a1759" (UID: "864daf3b-9b84-4a77-b70d-7574975a1759"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 11:22:37 crc kubenswrapper[4881]: I0121 11:22:37.141770 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/864daf3b-9b84-4a77-b70d-7574975a1759-scripts" (OuterVolumeSpecName: "scripts") pod "864daf3b-9b84-4a77-b70d-7574975a1759" (UID: "864daf3b-9b84-4a77-b70d-7574975a1759"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:22:37 crc kubenswrapper[4881]: I0121 11:22:37.146009 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/864daf3b-9b84-4a77-b70d-7574975a1759-kube-api-access-h8lc5" (OuterVolumeSpecName: "kube-api-access-h8lc5") pod "864daf3b-9b84-4a77-b70d-7574975a1759" (UID: "864daf3b-9b84-4a77-b70d-7574975a1759"). InnerVolumeSpecName "kube-api-access-h8lc5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:22:37 crc kubenswrapper[4881]: I0121 11:22:37.166704 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/864daf3b-9b84-4a77-b70d-7574975a1759-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "864daf3b-9b84-4a77-b70d-7574975a1759" (UID: "864daf3b-9b84-4a77-b70d-7574975a1759"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:22:37 crc kubenswrapper[4881]: I0121 11:22:37.238302 4881 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/864daf3b-9b84-4a77-b70d-7574975a1759-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 21 11:22:37 crc kubenswrapper[4881]: I0121 11:22:37.238345 4881 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/864daf3b-9b84-4a77-b70d-7574975a1759-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 21 11:22:37 crc kubenswrapper[4881]: I0121 11:22:37.238359 4881 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/864daf3b-9b84-4a77-b70d-7574975a1759-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 11:22:37 crc kubenswrapper[4881]: I0121 11:22:37.238371 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h8lc5\" (UniqueName: \"kubernetes.io/projected/864daf3b-9b84-4a77-b70d-7574975a1759-kube-api-access-h8lc5\") on node \"crc\" DevicePath \"\"" Jan 21 11:22:37 crc kubenswrapper[4881]: I0121 11:22:37.238384 4881 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/864daf3b-9b84-4a77-b70d-7574975a1759-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 21 11:22:37 crc kubenswrapper[4881]: I0121 11:22:37.240533 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/864daf3b-9b84-4a77-b70d-7574975a1759-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "864daf3b-9b84-4a77-b70d-7574975a1759" (UID: "864daf3b-9b84-4a77-b70d-7574975a1759"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:22:37 crc kubenswrapper[4881]: I0121 11:22:37.273028 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/864daf3b-9b84-4a77-b70d-7574975a1759-config-data" (OuterVolumeSpecName: "config-data") pod "864daf3b-9b84-4a77-b70d-7574975a1759" (UID: "864daf3b-9b84-4a77-b70d-7574975a1759"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:22:37 crc kubenswrapper[4881]: I0121 11:22:37.340885 4881 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/864daf3b-9b84-4a77-b70d-7574975a1759-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 11:22:37 crc kubenswrapper[4881]: I0121 11:22:37.340918 4881 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/864daf3b-9b84-4a77-b70d-7574975a1759-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 11:22:37 crc kubenswrapper[4881]: I0121 11:22:37.398918 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"864daf3b-9b84-4a77-b70d-7574975a1759","Type":"ContainerDied","Data":"503a25d56c550049491832816edbc48c05afa818af9138db9e45c13fbbda3c04"} Jan 21 11:22:37 crc kubenswrapper[4881]: I0121 11:22:37.398980 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 21 11:22:37 crc kubenswrapper[4881]: I0121 11:22:37.398997 4881 scope.go:117] "RemoveContainer" containerID="c4d1c7b32460d66f1d454b2f673559cd15c9520eb920941f2f0afa5d440392f4" Jan 21 11:22:37 crc kubenswrapper[4881]: I0121 11:22:37.440982 4881 scope.go:117] "RemoveContainer" containerID="e43c16a8d49069db18e2f00c6f35aa7e319b33e147379724b98cc6a207964853" Jan 21 11:22:37 crc kubenswrapper[4881]: I0121 11:22:37.444325 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 21 11:22:37 crc kubenswrapper[4881]: I0121 11:22:37.476910 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 21 11:22:37 crc kubenswrapper[4881]: I0121 11:22:37.490103 4881 scope.go:117] "RemoveContainer" containerID="9f19d662dd7c7d2e019ff9b54fc69e7ca9f3be17c295e4af48f920e1e9ca9860" Jan 21 11:22:37 crc kubenswrapper[4881]: I0121 11:22:37.494572 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 21 11:22:37 crc kubenswrapper[4881]: E0121 11:22:37.495186 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="864daf3b-9b84-4a77-b70d-7574975a1759" containerName="ceilometer-central-agent" Jan 21 11:22:37 crc kubenswrapper[4881]: I0121 11:22:37.495215 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="864daf3b-9b84-4a77-b70d-7574975a1759" containerName="ceilometer-central-agent" Jan 21 11:22:37 crc kubenswrapper[4881]: E0121 11:22:37.495241 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="864daf3b-9b84-4a77-b70d-7574975a1759" containerName="sg-core" Jan 21 11:22:37 crc kubenswrapper[4881]: I0121 11:22:37.495250 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="864daf3b-9b84-4a77-b70d-7574975a1759" containerName="sg-core" Jan 21 11:22:37 crc kubenswrapper[4881]: E0121 11:22:37.495302 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="864daf3b-9b84-4a77-b70d-7574975a1759" containerName="ceilometer-notification-agent" Jan 21 11:22:37 crc kubenswrapper[4881]: I0121 11:22:37.495313 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="864daf3b-9b84-4a77-b70d-7574975a1759" containerName="ceilometer-notification-agent" Jan 21 11:22:37 crc kubenswrapper[4881]: E0121 11:22:37.495346 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee4e7116-c2cd-43d5-af6b-9f30b5053e0e" containerName="watcher-decision-engine" Jan 21 11:22:37 crc kubenswrapper[4881]: I0121 11:22:37.495382 4881 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="ee4e7116-c2cd-43d5-af6b-9f30b5053e0e" containerName="watcher-decision-engine" Jan 21 11:22:37 crc kubenswrapper[4881]: E0121 11:22:37.495404 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="864daf3b-9b84-4a77-b70d-7574975a1759" containerName="proxy-httpd" Jan 21 11:22:37 crc kubenswrapper[4881]: I0121 11:22:37.495412 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="864daf3b-9b84-4a77-b70d-7574975a1759" containerName="proxy-httpd" Jan 21 11:22:37 crc kubenswrapper[4881]: I0121 11:22:37.496555 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="864daf3b-9b84-4a77-b70d-7574975a1759" containerName="proxy-httpd" Jan 21 11:22:37 crc kubenswrapper[4881]: I0121 11:22:37.496582 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee4e7116-c2cd-43d5-af6b-9f30b5053e0e" containerName="watcher-decision-engine" Jan 21 11:22:37 crc kubenswrapper[4881]: I0121 11:22:37.496809 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="864daf3b-9b84-4a77-b70d-7574975a1759" containerName="ceilometer-central-agent" Jan 21 11:22:37 crc kubenswrapper[4881]: I0121 11:22:37.496844 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="864daf3b-9b84-4a77-b70d-7574975a1759" containerName="ceilometer-notification-agent" Jan 21 11:22:37 crc kubenswrapper[4881]: I0121 11:22:37.496862 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="864daf3b-9b84-4a77-b70d-7574975a1759" containerName="sg-core" Jan 21 11:22:37 crc kubenswrapper[4881]: I0121 11:22:37.499726 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 21 11:22:37 crc kubenswrapper[4881]: I0121 11:22:37.503883 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 21 11:22:37 crc kubenswrapper[4881]: I0121 11:22:37.504157 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 21 11:22:37 crc kubenswrapper[4881]: I0121 11:22:37.506590 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 21 11:22:37 crc kubenswrapper[4881]: I0121 11:22:37.524675 4881 scope.go:117] "RemoveContainer" containerID="c587b5f1d4ce6bd63009ab70ac3c2d60e9a361552ad74baf6eee5e9cbaf12b08" Jan 21 11:22:37 crc kubenswrapper[4881]: I0121 11:22:37.546411 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/20eeb602-9c98-48ed-a9c9-22121156e8cb-scripts\") pod \"ceilometer-0\" (UID: \"20eeb602-9c98-48ed-a9c9-22121156e8cb\") " pod="openstack/ceilometer-0" Jan 21 11:22:37 crc kubenswrapper[4881]: I0121 11:22:37.546570 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20eeb602-9c98-48ed-a9c9-22121156e8cb-config-data\") pod \"ceilometer-0\" (UID: \"20eeb602-9c98-48ed-a9c9-22121156e8cb\") " pod="openstack/ceilometer-0" Jan 21 11:22:37 crc kubenswrapper[4881]: I0121 11:22:37.546708 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20eeb602-9c98-48ed-a9c9-22121156e8cb-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"20eeb602-9c98-48ed-a9c9-22121156e8cb\") " pod="openstack/ceilometer-0" Jan 21 11:22:37 crc kubenswrapper[4881]: I0121 11:22:37.546842 4881 
Jan 21 11:22:37 crc kubenswrapper[4881]: I0121 11:22:37.546842 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zgzxk\" (UniqueName: \"kubernetes.io/projected/20eeb602-9c98-48ed-a9c9-22121156e8cb-kube-api-access-zgzxk\") pod \"ceilometer-0\" (UID: \"20eeb602-9c98-48ed-a9c9-22121156e8cb\") " pod="openstack/ceilometer-0"
Jan 21 11:22:37 crc kubenswrapper[4881]: I0121 11:22:37.546987 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/20eeb602-9c98-48ed-a9c9-22121156e8cb-log-httpd\") pod \"ceilometer-0\" (UID: \"20eeb602-9c98-48ed-a9c9-22121156e8cb\") " pod="openstack/ceilometer-0"
Jan 21 11:22:37 crc kubenswrapper[4881]: I0121 11:22:37.547132 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/20eeb602-9c98-48ed-a9c9-22121156e8cb-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"20eeb602-9c98-48ed-a9c9-22121156e8cb\") " pod="openstack/ceilometer-0"
Jan 21 11:22:37 crc kubenswrapper[4881]: I0121 11:22:37.547272 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/20eeb602-9c98-48ed-a9c9-22121156e8cb-run-httpd\") pod \"ceilometer-0\" (UID: \"20eeb602-9c98-48ed-a9c9-22121156e8cb\") " pod="openstack/ceilometer-0"
Jan 21 11:22:37 crc kubenswrapper[4881]: I0121 11:22:37.650073 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/20eeb602-9c98-48ed-a9c9-22121156e8cb-log-httpd\") pod \"ceilometer-0\" (UID: \"20eeb602-9c98-48ed-a9c9-22121156e8cb\") " pod="openstack/ceilometer-0"
Jan 21 11:22:37 crc kubenswrapper[4881]: I0121 11:22:37.650189 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/20eeb602-9c98-48ed-a9c9-22121156e8cb-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"20eeb602-9c98-48ed-a9c9-22121156e8cb\") " pod="openstack/ceilometer-0"
Jan 21 11:22:37 crc kubenswrapper[4881]: I0121 11:22:37.650237 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/20eeb602-9c98-48ed-a9c9-22121156e8cb-run-httpd\") pod \"ceilometer-0\" (UID: \"20eeb602-9c98-48ed-a9c9-22121156e8cb\") " pod="openstack/ceilometer-0"
Jan 21 11:22:37 crc kubenswrapper[4881]: I0121 11:22:37.650282 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/20eeb602-9c98-48ed-a9c9-22121156e8cb-scripts\") pod \"ceilometer-0\" (UID: \"20eeb602-9c98-48ed-a9c9-22121156e8cb\") " pod="openstack/ceilometer-0"
Jan 21 11:22:37 crc kubenswrapper[4881]: I0121 11:22:37.650312 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20eeb602-9c98-48ed-a9c9-22121156e8cb-config-data\") pod \"ceilometer-0\" (UID: \"20eeb602-9c98-48ed-a9c9-22121156e8cb\") " pod="openstack/ceilometer-0"
Jan 21 11:22:37 crc kubenswrapper[4881]: I0121 11:22:37.650385 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20eeb602-9c98-48ed-a9c9-22121156e8cb-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"20eeb602-9c98-48ed-a9c9-22121156e8cb\") " pod="openstack/ceilometer-0"
Jan 21 11:22:37 crc kubenswrapper[4881]: I0121 11:22:37.650426 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zgzxk\" (UniqueName: \"kubernetes.io/projected/20eeb602-9c98-48ed-a9c9-22121156e8cb-kube-api-access-zgzxk\") pod \"ceilometer-0\" (UID: \"20eeb602-9c98-48ed-a9c9-22121156e8cb\") " pod="openstack/ceilometer-0"
Jan 21 11:22:37 crc kubenswrapper[4881]: I0121 11:22:37.650890 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/20eeb602-9c98-48ed-a9c9-22121156e8cb-log-httpd\") pod \"ceilometer-0\" (UID: \"20eeb602-9c98-48ed-a9c9-22121156e8cb\") " pod="openstack/ceilometer-0"
Jan 21 11:22:37 crc kubenswrapper[4881]: I0121 11:22:37.652008 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/20eeb602-9c98-48ed-a9c9-22121156e8cb-run-httpd\") pod \"ceilometer-0\" (UID: \"20eeb602-9c98-48ed-a9c9-22121156e8cb\") " pod="openstack/ceilometer-0"
Jan 21 11:22:37 crc kubenswrapper[4881]: I0121 11:22:37.655355 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20eeb602-9c98-48ed-a9c9-22121156e8cb-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"20eeb602-9c98-48ed-a9c9-22121156e8cb\") " pod="openstack/ceilometer-0"
Jan 21 11:22:37 crc kubenswrapper[4881]: I0121 11:22:37.655666 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/20eeb602-9c98-48ed-a9c9-22121156e8cb-scripts\") pod \"ceilometer-0\" (UID: \"20eeb602-9c98-48ed-a9c9-22121156e8cb\") " pod="openstack/ceilometer-0"
Jan 21 11:22:37 crc kubenswrapper[4881]: I0121 11:22:37.656704 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20eeb602-9c98-48ed-a9c9-22121156e8cb-config-data\") pod \"ceilometer-0\" (UID: \"20eeb602-9c98-48ed-a9c9-22121156e8cb\") " pod="openstack/ceilometer-0"
Jan 21 11:22:37 crc kubenswrapper[4881]: I0121 11:22:37.656984 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/20eeb602-9c98-48ed-a9c9-22121156e8cb-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"20eeb602-9c98-48ed-a9c9-22121156e8cb\") " pod="openstack/ceilometer-0"
Jan 21 11:22:37 crc kubenswrapper[4881]: I0121 11:22:37.672217 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zgzxk\" (UniqueName: \"kubernetes.io/projected/20eeb602-9c98-48ed-a9c9-22121156e8cb-kube-api-access-zgzxk\") pod \"ceilometer-0\" (UID: \"20eeb602-9c98-48ed-a9c9-22121156e8cb\") " pod="openstack/ceilometer-0"
Jan 21 11:22:37 crc kubenswrapper[4881]: I0121 11:22:37.822905 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 21 11:22:38 crc kubenswrapper[4881]: I0121 11:22:38.299701 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 21 11:22:38 crc kubenswrapper[4881]: W0121 11:22:38.311922 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod20eeb602_9c98_48ed_a9c9_22121156e8cb.slice/crio-98b63a4387f707fe8989f7007a02efb416a3ce182b681d864a6fffaef05cd43d WatchSource:0}: Error finding container 98b63a4387f707fe8989f7007a02efb416a3ce182b681d864a6fffaef05cd43d: Status 404 returned error can't find the container with id 98b63a4387f707fe8989f7007a02efb416a3ce182b681d864a6fffaef05cd43d
Jan 21 11:22:38 crc kubenswrapper[4881]: I0121 11:22:38.415089 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"20eeb602-9c98-48ed-a9c9-22121156e8cb","Type":"ContainerStarted","Data":"98b63a4387f707fe8989f7007a02efb416a3ce182b681d864a6fffaef05cd43d"}
Jan 21 11:22:39 crc kubenswrapper[4881]: I0121 11:22:39.711379 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="864daf3b-9b84-4a77-b70d-7574975a1759" path="/var/lib/kubelet/pods/864daf3b-9b84-4a77-b70d-7574975a1759/volumes"
Jan 21 11:22:39 crc kubenswrapper[4881]: I0121 11:22:39.747176 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"20eeb602-9c98-48ed-a9c9-22121156e8cb","Type":"ContainerStarted","Data":"f833baf807f57255c45be1ba58cccaca032385ccba346e4fc3846694862bc6ee"}
Jan 21 11:22:39 crc kubenswrapper[4881]: I0121 11:22:39.747220 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"20eeb602-9c98-48ed-a9c9-22121156e8cb","Type":"ContainerStarted","Data":"8256e63406ff9c5a7c526341a649b275e3f5ab402c57f45ac53e47b1d11393f9"}
Jan 21 11:22:40 crc kubenswrapper[4881]: I0121 11:22:40.760774 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"20eeb602-9c98-48ed-a9c9-22121156e8cb","Type":"ContainerStarted","Data":"19d2c0708e63a625c9564d43bfbff6b4bf382eb29c4f5fe75600d774080fe1d6"}
Jan 21 11:22:40 crc kubenswrapper[4881]: I0121 11:22:40.766205 4881 generic.go:334] "Generic (PLEG): container finished" podID="16c22e38-1b3d-44b8-9519-0769200d708b" containerID="45d2c9cf95b1e6ab35e425681a61a8e4775263f35ab1c8463912de139e00b535" exitCode=0
Jan 21 11:22:40 crc kubenswrapper[4881]: I0121 11:22:40.766254 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-f7mmp" event={"ID":"16c22e38-1b3d-44b8-9519-0769200d708b","Type":"ContainerDied","Data":"45d2c9cf95b1e6ab35e425681a61a8e4775263f35ab1c8463912de139e00b535"}
Jan 21 11:22:41 crc kubenswrapper[4881]: I0121 11:22:41.780897 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"20eeb602-9c98-48ed-a9c9-22121156e8cb","Type":"ContainerStarted","Data":"ebf63005cec886f7073127e6f8a1b1d91309382b4d83ebbd9aca189eabae9b37"}
Jan 21 11:22:41 crc kubenswrapper[4881]: I0121 11:22:41.781207 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
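Editor's note: the W-level manager.go:1169 line above looks alarming but appears to be a benign creation race: a cgroup watch event announces the new crio-... slice before the runtime can answer queries about the container, so the lookup gets a 404; the very next PLEG line shows the same container id tracked normally. A sketch of tolerating that race, with invented helper names:

package main

import (
	"errors"
	"fmt"
)

var errNotFound = errors.New("Status 404 returned error can't find the container")

// lookup stands in for asking the runtime about a container that a
// cgroup watch event just announced; the runtime may not know it yet.
func lookup(id string, known map[string]bool) error {
	if !known[id] {
		return errNotFound
	}
	return nil
}

// handleWatchEvent logs and moves on when the lookup races creation;
// a later relist picks the container up, as the log shows.
func handleWatchEvent(id string, known map[string]bool) {
	if err := lookup(id, known); err != nil {
		fmt.Printf("W: Failed to process watch event for %s: %v\n", id, err)
		return
	}
	fmt.Println("container tracked:", id)
}

func main() {
	known := map[string]bool{}
	handleWatchEvent("98b63a4387f7", known) // too early: 404
	known["98b63a4387f7"] = true
	handleWatchEvent("98b63a4387f7", known) // caught up on the next pass
}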
11:22:38.315015642 +0000 UTC m=+1545.574972111" lastFinishedPulling="2026-01-21 11:22:41.240638193 +0000 UTC m=+1548.500594652" observedRunningTime="2026-01-21 11:22:41.815247257 +0000 UTC m=+1549.075203736" watchObservedRunningTime="2026-01-21 11:22:41.821702997 +0000 UTC m=+1549.081659456" Jan 21 11:22:42 crc kubenswrapper[4881]: I0121 11:22:42.213153 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-f7mmp" Jan 21 11:22:42 crc kubenswrapper[4881]: I0121 11:22:42.371577 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/16c22e38-1b3d-44b8-9519-0769200d708b-config-data\") pod \"16c22e38-1b3d-44b8-9519-0769200d708b\" (UID: \"16c22e38-1b3d-44b8-9519-0769200d708b\") " Jan 21 11:22:42 crc kubenswrapper[4881]: I0121 11:22:42.371993 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/16c22e38-1b3d-44b8-9519-0769200d708b-scripts\") pod \"16c22e38-1b3d-44b8-9519-0769200d708b\" (UID: \"16c22e38-1b3d-44b8-9519-0769200d708b\") " Jan 21 11:22:42 crc kubenswrapper[4881]: I0121 11:22:42.372045 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vfw75\" (UniqueName: \"kubernetes.io/projected/16c22e38-1b3d-44b8-9519-0769200d708b-kube-api-access-vfw75\") pod \"16c22e38-1b3d-44b8-9519-0769200d708b\" (UID: \"16c22e38-1b3d-44b8-9519-0769200d708b\") " Jan 21 11:22:42 crc kubenswrapper[4881]: I0121 11:22:42.372076 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16c22e38-1b3d-44b8-9519-0769200d708b-combined-ca-bundle\") pod \"16c22e38-1b3d-44b8-9519-0769200d708b\" (UID: \"16c22e38-1b3d-44b8-9519-0769200d708b\") " Jan 21 11:22:42 crc kubenswrapper[4881]: I0121 11:22:42.377988 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16c22e38-1b3d-44b8-9519-0769200d708b-scripts" (OuterVolumeSpecName: "scripts") pod "16c22e38-1b3d-44b8-9519-0769200d708b" (UID: "16c22e38-1b3d-44b8-9519-0769200d708b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:22:42 crc kubenswrapper[4881]: I0121 11:22:42.378828 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16c22e38-1b3d-44b8-9519-0769200d708b-kube-api-access-vfw75" (OuterVolumeSpecName: "kube-api-access-vfw75") pod "16c22e38-1b3d-44b8-9519-0769200d708b" (UID: "16c22e38-1b3d-44b8-9519-0769200d708b"). InnerVolumeSpecName "kube-api-access-vfw75". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:22:42 crc kubenswrapper[4881]: I0121 11:22:42.405737 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16c22e38-1b3d-44b8-9519-0769200d708b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "16c22e38-1b3d-44b8-9519-0769200d708b" (UID: "16c22e38-1b3d-44b8-9519-0769200d708b"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:22:42 crc kubenswrapper[4881]: I0121 11:22:42.410660 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16c22e38-1b3d-44b8-9519-0769200d708b-config-data" (OuterVolumeSpecName: "config-data") pod "16c22e38-1b3d-44b8-9519-0769200d708b" (UID: "16c22e38-1b3d-44b8-9519-0769200d708b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:22:42 crc kubenswrapper[4881]: I0121 11:22:42.475148 4881 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/16c22e38-1b3d-44b8-9519-0769200d708b-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 11:22:42 crc kubenswrapper[4881]: I0121 11:22:42.475193 4881 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/16c22e38-1b3d-44b8-9519-0769200d708b-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 11:22:42 crc kubenswrapper[4881]: I0121 11:22:42.475208 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vfw75\" (UniqueName: \"kubernetes.io/projected/16c22e38-1b3d-44b8-9519-0769200d708b-kube-api-access-vfw75\") on node \"crc\" DevicePath \"\"" Jan 21 11:22:42 crc kubenswrapper[4881]: I0121 11:22:42.475222 4881 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16c22e38-1b3d-44b8-9519-0769200d708b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 11:22:42 crc kubenswrapper[4881]: I0121 11:22:42.794708 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-f7mmp" Jan 21 11:22:42 crc kubenswrapper[4881]: I0121 11:22:42.794768 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-f7mmp" event={"ID":"16c22e38-1b3d-44b8-9519-0769200d708b","Type":"ContainerDied","Data":"6a75d9ea9e41983b4baba3e71a4e5dcc957acdbd7dcf5242117832a4b32a615c"} Jan 21 11:22:42 crc kubenswrapper[4881]: I0121 11:22:42.794818 4881 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6a75d9ea9e41983b4baba3e71a4e5dcc957acdbd7dcf5242117832a4b32a615c" Jan 21 11:22:42 crc kubenswrapper[4881]: I0121 11:22:42.921365 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 21 11:22:42 crc kubenswrapper[4881]: E0121 11:22:42.922006 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16c22e38-1b3d-44b8-9519-0769200d708b" containerName="nova-cell0-conductor-db-sync" Jan 21 11:22:42 crc kubenswrapper[4881]: I0121 11:22:42.922030 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="16c22e38-1b3d-44b8-9519-0769200d708b" containerName="nova-cell0-conductor-db-sync" Jan 21 11:22:42 crc kubenswrapper[4881]: I0121 11:22:42.922283 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="16c22e38-1b3d-44b8-9519-0769200d708b" containerName="nova-cell0-conductor-db-sync" Jan 21 11:22:42 crc kubenswrapper[4881]: I0121 11:22:42.923299 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 21 11:22:42 crc kubenswrapper[4881]: I0121 11:22:42.928453 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 21 11:22:42 crc kubenswrapper[4881]: I0121 11:22:42.928543 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-fjj24" Jan 21 11:22:42 crc kubenswrapper[4881]: I0121 11:22:42.933996 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 21 11:22:43 crc kubenswrapper[4881]: I0121 11:22:43.089196 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5bm5d\" (UniqueName: \"kubernetes.io/projected/dc5fb029-b5fa-4065-adb2-af2e634785fc-kube-api-access-5bm5d\") pod \"nova-cell0-conductor-0\" (UID: \"dc5fb029-b5fa-4065-adb2-af2e634785fc\") " pod="openstack/nova-cell0-conductor-0" Jan 21 11:22:43 crc kubenswrapper[4881]: I0121 11:22:43.089586 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc5fb029-b5fa-4065-adb2-af2e634785fc-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"dc5fb029-b5fa-4065-adb2-af2e634785fc\") " pod="openstack/nova-cell0-conductor-0" Jan 21 11:22:43 crc kubenswrapper[4881]: I0121 11:22:43.089669 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc5fb029-b5fa-4065-adb2-af2e634785fc-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"dc5fb029-b5fa-4065-adb2-af2e634785fc\") " pod="openstack/nova-cell0-conductor-0" Jan 21 11:22:43 crc kubenswrapper[4881]: I0121 11:22:43.192771 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc5fb029-b5fa-4065-adb2-af2e634785fc-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"dc5fb029-b5fa-4065-adb2-af2e634785fc\") " pod="openstack/nova-cell0-conductor-0" Jan 21 11:22:43 crc kubenswrapper[4881]: I0121 11:22:43.193186 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc5fb029-b5fa-4065-adb2-af2e634785fc-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"dc5fb029-b5fa-4065-adb2-af2e634785fc\") " pod="openstack/nova-cell0-conductor-0" Jan 21 11:22:43 crc kubenswrapper[4881]: I0121 11:22:43.193431 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5bm5d\" (UniqueName: \"kubernetes.io/projected/dc5fb029-b5fa-4065-adb2-af2e634785fc-kube-api-access-5bm5d\") pod \"nova-cell0-conductor-0\" (UID: \"dc5fb029-b5fa-4065-adb2-af2e634785fc\") " pod="openstack/nova-cell0-conductor-0" Jan 21 11:22:43 crc kubenswrapper[4881]: I0121 11:22:43.196934 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc5fb029-b5fa-4065-adb2-af2e634785fc-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"dc5fb029-b5fa-4065-adb2-af2e634785fc\") " pod="openstack/nova-cell0-conductor-0" Jan 21 11:22:43 crc kubenswrapper[4881]: I0121 11:22:43.197152 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc5fb029-b5fa-4065-adb2-af2e634785fc-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" 
(UID: \"dc5fb029-b5fa-4065-adb2-af2e634785fc\") " pod="openstack/nova-cell0-conductor-0" Jan 21 11:22:43 crc kubenswrapper[4881]: I0121 11:22:43.210622 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5bm5d\" (UniqueName: \"kubernetes.io/projected/dc5fb029-b5fa-4065-adb2-af2e634785fc-kube-api-access-5bm5d\") pod \"nova-cell0-conductor-0\" (UID: \"dc5fb029-b5fa-4065-adb2-af2e634785fc\") " pod="openstack/nova-cell0-conductor-0" Jan 21 11:22:43 crc kubenswrapper[4881]: I0121 11:22:43.251515 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 21 11:22:44 crc kubenswrapper[4881]: I0121 11:22:44.031722 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 21 11:22:44 crc kubenswrapper[4881]: I0121 11:22:44.815434 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"dc5fb029-b5fa-4065-adb2-af2e634785fc","Type":"ContainerStarted","Data":"78e76eaf0f1c596c93d80443dc862532d9aec8c20fa4611433d0d4e887f066ae"} Jan 21 11:22:46 crc kubenswrapper[4881]: I0121 11:22:46.907200 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"dc5fb029-b5fa-4065-adb2-af2e634785fc","Type":"ContainerStarted","Data":"f1bfaecb54264853b5148d400e3526c63e010da6d27ad91e1985d00445cde11c"} Jan 21 11:22:46 crc kubenswrapper[4881]: I0121 11:22:46.908576 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Jan 21 11:22:46 crc kubenswrapper[4881]: I0121 11:22:46.934702 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=4.934683193 podStartE2EDuration="4.934683193s" podCreationTimestamp="2026-01-21 11:22:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 11:22:46.932600552 +0000 UTC m=+1554.192557021" watchObservedRunningTime="2026-01-21 11:22:46.934683193 +0000 UTC m=+1554.194639662" Jan 21 11:22:53 crc kubenswrapper[4881]: I0121 11:22:53.283675 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Jan 21 11:22:53 crc kubenswrapper[4881]: I0121 11:22:53.829977 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-qgqh7"] Jan 21 11:22:53 crc kubenswrapper[4881]: I0121 11:22:53.831438 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-qgqh7" Jan 21 11:22:53 crc kubenswrapper[4881]: I0121 11:22:53.833550 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Jan 21 11:22:53 crc kubenswrapper[4881]: I0121 11:22:53.834695 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Jan 21 11:22:53 crc kubenswrapper[4881]: I0121 11:22:53.855117 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ba4ee35-5e5b-4f3c-ab64-e1dbd6b494ad-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-qgqh7\" (UID: \"9ba4ee35-5e5b-4f3c-ab64-e1dbd6b494ad\") " pod="openstack/nova-cell0-cell-mapping-qgqh7" Jan 21 11:22:53 crc kubenswrapper[4881]: I0121 11:22:53.855206 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9ba4ee35-5e5b-4f3c-ab64-e1dbd6b494ad-config-data\") pod \"nova-cell0-cell-mapping-qgqh7\" (UID: \"9ba4ee35-5e5b-4f3c-ab64-e1dbd6b494ad\") " pod="openstack/nova-cell0-cell-mapping-qgqh7" Jan 21 11:22:53 crc kubenswrapper[4881]: I0121 11:22:53.855288 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5c6l8\" (UniqueName: \"kubernetes.io/projected/9ba4ee35-5e5b-4f3c-ab64-e1dbd6b494ad-kube-api-access-5c6l8\") pod \"nova-cell0-cell-mapping-qgqh7\" (UID: \"9ba4ee35-5e5b-4f3c-ab64-e1dbd6b494ad\") " pod="openstack/nova-cell0-cell-mapping-qgqh7" Jan 21 11:22:53 crc kubenswrapper[4881]: I0121 11:22:53.855358 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9ba4ee35-5e5b-4f3c-ab64-e1dbd6b494ad-scripts\") pod \"nova-cell0-cell-mapping-qgqh7\" (UID: \"9ba4ee35-5e5b-4f3c-ab64-e1dbd6b494ad\") " pod="openstack/nova-cell0-cell-mapping-qgqh7" Jan 21 11:22:53 crc kubenswrapper[4881]: I0121 11:22:53.856998 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-qgqh7"] Jan 21 11:22:53 crc kubenswrapper[4881]: I0121 11:22:53.956206 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5c6l8\" (UniqueName: \"kubernetes.io/projected/9ba4ee35-5e5b-4f3c-ab64-e1dbd6b494ad-kube-api-access-5c6l8\") pod \"nova-cell0-cell-mapping-qgqh7\" (UID: \"9ba4ee35-5e5b-4f3c-ab64-e1dbd6b494ad\") " pod="openstack/nova-cell0-cell-mapping-qgqh7" Jan 21 11:22:53 crc kubenswrapper[4881]: I0121 11:22:53.956492 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9ba4ee35-5e5b-4f3c-ab64-e1dbd6b494ad-scripts\") pod \"nova-cell0-cell-mapping-qgqh7\" (UID: \"9ba4ee35-5e5b-4f3c-ab64-e1dbd6b494ad\") " pod="openstack/nova-cell0-cell-mapping-qgqh7" Jan 21 11:22:53 crc kubenswrapper[4881]: I0121 11:22:53.956566 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ba4ee35-5e5b-4f3c-ab64-e1dbd6b494ad-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-qgqh7\" (UID: \"9ba4ee35-5e5b-4f3c-ab64-e1dbd6b494ad\") " pod="openstack/nova-cell0-cell-mapping-qgqh7" Jan 21 11:22:53 crc kubenswrapper[4881]: I0121 11:22:53.956625 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/9ba4ee35-5e5b-4f3c-ab64-e1dbd6b494ad-config-data\") pod \"nova-cell0-cell-mapping-qgqh7\" (UID: \"9ba4ee35-5e5b-4f3c-ab64-e1dbd6b494ad\") " pod="openstack/nova-cell0-cell-mapping-qgqh7" Jan 21 11:22:53 crc kubenswrapper[4881]: I0121 11:22:53.962680 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9ba4ee35-5e5b-4f3c-ab64-e1dbd6b494ad-config-data\") pod \"nova-cell0-cell-mapping-qgqh7\" (UID: \"9ba4ee35-5e5b-4f3c-ab64-e1dbd6b494ad\") " pod="openstack/nova-cell0-cell-mapping-qgqh7" Jan 21 11:22:53 crc kubenswrapper[4881]: I0121 11:22:53.963471 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ba4ee35-5e5b-4f3c-ab64-e1dbd6b494ad-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-qgqh7\" (UID: \"9ba4ee35-5e5b-4f3c-ab64-e1dbd6b494ad\") " pod="openstack/nova-cell0-cell-mapping-qgqh7" Jan 21 11:22:53 crc kubenswrapper[4881]: I0121 11:22:53.966109 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9ba4ee35-5e5b-4f3c-ab64-e1dbd6b494ad-scripts\") pod \"nova-cell0-cell-mapping-qgqh7\" (UID: \"9ba4ee35-5e5b-4f3c-ab64-e1dbd6b494ad\") " pod="openstack/nova-cell0-cell-mapping-qgqh7" Jan 21 11:22:53 crc kubenswrapper[4881]: I0121 11:22:53.987432 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5c6l8\" (UniqueName: \"kubernetes.io/projected/9ba4ee35-5e5b-4f3c-ab64-e1dbd6b494ad-kube-api-access-5c6l8\") pod \"nova-cell0-cell-mapping-qgqh7\" (UID: \"9ba4ee35-5e5b-4f3c-ab64-e1dbd6b494ad\") " pod="openstack/nova-cell0-cell-mapping-qgqh7" Jan 21 11:22:54 crc kubenswrapper[4881]: I0121 11:22:54.038219 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 21 11:22:54 crc kubenswrapper[4881]: I0121 11:22:54.040669 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 21 11:22:54 crc kubenswrapper[4881]: I0121 11:22:54.042948 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 21 11:22:54 crc kubenswrapper[4881]: I0121 11:22:54.058602 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f39b23f8-2c7e-46d6-8e59-7980b1d2c27c-config-data\") pod \"nova-api-0\" (UID: \"f39b23f8-2c7e-46d6-8e59-7980b1d2c27c\") " pod="openstack/nova-api-0" Jan 21 11:22:54 crc kubenswrapper[4881]: I0121 11:22:54.058641 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2lqtx\" (UniqueName: \"kubernetes.io/projected/f39b23f8-2c7e-46d6-8e59-7980b1d2c27c-kube-api-access-2lqtx\") pod \"nova-api-0\" (UID: \"f39b23f8-2c7e-46d6-8e59-7980b1d2c27c\") " pod="openstack/nova-api-0" Jan 21 11:22:54 crc kubenswrapper[4881]: I0121 11:22:54.058667 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f39b23f8-2c7e-46d6-8e59-7980b1d2c27c-logs\") pod \"nova-api-0\" (UID: \"f39b23f8-2c7e-46d6-8e59-7980b1d2c27c\") " pod="openstack/nova-api-0" Jan 21 11:22:54 crc kubenswrapper[4881]: I0121 11:22:54.058802 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f39b23f8-2c7e-46d6-8e59-7980b1d2c27c-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"f39b23f8-2c7e-46d6-8e59-7980b1d2c27c\") " pod="openstack/nova-api-0" Jan 21 11:22:54 crc kubenswrapper[4881]: I0121 11:22:54.075888 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 21 11:22:54 crc kubenswrapper[4881]: I0121 11:22:54.092397 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 21 11:22:54 crc kubenswrapper[4881]: I0121 11:22:54.094435 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 21 11:22:54 crc kubenswrapper[4881]: I0121 11:22:54.098158 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 21 11:22:54 crc kubenswrapper[4881]: I0121 11:22:54.134690 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 21 11:22:54 crc kubenswrapper[4881]: I0121 11:22:54.136590 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 21 11:22:54 crc kubenswrapper[4881]: I0121 11:22:54.146328 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Jan 21 11:22:54 crc kubenswrapper[4881]: I0121 11:22:54.152620 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-qgqh7" Jan 21 11:22:54 crc kubenswrapper[4881]: I0121 11:22:54.161069 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 21 11:22:54 crc kubenswrapper[4881]: I0121 11:22:54.165212 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f39b23f8-2c7e-46d6-8e59-7980b1d2c27c-config-data\") pod \"nova-api-0\" (UID: \"f39b23f8-2c7e-46d6-8e59-7980b1d2c27c\") " pod="openstack/nova-api-0" Jan 21 11:22:54 crc kubenswrapper[4881]: I0121 11:22:54.165269 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2lqtx\" (UniqueName: \"kubernetes.io/projected/f39b23f8-2c7e-46d6-8e59-7980b1d2c27c-kube-api-access-2lqtx\") pod \"nova-api-0\" (UID: \"f39b23f8-2c7e-46d6-8e59-7980b1d2c27c\") " pod="openstack/nova-api-0" Jan 21 11:22:54 crc kubenswrapper[4881]: I0121 11:22:54.165337 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f39b23f8-2c7e-46d6-8e59-7980b1d2c27c-logs\") pod \"nova-api-0\" (UID: \"f39b23f8-2c7e-46d6-8e59-7980b1d2c27c\") " pod="openstack/nova-api-0" Jan 21 11:22:54 crc kubenswrapper[4881]: I0121 11:22:54.165425 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f39b23f8-2c7e-46d6-8e59-7980b1d2c27c-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"f39b23f8-2c7e-46d6-8e59-7980b1d2c27c\") " pod="openstack/nova-api-0" Jan 21 11:22:54 crc kubenswrapper[4881]: I0121 11:22:54.166040 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f39b23f8-2c7e-46d6-8e59-7980b1d2c27c-logs\") pod \"nova-api-0\" (UID: \"f39b23f8-2c7e-46d6-8e59-7980b1d2c27c\") " pod="openstack/nova-api-0" Jan 21 11:22:54 crc kubenswrapper[4881]: I0121 11:22:54.175008 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 21 11:22:54 crc kubenswrapper[4881]: I0121 11:22:54.188510 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f39b23f8-2c7e-46d6-8e59-7980b1d2c27c-config-data\") pod \"nova-api-0\" (UID: \"f39b23f8-2c7e-46d6-8e59-7980b1d2c27c\") " pod="openstack/nova-api-0" Jan 21 11:22:54 crc kubenswrapper[4881]: I0121 11:22:54.203371 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2lqtx\" (UniqueName: \"kubernetes.io/projected/f39b23f8-2c7e-46d6-8e59-7980b1d2c27c-kube-api-access-2lqtx\") pod \"nova-api-0\" (UID: \"f39b23f8-2c7e-46d6-8e59-7980b1d2c27c\") " pod="openstack/nova-api-0" Jan 21 11:22:54 crc kubenswrapper[4881]: I0121 11:22:54.204645 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f39b23f8-2c7e-46d6-8e59-7980b1d2c27c-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"f39b23f8-2c7e-46d6-8e59-7980b1d2c27c\") " pod="openstack/nova-api-0" Jan 21 11:22:54 crc kubenswrapper[4881]: I0121 11:22:54.267480 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3345073b-8907-4de9-829f-73d8e79a01bb-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"3345073b-8907-4de9-829f-73d8e79a01bb\") " pod="openstack/nova-scheduler-0" Jan 21 
11:22:54 crc kubenswrapper[4881]: I0121 11:22:54.267524 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50ff1a29-d6ee-4911-bb22-165aca6d8605-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"50ff1a29-d6ee-4911-bb22-165aca6d8605\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 11:22:54 crc kubenswrapper[4881]: I0121 11:22:54.267556 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xz64s\" (UniqueName: \"kubernetes.io/projected/50ff1a29-d6ee-4911-bb22-165aca6d8605-kube-api-access-xz64s\") pod \"nova-cell1-novncproxy-0\" (UID: \"50ff1a29-d6ee-4911-bb22-165aca6d8605\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 11:22:54 crc kubenswrapper[4881]: I0121 11:22:54.267587 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/50ff1a29-d6ee-4911-bb22-165aca6d8605-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"50ff1a29-d6ee-4911-bb22-165aca6d8605\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 11:22:54 crc kubenswrapper[4881]: I0121 11:22:54.268470 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3345073b-8907-4de9-829f-73d8e79a01bb-config-data\") pod \"nova-scheduler-0\" (UID: \"3345073b-8907-4de9-829f-73d8e79a01bb\") " pod="openstack/nova-scheduler-0" Jan 21 11:22:54 crc kubenswrapper[4881]: I0121 11:22:54.268578 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5vb4w\" (UniqueName: \"kubernetes.io/projected/3345073b-8907-4de9-829f-73d8e79a01bb-kube-api-access-5vb4w\") pod \"nova-scheduler-0\" (UID: \"3345073b-8907-4de9-829f-73d8e79a01bb\") " pod="openstack/nova-scheduler-0" Jan 21 11:22:54 crc kubenswrapper[4881]: I0121 11:22:54.370347 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xz64s\" (UniqueName: \"kubernetes.io/projected/50ff1a29-d6ee-4911-bb22-165aca6d8605-kube-api-access-xz64s\") pod \"nova-cell1-novncproxy-0\" (UID: \"50ff1a29-d6ee-4911-bb22-165aca6d8605\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 11:22:54 crc kubenswrapper[4881]: I0121 11:22:54.370701 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/50ff1a29-d6ee-4911-bb22-165aca6d8605-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"50ff1a29-d6ee-4911-bb22-165aca6d8605\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 11:22:54 crc kubenswrapper[4881]: I0121 11:22:54.370893 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3345073b-8907-4de9-829f-73d8e79a01bb-config-data\") pod \"nova-scheduler-0\" (UID: \"3345073b-8907-4de9-829f-73d8e79a01bb\") " pod="openstack/nova-scheduler-0" Jan 21 11:22:54 crc kubenswrapper[4881]: I0121 11:22:54.370925 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5vb4w\" (UniqueName: \"kubernetes.io/projected/3345073b-8907-4de9-829f-73d8e79a01bb-kube-api-access-5vb4w\") pod \"nova-scheduler-0\" (UID: \"3345073b-8907-4de9-829f-73d8e79a01bb\") " pod="openstack/nova-scheduler-0" Jan 21 11:22:54 crc kubenswrapper[4881]: I0121 11:22:54.371017 4881 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3345073b-8907-4de9-829f-73d8e79a01bb-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"3345073b-8907-4de9-829f-73d8e79a01bb\") " pod="openstack/nova-scheduler-0" Jan 21 11:22:54 crc kubenswrapper[4881]: I0121 11:22:54.371047 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50ff1a29-d6ee-4911-bb22-165aca6d8605-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"50ff1a29-d6ee-4911-bb22-165aca6d8605\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 11:22:54 crc kubenswrapper[4881]: I0121 11:22:54.374632 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 21 11:22:54 crc kubenswrapper[4881]: I0121 11:22:54.380205 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 21 11:22:54 crc kubenswrapper[4881]: I0121 11:22:54.382219 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/50ff1a29-d6ee-4911-bb22-165aca6d8605-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"50ff1a29-d6ee-4911-bb22-165aca6d8605\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 11:22:54 crc kubenswrapper[4881]: I0121 11:22:54.402388 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 21 11:22:54 crc kubenswrapper[4881]: I0121 11:22:54.406091 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3345073b-8907-4de9-829f-73d8e79a01bb-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"3345073b-8907-4de9-829f-73d8e79a01bb\") " pod="openstack/nova-scheduler-0" Jan 21 11:22:54 crc kubenswrapper[4881]: I0121 11:22:54.419858 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 21 11:22:54 crc kubenswrapper[4881]: I0121 11:22:54.425936 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3345073b-8907-4de9-829f-73d8e79a01bb-config-data\") pod \"nova-scheduler-0\" (UID: \"3345073b-8907-4de9-829f-73d8e79a01bb\") " pod="openstack/nova-scheduler-0" Jan 21 11:22:54 crc kubenswrapper[4881]: I0121 11:22:54.464150 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xz64s\" (UniqueName: \"kubernetes.io/projected/50ff1a29-d6ee-4911-bb22-165aca6d8605-kube-api-access-xz64s\") pod \"nova-cell1-novncproxy-0\" (UID: \"50ff1a29-d6ee-4911-bb22-165aca6d8605\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 11:22:54 crc kubenswrapper[4881]: I0121 11:22:54.464654 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50ff1a29-d6ee-4911-bb22-165aca6d8605-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"50ff1a29-d6ee-4911-bb22-165aca6d8605\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 11:22:54 crc kubenswrapper[4881]: I0121 11:22:54.465265 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5vb4w\" (UniqueName: \"kubernetes.io/projected/3345073b-8907-4de9-829f-73d8e79a01bb-kube-api-access-5vb4w\") pod \"nova-scheduler-0\" (UID: \"3345073b-8907-4de9-829f-73d8e79a01bb\") " pod="openstack/nova-scheduler-0" Jan 21 11:22:54 crc 
kubenswrapper[4881]: I0121 11:22:54.465966 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 21 11:22:54 crc kubenswrapper[4881]: I0121 11:22:54.467024 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 21 11:22:54 crc kubenswrapper[4881]: I0121 11:22:54.487328 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d3527c16-7547-4e37-bcda-452193c45fee-logs\") pod \"nova-metadata-0\" (UID: \"d3527c16-7547-4e37-bcda-452193c45fee\") " pod="openstack/nova-metadata-0" Jan 21 11:22:54 crc kubenswrapper[4881]: I0121 11:22:54.487446 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3527c16-7547-4e37-bcda-452193c45fee-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"d3527c16-7547-4e37-bcda-452193c45fee\") " pod="openstack/nova-metadata-0" Jan 21 11:22:54 crc kubenswrapper[4881]: I0121 11:22:54.487627 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3527c16-7547-4e37-bcda-452193c45fee-config-data\") pod \"nova-metadata-0\" (UID: \"d3527c16-7547-4e37-bcda-452193c45fee\") " pod="openstack/nova-metadata-0" Jan 21 11:22:54 crc kubenswrapper[4881]: I0121 11:22:54.487844 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-st84x\" (UniqueName: \"kubernetes.io/projected/d3527c16-7547-4e37-bcda-452193c45fee-kube-api-access-st84x\") pod \"nova-metadata-0\" (UID: \"d3527c16-7547-4e37-bcda-452193c45fee\") " pod="openstack/nova-metadata-0" Jan 21 11:22:54 crc kubenswrapper[4881]: I0121 11:22:54.578435 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-9f55bccdc-ghvhg"] Jan 21 11:22:54 crc kubenswrapper[4881]: I0121 11:22:54.580893 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-9f55bccdc-ghvhg" Jan 21 11:22:54 crc kubenswrapper[4881]: I0121 11:22:54.592511 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d3527c16-7547-4e37-bcda-452193c45fee-logs\") pod \"nova-metadata-0\" (UID: \"d3527c16-7547-4e37-bcda-452193c45fee\") " pod="openstack/nova-metadata-0" Jan 21 11:22:54 crc kubenswrapper[4881]: I0121 11:22:54.592596 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3527c16-7547-4e37-bcda-452193c45fee-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"d3527c16-7547-4e37-bcda-452193c45fee\") " pod="openstack/nova-metadata-0" Jan 21 11:22:54 crc kubenswrapper[4881]: I0121 11:22:54.592673 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3527c16-7547-4e37-bcda-452193c45fee-config-data\") pod \"nova-metadata-0\" (UID: \"d3527c16-7547-4e37-bcda-452193c45fee\") " pod="openstack/nova-metadata-0" Jan 21 11:22:54 crc kubenswrapper[4881]: I0121 11:22:54.592772 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-st84x\" (UniqueName: \"kubernetes.io/projected/d3527c16-7547-4e37-bcda-452193c45fee-kube-api-access-st84x\") pod \"nova-metadata-0\" (UID: \"d3527c16-7547-4e37-bcda-452193c45fee\") " pod="openstack/nova-metadata-0" Jan 21 11:22:54 crc kubenswrapper[4881]: I0121 11:22:54.593664 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d3527c16-7547-4e37-bcda-452193c45fee-logs\") pod \"nova-metadata-0\" (UID: \"d3527c16-7547-4e37-bcda-452193c45fee\") " pod="openstack/nova-metadata-0" Jan 21 11:22:54 crc kubenswrapper[4881]: I0121 11:22:54.598640 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 21 11:22:54 crc kubenswrapper[4881]: I0121 11:22:54.599626 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-9f55bccdc-ghvhg"] Jan 21 11:22:54 crc kubenswrapper[4881]: I0121 11:22:54.601510 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3527c16-7547-4e37-bcda-452193c45fee-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"d3527c16-7547-4e37-bcda-452193c45fee\") " pod="openstack/nova-metadata-0" Jan 21 11:22:54 crc kubenswrapper[4881]: I0121 11:22:54.601740 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3527c16-7547-4e37-bcda-452193c45fee-config-data\") pod \"nova-metadata-0\" (UID: \"d3527c16-7547-4e37-bcda-452193c45fee\") " pod="openstack/nova-metadata-0" Jan 21 11:22:54 crc kubenswrapper[4881]: I0121 11:22:54.614631 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-st84x\" (UniqueName: \"kubernetes.io/projected/d3527c16-7547-4e37-bcda-452193c45fee-kube-api-access-st84x\") pod \"nova-metadata-0\" (UID: \"d3527c16-7547-4e37-bcda-452193c45fee\") " pod="openstack/nova-metadata-0" Jan 21 11:22:54 crc kubenswrapper[4881]: I0121 11:22:54.695164 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/859758f9-0dc2-4397-a75a-b098eaabe613-config\") pod \"dnsmasq-dns-9f55bccdc-ghvhg\" (UID: \"859758f9-0dc2-4397-a75a-b098eaabe613\") " pod="openstack/dnsmasq-dns-9f55bccdc-ghvhg" Jan 21 11:22:54 crc kubenswrapper[4881]: I0121 11:22:54.695483 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/859758f9-0dc2-4397-a75a-b098eaabe613-ovsdbserver-nb\") pod \"dnsmasq-dns-9f55bccdc-ghvhg\" (UID: \"859758f9-0dc2-4397-a75a-b098eaabe613\") " pod="openstack/dnsmasq-dns-9f55bccdc-ghvhg" Jan 21 11:22:54 crc kubenswrapper[4881]: I0121 11:22:54.695508 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/859758f9-0dc2-4397-a75a-b098eaabe613-ovsdbserver-sb\") pod \"dnsmasq-dns-9f55bccdc-ghvhg\" (UID: \"859758f9-0dc2-4397-a75a-b098eaabe613\") " pod="openstack/dnsmasq-dns-9f55bccdc-ghvhg" Jan 21 11:22:54 crc kubenswrapper[4881]: I0121 11:22:54.695534 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/859758f9-0dc2-4397-a75a-b098eaabe613-dns-swift-storage-0\") pod \"dnsmasq-dns-9f55bccdc-ghvhg\" (UID: \"859758f9-0dc2-4397-a75a-b098eaabe613\") " pod="openstack/dnsmasq-dns-9f55bccdc-ghvhg" Jan 21 11:22:54 crc kubenswrapper[4881]: I0121 11:22:54.695566 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/859758f9-0dc2-4397-a75a-b098eaabe613-dns-svc\") pod \"dnsmasq-dns-9f55bccdc-ghvhg\" (UID: \"859758f9-0dc2-4397-a75a-b098eaabe613\") " pod="openstack/dnsmasq-dns-9f55bccdc-ghvhg" Jan 21 11:22:54 crc kubenswrapper[4881]: I0121 11:22:54.697669 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-prhq6\" (UniqueName: 
\"kubernetes.io/projected/859758f9-0dc2-4397-a75a-b098eaabe613-kube-api-access-prhq6\") pod \"dnsmasq-dns-9f55bccdc-ghvhg\" (UID: \"859758f9-0dc2-4397-a75a-b098eaabe613\") " pod="openstack/dnsmasq-dns-9f55bccdc-ghvhg" Jan 21 11:22:54 crc kubenswrapper[4881]: I0121 11:22:54.805479 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-prhq6\" (UniqueName: \"kubernetes.io/projected/859758f9-0dc2-4397-a75a-b098eaabe613-kube-api-access-prhq6\") pod \"dnsmasq-dns-9f55bccdc-ghvhg\" (UID: \"859758f9-0dc2-4397-a75a-b098eaabe613\") " pod="openstack/dnsmasq-dns-9f55bccdc-ghvhg" Jan 21 11:22:54 crc kubenswrapper[4881]: I0121 11:22:54.805557 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/859758f9-0dc2-4397-a75a-b098eaabe613-config\") pod \"dnsmasq-dns-9f55bccdc-ghvhg\" (UID: \"859758f9-0dc2-4397-a75a-b098eaabe613\") " pod="openstack/dnsmasq-dns-9f55bccdc-ghvhg" Jan 21 11:22:54 crc kubenswrapper[4881]: I0121 11:22:54.805594 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/859758f9-0dc2-4397-a75a-b098eaabe613-ovsdbserver-nb\") pod \"dnsmasq-dns-9f55bccdc-ghvhg\" (UID: \"859758f9-0dc2-4397-a75a-b098eaabe613\") " pod="openstack/dnsmasq-dns-9f55bccdc-ghvhg" Jan 21 11:22:54 crc kubenswrapper[4881]: I0121 11:22:54.805628 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/859758f9-0dc2-4397-a75a-b098eaabe613-ovsdbserver-sb\") pod \"dnsmasq-dns-9f55bccdc-ghvhg\" (UID: \"859758f9-0dc2-4397-a75a-b098eaabe613\") " pod="openstack/dnsmasq-dns-9f55bccdc-ghvhg" Jan 21 11:22:54 crc kubenswrapper[4881]: I0121 11:22:54.805677 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/859758f9-0dc2-4397-a75a-b098eaabe613-dns-swift-storage-0\") pod \"dnsmasq-dns-9f55bccdc-ghvhg\" (UID: \"859758f9-0dc2-4397-a75a-b098eaabe613\") " pod="openstack/dnsmasq-dns-9f55bccdc-ghvhg" Jan 21 11:22:54 crc kubenswrapper[4881]: I0121 11:22:54.805708 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/859758f9-0dc2-4397-a75a-b098eaabe613-dns-svc\") pod \"dnsmasq-dns-9f55bccdc-ghvhg\" (UID: \"859758f9-0dc2-4397-a75a-b098eaabe613\") " pod="openstack/dnsmasq-dns-9f55bccdc-ghvhg" Jan 21 11:22:54 crc kubenswrapper[4881]: I0121 11:22:54.807127 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/859758f9-0dc2-4397-a75a-b098eaabe613-dns-svc\") pod \"dnsmasq-dns-9f55bccdc-ghvhg\" (UID: \"859758f9-0dc2-4397-a75a-b098eaabe613\") " pod="openstack/dnsmasq-dns-9f55bccdc-ghvhg" Jan 21 11:22:54 crc kubenswrapper[4881]: I0121 11:22:54.807915 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/859758f9-0dc2-4397-a75a-b098eaabe613-config\") pod \"dnsmasq-dns-9f55bccdc-ghvhg\" (UID: \"859758f9-0dc2-4397-a75a-b098eaabe613\") " pod="openstack/dnsmasq-dns-9f55bccdc-ghvhg" Jan 21 11:22:54 crc kubenswrapper[4881]: I0121 11:22:54.808411 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/859758f9-0dc2-4397-a75a-b098eaabe613-ovsdbserver-nb\") pod \"dnsmasq-dns-9f55bccdc-ghvhg\" (UID: 
\"859758f9-0dc2-4397-a75a-b098eaabe613\") " pod="openstack/dnsmasq-dns-9f55bccdc-ghvhg" Jan 21 11:22:54 crc kubenswrapper[4881]: I0121 11:22:54.808932 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/859758f9-0dc2-4397-a75a-b098eaabe613-ovsdbserver-sb\") pod \"dnsmasq-dns-9f55bccdc-ghvhg\" (UID: \"859758f9-0dc2-4397-a75a-b098eaabe613\") " pod="openstack/dnsmasq-dns-9f55bccdc-ghvhg" Jan 21 11:22:54 crc kubenswrapper[4881]: I0121 11:22:54.810569 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/859758f9-0dc2-4397-a75a-b098eaabe613-dns-swift-storage-0\") pod \"dnsmasq-dns-9f55bccdc-ghvhg\" (UID: \"859758f9-0dc2-4397-a75a-b098eaabe613\") " pod="openstack/dnsmasq-dns-9f55bccdc-ghvhg" Jan 21 11:22:54 crc kubenswrapper[4881]: I0121 11:22:54.827604 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 21 11:22:54 crc kubenswrapper[4881]: I0121 11:22:54.838529 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-prhq6\" (UniqueName: \"kubernetes.io/projected/859758f9-0dc2-4397-a75a-b098eaabe613-kube-api-access-prhq6\") pod \"dnsmasq-dns-9f55bccdc-ghvhg\" (UID: \"859758f9-0dc2-4397-a75a-b098eaabe613\") " pod="openstack/dnsmasq-dns-9f55bccdc-ghvhg" Jan 21 11:22:54 crc kubenswrapper[4881]: I0121 11:22:54.915244 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-9f55bccdc-ghvhg" Jan 21 11:22:54 crc kubenswrapper[4881]: I0121 11:22:54.932362 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-qgqh7"] Jan 21 11:22:55 crc kubenswrapper[4881]: I0121 11:22:55.043546 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-qgqh7" event={"ID":"9ba4ee35-5e5b-4f3c-ab64-e1dbd6b494ad","Type":"ContainerStarted","Data":"c99b0af6c38bc6fdca36563516e7441a2da5b379535ed6ab05553b2802c64c82"} Jan 21 11:22:55 crc kubenswrapper[4881]: I0121 11:22:55.464194 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-sf7xj"] Jan 21 11:22:55 crc kubenswrapper[4881]: I0121 11:22:55.467158 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-sf7xj" Jan 21 11:22:55 crc kubenswrapper[4881]: I0121 11:22:55.478381 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 21 11:22:55 crc kubenswrapper[4881]: I0121 11:22:55.478539 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Jan 21 11:22:55 crc kubenswrapper[4881]: I0121 11:22:55.504311 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-sf7xj"] Jan 21 11:22:55 crc kubenswrapper[4881]: I0121 11:22:55.530768 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/813d73da-18da-40fa-b949-bbeec6604ac9-scripts\") pod \"nova-cell1-conductor-db-sync-sf7xj\" (UID: \"813d73da-18da-40fa-b949-bbeec6604ac9\") " pod="openstack/nova-cell1-conductor-db-sync-sf7xj" Jan 21 11:22:55 crc kubenswrapper[4881]: I0121 11:22:55.530837 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xtwfp\" (UniqueName: \"kubernetes.io/projected/813d73da-18da-40fa-b949-bbeec6604ac9-kube-api-access-xtwfp\") pod \"nova-cell1-conductor-db-sync-sf7xj\" (UID: \"813d73da-18da-40fa-b949-bbeec6604ac9\") " pod="openstack/nova-cell1-conductor-db-sync-sf7xj" Jan 21 11:22:55 crc kubenswrapper[4881]: I0121 11:22:55.530860 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/813d73da-18da-40fa-b949-bbeec6604ac9-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-sf7xj\" (UID: \"813d73da-18da-40fa-b949-bbeec6604ac9\") " pod="openstack/nova-cell1-conductor-db-sync-sf7xj" Jan 21 11:22:55 crc kubenswrapper[4881]: I0121 11:22:55.530955 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/813d73da-18da-40fa-b949-bbeec6604ac9-config-data\") pod \"nova-cell1-conductor-db-sync-sf7xj\" (UID: \"813d73da-18da-40fa-b949-bbeec6604ac9\") " pod="openstack/nova-cell1-conductor-db-sync-sf7xj" Jan 21 11:22:55 crc kubenswrapper[4881]: I0121 11:22:55.593964 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 21 11:22:55 crc kubenswrapper[4881]: I0121 11:22:55.634317 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/813d73da-18da-40fa-b949-bbeec6604ac9-config-data\") pod \"nova-cell1-conductor-db-sync-sf7xj\" (UID: \"813d73da-18da-40fa-b949-bbeec6604ac9\") " pod="openstack/nova-cell1-conductor-db-sync-sf7xj" Jan 21 11:22:55 crc kubenswrapper[4881]: I0121 11:22:55.635005 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/813d73da-18da-40fa-b949-bbeec6604ac9-scripts\") pod \"nova-cell1-conductor-db-sync-sf7xj\" (UID: \"813d73da-18da-40fa-b949-bbeec6604ac9\") " pod="openstack/nova-cell1-conductor-db-sync-sf7xj" Jan 21 11:22:55 crc kubenswrapper[4881]: I0121 11:22:55.635076 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xtwfp\" (UniqueName: \"kubernetes.io/projected/813d73da-18da-40fa-b949-bbeec6604ac9-kube-api-access-xtwfp\") pod \"nova-cell1-conductor-db-sync-sf7xj\" (UID: 
\"813d73da-18da-40fa-b949-bbeec6604ac9\") " pod="openstack/nova-cell1-conductor-db-sync-sf7xj" Jan 21 11:22:55 crc kubenswrapper[4881]: I0121 11:22:55.635133 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/813d73da-18da-40fa-b949-bbeec6604ac9-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-sf7xj\" (UID: \"813d73da-18da-40fa-b949-bbeec6604ac9\") " pod="openstack/nova-cell1-conductor-db-sync-sf7xj" Jan 21 11:22:55 crc kubenswrapper[4881]: I0121 11:22:55.641777 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/813d73da-18da-40fa-b949-bbeec6604ac9-config-data\") pod \"nova-cell1-conductor-db-sync-sf7xj\" (UID: \"813d73da-18da-40fa-b949-bbeec6604ac9\") " pod="openstack/nova-cell1-conductor-db-sync-sf7xj" Jan 21 11:22:55 crc kubenswrapper[4881]: I0121 11:22:55.642092 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/813d73da-18da-40fa-b949-bbeec6604ac9-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-sf7xj\" (UID: \"813d73da-18da-40fa-b949-bbeec6604ac9\") " pod="openstack/nova-cell1-conductor-db-sync-sf7xj" Jan 21 11:22:55 crc kubenswrapper[4881]: I0121 11:22:55.643366 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/813d73da-18da-40fa-b949-bbeec6604ac9-scripts\") pod \"nova-cell1-conductor-db-sync-sf7xj\" (UID: \"813d73da-18da-40fa-b949-bbeec6604ac9\") " pod="openstack/nova-cell1-conductor-db-sync-sf7xj" Jan 21 11:22:55 crc kubenswrapper[4881]: I0121 11:22:55.655457 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xtwfp\" (UniqueName: \"kubernetes.io/projected/813d73da-18da-40fa-b949-bbeec6604ac9-kube-api-access-xtwfp\") pod \"nova-cell1-conductor-db-sync-sf7xj\" (UID: \"813d73da-18da-40fa-b949-bbeec6604ac9\") " pod="openstack/nova-cell1-conductor-db-sync-sf7xj" Jan 21 11:22:55 crc kubenswrapper[4881]: I0121 11:22:55.724335 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 21 11:22:55 crc kubenswrapper[4881]: W0121 11:22:55.726930 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf39b23f8_2c7e_46d6_8e59_7980b1d2c27c.slice/crio-a5806f41aee852119e408747b6a9159dc66b4ea14896033d8861a45a5e319518 WatchSource:0}: Error finding container a5806f41aee852119e408747b6a9159dc66b4ea14896033d8861a45a5e319518: Status 404 returned error can't find the container with id a5806f41aee852119e408747b6a9159dc66b4ea14896033d8861a45a5e319518 Jan 21 11:22:55 crc kubenswrapper[4881]: I0121 11:22:55.738449 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 21 11:22:55 crc kubenswrapper[4881]: I0121 11:22:55.804577 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-sf7xj" Jan 21 11:22:55 crc kubenswrapper[4881]: I0121 11:22:55.944034 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 21 11:22:56 crc kubenswrapper[4881]: I0121 11:22:56.116252 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d3527c16-7547-4e37-bcda-452193c45fee","Type":"ContainerStarted","Data":"19360c29690f2d877803b5397f0b64081dcdd4e4fc63374ceab9aad4daa3f1c3"} Jan 21 11:22:56 crc kubenswrapper[4881]: I0121 11:22:56.131773 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f39b23f8-2c7e-46d6-8e59-7980b1d2c27c","Type":"ContainerStarted","Data":"a5806f41aee852119e408747b6a9159dc66b4ea14896033d8861a45a5e319518"} Jan 21 11:22:56 crc kubenswrapper[4881]: I0121 11:22:56.140828 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-9f55bccdc-ghvhg"] Jan 21 11:22:56 crc kubenswrapper[4881]: I0121 11:22:56.150264 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"50ff1a29-d6ee-4911-bb22-165aca6d8605","Type":"ContainerStarted","Data":"6aaf4e142828aa790e377df87440347084937144bb74fce4d8edde8de8915f28"} Jan 21 11:22:56 crc kubenswrapper[4881]: I0121 11:22:56.177363 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-qgqh7" event={"ID":"9ba4ee35-5e5b-4f3c-ab64-e1dbd6b494ad","Type":"ContainerStarted","Data":"0055b21217090cd15d9d0b17356b22b40f32a70cf1a35f1e9043b6cc9a7f1186"} Jan 21 11:22:56 crc kubenswrapper[4881]: I0121 11:22:56.186530 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"3345073b-8907-4de9-829f-73d8e79a01bb","Type":"ContainerStarted","Data":"b74119743bb7cd487418f8d001a744431b3d7a1804f43dd5e7dc76b033b63247"} Jan 21 11:22:56 crc kubenswrapper[4881]: W0121 11:22:56.205319 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod859758f9_0dc2_4397_a75a_b098eaabe613.slice/crio-f75b793fa7a8fa638c746656a34aafcf67f449119cc5beb64d5b0d6054ef7320 WatchSource:0}: Error finding container f75b793fa7a8fa638c746656a34aafcf67f449119cc5beb64d5b0d6054ef7320: Status 404 returned error can't find the container with id f75b793fa7a8fa638c746656a34aafcf67f449119cc5beb64d5b0d6054ef7320 Jan 21 11:22:56 crc kubenswrapper[4881]: I0121 11:22:56.734654 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-qgqh7" podStartSLOduration=3.734630157 podStartE2EDuration="3.734630157s" podCreationTimestamp="2026-01-21 11:22:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 11:22:56.231534024 +0000 UTC m=+1563.491490503" watchObservedRunningTime="2026-01-21 11:22:56.734630157 +0000 UTC m=+1563.994586636" Jan 21 11:22:56 crc kubenswrapper[4881]: I0121 11:22:56.737385 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-sf7xj"] Jan 21 11:22:56 crc kubenswrapper[4881]: W0121 11:22:56.748247 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod813d73da_18da_40fa_b949_bbeec6604ac9.slice/crio-27b08df378991b2d98990d6780e79b553e25ff279cca08756a8d58c7593ae3cb WatchSource:0}: Error finding container 
27b08df378991b2d98990d6780e79b553e25ff279cca08756a8d58c7593ae3cb: Status 404 returned error can't find the container with id 27b08df378991b2d98990d6780e79b553e25ff279cca08756a8d58c7593ae3cb Jan 21 11:22:56 crc kubenswrapper[4881]: E0121 11:22:56.867343 4881 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod859758f9_0dc2_4397_a75a_b098eaabe613.slice/crio-conmon-14c1d2dd7151297f216e34923a28ce4dc55ea08298e597088fec945419be539f.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod859758f9_0dc2_4397_a75a_b098eaabe613.slice/crio-14c1d2dd7151297f216e34923a28ce4dc55ea08298e597088fec945419be539f.scope\": RecentStats: unable to find data in memory cache]" Jan 21 11:22:57 crc kubenswrapper[4881]: I0121 11:22:57.209166 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-sf7xj" event={"ID":"813d73da-18da-40fa-b949-bbeec6604ac9","Type":"ContainerStarted","Data":"02004fbf2f26b53236286799b468ab78450f8557fc37a01d6e78bf2e7876befc"} Jan 21 11:22:57 crc kubenswrapper[4881]: I0121 11:22:57.209502 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-sf7xj" event={"ID":"813d73da-18da-40fa-b949-bbeec6604ac9","Type":"ContainerStarted","Data":"27b08df378991b2d98990d6780e79b553e25ff279cca08756a8d58c7593ae3cb"} Jan 21 11:22:57 crc kubenswrapper[4881]: I0121 11:22:57.218171 4881 generic.go:334] "Generic (PLEG): container finished" podID="859758f9-0dc2-4397-a75a-b098eaabe613" containerID="14c1d2dd7151297f216e34923a28ce4dc55ea08298e597088fec945419be539f" exitCode=0 Jan 21 11:22:57 crc kubenswrapper[4881]: I0121 11:22:57.220479 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9f55bccdc-ghvhg" event={"ID":"859758f9-0dc2-4397-a75a-b098eaabe613","Type":"ContainerDied","Data":"14c1d2dd7151297f216e34923a28ce4dc55ea08298e597088fec945419be539f"} Jan 21 11:22:57 crc kubenswrapper[4881]: I0121 11:22:57.220529 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9f55bccdc-ghvhg" event={"ID":"859758f9-0dc2-4397-a75a-b098eaabe613","Type":"ContainerStarted","Data":"f75b793fa7a8fa638c746656a34aafcf67f449119cc5beb64d5b0d6054ef7320"} Jan 21 11:22:57 crc kubenswrapper[4881]: I0121 11:22:57.235233 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-sf7xj" podStartSLOduration=2.2352118 podStartE2EDuration="2.2352118s" podCreationTimestamp="2026-01-21 11:22:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 11:22:57.22830879 +0000 UTC m=+1564.488265259" watchObservedRunningTime="2026-01-21 11:22:57.2352118 +0000 UTC m=+1564.495168269" Jan 21 11:22:58 crc kubenswrapper[4881]: I0121 11:22:58.628207 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 21 11:22:58 crc kubenswrapper[4881]: I0121 11:22:58.660478 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 21 11:23:01 crc kubenswrapper[4881]: I0121 11:23:01.284154 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9f55bccdc-ghvhg" 
event={"ID":"859758f9-0dc2-4397-a75a-b098eaabe613","Type":"ContainerStarted","Data":"ddbf5564e0fec706a2bc3be62fec290ad0d5c0dccb7ad63e5048139ac59265e0"} Jan 21 11:23:01 crc kubenswrapper[4881]: I0121 11:23:01.284647 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-9f55bccdc-ghvhg" Jan 21 11:23:01 crc kubenswrapper[4881]: I0121 11:23:01.289372 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"50ff1a29-d6ee-4911-bb22-165aca6d8605","Type":"ContainerStarted","Data":"9d3665845c2c2c09903d0aa16a7538de5b4dcf05cef7d82865d9c9d446cdaf41"} Jan 21 11:23:01 crc kubenswrapper[4881]: I0121 11:23:01.289415 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="50ff1a29-d6ee-4911-bb22-165aca6d8605" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://9d3665845c2c2c09903d0aa16a7538de5b4dcf05cef7d82865d9c9d446cdaf41" gracePeriod=30 Jan 21 11:23:01 crc kubenswrapper[4881]: I0121 11:23:01.297565 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"3345073b-8907-4de9-829f-73d8e79a01bb","Type":"ContainerStarted","Data":"574febd8df92e0f37adc7968b35f9fcf1e5f52e202a4769da6f91161f9a9f02c"} Jan 21 11:23:01 crc kubenswrapper[4881]: I0121 11:23:01.300204 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d3527c16-7547-4e37-bcda-452193c45fee","Type":"ContainerStarted","Data":"d5d6be9da18cdb336cad44c85f030f31c3a241f6234a1b668281031e8ffb56ec"} Jan 21 11:23:01 crc kubenswrapper[4881]: I0121 11:23:01.300246 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d3527c16-7547-4e37-bcda-452193c45fee","Type":"ContainerStarted","Data":"000840a5458dc374424237a1e0edaa7bc61f3e5c2c1a3524dfdcefbcaa258c53"} Jan 21 11:23:01 crc kubenswrapper[4881]: I0121 11:23:01.300420 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="d3527c16-7547-4e37-bcda-452193c45fee" containerName="nova-metadata-log" containerID="cri-o://000840a5458dc374424237a1e0edaa7bc61f3e5c2c1a3524dfdcefbcaa258c53" gracePeriod=30 Jan 21 11:23:01 crc kubenswrapper[4881]: I0121 11:23:01.300491 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="d3527c16-7547-4e37-bcda-452193c45fee" containerName="nova-metadata-metadata" containerID="cri-o://d5d6be9da18cdb336cad44c85f030f31c3a241f6234a1b668281031e8ffb56ec" gracePeriod=30 Jan 21 11:23:01 crc kubenswrapper[4881]: I0121 11:23:01.306985 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f39b23f8-2c7e-46d6-8e59-7980b1d2c27c","Type":"ContainerStarted","Data":"b1e94b3b719b1a2213452fd275be74fdb796e7c03d99fa5695466085e68a91fd"} Jan 21 11:23:01 crc kubenswrapper[4881]: I0121 11:23:01.307326 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f39b23f8-2c7e-46d6-8e59-7980b1d2c27c","Type":"ContainerStarted","Data":"71bf37a912cb19763de6a839082bf72ecae64d550a077ed5461e0d2fa0d9be80"} Jan 21 11:23:01 crc kubenswrapper[4881]: I0121 11:23:01.309967 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-9f55bccdc-ghvhg" podStartSLOduration=7.309948371 podStartE2EDuration="7.309948371s" podCreationTimestamp="2026-01-21 11:22:54 +0000 UTC" firstStartedPulling="0001-01-01 
00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 11:23:01.309932631 +0000 UTC m=+1568.569889110" watchObservedRunningTime="2026-01-21 11:23:01.309948371 +0000 UTC m=+1568.569904840" Jan 21 11:23:01 crc kubenswrapper[4881]: I0121 11:23:01.343384 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.731849469 podStartE2EDuration="7.34335909s" podCreationTimestamp="2026-01-21 11:22:54 +0000 UTC" firstStartedPulling="2026-01-21 11:22:55.728730318 +0000 UTC m=+1562.988686787" lastFinishedPulling="2026-01-21 11:23:00.340239939 +0000 UTC m=+1567.600196408" observedRunningTime="2026-01-21 11:23:01.327710017 +0000 UTC m=+1568.587666506" watchObservedRunningTime="2026-01-21 11:23:01.34335909 +0000 UTC m=+1568.603315559" Jan 21 11:23:01 crc kubenswrapper[4881]: I0121 11:23:01.395830 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.780540632 podStartE2EDuration="8.395807394s" podCreationTimestamp="2026-01-21 11:22:53 +0000 UTC" firstStartedPulling="2026-01-21 11:22:55.730361548 +0000 UTC m=+1562.990318017" lastFinishedPulling="2026-01-21 11:23:00.34562831 +0000 UTC m=+1567.605584779" observedRunningTime="2026-01-21 11:23:01.355086697 +0000 UTC m=+1568.615043156" watchObservedRunningTime="2026-01-21 11:23:01.395807394 +0000 UTC m=+1568.655763873" Jan 21 11:23:01 crc kubenswrapper[4881]: I0121 11:23:01.403659 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.672504875 podStartE2EDuration="7.403635086s" podCreationTimestamp="2026-01-21 11:22:54 +0000 UTC" firstStartedPulling="2026-01-21 11:22:55.608485832 +0000 UTC m=+1562.868442301" lastFinishedPulling="2026-01-21 11:23:00.339616043 +0000 UTC m=+1567.599572512" observedRunningTime="2026-01-21 11:23:01.375735773 +0000 UTC m=+1568.635692242" watchObservedRunningTime="2026-01-21 11:23:01.403635086 +0000 UTC m=+1568.663591555" Jan 21 11:23:01 crc kubenswrapper[4881]: I0121 11:23:01.428664 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.045126613 podStartE2EDuration="7.428644819s" podCreationTimestamp="2026-01-21 11:22:54 +0000 UTC" firstStartedPulling="2026-01-21 11:22:55.954703693 +0000 UTC m=+1563.214660162" lastFinishedPulling="2026-01-21 11:23:00.338221899 +0000 UTC m=+1567.598178368" observedRunningTime="2026-01-21 11:23:01.404936608 +0000 UTC m=+1568.664893077" watchObservedRunningTime="2026-01-21 11:23:01.428644819 +0000 UTC m=+1568.688601288" Jan 21 11:23:02 crc kubenswrapper[4881]: I0121 11:23:02.556909 4881 generic.go:334] "Generic (PLEG): container finished" podID="d3527c16-7547-4e37-bcda-452193c45fee" containerID="d5d6be9da18cdb336cad44c85f030f31c3a241f6234a1b668281031e8ffb56ec" exitCode=0 Jan 21 11:23:02 crc kubenswrapper[4881]: I0121 11:23:02.557583 4881 generic.go:334] "Generic (PLEG): container finished" podID="d3527c16-7547-4e37-bcda-452193c45fee" containerID="000840a5458dc374424237a1e0edaa7bc61f3e5c2c1a3524dfdcefbcaa258c53" exitCode=143 Jan 21 11:23:02 crc kubenswrapper[4881]: I0121 11:23:02.557116 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d3527c16-7547-4e37-bcda-452193c45fee","Type":"ContainerDied","Data":"d5d6be9da18cdb336cad44c85f030f31c3a241f6234a1b668281031e8ffb56ec"} Jan 21 11:23:02 crc kubenswrapper[4881]: I0121 
11:23:02.558776 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d3527c16-7547-4e37-bcda-452193c45fee","Type":"ContainerDied","Data":"000840a5458dc374424237a1e0edaa7bc61f3e5c2c1a3524dfdcefbcaa258c53"} Jan 21 11:23:02 crc kubenswrapper[4881]: I0121 11:23:02.558811 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d3527c16-7547-4e37-bcda-452193c45fee","Type":"ContainerDied","Data":"19360c29690f2d877803b5397f0b64081dcdd4e4fc63374ceab9aad4daa3f1c3"} Jan 21 11:23:02 crc kubenswrapper[4881]: I0121 11:23:02.558830 4881 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="19360c29690f2d877803b5397f0b64081dcdd4e4fc63374ceab9aad4daa3f1c3" Jan 21 11:23:02 crc kubenswrapper[4881]: I0121 11:23:02.563104 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 21 11:23:02 crc kubenswrapper[4881]: I0121 11:23:02.612467 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3527c16-7547-4e37-bcda-452193c45fee-combined-ca-bundle\") pod \"d3527c16-7547-4e37-bcda-452193c45fee\" (UID: \"d3527c16-7547-4e37-bcda-452193c45fee\") " Jan 21 11:23:02 crc kubenswrapper[4881]: I0121 11:23:02.612634 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d3527c16-7547-4e37-bcda-452193c45fee-logs\") pod \"d3527c16-7547-4e37-bcda-452193c45fee\" (UID: \"d3527c16-7547-4e37-bcda-452193c45fee\") " Jan 21 11:23:02 crc kubenswrapper[4881]: I0121 11:23:02.613175 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d3527c16-7547-4e37-bcda-452193c45fee-logs" (OuterVolumeSpecName: "logs") pod "d3527c16-7547-4e37-bcda-452193c45fee" (UID: "d3527c16-7547-4e37-bcda-452193c45fee"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 11:23:02 crc kubenswrapper[4881]: I0121 11:23:02.613191 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-st84x\" (UniqueName: \"kubernetes.io/projected/d3527c16-7547-4e37-bcda-452193c45fee-kube-api-access-st84x\") pod \"d3527c16-7547-4e37-bcda-452193c45fee\" (UID: \"d3527c16-7547-4e37-bcda-452193c45fee\") " Jan 21 11:23:02 crc kubenswrapper[4881]: I0121 11:23:02.613358 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3527c16-7547-4e37-bcda-452193c45fee-config-data\") pod \"d3527c16-7547-4e37-bcda-452193c45fee\" (UID: \"d3527c16-7547-4e37-bcda-452193c45fee\") " Jan 21 11:23:02 crc kubenswrapper[4881]: I0121 11:23:02.614454 4881 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d3527c16-7547-4e37-bcda-452193c45fee-logs\") on node \"crc\" DevicePath \"\"" Jan 21 11:23:02 crc kubenswrapper[4881]: I0121 11:23:02.619744 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d3527c16-7547-4e37-bcda-452193c45fee-kube-api-access-st84x" (OuterVolumeSpecName: "kube-api-access-st84x") pod "d3527c16-7547-4e37-bcda-452193c45fee" (UID: "d3527c16-7547-4e37-bcda-452193c45fee"). InnerVolumeSpecName "kube-api-access-st84x". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:23:02 crc kubenswrapper[4881]: I0121 11:23:02.648947 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3527c16-7547-4e37-bcda-452193c45fee-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d3527c16-7547-4e37-bcda-452193c45fee" (UID: "d3527c16-7547-4e37-bcda-452193c45fee"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:23:02 crc kubenswrapper[4881]: I0121 11:23:02.659597 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3527c16-7547-4e37-bcda-452193c45fee-config-data" (OuterVolumeSpecName: "config-data") pod "d3527c16-7547-4e37-bcda-452193c45fee" (UID: "d3527c16-7547-4e37-bcda-452193c45fee"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:23:02 crc kubenswrapper[4881]: I0121 11:23:02.716186 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-st84x\" (UniqueName: \"kubernetes.io/projected/d3527c16-7547-4e37-bcda-452193c45fee-kube-api-access-st84x\") on node \"crc\" DevicePath \"\"" Jan 21 11:23:02 crc kubenswrapper[4881]: I0121 11:23:02.716261 4881 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3527c16-7547-4e37-bcda-452193c45fee-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 11:23:02 crc kubenswrapper[4881]: I0121 11:23:02.716277 4881 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3527c16-7547-4e37-bcda-452193c45fee-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 11:23:03 crc kubenswrapper[4881]: I0121 11:23:03.570550 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 21 11:23:03 crc kubenswrapper[4881]: I0121 11:23:03.602163 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 21 11:23:03 crc kubenswrapper[4881]: I0121 11:23:03.616749 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 21 11:23:03 crc kubenswrapper[4881]: I0121 11:23:03.653489 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 21 11:23:03 crc kubenswrapper[4881]: E0121 11:23:03.654380 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3527c16-7547-4e37-bcda-452193c45fee" containerName="nova-metadata-metadata" Jan 21 11:23:03 crc kubenswrapper[4881]: I0121 11:23:03.654410 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3527c16-7547-4e37-bcda-452193c45fee" containerName="nova-metadata-metadata" Jan 21 11:23:03 crc kubenswrapper[4881]: E0121 11:23:03.654452 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3527c16-7547-4e37-bcda-452193c45fee" containerName="nova-metadata-log" Jan 21 11:23:03 crc kubenswrapper[4881]: I0121 11:23:03.654461 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3527c16-7547-4e37-bcda-452193c45fee" containerName="nova-metadata-log" Jan 21 11:23:03 crc kubenswrapper[4881]: I0121 11:23:03.654722 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="d3527c16-7547-4e37-bcda-452193c45fee" containerName="nova-metadata-log" Jan 21 11:23:03 crc kubenswrapper[4881]: I0121 11:23:03.654749 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="d3527c16-7547-4e37-bcda-452193c45fee" containerName="nova-metadata-metadata" Jan 21 11:23:03 crc kubenswrapper[4881]: I0121 11:23:03.656089 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 21 11:23:03 crc kubenswrapper[4881]: I0121 11:23:03.656204 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 21 11:23:03 crc kubenswrapper[4881]: I0121 11:23:03.661044 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 21 11:23:03 crc kubenswrapper[4881]: I0121 11:23:03.661240 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 21 11:23:03 crc kubenswrapper[4881]: I0121 11:23:03.839966 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dtx52\" (UniqueName: \"kubernetes.io/projected/3c6ca904-2790-425f-81ac-37cdc543cf0f-kube-api-access-dtx52\") pod \"nova-metadata-0\" (UID: \"3c6ca904-2790-425f-81ac-37cdc543cf0f\") " pod="openstack/nova-metadata-0" Jan 21 11:23:03 crc kubenswrapper[4881]: I0121 11:23:03.840037 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3c6ca904-2790-425f-81ac-37cdc543cf0f-logs\") pod \"nova-metadata-0\" (UID: \"3c6ca904-2790-425f-81ac-37cdc543cf0f\") " pod="openstack/nova-metadata-0" Jan 21 11:23:03 crc kubenswrapper[4881]: I0121 11:23:03.840382 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/3c6ca904-2790-425f-81ac-37cdc543cf0f-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"3c6ca904-2790-425f-81ac-37cdc543cf0f\") " pod="openstack/nova-metadata-0" Jan 21 11:23:03 crc kubenswrapper[4881]: I0121 11:23:03.840661 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c6ca904-2790-425f-81ac-37cdc543cf0f-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"3c6ca904-2790-425f-81ac-37cdc543cf0f\") " pod="openstack/nova-metadata-0" Jan 21 11:23:03 crc kubenswrapper[4881]: I0121 11:23:03.840866 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c6ca904-2790-425f-81ac-37cdc543cf0f-config-data\") pod \"nova-metadata-0\" (UID: \"3c6ca904-2790-425f-81ac-37cdc543cf0f\") " pod="openstack/nova-metadata-0" Jan 21 11:23:03 crc kubenswrapper[4881]: I0121 11:23:03.942928 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/3c6ca904-2790-425f-81ac-37cdc543cf0f-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"3c6ca904-2790-425f-81ac-37cdc543cf0f\") " pod="openstack/nova-metadata-0" Jan 21 11:23:03 crc kubenswrapper[4881]: I0121 11:23:03.943083 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c6ca904-2790-425f-81ac-37cdc543cf0f-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"3c6ca904-2790-425f-81ac-37cdc543cf0f\") " pod="openstack/nova-metadata-0" Jan 21 11:23:03 crc kubenswrapper[4881]: I0121 11:23:03.943156 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c6ca904-2790-425f-81ac-37cdc543cf0f-config-data\") pod \"nova-metadata-0\" (UID: \"3c6ca904-2790-425f-81ac-37cdc543cf0f\") " pod="openstack/nova-metadata-0" Jan 21 11:23:03 crc kubenswrapper[4881]: I0121 11:23:03.943228 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-dtx52\" (UniqueName: \"kubernetes.io/projected/3c6ca904-2790-425f-81ac-37cdc543cf0f-kube-api-access-dtx52\") pod \"nova-metadata-0\" (UID: \"3c6ca904-2790-425f-81ac-37cdc543cf0f\") " pod="openstack/nova-metadata-0" Jan 21 11:23:03 crc kubenswrapper[4881]: I0121 11:23:03.943265 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3c6ca904-2790-425f-81ac-37cdc543cf0f-logs\") pod \"nova-metadata-0\" (UID: \"3c6ca904-2790-425f-81ac-37cdc543cf0f\") " pod="openstack/nova-metadata-0" Jan 21 11:23:03 crc kubenswrapper[4881]: I0121 11:23:03.943842 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3c6ca904-2790-425f-81ac-37cdc543cf0f-logs\") pod \"nova-metadata-0\" (UID: \"3c6ca904-2790-425f-81ac-37cdc543cf0f\") " pod="openstack/nova-metadata-0" Jan 21 11:23:03 crc kubenswrapper[4881]: I0121 11:23:03.951331 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c6ca904-2790-425f-81ac-37cdc543cf0f-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"3c6ca904-2790-425f-81ac-37cdc543cf0f\") " pod="openstack/nova-metadata-0" Jan 21 11:23:03 crc kubenswrapper[4881]: I0121 11:23:03.967722 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c6ca904-2790-425f-81ac-37cdc543cf0f-config-data\") pod \"nova-metadata-0\" (UID: \"3c6ca904-2790-425f-81ac-37cdc543cf0f\") " pod="openstack/nova-metadata-0" Jan 21 11:23:03 crc kubenswrapper[4881]: I0121 11:23:03.967945 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/3c6ca904-2790-425f-81ac-37cdc543cf0f-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"3c6ca904-2790-425f-81ac-37cdc543cf0f\") " pod="openstack/nova-metadata-0" Jan 21 11:23:03 crc kubenswrapper[4881]: I0121 11:23:03.970850 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dtx52\" (UniqueName: \"kubernetes.io/projected/3c6ca904-2790-425f-81ac-37cdc543cf0f-kube-api-access-dtx52\") pod \"nova-metadata-0\" (UID: \"3c6ca904-2790-425f-81ac-37cdc543cf0f\") " pod="openstack/nova-metadata-0" Jan 21 11:23:03 crc kubenswrapper[4881]: I0121 11:23:03.981322 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 21 11:23:04 crc kubenswrapper[4881]: I0121 11:23:04.464758 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 21 11:23:04 crc kubenswrapper[4881]: I0121 11:23:04.467838 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 21 11:23:04 crc kubenswrapper[4881]: I0121 11:23:04.467879 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 21 11:23:04 crc kubenswrapper[4881]: I0121 11:23:04.468037 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 21 11:23:04 crc kubenswrapper[4881]: I0121 11:23:04.469578 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 21 11:23:04 crc kubenswrapper[4881]: I0121 11:23:04.529537 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 21 11:23:04 crc kubenswrapper[4881]: I0121 11:23:04.600987 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Jan 21 11:23:04 crc kubenswrapper[4881]: I0121 11:23:04.602905 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3c6ca904-2790-425f-81ac-37cdc543cf0f","Type":"ContainerStarted","Data":"2c61e4a0cf50faebb3da795860373a82c98ee972146a5292709cde146a4a9c15"} Jan 21 11:23:04 crc kubenswrapper[4881]: I0121 11:23:04.653299 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 21 11:23:05 crc kubenswrapper[4881]: I0121 11:23:05.334596 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d3527c16-7547-4e37-bcda-452193c45fee" path="/var/lib/kubelet/pods/d3527c16-7547-4e37-bcda-452193c45fee/volumes" Jan 21 11:23:05 crc kubenswrapper[4881]: I0121 11:23:05.550069 4881 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="f39b23f8-2c7e-46d6-8e59-7980b1d2c27c" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.207:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 21 11:23:05 crc kubenswrapper[4881]: I0121 11:23:05.550084 4881 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="f39b23f8-2c7e-46d6-8e59-7980b1d2c27c" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.207:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 21 11:23:05 crc kubenswrapper[4881]: I0121 11:23:05.616208 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3c6ca904-2790-425f-81ac-37cdc543cf0f","Type":"ContainerStarted","Data":"04b14eafe282879a10a549256a83522f141403e701c9d0a5d0f5ea8746de26b5"} Jan 21 11:23:05 crc kubenswrapper[4881]: I0121 11:23:05.616264 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3c6ca904-2790-425f-81ac-37cdc543cf0f","Type":"ContainerStarted","Data":"cb3c8eb696c2d6f70dd5b7efed28b2b6d15d294b8d97901355bfdcf5ce7eaa3e"} Jan 21 11:23:05 crc kubenswrapper[4881]: I0121 11:23:05.650361 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.65033599 podStartE2EDuration="2.65033599s" podCreationTimestamp="2026-01-21 11:23:03 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 11:23:05.649131721 +0000 UTC m=+1572.909088190" watchObservedRunningTime="2026-01-21 11:23:05.65033599 +0000 UTC m=+1572.910292459" Jan 21 11:23:07 crc kubenswrapper[4881]: I0121 11:23:07.831617 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 21 11:23:08 crc kubenswrapper[4881]: I0121 11:23:08.650325 4881 generic.go:334] "Generic (PLEG): container finished" podID="9ba4ee35-5e5b-4f3c-ab64-e1dbd6b494ad" containerID="0055b21217090cd15d9d0b17356b22b40f32a70cf1a35f1e9043b6cc9a7f1186" exitCode=0 Jan 21 11:23:08 crc kubenswrapper[4881]: I0121 11:23:08.650365 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-qgqh7" event={"ID":"9ba4ee35-5e5b-4f3c-ab64-e1dbd6b494ad","Type":"ContainerDied","Data":"0055b21217090cd15d9d0b17356b22b40f32a70cf1a35f1e9043b6cc9a7f1186"} Jan 21 11:23:08 crc kubenswrapper[4881]: I0121 11:23:08.981906 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 21 11:23:08 crc kubenswrapper[4881]: I0121 11:23:08.982158 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 21 11:23:09 crc kubenswrapper[4881]: I0121 11:23:09.918197 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-9f55bccdc-ghvhg" Jan 21 11:23:10 crc kubenswrapper[4881]: I0121 11:23:10.465544 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-c849cf559-fjllv"] Jan 21 11:23:10 crc kubenswrapper[4881]: I0121 11:23:10.465798 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-c849cf559-fjllv" podUID="4a89a9d0-4859-41cb-896d-f1a91e854d7b" containerName="dnsmasq-dns" containerID="cri-o://520ec1cfcb7fa94d0057499475a0936b202225668f29de849ba69f710c127ead" gracePeriod=10 Jan 21 11:23:10 crc kubenswrapper[4881]: I0121 11:23:10.676776 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-qgqh7" event={"ID":"9ba4ee35-5e5b-4f3c-ab64-e1dbd6b494ad","Type":"ContainerDied","Data":"c99b0af6c38bc6fdca36563516e7441a2da5b379535ed6ab05553b2802c64c82"} Jan 21 11:23:10 crc kubenswrapper[4881]: I0121 11:23:10.676975 4881 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c99b0af6c38bc6fdca36563516e7441a2da5b379535ed6ab05553b2802c64c82" Jan 21 11:23:10 crc kubenswrapper[4881]: I0121 11:23:10.679201 4881 generic.go:334] "Generic (PLEG): container finished" podID="813d73da-18da-40fa-b949-bbeec6604ac9" containerID="02004fbf2f26b53236286799b468ab78450f8557fc37a01d6e78bf2e7876befc" exitCode=0 Jan 21 11:23:10 crc kubenswrapper[4881]: I0121 11:23:10.679336 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-sf7xj" event={"ID":"813d73da-18da-40fa-b949-bbeec6604ac9","Type":"ContainerDied","Data":"02004fbf2f26b53236286799b468ab78450f8557fc37a01d6e78bf2e7876befc"} Jan 21 11:23:10 crc kubenswrapper[4881]: I0121 11:23:10.800940 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-qgqh7" Jan 21 11:23:10 crc kubenswrapper[4881]: I0121 11:23:10.926586 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5c6l8\" (UniqueName: \"kubernetes.io/projected/9ba4ee35-5e5b-4f3c-ab64-e1dbd6b494ad-kube-api-access-5c6l8\") pod \"9ba4ee35-5e5b-4f3c-ab64-e1dbd6b494ad\" (UID: \"9ba4ee35-5e5b-4f3c-ab64-e1dbd6b494ad\") " Jan 21 11:23:10 crc kubenswrapper[4881]: I0121 11:23:10.926721 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9ba4ee35-5e5b-4f3c-ab64-e1dbd6b494ad-config-data\") pod \"9ba4ee35-5e5b-4f3c-ab64-e1dbd6b494ad\" (UID: \"9ba4ee35-5e5b-4f3c-ab64-e1dbd6b494ad\") " Jan 21 11:23:10 crc kubenswrapper[4881]: I0121 11:23:10.927016 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ba4ee35-5e5b-4f3c-ab64-e1dbd6b494ad-combined-ca-bundle\") pod \"9ba4ee35-5e5b-4f3c-ab64-e1dbd6b494ad\" (UID: \"9ba4ee35-5e5b-4f3c-ab64-e1dbd6b494ad\") " Jan 21 11:23:10 crc kubenswrapper[4881]: I0121 11:23:10.927240 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9ba4ee35-5e5b-4f3c-ab64-e1dbd6b494ad-scripts\") pod \"9ba4ee35-5e5b-4f3c-ab64-e1dbd6b494ad\" (UID: \"9ba4ee35-5e5b-4f3c-ab64-e1dbd6b494ad\") " Jan 21 11:23:10 crc kubenswrapper[4881]: I0121 11:23:10.932796 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9ba4ee35-5e5b-4f3c-ab64-e1dbd6b494ad-scripts" (OuterVolumeSpecName: "scripts") pod "9ba4ee35-5e5b-4f3c-ab64-e1dbd6b494ad" (UID: "9ba4ee35-5e5b-4f3c-ab64-e1dbd6b494ad"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:23:10 crc kubenswrapper[4881]: I0121 11:23:10.933706 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9ba4ee35-5e5b-4f3c-ab64-e1dbd6b494ad-kube-api-access-5c6l8" (OuterVolumeSpecName: "kube-api-access-5c6l8") pod "9ba4ee35-5e5b-4f3c-ab64-e1dbd6b494ad" (UID: "9ba4ee35-5e5b-4f3c-ab64-e1dbd6b494ad"). InnerVolumeSpecName "kube-api-access-5c6l8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:23:10 crc kubenswrapper[4881]: I0121 11:23:10.959327 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9ba4ee35-5e5b-4f3c-ab64-e1dbd6b494ad-config-data" (OuterVolumeSpecName: "config-data") pod "9ba4ee35-5e5b-4f3c-ab64-e1dbd6b494ad" (UID: "9ba4ee35-5e5b-4f3c-ab64-e1dbd6b494ad"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:23:10 crc kubenswrapper[4881]: I0121 11:23:10.987904 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9ba4ee35-5e5b-4f3c-ab64-e1dbd6b494ad-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9ba4ee35-5e5b-4f3c-ab64-e1dbd6b494ad" (UID: "9ba4ee35-5e5b-4f3c-ab64-e1dbd6b494ad"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:23:11 crc kubenswrapper[4881]: I0121 11:23:11.030080 4881 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9ba4ee35-5e5b-4f3c-ab64-e1dbd6b494ad-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 11:23:11 crc kubenswrapper[4881]: I0121 11:23:11.030510 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5c6l8\" (UniqueName: \"kubernetes.io/projected/9ba4ee35-5e5b-4f3c-ab64-e1dbd6b494ad-kube-api-access-5c6l8\") on node \"crc\" DevicePath \"\"" Jan 21 11:23:11 crc kubenswrapper[4881]: I0121 11:23:11.030598 4881 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9ba4ee35-5e5b-4f3c-ab64-e1dbd6b494ad-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 11:23:11 crc kubenswrapper[4881]: I0121 11:23:11.030683 4881 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ba4ee35-5e5b-4f3c-ab64-e1dbd6b494ad-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 11:23:11 crc kubenswrapper[4881]: I0121 11:23:11.693411 4881 generic.go:334] "Generic (PLEG): container finished" podID="4a89a9d0-4859-41cb-896d-f1a91e854d7b" containerID="520ec1cfcb7fa94d0057499475a0936b202225668f29de849ba69f710c127ead" exitCode=0 Jan 21 11:23:11 crc kubenswrapper[4881]: I0121 11:23:11.693654 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-c849cf559-fjllv" event={"ID":"4a89a9d0-4859-41cb-896d-f1a91e854d7b","Type":"ContainerDied","Data":"520ec1cfcb7fa94d0057499475a0936b202225668f29de849ba69f710c127ead"} Jan 21 11:23:11 crc kubenswrapper[4881]: I0121 11:23:11.693736 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-qgqh7" Jan 21 11:23:11 crc kubenswrapper[4881]: I0121 11:23:11.996525 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 21 11:23:11 crc kubenswrapper[4881]: I0121 11:23:11.996820 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="f39b23f8-2c7e-46d6-8e59-7980b1d2c27c" containerName="nova-api-log" containerID="cri-o://71bf37a912cb19763de6a839082bf72ecae64d550a077ed5461e0d2fa0d9be80" gracePeriod=30 Jan 21 11:23:11 crc kubenswrapper[4881]: I0121 11:23:11.996980 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="f39b23f8-2c7e-46d6-8e59-7980b1d2c27c" containerName="nova-api-api" containerID="cri-o://b1e94b3b719b1a2213452fd275be74fdb796e7c03d99fa5695466085e68a91fd" gracePeriod=30 Jan 21 11:23:12 crc kubenswrapper[4881]: I0121 11:23:12.028854 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 21 11:23:12 crc kubenswrapper[4881]: I0121 11:23:12.029131 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="3345073b-8907-4de9-829f-73d8e79a01bb" containerName="nova-scheduler-scheduler" containerID="cri-o://574febd8df92e0f37adc7968b35f9fcf1e5f52e202a4769da6f91161f9a9f02c" gracePeriod=30 Jan 21 11:23:12 crc kubenswrapper[4881]: I0121 11:23:12.051900 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 21 11:23:12 crc kubenswrapper[4881]: I0121 11:23:12.052166 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="3c6ca904-2790-425f-81ac-37cdc543cf0f" containerName="nova-metadata-log" containerID="cri-o://cb3c8eb696c2d6f70dd5b7efed28b2b6d15d294b8d97901355bfdcf5ce7eaa3e" gracePeriod=30 Jan 21 11:23:12 crc kubenswrapper[4881]: I0121 11:23:12.052292 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="3c6ca904-2790-425f-81ac-37cdc543cf0f" containerName="nova-metadata-metadata" containerID="cri-o://04b14eafe282879a10a549256a83522f141403e701c9d0a5d0f5ea8746de26b5" gracePeriod=30 Jan 21 11:23:12 crc kubenswrapper[4881]: I0121 11:23:12.350381 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-sf7xj" Jan 21 11:23:12 crc kubenswrapper[4881]: I0121 11:23:12.849491 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-sf7xj" event={"ID":"813d73da-18da-40fa-b949-bbeec6604ac9","Type":"ContainerDied","Data":"27b08df378991b2d98990d6780e79b553e25ff279cca08756a8d58c7593ae3cb"} Jan 21 11:23:12 crc kubenswrapper[4881]: I0121 11:23:12.849533 4881 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="27b08df378991b2d98990d6780e79b553e25ff279cca08756a8d58c7593ae3cb" Jan 21 11:23:12 crc kubenswrapper[4881]: I0121 11:23:12.849594 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-sf7xj" Jan 21 11:23:12 crc kubenswrapper[4881]: I0121 11:23:12.890608 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/813d73da-18da-40fa-b949-bbeec6604ac9-scripts\") pod \"813d73da-18da-40fa-b949-bbeec6604ac9\" (UID: \"813d73da-18da-40fa-b949-bbeec6604ac9\") " Jan 21 11:23:12 crc kubenswrapper[4881]: I0121 11:23:12.890661 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/813d73da-18da-40fa-b949-bbeec6604ac9-combined-ca-bundle\") pod \"813d73da-18da-40fa-b949-bbeec6604ac9\" (UID: \"813d73da-18da-40fa-b949-bbeec6604ac9\") " Jan 21 11:23:12 crc kubenswrapper[4881]: I0121 11:23:12.890891 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/813d73da-18da-40fa-b949-bbeec6604ac9-config-data\") pod \"813d73da-18da-40fa-b949-bbeec6604ac9\" (UID: \"813d73da-18da-40fa-b949-bbeec6604ac9\") " Jan 21 11:23:12 crc kubenswrapper[4881]: I0121 11:23:12.891005 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xtwfp\" (UniqueName: \"kubernetes.io/projected/813d73da-18da-40fa-b949-bbeec6604ac9-kube-api-access-xtwfp\") pod \"813d73da-18da-40fa-b949-bbeec6604ac9\" (UID: \"813d73da-18da-40fa-b949-bbeec6604ac9\") " Jan 21 11:23:12 crc kubenswrapper[4881]: I0121 11:23:12.944863 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/813d73da-18da-40fa-b949-bbeec6604ac9-scripts" (OuterVolumeSpecName: "scripts") pod "813d73da-18da-40fa-b949-bbeec6604ac9" (UID: "813d73da-18da-40fa-b949-bbeec6604ac9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:23:12 crc kubenswrapper[4881]: I0121 11:23:12.944959 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/813d73da-18da-40fa-b949-bbeec6604ac9-kube-api-access-xtwfp" (OuterVolumeSpecName: "kube-api-access-xtwfp") pod "813d73da-18da-40fa-b949-bbeec6604ac9" (UID: "813d73da-18da-40fa-b949-bbeec6604ac9"). InnerVolumeSpecName "kube-api-access-xtwfp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:23:12 crc kubenswrapper[4881]: I0121 11:23:12.953857 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/813d73da-18da-40fa-b949-bbeec6604ac9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "813d73da-18da-40fa-b949-bbeec6604ac9" (UID: "813d73da-18da-40fa-b949-bbeec6604ac9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:23:12 crc kubenswrapper[4881]: I0121 11:23:12.991873 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/813d73da-18da-40fa-b949-bbeec6604ac9-config-data" (OuterVolumeSpecName: "config-data") pod "813d73da-18da-40fa-b949-bbeec6604ac9" (UID: "813d73da-18da-40fa-b949-bbeec6604ac9"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:23:12 crc kubenswrapper[4881]: I0121 11:23:12.993962 4881 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/813d73da-18da-40fa-b949-bbeec6604ac9-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 11:23:12 crc kubenswrapper[4881]: I0121 11:23:12.993986 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xtwfp\" (UniqueName: \"kubernetes.io/projected/813d73da-18da-40fa-b949-bbeec6604ac9-kube-api-access-xtwfp\") on node \"crc\" DevicePath \"\"" Jan 21 11:23:12 crc kubenswrapper[4881]: I0121 11:23:12.993997 4881 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/813d73da-18da-40fa-b949-bbeec6604ac9-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 11:23:12 crc kubenswrapper[4881]: I0121 11:23:12.994005 4881 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/813d73da-18da-40fa-b949-bbeec6604ac9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 11:23:13 crc kubenswrapper[4881]: I0121 11:23:13.366104 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-c849cf559-fjllv" Jan 21 11:23:13 crc kubenswrapper[4881]: I0121 11:23:13.513473 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4a89a9d0-4859-41cb-896d-f1a91e854d7b-config\") pod \"4a89a9d0-4859-41cb-896d-f1a91e854d7b\" (UID: \"4a89a9d0-4859-41cb-896d-f1a91e854d7b\") " Jan 21 11:23:13 crc kubenswrapper[4881]: I0121 11:23:13.513576 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cd8b7\" (UniqueName: \"kubernetes.io/projected/4a89a9d0-4859-41cb-896d-f1a91e854d7b-kube-api-access-cd8b7\") pod \"4a89a9d0-4859-41cb-896d-f1a91e854d7b\" (UID: \"4a89a9d0-4859-41cb-896d-f1a91e854d7b\") " Jan 21 11:23:13 crc kubenswrapper[4881]: I0121 11:23:13.513630 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4a89a9d0-4859-41cb-896d-f1a91e854d7b-ovsdbserver-nb\") pod \"4a89a9d0-4859-41cb-896d-f1a91e854d7b\" (UID: \"4a89a9d0-4859-41cb-896d-f1a91e854d7b\") " Jan 21 11:23:13 crc kubenswrapper[4881]: I0121 11:23:13.513754 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4a89a9d0-4859-41cb-896d-f1a91e854d7b-ovsdbserver-sb\") pod \"4a89a9d0-4859-41cb-896d-f1a91e854d7b\" (UID: \"4a89a9d0-4859-41cb-896d-f1a91e854d7b\") " Jan 21 11:23:13 crc kubenswrapper[4881]: I0121 11:23:13.513844 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4a89a9d0-4859-41cb-896d-f1a91e854d7b-dns-svc\") pod \"4a89a9d0-4859-41cb-896d-f1a91e854d7b\" (UID: \"4a89a9d0-4859-41cb-896d-f1a91e854d7b\") " Jan 21 11:23:13 crc kubenswrapper[4881]: I0121 11:23:13.513893 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4a89a9d0-4859-41cb-896d-f1a91e854d7b-dns-swift-storage-0\") pod \"4a89a9d0-4859-41cb-896d-f1a91e854d7b\" (UID: \"4a89a9d0-4859-41cb-896d-f1a91e854d7b\") " Jan 21 11:23:13 crc kubenswrapper[4881]: I0121 11:23:13.552005 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded 
for volume "kubernetes.io/projected/4a89a9d0-4859-41cb-896d-f1a91e854d7b-kube-api-access-cd8b7" (OuterVolumeSpecName: "kube-api-access-cd8b7") pod "4a89a9d0-4859-41cb-896d-f1a91e854d7b" (UID: "4a89a9d0-4859-41cb-896d-f1a91e854d7b"). InnerVolumeSpecName "kube-api-access-cd8b7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:23:13 crc kubenswrapper[4881]: I0121 11:23:13.622443 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cd8b7\" (UniqueName: \"kubernetes.io/projected/4a89a9d0-4859-41cb-896d-f1a91e854d7b-kube-api-access-cd8b7\") on node \"crc\" DevicePath \"\"" Jan 21 11:23:13 crc kubenswrapper[4881]: I0121 11:23:13.628641 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4a89a9d0-4859-41cb-896d-f1a91e854d7b-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "4a89a9d0-4859-41cb-896d-f1a91e854d7b" (UID: "4a89a9d0-4859-41cb-896d-f1a91e854d7b"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:23:13 crc kubenswrapper[4881]: I0121 11:23:13.674173 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4a89a9d0-4859-41cb-896d-f1a91e854d7b-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "4a89a9d0-4859-41cb-896d-f1a91e854d7b" (UID: "4a89a9d0-4859-41cb-896d-f1a91e854d7b"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:23:13 crc kubenswrapper[4881]: I0121 11:23:13.687321 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4a89a9d0-4859-41cb-896d-f1a91e854d7b-config" (OuterVolumeSpecName: "config") pod "4a89a9d0-4859-41cb-896d-f1a91e854d7b" (UID: "4a89a9d0-4859-41cb-896d-f1a91e854d7b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:23:13 crc kubenswrapper[4881]: I0121 11:23:13.697369 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4a89a9d0-4859-41cb-896d-f1a91e854d7b-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "4a89a9d0-4859-41cb-896d-f1a91e854d7b" (UID: "4a89a9d0-4859-41cb-896d-f1a91e854d7b"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:23:13 crc kubenswrapper[4881]: I0121 11:23:13.727674 4881 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4a89a9d0-4859-41cb-896d-f1a91e854d7b-config\") on node \"crc\" DevicePath \"\"" Jan 21 11:23:13 crc kubenswrapper[4881]: I0121 11:23:13.727717 4881 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4a89a9d0-4859-41cb-896d-f1a91e854d7b-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 21 11:23:13 crc kubenswrapper[4881]: I0121 11:23:13.727727 4881 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4a89a9d0-4859-41cb-896d-f1a91e854d7b-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 21 11:23:13 crc kubenswrapper[4881]: I0121 11:23:13.727738 4881 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4a89a9d0-4859-41cb-896d-f1a91e854d7b-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 21 11:23:13 crc kubenswrapper[4881]: I0121 11:23:13.738513 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4a89a9d0-4859-41cb-896d-f1a91e854d7b-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "4a89a9d0-4859-41cb-896d-f1a91e854d7b" (UID: "4a89a9d0-4859-41cb-896d-f1a91e854d7b"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:23:13 crc kubenswrapper[4881]: I0121 11:23:13.829226 4881 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4a89a9d0-4859-41cb-896d-f1a91e854d7b-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 21 11:23:13 crc kubenswrapper[4881]: I0121 11:23:13.863243 4881 generic.go:334] "Generic (PLEG): container finished" podID="f39b23f8-2c7e-46d6-8e59-7980b1d2c27c" containerID="71bf37a912cb19763de6a839082bf72ecae64d550a077ed5461e0d2fa0d9be80" exitCode=143 Jan 21 11:23:13 crc kubenswrapper[4881]: I0121 11:23:13.863326 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f39b23f8-2c7e-46d6-8e59-7980b1d2c27c","Type":"ContainerDied","Data":"71bf37a912cb19763de6a839082bf72ecae64d550a077ed5461e0d2fa0d9be80"} Jan 21 11:23:13 crc kubenswrapper[4881]: I0121 11:23:13.865638 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-c849cf559-fjllv" event={"ID":"4a89a9d0-4859-41cb-896d-f1a91e854d7b","Type":"ContainerDied","Data":"7d5f5a0fecb347a3031d8e9d038b27129aa5ce2b2e49dd11bb8a2bb4f461cdbf"} Jan 21 11:23:13 crc kubenswrapper[4881]: I0121 11:23:13.865685 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-c849cf559-fjllv" Jan 21 11:23:13 crc kubenswrapper[4881]: I0121 11:23:13.865715 4881 scope.go:117] "RemoveContainer" containerID="520ec1cfcb7fa94d0057499475a0936b202225668f29de849ba69f710c127ead" Jan 21 11:23:13 crc kubenswrapper[4881]: I0121 11:23:13.868291 4881 generic.go:334] "Generic (PLEG): container finished" podID="3c6ca904-2790-425f-81ac-37cdc543cf0f" containerID="04b14eafe282879a10a549256a83522f141403e701c9d0a5d0f5ea8746de26b5" exitCode=0 Jan 21 11:23:13 crc kubenswrapper[4881]: I0121 11:23:13.868583 4881 generic.go:334] "Generic (PLEG): container finished" podID="3c6ca904-2790-425f-81ac-37cdc543cf0f" containerID="cb3c8eb696c2d6f70dd5b7efed28b2b6d15d294b8d97901355bfdcf5ce7eaa3e" exitCode=143 Jan 21 11:23:13 crc kubenswrapper[4881]: I0121 11:23:13.868335 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3c6ca904-2790-425f-81ac-37cdc543cf0f","Type":"ContainerDied","Data":"04b14eafe282879a10a549256a83522f141403e701c9d0a5d0f5ea8746de26b5"} Jan 21 11:23:13 crc kubenswrapper[4881]: I0121 11:23:13.868632 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3c6ca904-2790-425f-81ac-37cdc543cf0f","Type":"ContainerDied","Data":"cb3c8eb696c2d6f70dd5b7efed28b2b6d15d294b8d97901355bfdcf5ce7eaa3e"} Jan 21 11:23:13 crc kubenswrapper[4881]: I0121 11:23:13.921431 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 21 11:23:13 crc kubenswrapper[4881]: E0121 11:23:13.922007 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="813d73da-18da-40fa-b949-bbeec6604ac9" containerName="nova-cell1-conductor-db-sync" Jan 21 11:23:13 crc kubenswrapper[4881]: I0121 11:23:13.922030 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="813d73da-18da-40fa-b949-bbeec6604ac9" containerName="nova-cell1-conductor-db-sync" Jan 21 11:23:13 crc kubenswrapper[4881]: E0121 11:23:13.922058 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a89a9d0-4859-41cb-896d-f1a91e854d7b" containerName="dnsmasq-dns" Jan 21 11:23:13 crc kubenswrapper[4881]: I0121 11:23:13.922065 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a89a9d0-4859-41cb-896d-f1a91e854d7b" containerName="dnsmasq-dns" Jan 21 11:23:13 crc kubenswrapper[4881]: E0121 11:23:13.922080 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a89a9d0-4859-41cb-896d-f1a91e854d7b" containerName="init" Jan 21 11:23:13 crc kubenswrapper[4881]: I0121 11:23:13.922086 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a89a9d0-4859-41cb-896d-f1a91e854d7b" containerName="init" Jan 21 11:23:13 crc kubenswrapper[4881]: E0121 11:23:13.922096 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ba4ee35-5e5b-4f3c-ab64-e1dbd6b494ad" containerName="nova-manage" Jan 21 11:23:13 crc kubenswrapper[4881]: I0121 11:23:13.922102 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ba4ee35-5e5b-4f3c-ab64-e1dbd6b494ad" containerName="nova-manage" Jan 21 11:23:13 crc kubenswrapper[4881]: I0121 11:23:13.922361 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="4a89a9d0-4859-41cb-896d-f1a91e854d7b" containerName="dnsmasq-dns" Jan 21 11:23:13 crc kubenswrapper[4881]: I0121 11:23:13.922387 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="9ba4ee35-5e5b-4f3c-ab64-e1dbd6b494ad" containerName="nova-manage" Jan 21 11:23:13 crc kubenswrapper[4881]: I0121 11:23:13.922404 
4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="813d73da-18da-40fa-b949-bbeec6604ac9" containerName="nova-cell1-conductor-db-sync" Jan 21 11:23:13 crc kubenswrapper[4881]: I0121 11:23:13.923261 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 21 11:23:13 crc kubenswrapper[4881]: I0121 11:23:13.935304 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 21 11:23:13 crc kubenswrapper[4881]: I0121 11:23:13.939837 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-c849cf559-fjllv"] Jan 21 11:23:13 crc kubenswrapper[4881]: I0121 11:23:13.949853 4881 scope.go:117] "RemoveContainer" containerID="e80fa73fd255dd2a9302a2ee6b75f7b4cf8767d543328dc915247c69166c0c25" Jan 21 11:23:13 crc kubenswrapper[4881]: I0121 11:23:13.951066 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-c849cf559-fjllv"] Jan 21 11:23:13 crc kubenswrapper[4881]: I0121 11:23:13.982428 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 21 11:23:14 crc kubenswrapper[4881]: I0121 11:23:14.035102 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/161c46d2-7b98-4a9e-a648-ce25b966f589-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"161c46d2-7b98-4a9e-a648-ce25b966f589\") " pod="openstack/nova-cell1-conductor-0" Jan 21 11:23:14 crc kubenswrapper[4881]: I0121 11:23:14.035163 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q4l8z\" (UniqueName: \"kubernetes.io/projected/161c46d2-7b98-4a9e-a648-ce25b966f589-kube-api-access-q4l8z\") pod \"nova-cell1-conductor-0\" (UID: \"161c46d2-7b98-4a9e-a648-ce25b966f589\") " pod="openstack/nova-cell1-conductor-0" Jan 21 11:23:14 crc kubenswrapper[4881]: I0121 11:23:14.036076 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/161c46d2-7b98-4a9e-a648-ce25b966f589-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"161c46d2-7b98-4a9e-a648-ce25b966f589\") " pod="openstack/nova-cell1-conductor-0" Jan 21 11:23:14 crc kubenswrapper[4881]: I0121 11:23:14.138072 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/161c46d2-7b98-4a9e-a648-ce25b966f589-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"161c46d2-7b98-4a9e-a648-ce25b966f589\") " pod="openstack/nova-cell1-conductor-0" Jan 21 11:23:14 crc kubenswrapper[4881]: I0121 11:23:14.138159 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/161c46d2-7b98-4a9e-a648-ce25b966f589-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"161c46d2-7b98-4a9e-a648-ce25b966f589\") " pod="openstack/nova-cell1-conductor-0" Jan 21 11:23:14 crc kubenswrapper[4881]: I0121 11:23:14.138183 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q4l8z\" (UniqueName: \"kubernetes.io/projected/161c46d2-7b98-4a9e-a648-ce25b966f589-kube-api-access-q4l8z\") pod \"nova-cell1-conductor-0\" (UID: \"161c46d2-7b98-4a9e-a648-ce25b966f589\") " pod="openstack/nova-cell1-conductor-0" Jan 21 11:23:14 crc kubenswrapper[4881]: 
I0121 11:23:14.143770 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/161c46d2-7b98-4a9e-a648-ce25b966f589-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"161c46d2-7b98-4a9e-a648-ce25b966f589\") " pod="openstack/nova-cell1-conductor-0" Jan 21 11:23:14 crc kubenswrapper[4881]: I0121 11:23:14.148649 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/161c46d2-7b98-4a9e-a648-ce25b966f589-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"161c46d2-7b98-4a9e-a648-ce25b966f589\") " pod="openstack/nova-cell1-conductor-0" Jan 21 11:23:14 crc kubenswrapper[4881]: I0121 11:23:14.165757 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q4l8z\" (UniqueName: \"kubernetes.io/projected/161c46d2-7b98-4a9e-a648-ce25b966f589-kube-api-access-q4l8z\") pod \"nova-cell1-conductor-0\" (UID: \"161c46d2-7b98-4a9e-a648-ce25b966f589\") " pod="openstack/nova-cell1-conductor-0" Jan 21 11:23:14 crc kubenswrapper[4881]: I0121 11:23:14.281255 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 21 11:23:14 crc kubenswrapper[4881]: E0121 11:23:14.471390 4881 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 574febd8df92e0f37adc7968b35f9fcf1e5f52e202a4769da6f91161f9a9f02c is running failed: container process not found" containerID="574febd8df92e0f37adc7968b35f9fcf1e5f52e202a4769da6f91161f9a9f02c" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 21 11:23:14 crc kubenswrapper[4881]: E0121 11:23:14.472043 4881 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 574febd8df92e0f37adc7968b35f9fcf1e5f52e202a4769da6f91161f9a9f02c is running failed: container process not found" containerID="574febd8df92e0f37adc7968b35f9fcf1e5f52e202a4769da6f91161f9a9f02c" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 21 11:23:14 crc kubenswrapper[4881]: E0121 11:23:14.473490 4881 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 574febd8df92e0f37adc7968b35f9fcf1e5f52e202a4769da6f91161f9a9f02c is running failed: container process not found" containerID="574febd8df92e0f37adc7968b35f9fcf1e5f52e202a4769da6f91161f9a9f02c" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 21 11:23:14 crc kubenswrapper[4881]: E0121 11:23:14.473528 4881 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 574febd8df92e0f37adc7968b35f9fcf1e5f52e202a4769da6f91161f9a9f02c is running failed: container process not found" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="3345073b-8907-4de9-829f-73d8e79a01bb" containerName="nova-scheduler-scheduler" Jan 21 11:23:14 crc kubenswrapper[4881]: I0121 11:23:14.486098 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 21 11:23:14 crc kubenswrapper[4881]: I0121 11:23:14.486320 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="c5b6c25e-e882-4ea4-a284-6f55bfe75093" containerName="kube-state-metrics" 
containerID="cri-o://af06053084a285bc01330cffd9858a387580ee179dad2789e77044a776e5acf8" gracePeriod=30 Jan 21 11:23:14 crc kubenswrapper[4881]: I0121 11:23:14.858531 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 21 11:23:14 crc kubenswrapper[4881]: I0121 11:23:14.882976 4881 generic.go:334] "Generic (PLEG): container finished" podID="c5b6c25e-e882-4ea4-a284-6f55bfe75093" containerID="af06053084a285bc01330cffd9858a387580ee179dad2789e77044a776e5acf8" exitCode=2 Jan 21 11:23:14 crc kubenswrapper[4881]: I0121 11:23:14.883077 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"c5b6c25e-e882-4ea4-a284-6f55bfe75093","Type":"ContainerDied","Data":"af06053084a285bc01330cffd9858a387580ee179dad2789e77044a776e5acf8"} Jan 21 11:23:14 crc kubenswrapper[4881]: I0121 11:23:14.887711 4881 generic.go:334] "Generic (PLEG): container finished" podID="3345073b-8907-4de9-829f-73d8e79a01bb" containerID="574febd8df92e0f37adc7968b35f9fcf1e5f52e202a4769da6f91161f9a9f02c" exitCode=0 Jan 21 11:23:14 crc kubenswrapper[4881]: I0121 11:23:14.887773 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"3345073b-8907-4de9-829f-73d8e79a01bb","Type":"ContainerDied","Data":"574febd8df92e0f37adc7968b35f9fcf1e5f52e202a4769da6f91161f9a9f02c"} Jan 21 11:23:14 crc kubenswrapper[4881]: I0121 11:23:14.947837 4881 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/kube-state-metrics-0" podUID="c5b6c25e-e882-4ea4-a284-6f55bfe75093" containerName="kube-state-metrics" probeResult="failure" output="Get \"http://10.217.0.112:8081/readyz\": dial tcp 10.217.0.112:8081: connect: connection refused" Jan 21 11:23:15 crc kubenswrapper[4881]: I0121 11:23:15.342836 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4a89a9d0-4859-41cb-896d-f1a91e854d7b" path="/var/lib/kubelet/pods/4a89a9d0-4859-41cb-896d-f1a91e854d7b/volumes" Jan 21 11:23:15 crc kubenswrapper[4881]: I0121 11:23:15.401079 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 21 11:23:15 crc kubenswrapper[4881]: I0121 11:23:15.411526 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 21 11:23:15 crc kubenswrapper[4881]: I0121 11:23:15.472452 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c6ca904-2790-425f-81ac-37cdc543cf0f-config-data\") pod \"3c6ca904-2790-425f-81ac-37cdc543cf0f\" (UID: \"3c6ca904-2790-425f-81ac-37cdc543cf0f\") " Jan 21 11:23:15 crc kubenswrapper[4881]: I0121 11:23:15.472897 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c6ca904-2790-425f-81ac-37cdc543cf0f-combined-ca-bundle\") pod \"3c6ca904-2790-425f-81ac-37cdc543cf0f\" (UID: \"3c6ca904-2790-425f-81ac-37cdc543cf0f\") " Jan 21 11:23:15 crc kubenswrapper[4881]: I0121 11:23:15.472951 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dtx52\" (UniqueName: \"kubernetes.io/projected/3c6ca904-2790-425f-81ac-37cdc543cf0f-kube-api-access-dtx52\") pod \"3c6ca904-2790-425f-81ac-37cdc543cf0f\" (UID: \"3c6ca904-2790-425f-81ac-37cdc543cf0f\") " Jan 21 11:23:15 crc kubenswrapper[4881]: I0121 11:23:15.473107 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3c6ca904-2790-425f-81ac-37cdc543cf0f-logs\") pod \"3c6ca904-2790-425f-81ac-37cdc543cf0f\" (UID: \"3c6ca904-2790-425f-81ac-37cdc543cf0f\") " Jan 21 11:23:15 crc kubenswrapper[4881]: I0121 11:23:15.473126 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/3c6ca904-2790-425f-81ac-37cdc543cf0f-nova-metadata-tls-certs\") pod \"3c6ca904-2790-425f-81ac-37cdc543cf0f\" (UID: \"3c6ca904-2790-425f-81ac-37cdc543cf0f\") " Jan 21 11:23:15 crc kubenswrapper[4881]: I0121 11:23:15.478022 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3c6ca904-2790-425f-81ac-37cdc543cf0f-logs" (OuterVolumeSpecName: "logs") pod "3c6ca904-2790-425f-81ac-37cdc543cf0f" (UID: "3c6ca904-2790-425f-81ac-37cdc543cf0f"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 11:23:15 crc kubenswrapper[4881]: I0121 11:23:15.494286 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3c6ca904-2790-425f-81ac-37cdc543cf0f-kube-api-access-dtx52" (OuterVolumeSpecName: "kube-api-access-dtx52") pod "3c6ca904-2790-425f-81ac-37cdc543cf0f" (UID: "3c6ca904-2790-425f-81ac-37cdc543cf0f"). InnerVolumeSpecName "kube-api-access-dtx52". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:23:15 crc kubenswrapper[4881]: I0121 11:23:15.528887 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c6ca904-2790-425f-81ac-37cdc543cf0f-config-data" (OuterVolumeSpecName: "config-data") pod "3c6ca904-2790-425f-81ac-37cdc543cf0f" (UID: "3c6ca904-2790-425f-81ac-37cdc543cf0f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:23:15 crc kubenswrapper[4881]: I0121 11:23:15.536360 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 21 11:23:15 crc kubenswrapper[4881]: I0121 11:23:15.539503 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c6ca904-2790-425f-81ac-37cdc543cf0f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3c6ca904-2790-425f-81ac-37cdc543cf0f" (UID: "3c6ca904-2790-425f-81ac-37cdc543cf0f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:23:15 crc kubenswrapper[4881]: I0121 11:23:15.575017 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3345073b-8907-4de9-829f-73d8e79a01bb-config-data\") pod \"3345073b-8907-4de9-829f-73d8e79a01bb\" (UID: \"3345073b-8907-4de9-829f-73d8e79a01bb\") " Jan 21 11:23:15 crc kubenswrapper[4881]: I0121 11:23:15.575479 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3345073b-8907-4de9-829f-73d8e79a01bb-combined-ca-bundle\") pod \"3345073b-8907-4de9-829f-73d8e79a01bb\" (UID: \"3345073b-8907-4de9-829f-73d8e79a01bb\") " Jan 21 11:23:15 crc kubenswrapper[4881]: I0121 11:23:15.575622 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5vb4w\" (UniqueName: \"kubernetes.io/projected/3345073b-8907-4de9-829f-73d8e79a01bb-kube-api-access-5vb4w\") pod \"3345073b-8907-4de9-829f-73d8e79a01bb\" (UID: \"3345073b-8907-4de9-829f-73d8e79a01bb\") " Jan 21 11:23:15 crc kubenswrapper[4881]: I0121 11:23:15.576295 4881 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3c6ca904-2790-425f-81ac-37cdc543cf0f-logs\") on node \"crc\" DevicePath \"\"" Jan 21 11:23:15 crc kubenswrapper[4881]: I0121 11:23:15.576412 4881 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c6ca904-2790-425f-81ac-37cdc543cf0f-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 11:23:15 crc kubenswrapper[4881]: I0121 11:23:15.576511 4881 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c6ca904-2790-425f-81ac-37cdc543cf0f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 11:23:15 crc kubenswrapper[4881]: I0121 11:23:15.576619 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dtx52\" (UniqueName: \"kubernetes.io/projected/3c6ca904-2790-425f-81ac-37cdc543cf0f-kube-api-access-dtx52\") on node \"crc\" DevicePath \"\"" Jan 21 11:23:15 crc kubenswrapper[4881]: I0121 11:23:15.585053 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3345073b-8907-4de9-829f-73d8e79a01bb-kube-api-access-5vb4w" (OuterVolumeSpecName: "kube-api-access-5vb4w") pod "3345073b-8907-4de9-829f-73d8e79a01bb" (UID: "3345073b-8907-4de9-829f-73d8e79a01bb"). InnerVolumeSpecName "kube-api-access-5vb4w". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:23:15 crc kubenswrapper[4881]: I0121 11:23:15.593132 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c6ca904-2790-425f-81ac-37cdc543cf0f-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "3c6ca904-2790-425f-81ac-37cdc543cf0f" (UID: "3c6ca904-2790-425f-81ac-37cdc543cf0f"). InnerVolumeSpecName "nova-metadata-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:23:15 crc kubenswrapper[4881]: I0121 11:23:15.602753 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3345073b-8907-4de9-829f-73d8e79a01bb-config-data" (OuterVolumeSpecName: "config-data") pod "3345073b-8907-4de9-829f-73d8e79a01bb" (UID: "3345073b-8907-4de9-829f-73d8e79a01bb"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:23:15 crc kubenswrapper[4881]: I0121 11:23:15.644673 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3345073b-8907-4de9-829f-73d8e79a01bb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3345073b-8907-4de9-829f-73d8e79a01bb" (UID: "3345073b-8907-4de9-829f-73d8e79a01bb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:23:15 crc kubenswrapper[4881]: I0121 11:23:15.678098 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-25992\" (UniqueName: \"kubernetes.io/projected/c5b6c25e-e882-4ea4-a284-6f55bfe75093-kube-api-access-25992\") pod \"c5b6c25e-e882-4ea4-a284-6f55bfe75093\" (UID: \"c5b6c25e-e882-4ea4-a284-6f55bfe75093\") " Jan 21 11:23:15 crc kubenswrapper[4881]: I0121 11:23:15.679220 4881 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3345073b-8907-4de9-829f-73d8e79a01bb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 11:23:15 crc kubenswrapper[4881]: I0121 11:23:15.679329 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5vb4w\" (UniqueName: \"kubernetes.io/projected/3345073b-8907-4de9-829f-73d8e79a01bb-kube-api-access-5vb4w\") on node \"crc\" DevicePath \"\"" Jan 21 11:23:15 crc kubenswrapper[4881]: I0121 11:23:15.679426 4881 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3345073b-8907-4de9-829f-73d8e79a01bb-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 11:23:15 crc kubenswrapper[4881]: I0121 11:23:15.679505 4881 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/3c6ca904-2790-425f-81ac-37cdc543cf0f-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 11:23:15 crc kubenswrapper[4881]: I0121 11:23:15.686278 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5b6c25e-e882-4ea4-a284-6f55bfe75093-kube-api-access-25992" (OuterVolumeSpecName: "kube-api-access-25992") pod "c5b6c25e-e882-4ea4-a284-6f55bfe75093" (UID: "c5b6c25e-e882-4ea4-a284-6f55bfe75093"). InnerVolumeSpecName "kube-api-access-25992". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:23:15 crc kubenswrapper[4881]: I0121 11:23:15.781700 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-25992\" (UniqueName: \"kubernetes.io/projected/c5b6c25e-e882-4ea4-a284-6f55bfe75093-kube-api-access-25992\") on node \"crc\" DevicePath \"\"" Jan 21 11:23:16 crc kubenswrapper[4881]: I0121 11:23:16.118258 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"3345073b-8907-4de9-829f-73d8e79a01bb","Type":"ContainerDied","Data":"b74119743bb7cd487418f8d001a744431b3d7a1804f43dd5e7dc76b033b63247"} Jan 21 11:23:16 crc kubenswrapper[4881]: I0121 11:23:16.118322 4881 scope.go:117] "RemoveContainer" containerID="574febd8df92e0f37adc7968b35f9fcf1e5f52e202a4769da6f91161f9a9f02c" Jan 21 11:23:16 crc kubenswrapper[4881]: I0121 11:23:16.118477 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 21 11:23:16 crc kubenswrapper[4881]: I0121 11:23:16.144051 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"161c46d2-7b98-4a9e-a648-ce25b966f589","Type":"ContainerStarted","Data":"03979ebbf81c9d21976f0e3ca57a5ac30c3d37cb4b88415ec35bd982a6541479"} Jan 21 11:23:16 crc kubenswrapper[4881]: I0121 11:23:16.144117 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"161c46d2-7b98-4a9e-a648-ce25b966f589","Type":"ContainerStarted","Data":"b758db0eba45c64b878abf4b0937e61b4ada35f40c8640d44e698e03acf155c4"} Jan 21 11:23:16 crc kubenswrapper[4881]: I0121 11:23:16.145452 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Jan 21 11:23:16 crc kubenswrapper[4881]: I0121 11:23:16.148932 4881 generic.go:334] "Generic (PLEG): container finished" podID="f39b23f8-2c7e-46d6-8e59-7980b1d2c27c" containerID="b1e94b3b719b1a2213452fd275be74fdb796e7c03d99fa5695466085e68a91fd" exitCode=0 Jan 21 11:23:16 crc kubenswrapper[4881]: I0121 11:23:16.149009 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f39b23f8-2c7e-46d6-8e59-7980b1d2c27c","Type":"ContainerDied","Data":"b1e94b3b719b1a2213452fd275be74fdb796e7c03d99fa5695466085e68a91fd"} Jan 21 11:23:16 crc kubenswrapper[4881]: I0121 11:23:16.150438 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3c6ca904-2790-425f-81ac-37cdc543cf0f","Type":"ContainerDied","Data":"2c61e4a0cf50faebb3da795860373a82c98ee972146a5292709cde146a4a9c15"} Jan 21 11:23:16 crc kubenswrapper[4881]: I0121 11:23:16.150514 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 21 11:23:16 crc kubenswrapper[4881]: I0121 11:23:16.151609 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"c5b6c25e-e882-4ea4-a284-6f55bfe75093","Type":"ContainerDied","Data":"a902e47db0ad78d4b1a0c530458a8cc5f24a6bbadf9cb6042572a73fad768c2d"} Jan 21 11:23:16 crc kubenswrapper[4881]: I0121 11:23:16.151673 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 21 11:23:16 crc kubenswrapper[4881]: I0121 11:23:16.187345 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=3.187320867 podStartE2EDuration="3.187320867s" podCreationTimestamp="2026-01-21 11:23:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 11:23:16.181275579 +0000 UTC m=+1583.441232068" watchObservedRunningTime="2026-01-21 11:23:16.187320867 +0000 UTC m=+1583.447277336" Jan 21 11:23:16 crc kubenswrapper[4881]: I0121 11:23:16.227862 4881 scope.go:117] "RemoveContainer" containerID="04b14eafe282879a10a549256a83522f141403e701c9d0a5d0f5ea8746de26b5" Jan 21 11:23:16 crc kubenswrapper[4881]: I0121 11:23:16.262515 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 21 11:23:16 crc kubenswrapper[4881]: I0121 11:23:16.297675 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 21 11:23:16 crc kubenswrapper[4881]: I0121 11:23:16.309086 4881 scope.go:117] "RemoveContainer" containerID="cb3c8eb696c2d6f70dd5b7efed28b2b6d15d294b8d97901355bfdcf5ce7eaa3e" Jan 21 11:23:16 crc kubenswrapper[4881]: I0121 11:23:16.317035 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 21 11:23:16 crc kubenswrapper[4881]: I0121 11:23:16.336918 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Jan 21 11:23:16 crc kubenswrapper[4881]: E0121 11:23:16.337688 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c6ca904-2790-425f-81ac-37cdc543cf0f" containerName="nova-metadata-metadata" Jan 21 11:23:16 crc kubenswrapper[4881]: I0121 11:23:16.337716 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c6ca904-2790-425f-81ac-37cdc543cf0f" containerName="nova-metadata-metadata" Jan 21 11:23:16 crc kubenswrapper[4881]: E0121 11:23:16.337765 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5b6c25e-e882-4ea4-a284-6f55bfe75093" containerName="kube-state-metrics" Jan 21 11:23:16 crc kubenswrapper[4881]: I0121 11:23:16.337773 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5b6c25e-e882-4ea4-a284-6f55bfe75093" containerName="kube-state-metrics" Jan 21 11:23:16 crc kubenswrapper[4881]: E0121 11:23:16.337815 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c6ca904-2790-425f-81ac-37cdc543cf0f" containerName="nova-metadata-log" Jan 21 11:23:16 crc kubenswrapper[4881]: I0121 11:23:16.337823 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c6ca904-2790-425f-81ac-37cdc543cf0f" containerName="nova-metadata-log" Jan 21 11:23:16 crc kubenswrapper[4881]: E0121 11:23:16.337837 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3345073b-8907-4de9-829f-73d8e79a01bb" containerName="nova-scheduler-scheduler" Jan 21 11:23:16 crc kubenswrapper[4881]: I0121 11:23:16.337844 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="3345073b-8907-4de9-829f-73d8e79a01bb" containerName="nova-scheduler-scheduler" Jan 21 11:23:16 crc kubenswrapper[4881]: I0121 11:23:16.338077 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="3c6ca904-2790-425f-81ac-37cdc543cf0f" containerName="nova-metadata-log" Jan 21 11:23:16 crc kubenswrapper[4881]: I0121 11:23:16.338115 4881 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="3345073b-8907-4de9-829f-73d8e79a01bb" containerName="nova-scheduler-scheduler" Jan 21 11:23:16 crc kubenswrapper[4881]: I0121 11:23:16.338134 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="c5b6c25e-e882-4ea4-a284-6f55bfe75093" containerName="kube-state-metrics" Jan 21 11:23:16 crc kubenswrapper[4881]: I0121 11:23:16.338145 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="3c6ca904-2790-425f-81ac-37cdc543cf0f" containerName="nova-metadata-metadata" Jan 21 11:23:16 crc kubenswrapper[4881]: I0121 11:23:16.339362 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 21 11:23:16 crc kubenswrapper[4881]: I0121 11:23:16.340379 4881 scope.go:117] "RemoveContainer" containerID="af06053084a285bc01330cffd9858a387580ee179dad2789e77044a776e5acf8" Jan 21 11:23:16 crc kubenswrapper[4881]: I0121 11:23:16.350766 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 21 11:23:16 crc kubenswrapper[4881]: I0121 11:23:16.351063 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Jan 21 11:23:16 crc kubenswrapper[4881]: I0121 11:23:16.351925 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Jan 21 11:23:16 crc kubenswrapper[4881]: I0121 11:23:16.370335 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 21 11:23:16 crc kubenswrapper[4881]: I0121 11:23:16.381890 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 21 11:23:16 crc kubenswrapper[4881]: I0121 11:23:16.394017 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 21 11:23:16 crc kubenswrapper[4881]: I0121 11:23:16.403241 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/0e33ff3f-b508-4ac4-9a60-6189a65be2a6-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"0e33ff3f-b508-4ac4-9a60-6189a65be2a6\") " pod="openstack/kube-state-metrics-0" Jan 21 11:23:16 crc kubenswrapper[4881]: I0121 11:23:16.403284 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pf49q\" (UniqueName: \"kubernetes.io/projected/0e33ff3f-b508-4ac4-9a60-6189a65be2a6-kube-api-access-pf49q\") pod \"kube-state-metrics-0\" (UID: \"0e33ff3f-b508-4ac4-9a60-6189a65be2a6\") " pod="openstack/kube-state-metrics-0" Jan 21 11:23:16 crc kubenswrapper[4881]: I0121 11:23:16.403392 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e33ff3f-b508-4ac4-9a60-6189a65be2a6-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"0e33ff3f-b508-4ac4-9a60-6189a65be2a6\") " pod="openstack/kube-state-metrics-0" Jan 21 11:23:16 crc kubenswrapper[4881]: I0121 11:23:16.403481 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/0e33ff3f-b508-4ac4-9a60-6189a65be2a6-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"0e33ff3f-b508-4ac4-9a60-6189a65be2a6\") " pod="openstack/kube-state-metrics-0" Jan 21 11:23:16 crc kubenswrapper[4881]: I0121 11:23:16.406936 4881 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 21 11:23:16 crc kubenswrapper[4881]: I0121 11:23:16.408676 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 21 11:23:16 crc kubenswrapper[4881]: I0121 11:23:16.413190 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 21 11:23:16 crc kubenswrapper[4881]: I0121 11:23:16.417217 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 21 11:23:16 crc kubenswrapper[4881]: I0121 11:23:16.422481 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 21 11:23:16 crc kubenswrapper[4881]: I0121 11:23:16.424090 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 21 11:23:16 crc kubenswrapper[4881]: I0121 11:23:16.426089 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 21 11:23:16 crc kubenswrapper[4881]: I0121 11:23:16.437889 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 21 11:23:16 crc kubenswrapper[4881]: I0121 11:23:16.463768 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 21 11:23:16 crc kubenswrapper[4881]: I0121 11:23:16.505016 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zpprt\" (UniqueName: \"kubernetes.io/projected/0f1fb00c-903a-48c9-95e5-8ad34c731f41-kube-api-access-zpprt\") pod \"nova-scheduler-0\" (UID: \"0f1fb00c-903a-48c9-95e5-8ad34c731f41\") " pod="openstack/nova-scheduler-0" Jan 21 11:23:16 crc kubenswrapper[4881]: I0121 11:23:16.505979 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/0e33ff3f-b508-4ac4-9a60-6189a65be2a6-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"0e33ff3f-b508-4ac4-9a60-6189a65be2a6\") " pod="openstack/kube-state-metrics-0" Jan 21 11:23:16 crc kubenswrapper[4881]: I0121 11:23:16.506056 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xmm59\" (UniqueName: \"kubernetes.io/projected/7e3b0813-d7bc-4e2e-aa18-fe1e00c75f52-kube-api-access-xmm59\") pod \"nova-metadata-0\" (UID: \"7e3b0813-d7bc-4e2e-aa18-fe1e00c75f52\") " pod="openstack/nova-metadata-0" Jan 21 11:23:16 crc kubenswrapper[4881]: I0121 11:23:16.506088 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f1fb00c-903a-48c9-95e5-8ad34c731f41-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"0f1fb00c-903a-48c9-95e5-8ad34c731f41\") " pod="openstack/nova-scheduler-0" Jan 21 11:23:16 crc kubenswrapper[4881]: I0121 11:23:16.506191 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/0e33ff3f-b508-4ac4-9a60-6189a65be2a6-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"0e33ff3f-b508-4ac4-9a60-6189a65be2a6\") " pod="openstack/kube-state-metrics-0" Jan 21 11:23:16 crc kubenswrapper[4881]: I0121 11:23:16.506220 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pf49q\" (UniqueName: 
\"kubernetes.io/projected/0e33ff3f-b508-4ac4-9a60-6189a65be2a6-kube-api-access-pf49q\") pod \"kube-state-metrics-0\" (UID: \"0e33ff3f-b508-4ac4-9a60-6189a65be2a6\") " pod="openstack/kube-state-metrics-0" Jan 21 11:23:16 crc kubenswrapper[4881]: I0121 11:23:16.506273 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f1fb00c-903a-48c9-95e5-8ad34c731f41-config-data\") pod \"nova-scheduler-0\" (UID: \"0f1fb00c-903a-48c9-95e5-8ad34c731f41\") " pod="openstack/nova-scheduler-0" Jan 21 11:23:16 crc kubenswrapper[4881]: I0121 11:23:16.506336 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e3b0813-d7bc-4e2e-aa18-fe1e00c75f52-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"7e3b0813-d7bc-4e2e-aa18-fe1e00c75f52\") " pod="openstack/nova-metadata-0" Jan 21 11:23:16 crc kubenswrapper[4881]: I0121 11:23:16.506401 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7e3b0813-d7bc-4e2e-aa18-fe1e00c75f52-logs\") pod \"nova-metadata-0\" (UID: \"7e3b0813-d7bc-4e2e-aa18-fe1e00c75f52\") " pod="openstack/nova-metadata-0" Jan 21 11:23:16 crc kubenswrapper[4881]: I0121 11:23:16.506433 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e3b0813-d7bc-4e2e-aa18-fe1e00c75f52-config-data\") pod \"nova-metadata-0\" (UID: \"7e3b0813-d7bc-4e2e-aa18-fe1e00c75f52\") " pod="openstack/nova-metadata-0" Jan 21 11:23:16 crc kubenswrapper[4881]: I0121 11:23:16.506514 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e33ff3f-b508-4ac4-9a60-6189a65be2a6-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"0e33ff3f-b508-4ac4-9a60-6189a65be2a6\") " pod="openstack/kube-state-metrics-0" Jan 21 11:23:16 crc kubenswrapper[4881]: I0121 11:23:16.506580 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/7e3b0813-d7bc-4e2e-aa18-fe1e00c75f52-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"7e3b0813-d7bc-4e2e-aa18-fe1e00c75f52\") " pod="openstack/nova-metadata-0" Jan 21 11:23:16 crc kubenswrapper[4881]: I0121 11:23:16.543752 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/0e33ff3f-b508-4ac4-9a60-6189a65be2a6-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"0e33ff3f-b508-4ac4-9a60-6189a65be2a6\") " pod="openstack/kube-state-metrics-0" Jan 21 11:23:16 crc kubenswrapper[4881]: I0121 11:23:16.544051 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/0e33ff3f-b508-4ac4-9a60-6189a65be2a6-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"0e33ff3f-b508-4ac4-9a60-6189a65be2a6\") " pod="openstack/kube-state-metrics-0" Jan 21 11:23:16 crc kubenswrapper[4881]: I0121 11:23:16.544417 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e33ff3f-b508-4ac4-9a60-6189a65be2a6-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: 
\"0e33ff3f-b508-4ac4-9a60-6189a65be2a6\") " pod="openstack/kube-state-metrics-0" Jan 21 11:23:16 crc kubenswrapper[4881]: I0121 11:23:16.566626 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pf49q\" (UniqueName: \"kubernetes.io/projected/0e33ff3f-b508-4ac4-9a60-6189a65be2a6-kube-api-access-pf49q\") pod \"kube-state-metrics-0\" (UID: \"0e33ff3f-b508-4ac4-9a60-6189a65be2a6\") " pod="openstack/kube-state-metrics-0" Jan 21 11:23:16 crc kubenswrapper[4881]: I0121 11:23:16.610932 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zpprt\" (UniqueName: \"kubernetes.io/projected/0f1fb00c-903a-48c9-95e5-8ad34c731f41-kube-api-access-zpprt\") pod \"nova-scheduler-0\" (UID: \"0f1fb00c-903a-48c9-95e5-8ad34c731f41\") " pod="openstack/nova-scheduler-0" Jan 21 11:23:16 crc kubenswrapper[4881]: I0121 11:23:16.611061 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xmm59\" (UniqueName: \"kubernetes.io/projected/7e3b0813-d7bc-4e2e-aa18-fe1e00c75f52-kube-api-access-xmm59\") pod \"nova-metadata-0\" (UID: \"7e3b0813-d7bc-4e2e-aa18-fe1e00c75f52\") " pod="openstack/nova-metadata-0" Jan 21 11:23:16 crc kubenswrapper[4881]: I0121 11:23:16.611095 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f1fb00c-903a-48c9-95e5-8ad34c731f41-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"0f1fb00c-903a-48c9-95e5-8ad34c731f41\") " pod="openstack/nova-scheduler-0" Jan 21 11:23:16 crc kubenswrapper[4881]: I0121 11:23:16.611201 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f1fb00c-903a-48c9-95e5-8ad34c731f41-config-data\") pod \"nova-scheduler-0\" (UID: \"0f1fb00c-903a-48c9-95e5-8ad34c731f41\") " pod="openstack/nova-scheduler-0" Jan 21 11:23:16 crc kubenswrapper[4881]: I0121 11:23:16.611265 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e3b0813-d7bc-4e2e-aa18-fe1e00c75f52-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"7e3b0813-d7bc-4e2e-aa18-fe1e00c75f52\") " pod="openstack/nova-metadata-0" Jan 21 11:23:16 crc kubenswrapper[4881]: I0121 11:23:16.611331 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7e3b0813-d7bc-4e2e-aa18-fe1e00c75f52-logs\") pod \"nova-metadata-0\" (UID: \"7e3b0813-d7bc-4e2e-aa18-fe1e00c75f52\") " pod="openstack/nova-metadata-0" Jan 21 11:23:16 crc kubenswrapper[4881]: I0121 11:23:16.611368 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e3b0813-d7bc-4e2e-aa18-fe1e00c75f52-config-data\") pod \"nova-metadata-0\" (UID: \"7e3b0813-d7bc-4e2e-aa18-fe1e00c75f52\") " pod="openstack/nova-metadata-0" Jan 21 11:23:16 crc kubenswrapper[4881]: I0121 11:23:16.611456 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/7e3b0813-d7bc-4e2e-aa18-fe1e00c75f52-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"7e3b0813-d7bc-4e2e-aa18-fe1e00c75f52\") " pod="openstack/nova-metadata-0" Jan 21 11:23:16 crc kubenswrapper[4881]: I0121 11:23:16.619704 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/7e3b0813-d7bc-4e2e-aa18-fe1e00c75f52-logs\") pod \"nova-metadata-0\" (UID: \"7e3b0813-d7bc-4e2e-aa18-fe1e00c75f52\") " pod="openstack/nova-metadata-0" Jan 21 11:23:16 crc kubenswrapper[4881]: I0121 11:23:16.626479 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f1fb00c-903a-48c9-95e5-8ad34c731f41-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"0f1fb00c-903a-48c9-95e5-8ad34c731f41\") " pod="openstack/nova-scheduler-0" Jan 21 11:23:16 crc kubenswrapper[4881]: I0121 11:23:16.629594 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/7e3b0813-d7bc-4e2e-aa18-fe1e00c75f52-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"7e3b0813-d7bc-4e2e-aa18-fe1e00c75f52\") " pod="openstack/nova-metadata-0" Jan 21 11:23:16 crc kubenswrapper[4881]: I0121 11:23:16.640228 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e3b0813-d7bc-4e2e-aa18-fe1e00c75f52-config-data\") pod \"nova-metadata-0\" (UID: \"7e3b0813-d7bc-4e2e-aa18-fe1e00c75f52\") " pod="openstack/nova-metadata-0" Jan 21 11:23:16 crc kubenswrapper[4881]: I0121 11:23:16.646135 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xmm59\" (UniqueName: \"kubernetes.io/projected/7e3b0813-d7bc-4e2e-aa18-fe1e00c75f52-kube-api-access-xmm59\") pod \"nova-metadata-0\" (UID: \"7e3b0813-d7bc-4e2e-aa18-fe1e00c75f52\") " pod="openstack/nova-metadata-0" Jan 21 11:23:16 crc kubenswrapper[4881]: I0121 11:23:16.647655 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zpprt\" (UniqueName: \"kubernetes.io/projected/0f1fb00c-903a-48c9-95e5-8ad34c731f41-kube-api-access-zpprt\") pod \"nova-scheduler-0\" (UID: \"0f1fb00c-903a-48c9-95e5-8ad34c731f41\") " pod="openstack/nova-scheduler-0" Jan 21 11:23:16 crc kubenswrapper[4881]: I0121 11:23:16.654091 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f1fb00c-903a-48c9-95e5-8ad34c731f41-config-data\") pod \"nova-scheduler-0\" (UID: \"0f1fb00c-903a-48c9-95e5-8ad34c731f41\") " pod="openstack/nova-scheduler-0" Jan 21 11:23:16 crc kubenswrapper[4881]: I0121 11:23:16.665410 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e3b0813-d7bc-4e2e-aa18-fe1e00c75f52-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"7e3b0813-d7bc-4e2e-aa18-fe1e00c75f52\") " pod="openstack/nova-metadata-0" Jan 21 11:23:16 crc kubenswrapper[4881]: I0121 11:23:16.677570 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 21 11:23:16 crc kubenswrapper[4881]: I0121 11:23:16.742391 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 21 11:23:16 crc kubenswrapper[4881]: I0121 11:23:16.763660 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 21 11:23:17 crc kubenswrapper[4881]: I0121 11:23:17.329645 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3345073b-8907-4de9-829f-73d8e79a01bb" path="/var/lib/kubelet/pods/3345073b-8907-4de9-829f-73d8e79a01bb/volumes" Jan 21 11:23:17 crc kubenswrapper[4881]: I0121 11:23:17.330845 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3c6ca904-2790-425f-81ac-37cdc543cf0f" path="/var/lib/kubelet/pods/3c6ca904-2790-425f-81ac-37cdc543cf0f/volumes" Jan 21 11:23:17 crc kubenswrapper[4881]: I0121 11:23:17.331358 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c5b6c25e-e882-4ea4-a284-6f55bfe75093" path="/var/lib/kubelet/pods/c5b6c25e-e882-4ea4-a284-6f55bfe75093/volumes" Jan 21 11:23:17 crc kubenswrapper[4881]: I0121 11:23:17.684497 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 21 11:23:17 crc kubenswrapper[4881]: W0121 11:23:17.722116 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0e33ff3f_b508_4ac4_9a60_6189a65be2a6.slice/crio-77720f409630e323e17d0bdf3c7919468d28beac14e84407eb9a7547caf761d6 WatchSource:0}: Error finding container 77720f409630e323e17d0bdf3c7919468d28beac14e84407eb9a7547caf761d6: Status 404 returned error can't find the container with id 77720f409630e323e17d0bdf3c7919468d28beac14e84407eb9a7547caf761d6 Jan 21 11:23:17 crc kubenswrapper[4881]: I0121 11:23:17.732199 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 21 11:23:17 crc kubenswrapper[4881]: I0121 11:23:17.740818 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f39b23f8-2c7e-46d6-8e59-7980b1d2c27c-logs\") pod \"f39b23f8-2c7e-46d6-8e59-7980b1d2c27c\" (UID: \"f39b23f8-2c7e-46d6-8e59-7980b1d2c27c\") " Jan 21 11:23:17 crc kubenswrapper[4881]: I0121 11:23:17.740925 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f39b23f8-2c7e-46d6-8e59-7980b1d2c27c-combined-ca-bundle\") pod \"f39b23f8-2c7e-46d6-8e59-7980b1d2c27c\" (UID: \"f39b23f8-2c7e-46d6-8e59-7980b1d2c27c\") " Jan 21 11:23:17 crc kubenswrapper[4881]: I0121 11:23:17.741305 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f39b23f8-2c7e-46d6-8e59-7980b1d2c27c-config-data\") pod \"f39b23f8-2c7e-46d6-8e59-7980b1d2c27c\" (UID: \"f39b23f8-2c7e-46d6-8e59-7980b1d2c27c\") " Jan 21 11:23:17 crc kubenswrapper[4881]: I0121 11:23:17.741348 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2lqtx\" (UniqueName: \"kubernetes.io/projected/f39b23f8-2c7e-46d6-8e59-7980b1d2c27c-kube-api-access-2lqtx\") pod \"f39b23f8-2c7e-46d6-8e59-7980b1d2c27c\" (UID: \"f39b23f8-2c7e-46d6-8e59-7980b1d2c27c\") " Jan 21 11:23:17 crc kubenswrapper[4881]: I0121 11:23:17.742142 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f39b23f8-2c7e-46d6-8e59-7980b1d2c27c-logs" (OuterVolumeSpecName: "logs") pod "f39b23f8-2c7e-46d6-8e59-7980b1d2c27c" (UID: "f39b23f8-2c7e-46d6-8e59-7980b1d2c27c"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 11:23:17 crc kubenswrapper[4881]: I0121 11:23:17.756055 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f39b23f8-2c7e-46d6-8e59-7980b1d2c27c-kube-api-access-2lqtx" (OuterVolumeSpecName: "kube-api-access-2lqtx") pod "f39b23f8-2c7e-46d6-8e59-7980b1d2c27c" (UID: "f39b23f8-2c7e-46d6-8e59-7980b1d2c27c"). InnerVolumeSpecName "kube-api-access-2lqtx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:23:17 crc kubenswrapper[4881]: I0121 11:23:17.763276 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 21 11:23:17 crc kubenswrapper[4881]: I0121 11:23:17.797329 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f39b23f8-2c7e-46d6-8e59-7980b1d2c27c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f39b23f8-2c7e-46d6-8e59-7980b1d2c27c" (UID: "f39b23f8-2c7e-46d6-8e59-7980b1d2c27c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:23:17 crc kubenswrapper[4881]: I0121 11:23:17.825116 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f39b23f8-2c7e-46d6-8e59-7980b1d2c27c-config-data" (OuterVolumeSpecName: "config-data") pod "f39b23f8-2c7e-46d6-8e59-7980b1d2c27c" (UID: "f39b23f8-2c7e-46d6-8e59-7980b1d2c27c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:23:17 crc kubenswrapper[4881]: I0121 11:23:17.844442 4881 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f39b23f8-2c7e-46d6-8e59-7980b1d2c27c-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 11:23:17 crc kubenswrapper[4881]: I0121 11:23:17.844488 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2lqtx\" (UniqueName: \"kubernetes.io/projected/f39b23f8-2c7e-46d6-8e59-7980b1d2c27c-kube-api-access-2lqtx\") on node \"crc\" DevicePath \"\"" Jan 21 11:23:17 crc kubenswrapper[4881]: I0121 11:23:17.844503 4881 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f39b23f8-2c7e-46d6-8e59-7980b1d2c27c-logs\") on node \"crc\" DevicePath \"\"" Jan 21 11:23:17 crc kubenswrapper[4881]: I0121 11:23:17.844513 4881 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f39b23f8-2c7e-46d6-8e59-7980b1d2c27c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 11:23:17 crc kubenswrapper[4881]: I0121 11:23:17.917401 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 21 11:23:17 crc kubenswrapper[4881]: W0121 11:23:17.918968 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0f1fb00c_903a_48c9_95e5_8ad34c731f41.slice/crio-b3157e678fa44dfdf1c50a29c3af5b7c20661b982fcfdccdd420bdba43c8cf36 WatchSource:0}: Error finding container b3157e678fa44dfdf1c50a29c3af5b7c20661b982fcfdccdd420bdba43c8cf36: Status 404 returned error can't find the container with id b3157e678fa44dfdf1c50a29c3af5b7c20661b982fcfdccdd420bdba43c8cf36 Jan 21 11:23:17 crc kubenswrapper[4881]: I0121 11:23:17.999045 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 21 11:23:17 crc kubenswrapper[4881]: I0121 11:23:17.999563 4881 kuberuntime_container.go:808] "Killing 
container with a grace period" pod="openstack/ceilometer-0" podUID="20eeb602-9c98-48ed-a9c9-22121156e8cb" containerName="ceilometer-central-agent" containerID="cri-o://8256e63406ff9c5a7c526341a649b275e3f5ab402c57f45ac53e47b1d11393f9" gracePeriod=30 Jan 21 11:23:17 crc kubenswrapper[4881]: I0121 11:23:17.999643 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="20eeb602-9c98-48ed-a9c9-22121156e8cb" containerName="proxy-httpd" containerID="cri-o://ebf63005cec886f7073127e6f8a1b1d91309382b4d83ebbd9aca189eabae9b37" gracePeriod=30 Jan 21 11:23:17 crc kubenswrapper[4881]: I0121 11:23:17.999698 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="20eeb602-9c98-48ed-a9c9-22121156e8cb" containerName="ceilometer-notification-agent" containerID="cri-o://f833baf807f57255c45be1ba58cccaca032385ccba346e4fc3846694862bc6ee" gracePeriod=30 Jan 21 11:23:17 crc kubenswrapper[4881]: I0121 11:23:17.999699 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="20eeb602-9c98-48ed-a9c9-22121156e8cb" containerName="sg-core" containerID="cri-o://19d2c0708e63a625c9564d43bfbff6b4bf382eb29c4f5fe75600d774080fe1d6" gracePeriod=30 Jan 21 11:23:18 crc kubenswrapper[4881]: I0121 11:23:18.214901 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"0e33ff3f-b508-4ac4-9a60-6189a65be2a6","Type":"ContainerStarted","Data":"77720f409630e323e17d0bdf3c7919468d28beac14e84407eb9a7547caf761d6"} Jan 21 11:23:18 crc kubenswrapper[4881]: I0121 11:23:18.216017 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7e3b0813-d7bc-4e2e-aa18-fe1e00c75f52","Type":"ContainerStarted","Data":"94be8c422811e4e8ba1078eb2e0e3d71d40e6f5e6c07d283df8a7544b7b7a114"} Jan 21 11:23:18 crc kubenswrapper[4881]: I0121 11:23:18.218436 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f39b23f8-2c7e-46d6-8e59-7980b1d2c27c","Type":"ContainerDied","Data":"a5806f41aee852119e408747b6a9159dc66b4ea14896033d8861a45a5e319518"} Jan 21 11:23:18 crc kubenswrapper[4881]: I0121 11:23:18.218494 4881 scope.go:117] "RemoveContainer" containerID="b1e94b3b719b1a2213452fd275be74fdb796e7c03d99fa5695466085e68a91fd" Jan 21 11:23:18 crc kubenswrapper[4881]: I0121 11:23:18.218494 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 21 11:23:18 crc kubenswrapper[4881]: I0121 11:23:18.226165 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"0f1fb00c-903a-48c9-95e5-8ad34c731f41","Type":"ContainerStarted","Data":"b3157e678fa44dfdf1c50a29c3af5b7c20661b982fcfdccdd420bdba43c8cf36"} Jan 21 11:23:18 crc kubenswrapper[4881]: I0121 11:23:18.258279 4881 scope.go:117] "RemoveContainer" containerID="71bf37a912cb19763de6a839082bf72ecae64d550a077ed5461e0d2fa0d9be80" Jan 21 11:23:18 crc kubenswrapper[4881]: I0121 11:23:18.284965 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 21 11:23:18 crc kubenswrapper[4881]: I0121 11:23:18.294568 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 21 11:23:18 crc kubenswrapper[4881]: I0121 11:23:18.305552 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 21 11:23:18 crc kubenswrapper[4881]: E0121 11:23:18.306250 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f39b23f8-2c7e-46d6-8e59-7980b1d2c27c" containerName="nova-api-api" Jan 21 11:23:18 crc kubenswrapper[4881]: I0121 11:23:18.306273 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="f39b23f8-2c7e-46d6-8e59-7980b1d2c27c" containerName="nova-api-api" Jan 21 11:23:18 crc kubenswrapper[4881]: E0121 11:23:18.306286 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f39b23f8-2c7e-46d6-8e59-7980b1d2c27c" containerName="nova-api-log" Jan 21 11:23:18 crc kubenswrapper[4881]: I0121 11:23:18.306294 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="f39b23f8-2c7e-46d6-8e59-7980b1d2c27c" containerName="nova-api-log" Jan 21 11:23:18 crc kubenswrapper[4881]: I0121 11:23:18.306507 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="f39b23f8-2c7e-46d6-8e59-7980b1d2c27c" containerName="nova-api-api" Jan 21 11:23:18 crc kubenswrapper[4881]: I0121 11:23:18.306527 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="f39b23f8-2c7e-46d6-8e59-7980b1d2c27c" containerName="nova-api-log" Jan 21 11:23:18 crc kubenswrapper[4881]: I0121 11:23:18.307832 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 21 11:23:18 crc kubenswrapper[4881]: I0121 11:23:18.313358 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 21 11:23:18 crc kubenswrapper[4881]: I0121 11:23:18.315307 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 21 11:23:18 crc kubenswrapper[4881]: I0121 11:23:18.358536 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cb8d5e00-825f-4df2-9720-3de7be3e0837-logs\") pod \"nova-api-0\" (UID: \"cb8d5e00-825f-4df2-9720-3de7be3e0837\") " pod="openstack/nova-api-0" Jan 21 11:23:18 crc kubenswrapper[4881]: I0121 11:23:18.358610 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cb8d5e00-825f-4df2-9720-3de7be3e0837-config-data\") pod \"nova-api-0\" (UID: \"cb8d5e00-825f-4df2-9720-3de7be3e0837\") " pod="openstack/nova-api-0" Jan 21 11:23:18 crc kubenswrapper[4881]: I0121 11:23:18.358639 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7klwj\" (UniqueName: \"kubernetes.io/projected/cb8d5e00-825f-4df2-9720-3de7be3e0837-kube-api-access-7klwj\") pod \"nova-api-0\" (UID: \"cb8d5e00-825f-4df2-9720-3de7be3e0837\") " pod="openstack/nova-api-0" Jan 21 11:23:18 crc kubenswrapper[4881]: I0121 11:23:18.358775 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb8d5e00-825f-4df2-9720-3de7be3e0837-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"cb8d5e00-825f-4df2-9720-3de7be3e0837\") " pod="openstack/nova-api-0" Jan 21 11:23:18 crc kubenswrapper[4881]: I0121 11:23:18.460769 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7klwj\" (UniqueName: \"kubernetes.io/projected/cb8d5e00-825f-4df2-9720-3de7be3e0837-kube-api-access-7klwj\") pod \"nova-api-0\" (UID: \"cb8d5e00-825f-4df2-9720-3de7be3e0837\") " pod="openstack/nova-api-0" Jan 21 11:23:18 crc kubenswrapper[4881]: I0121 11:23:18.460931 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb8d5e00-825f-4df2-9720-3de7be3e0837-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"cb8d5e00-825f-4df2-9720-3de7be3e0837\") " pod="openstack/nova-api-0" Jan 21 11:23:18 crc kubenswrapper[4881]: I0121 11:23:18.461021 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cb8d5e00-825f-4df2-9720-3de7be3e0837-logs\") pod \"nova-api-0\" (UID: \"cb8d5e00-825f-4df2-9720-3de7be3e0837\") " pod="openstack/nova-api-0" Jan 21 11:23:18 crc kubenswrapper[4881]: I0121 11:23:18.461080 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cb8d5e00-825f-4df2-9720-3de7be3e0837-config-data\") pod \"nova-api-0\" (UID: \"cb8d5e00-825f-4df2-9720-3de7be3e0837\") " pod="openstack/nova-api-0" Jan 21 11:23:18 crc kubenswrapper[4881]: I0121 11:23:18.462276 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cb8d5e00-825f-4df2-9720-3de7be3e0837-logs\") pod \"nova-api-0\" (UID: \"cb8d5e00-825f-4df2-9720-3de7be3e0837\") " 
pod="openstack/nova-api-0" Jan 21 11:23:18 crc kubenswrapper[4881]: I0121 11:23:18.471697 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cb8d5e00-825f-4df2-9720-3de7be3e0837-config-data\") pod \"nova-api-0\" (UID: \"cb8d5e00-825f-4df2-9720-3de7be3e0837\") " pod="openstack/nova-api-0" Jan 21 11:23:18 crc kubenswrapper[4881]: I0121 11:23:18.471973 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb8d5e00-825f-4df2-9720-3de7be3e0837-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"cb8d5e00-825f-4df2-9720-3de7be3e0837\") " pod="openstack/nova-api-0" Jan 21 11:23:18 crc kubenswrapper[4881]: I0121 11:23:18.491475 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7klwj\" (UniqueName: \"kubernetes.io/projected/cb8d5e00-825f-4df2-9720-3de7be3e0837-kube-api-access-7klwj\") pod \"nova-api-0\" (UID: \"cb8d5e00-825f-4df2-9720-3de7be3e0837\") " pod="openstack/nova-api-0" Jan 21 11:23:18 crc kubenswrapper[4881]: I0121 11:23:18.636561 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 21 11:23:19 crc kubenswrapper[4881]: I0121 11:23:19.110515 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 21 11:23:19 crc kubenswrapper[4881]: I0121 11:23:19.648063 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f39b23f8-2c7e-46d6-8e59-7980b1d2c27c" path="/var/lib/kubelet/pods/f39b23f8-2c7e-46d6-8e59-7980b1d2c27c/volumes" Jan 21 11:23:19 crc kubenswrapper[4881]: I0121 11:23:19.654571 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"cb8d5e00-825f-4df2-9720-3de7be3e0837","Type":"ContainerStarted","Data":"9b384c1c04b091d7070db9b5be692cbf3307b83743e8c28c7fc7e9002650814f"} Jan 21 11:23:20 crc kubenswrapper[4881]: I0121 11:23:20.678959 4881 generic.go:334] "Generic (PLEG): container finished" podID="20eeb602-9c98-48ed-a9c9-22121156e8cb" containerID="ebf63005cec886f7073127e6f8a1b1d91309382b4d83ebbd9aca189eabae9b37" exitCode=0 Jan 21 11:23:20 crc kubenswrapper[4881]: I0121 11:23:20.679523 4881 generic.go:334] "Generic (PLEG): container finished" podID="20eeb602-9c98-48ed-a9c9-22121156e8cb" containerID="19d2c0708e63a625c9564d43bfbff6b4bf382eb29c4f5fe75600d774080fe1d6" exitCode=2 Jan 21 11:23:20 crc kubenswrapper[4881]: I0121 11:23:20.679533 4881 generic.go:334] "Generic (PLEG): container finished" podID="20eeb602-9c98-48ed-a9c9-22121156e8cb" containerID="8256e63406ff9c5a7c526341a649b275e3f5ab402c57f45ac53e47b1d11393f9" exitCode=0 Jan 21 11:23:20 crc kubenswrapper[4881]: I0121 11:23:20.679150 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"20eeb602-9c98-48ed-a9c9-22121156e8cb","Type":"ContainerDied","Data":"ebf63005cec886f7073127e6f8a1b1d91309382b4d83ebbd9aca189eabae9b37"} Jan 21 11:23:20 crc kubenswrapper[4881]: I0121 11:23:20.679598 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"20eeb602-9c98-48ed-a9c9-22121156e8cb","Type":"ContainerDied","Data":"19d2c0708e63a625c9564d43bfbff6b4bf382eb29c4f5fe75600d774080fe1d6"} Jan 21 11:23:20 crc kubenswrapper[4881]: I0121 11:23:20.679612 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"20eeb602-9c98-48ed-a9c9-22121156e8cb","Type":"ContainerDied","Data":"8256e63406ff9c5a7c526341a649b275e3f5ab402c57f45ac53e47b1d11393f9"} Jan 21 11:23:20 crc kubenswrapper[4881]: I0121 11:23:20.682764 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"cb8d5e00-825f-4df2-9720-3de7be3e0837","Type":"ContainerStarted","Data":"bb359efc78c8172dc142be7dbd66247c577cc9e68e31667efda8eaa45e2b6e87"} Jan 21 11:23:20 crc kubenswrapper[4881]: I0121 11:23:20.684744 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7e3b0813-d7bc-4e2e-aa18-fe1e00c75f52","Type":"ContainerStarted","Data":"5317a19a5c6fd411002c22415e0ba75ced188c533ac4cf93ad9bafb7600cfba0"} Jan 21 11:23:20 crc kubenswrapper[4881]: I0121 11:23:20.686620 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"0f1fb00c-903a-48c9-95e5-8ad34c731f41","Type":"ContainerStarted","Data":"e52d14e5b47ceff7047b0b43cb94e03af0a112544f5fe0cee4d41a4bd236c070"} Jan 21 11:23:20 crc kubenswrapper[4881]: I0121 11:23:20.720330 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=4.720307764 podStartE2EDuration="4.720307764s" podCreationTimestamp="2026-01-21 11:23:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 11:23:20.710528764 +0000 UTC m=+1587.970485233" watchObservedRunningTime="2026-01-21 11:23:20.720307764 +0000 UTC m=+1587.980264233" Jan 21 11:23:21 crc kubenswrapper[4881]: I0121 11:23:21.700750 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"cb8d5e00-825f-4df2-9720-3de7be3e0837","Type":"ContainerStarted","Data":"2dfa759ad5f3629117201697e51e9070f4706b866df3273a3c40b4948e6b8705"} Jan 21 11:23:21 crc kubenswrapper[4881]: I0121 11:23:21.703838 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7e3b0813-d7bc-4e2e-aa18-fe1e00c75f52","Type":"ContainerStarted","Data":"77ab3e90f4bd352be1d58beb21ac3b7c5b6ccdc4776384b4fd7529acffc8aa21"} Jan 21 11:23:21 crc kubenswrapper[4881]: I0121 11:23:21.709193 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"0e33ff3f-b508-4ac4-9a60-6189a65be2a6","Type":"ContainerStarted","Data":"2c2969eba13541bcaf91a75b7beeb9e4ac3bc6b6be20cbcb1615223e9a1d0b46"} Jan 21 11:23:21 crc kubenswrapper[4881]: I0121 11:23:21.709362 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Jan 21 11:23:21 crc kubenswrapper[4881]: I0121 11:23:21.728539 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.72851765 podStartE2EDuration="3.72851765s" podCreationTimestamp="2026-01-21 11:23:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 11:23:21.719423378 +0000 UTC m=+1588.979379847" watchObservedRunningTime="2026-01-21 11:23:21.72851765 +0000 UTC m=+1588.988474119" Jan 21 11:23:21 crc kubenswrapper[4881]: I0121 11:23:21.742224 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=3.150349048 podStartE2EDuration="5.742201115s" podCreationTimestamp="2026-01-21 11:23:16 +0000 UTC" 
firstStartedPulling="2026-01-21 11:23:17.727412403 +0000 UTC m=+1584.987368872" lastFinishedPulling="2026-01-21 11:23:20.31926447 +0000 UTC m=+1587.579220939" observedRunningTime="2026-01-21 11:23:21.738189327 +0000 UTC m=+1588.998145806" watchObservedRunningTime="2026-01-21 11:23:21.742201115 +0000 UTC m=+1589.002157584" Jan 21 11:23:21 crc kubenswrapper[4881]: I0121 11:23:21.743281 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 21 11:23:21 crc kubenswrapper[4881]: I0121 11:23:21.743387 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 21 11:23:21 crc kubenswrapper[4881]: I0121 11:23:21.765599 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 21 11:23:21 crc kubenswrapper[4881]: I0121 11:23:21.775236 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=5.775216374 podStartE2EDuration="5.775216374s" podCreationTimestamp="2026-01-21 11:23:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 11:23:21.761876277 +0000 UTC m=+1589.021832746" watchObservedRunningTime="2026-01-21 11:23:21.775216374 +0000 UTC m=+1589.035172833" Jan 21 11:23:22 crc kubenswrapper[4881]: I0121 11:23:22.618755 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-llk4v"] Jan 21 11:23:22 crc kubenswrapper[4881]: I0121 11:23:22.621397 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-llk4v" Jan 21 11:23:22 crc kubenswrapper[4881]: I0121 11:23:22.691976 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eb575609-e27b-438e-b305-754fed7dbd0c-utilities\") pod \"certified-operators-llk4v\" (UID: \"eb575609-e27b-438e-b305-754fed7dbd0c\") " pod="openshift-marketplace/certified-operators-llk4v" Jan 21 11:23:22 crc kubenswrapper[4881]: I0121 11:23:22.692039 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ctnrj\" (UniqueName: \"kubernetes.io/projected/eb575609-e27b-438e-b305-754fed7dbd0c-kube-api-access-ctnrj\") pod \"certified-operators-llk4v\" (UID: \"eb575609-e27b-438e-b305-754fed7dbd0c\") " pod="openshift-marketplace/certified-operators-llk4v" Jan 21 11:23:22 crc kubenswrapper[4881]: I0121 11:23:22.692181 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eb575609-e27b-438e-b305-754fed7dbd0c-catalog-content\") pod \"certified-operators-llk4v\" (UID: \"eb575609-e27b-438e-b305-754fed7dbd0c\") " pod="openshift-marketplace/certified-operators-llk4v" Jan 21 11:23:23 crc kubenswrapper[4881]: I0121 11:23:23.127256 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eb575609-e27b-438e-b305-754fed7dbd0c-utilities\") pod \"certified-operators-llk4v\" (UID: \"eb575609-e27b-438e-b305-754fed7dbd0c\") " pod="openshift-marketplace/certified-operators-llk4v" Jan 21 11:23:23 crc kubenswrapper[4881]: I0121 11:23:23.127311 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ctnrj\" (UniqueName: 
\"kubernetes.io/projected/eb575609-e27b-438e-b305-754fed7dbd0c-kube-api-access-ctnrj\") pod \"certified-operators-llk4v\" (UID: \"eb575609-e27b-438e-b305-754fed7dbd0c\") " pod="openshift-marketplace/certified-operators-llk4v" Jan 21 11:23:23 crc kubenswrapper[4881]: I0121 11:23:23.127389 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eb575609-e27b-438e-b305-754fed7dbd0c-catalog-content\") pod \"certified-operators-llk4v\" (UID: \"eb575609-e27b-438e-b305-754fed7dbd0c\") " pod="openshift-marketplace/certified-operators-llk4v" Jan 21 11:23:23 crc kubenswrapper[4881]: I0121 11:23:23.133077 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eb575609-e27b-438e-b305-754fed7dbd0c-catalog-content\") pod \"certified-operators-llk4v\" (UID: \"eb575609-e27b-438e-b305-754fed7dbd0c\") " pod="openshift-marketplace/certified-operators-llk4v" Jan 21 11:23:23 crc kubenswrapper[4881]: I0121 11:23:23.135436 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eb575609-e27b-438e-b305-754fed7dbd0c-utilities\") pod \"certified-operators-llk4v\" (UID: \"eb575609-e27b-438e-b305-754fed7dbd0c\") " pod="openshift-marketplace/certified-operators-llk4v" Jan 21 11:23:23 crc kubenswrapper[4881]: I0121 11:23:23.200869 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-llk4v"] Jan 21 11:23:23 crc kubenswrapper[4881]: I0121 11:23:23.240729 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ctnrj\" (UniqueName: \"kubernetes.io/projected/eb575609-e27b-438e-b305-754fed7dbd0c-kube-api-access-ctnrj\") pod \"certified-operators-llk4v\" (UID: \"eb575609-e27b-438e-b305-754fed7dbd0c\") " pod="openshift-marketplace/certified-operators-llk4v" Jan 21 11:23:23 crc kubenswrapper[4881]: I0121 11:23:23.464552 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-llk4v" Jan 21 11:23:23 crc kubenswrapper[4881]: I0121 11:23:23.980011 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-llk4v"] Jan 21 11:23:24 crc kubenswrapper[4881]: I0121 11:23:24.307842 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-llk4v" event={"ID":"eb575609-e27b-438e-b305-754fed7dbd0c","Type":"ContainerStarted","Data":"e500de19668bd863773799072a1748fadbbfeb7a569a7019d89d37c178966126"} Jan 21 11:23:24 crc kubenswrapper[4881]: I0121 11:23:24.354239 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Jan 21 11:23:25 crc kubenswrapper[4881]: I0121 11:23:25.330473 4881 generic.go:334] "Generic (PLEG): container finished" podID="20eeb602-9c98-48ed-a9c9-22121156e8cb" containerID="f833baf807f57255c45be1ba58cccaca032385ccba346e4fc3846694862bc6ee" exitCode=0 Jan 21 11:23:25 crc kubenswrapper[4881]: I0121 11:23:25.330858 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"20eeb602-9c98-48ed-a9c9-22121156e8cb","Type":"ContainerDied","Data":"f833baf807f57255c45be1ba58cccaca032385ccba346e4fc3846694862bc6ee"} Jan 21 11:23:25 crc kubenswrapper[4881]: I0121 11:23:25.332822 4881 generic.go:334] "Generic (PLEG): container finished" podID="eb575609-e27b-438e-b305-754fed7dbd0c" containerID="1728cee101905ae9b1f39e05752401a8a7ecb94af74ddb10abd60ea126aafa34" exitCode=0 Jan 21 11:23:25 crc kubenswrapper[4881]: I0121 11:23:25.332872 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-llk4v" event={"ID":"eb575609-e27b-438e-b305-754fed7dbd0c","Type":"ContainerDied","Data":"1728cee101905ae9b1f39e05752401a8a7ecb94af74ddb10abd60ea126aafa34"} Jan 21 11:23:25 crc kubenswrapper[4881]: I0121 11:23:25.557443 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 21 11:23:25 crc kubenswrapper[4881]: I0121 11:23:25.721874 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20eeb602-9c98-48ed-a9c9-22121156e8cb-combined-ca-bundle\") pod \"20eeb602-9c98-48ed-a9c9-22121156e8cb\" (UID: \"20eeb602-9c98-48ed-a9c9-22121156e8cb\") " Jan 21 11:23:25 crc kubenswrapper[4881]: I0121 11:23:25.721961 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/20eeb602-9c98-48ed-a9c9-22121156e8cb-sg-core-conf-yaml\") pod \"20eeb602-9c98-48ed-a9c9-22121156e8cb\" (UID: \"20eeb602-9c98-48ed-a9c9-22121156e8cb\") " Jan 21 11:23:25 crc kubenswrapper[4881]: I0121 11:23:25.722066 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/20eeb602-9c98-48ed-a9c9-22121156e8cb-scripts\") pod \"20eeb602-9c98-48ed-a9c9-22121156e8cb\" (UID: \"20eeb602-9c98-48ed-a9c9-22121156e8cb\") " Jan 21 11:23:25 crc kubenswrapper[4881]: I0121 11:23:25.722136 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/20eeb602-9c98-48ed-a9c9-22121156e8cb-run-httpd\") pod \"20eeb602-9c98-48ed-a9c9-22121156e8cb\" (UID: \"20eeb602-9c98-48ed-a9c9-22121156e8cb\") " Jan 21 11:23:25 crc kubenswrapper[4881]: I0121 11:23:25.722178 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/20eeb602-9c98-48ed-a9c9-22121156e8cb-log-httpd\") pod \"20eeb602-9c98-48ed-a9c9-22121156e8cb\" (UID: \"20eeb602-9c98-48ed-a9c9-22121156e8cb\") " Jan 21 11:23:25 crc kubenswrapper[4881]: I0121 11:23:25.722199 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20eeb602-9c98-48ed-a9c9-22121156e8cb-config-data\") pod \"20eeb602-9c98-48ed-a9c9-22121156e8cb\" (UID: \"20eeb602-9c98-48ed-a9c9-22121156e8cb\") " Jan 21 11:23:25 crc kubenswrapper[4881]: I0121 11:23:25.722265 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgzxk\" (UniqueName: \"kubernetes.io/projected/20eeb602-9c98-48ed-a9c9-22121156e8cb-kube-api-access-zgzxk\") pod \"20eeb602-9c98-48ed-a9c9-22121156e8cb\" (UID: \"20eeb602-9c98-48ed-a9c9-22121156e8cb\") " Jan 21 11:23:25 crc kubenswrapper[4881]: I0121 11:23:25.723044 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20eeb602-9c98-48ed-a9c9-22121156e8cb-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "20eeb602-9c98-48ed-a9c9-22121156e8cb" (UID: "20eeb602-9c98-48ed-a9c9-22121156e8cb"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 11:23:25 crc kubenswrapper[4881]: I0121 11:23:25.723594 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20eeb602-9c98-48ed-a9c9-22121156e8cb-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "20eeb602-9c98-48ed-a9c9-22121156e8cb" (UID: "20eeb602-9c98-48ed-a9c9-22121156e8cb"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 11:23:25 crc kubenswrapper[4881]: I0121 11:23:25.737826 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20eeb602-9c98-48ed-a9c9-22121156e8cb-kube-api-access-zgzxk" (OuterVolumeSpecName: "kube-api-access-zgzxk") pod "20eeb602-9c98-48ed-a9c9-22121156e8cb" (UID: "20eeb602-9c98-48ed-a9c9-22121156e8cb"). InnerVolumeSpecName "kube-api-access-zgzxk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:23:25 crc kubenswrapper[4881]: I0121 11:23:25.741617 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20eeb602-9c98-48ed-a9c9-22121156e8cb-scripts" (OuterVolumeSpecName: "scripts") pod "20eeb602-9c98-48ed-a9c9-22121156e8cb" (UID: "20eeb602-9c98-48ed-a9c9-22121156e8cb"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:23:25 crc kubenswrapper[4881]: I0121 11:23:25.758124 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20eeb602-9c98-48ed-a9c9-22121156e8cb-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "20eeb602-9c98-48ed-a9c9-22121156e8cb" (UID: "20eeb602-9c98-48ed-a9c9-22121156e8cb"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:23:25 crc kubenswrapper[4881]: I0121 11:23:25.808432 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20eeb602-9c98-48ed-a9c9-22121156e8cb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "20eeb602-9c98-48ed-a9c9-22121156e8cb" (UID: "20eeb602-9c98-48ed-a9c9-22121156e8cb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:23:25 crc kubenswrapper[4881]: I0121 11:23:25.824476 4881 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/20eeb602-9c98-48ed-a9c9-22121156e8cb-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 11:23:25 crc kubenswrapper[4881]: I0121 11:23:25.824513 4881 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/20eeb602-9c98-48ed-a9c9-22121156e8cb-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 21 11:23:25 crc kubenswrapper[4881]: I0121 11:23:25.824522 4881 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/20eeb602-9c98-48ed-a9c9-22121156e8cb-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 21 11:23:25 crc kubenswrapper[4881]: I0121 11:23:25.824531 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgzxk\" (UniqueName: \"kubernetes.io/projected/20eeb602-9c98-48ed-a9c9-22121156e8cb-kube-api-access-zgzxk\") on node \"crc\" DevicePath \"\"" Jan 21 11:23:25 crc kubenswrapper[4881]: I0121 11:23:25.824541 4881 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20eeb602-9c98-48ed-a9c9-22121156e8cb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 11:23:25 crc kubenswrapper[4881]: I0121 11:23:25.824550 4881 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/20eeb602-9c98-48ed-a9c9-22121156e8cb-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 21 11:23:25 crc kubenswrapper[4881]: I0121 11:23:25.858423 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/secret/20eeb602-9c98-48ed-a9c9-22121156e8cb-config-data" (OuterVolumeSpecName: "config-data") pod "20eeb602-9c98-48ed-a9c9-22121156e8cb" (UID: "20eeb602-9c98-48ed-a9c9-22121156e8cb"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:23:25 crc kubenswrapper[4881]: I0121 11:23:25.931209 4881 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20eeb602-9c98-48ed-a9c9-22121156e8cb-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 11:23:26 crc kubenswrapper[4881]: I0121 11:23:26.347264 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"20eeb602-9c98-48ed-a9c9-22121156e8cb","Type":"ContainerDied","Data":"98b63a4387f707fe8989f7007a02efb416a3ce182b681d864a6fffaef05cd43d"} Jan 21 11:23:26 crc kubenswrapper[4881]: I0121 11:23:26.348606 4881 scope.go:117] "RemoveContainer" containerID="ebf63005cec886f7073127e6f8a1b1d91309382b4d83ebbd9aca189eabae9b37" Jan 21 11:23:26 crc kubenswrapper[4881]: I0121 11:23:26.347291 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 21 11:23:26 crc kubenswrapper[4881]: I0121 11:23:26.381979 4881 scope.go:117] "RemoveContainer" containerID="19d2c0708e63a625c9564d43bfbff6b4bf382eb29c4f5fe75600d774080fe1d6" Jan 21 11:23:26 crc kubenswrapper[4881]: I0121 11:23:26.392901 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 21 11:23:26 crc kubenswrapper[4881]: I0121 11:23:26.412368 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 21 11:23:26 crc kubenswrapper[4881]: I0121 11:23:26.426101 4881 scope.go:117] "RemoveContainer" containerID="f833baf807f57255c45be1ba58cccaca032385ccba346e4fc3846694862bc6ee" Jan 21 11:23:26 crc kubenswrapper[4881]: I0121 11:23:26.428477 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 21 11:23:26 crc kubenswrapper[4881]: E0121 11:23:26.429023 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20eeb602-9c98-48ed-a9c9-22121156e8cb" containerName="proxy-httpd" Jan 21 11:23:26 crc kubenswrapper[4881]: I0121 11:23:26.429044 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="20eeb602-9c98-48ed-a9c9-22121156e8cb" containerName="proxy-httpd" Jan 21 11:23:26 crc kubenswrapper[4881]: E0121 11:23:26.429063 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20eeb602-9c98-48ed-a9c9-22121156e8cb" containerName="ceilometer-central-agent" Jan 21 11:23:26 crc kubenswrapper[4881]: I0121 11:23:26.429071 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="20eeb602-9c98-48ed-a9c9-22121156e8cb" containerName="ceilometer-central-agent" Jan 21 11:23:26 crc kubenswrapper[4881]: E0121 11:23:26.429089 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20eeb602-9c98-48ed-a9c9-22121156e8cb" containerName="sg-core" Jan 21 11:23:26 crc kubenswrapper[4881]: I0121 11:23:26.429096 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="20eeb602-9c98-48ed-a9c9-22121156e8cb" containerName="sg-core" Jan 21 11:23:26 crc kubenswrapper[4881]: E0121 11:23:26.429124 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20eeb602-9c98-48ed-a9c9-22121156e8cb" containerName="ceilometer-notification-agent" Jan 21 11:23:26 crc kubenswrapper[4881]: I0121 11:23:26.429131 4881 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="20eeb602-9c98-48ed-a9c9-22121156e8cb" containerName="ceilometer-notification-agent" Jan 21 11:23:26 crc kubenswrapper[4881]: I0121 11:23:26.429355 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="20eeb602-9c98-48ed-a9c9-22121156e8cb" containerName="ceilometer-notification-agent" Jan 21 11:23:26 crc kubenswrapper[4881]: I0121 11:23:26.429380 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="20eeb602-9c98-48ed-a9c9-22121156e8cb" containerName="sg-core" Jan 21 11:23:26 crc kubenswrapper[4881]: I0121 11:23:26.429394 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="20eeb602-9c98-48ed-a9c9-22121156e8cb" containerName="ceilometer-central-agent" Jan 21 11:23:26 crc kubenswrapper[4881]: I0121 11:23:26.429404 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="20eeb602-9c98-48ed-a9c9-22121156e8cb" containerName="proxy-httpd" Jan 21 11:23:26 crc kubenswrapper[4881]: I0121 11:23:26.432016 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 21 11:23:26 crc kubenswrapper[4881]: I0121 11:23:26.435494 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 21 11:23:26 crc kubenswrapper[4881]: I0121 11:23:26.435499 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 21 11:23:26 crc kubenswrapper[4881]: I0121 11:23:26.435868 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 21 11:23:26 crc kubenswrapper[4881]: I0121 11:23:26.459935 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 21 11:23:26 crc kubenswrapper[4881]: I0121 11:23:26.463432 4881 scope.go:117] "RemoveContainer" containerID="8256e63406ff9c5a7c526341a649b275e3f5ab402c57f45ac53e47b1d11393f9" Jan 21 11:23:26 crc kubenswrapper[4881]: I0121 11:23:26.559881 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/201fb26a-87ca-4563-a6ae-1279da9cf1d9-run-httpd\") pod \"ceilometer-0\" (UID: \"201fb26a-87ca-4563-a6ae-1279da9cf1d9\") " pod="openstack/ceilometer-0" Jan 21 11:23:26 crc kubenswrapper[4881]: I0121 11:23:26.559960 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/201fb26a-87ca-4563-a6ae-1279da9cf1d9-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"201fb26a-87ca-4563-a6ae-1279da9cf1d9\") " pod="openstack/ceilometer-0" Jan 21 11:23:26 crc kubenswrapper[4881]: I0121 11:23:26.560007 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/201fb26a-87ca-4563-a6ae-1279da9cf1d9-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"201fb26a-87ca-4563-a6ae-1279da9cf1d9\") " pod="openstack/ceilometer-0" Jan 21 11:23:26 crc kubenswrapper[4881]: I0121 11:23:26.560139 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/201fb26a-87ca-4563-a6ae-1279da9cf1d9-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"201fb26a-87ca-4563-a6ae-1279da9cf1d9\") " pod="openstack/ceilometer-0" Jan 21 11:23:26 crc kubenswrapper[4881]: I0121 11:23:26.560213 4881 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/201fb26a-87ca-4563-a6ae-1279da9cf1d9-log-httpd\") pod \"ceilometer-0\" (UID: \"201fb26a-87ca-4563-a6ae-1279da9cf1d9\") " pod="openstack/ceilometer-0" Jan 21 11:23:26 crc kubenswrapper[4881]: I0121 11:23:26.560252 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/201fb26a-87ca-4563-a6ae-1279da9cf1d9-config-data\") pod \"ceilometer-0\" (UID: \"201fb26a-87ca-4563-a6ae-1279da9cf1d9\") " pod="openstack/ceilometer-0" Jan 21 11:23:26 crc kubenswrapper[4881]: I0121 11:23:26.560322 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/201fb26a-87ca-4563-a6ae-1279da9cf1d9-scripts\") pod \"ceilometer-0\" (UID: \"201fb26a-87ca-4563-a6ae-1279da9cf1d9\") " pod="openstack/ceilometer-0" Jan 21 11:23:26 crc kubenswrapper[4881]: I0121 11:23:26.560407 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bc45w\" (UniqueName: \"kubernetes.io/projected/201fb26a-87ca-4563-a6ae-1279da9cf1d9-kube-api-access-bc45w\") pod \"ceilometer-0\" (UID: \"201fb26a-87ca-4563-a6ae-1279da9cf1d9\") " pod="openstack/ceilometer-0" Jan 21 11:23:26 crc kubenswrapper[4881]: I0121 11:23:26.662216 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/201fb26a-87ca-4563-a6ae-1279da9cf1d9-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"201fb26a-87ca-4563-a6ae-1279da9cf1d9\") " pod="openstack/ceilometer-0" Jan 21 11:23:26 crc kubenswrapper[4881]: I0121 11:23:26.662286 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/201fb26a-87ca-4563-a6ae-1279da9cf1d9-log-httpd\") pod \"ceilometer-0\" (UID: \"201fb26a-87ca-4563-a6ae-1279da9cf1d9\") " pod="openstack/ceilometer-0" Jan 21 11:23:26 crc kubenswrapper[4881]: I0121 11:23:26.662315 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/201fb26a-87ca-4563-a6ae-1279da9cf1d9-config-data\") pod \"ceilometer-0\" (UID: \"201fb26a-87ca-4563-a6ae-1279da9cf1d9\") " pod="openstack/ceilometer-0" Jan 21 11:23:26 crc kubenswrapper[4881]: I0121 11:23:26.662339 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/201fb26a-87ca-4563-a6ae-1279da9cf1d9-scripts\") pod \"ceilometer-0\" (UID: \"201fb26a-87ca-4563-a6ae-1279da9cf1d9\") " pod="openstack/ceilometer-0" Jan 21 11:23:26 crc kubenswrapper[4881]: I0121 11:23:26.662381 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bc45w\" (UniqueName: \"kubernetes.io/projected/201fb26a-87ca-4563-a6ae-1279da9cf1d9-kube-api-access-bc45w\") pod \"ceilometer-0\" (UID: \"201fb26a-87ca-4563-a6ae-1279da9cf1d9\") " pod="openstack/ceilometer-0" Jan 21 11:23:26 crc kubenswrapper[4881]: I0121 11:23:26.662428 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/201fb26a-87ca-4563-a6ae-1279da9cf1d9-run-httpd\") pod \"ceilometer-0\" (UID: \"201fb26a-87ca-4563-a6ae-1279da9cf1d9\") " pod="openstack/ceilometer-0" Jan 21 11:23:26 crc kubenswrapper[4881]: I0121 
Jan 21 11:23:26 crc kubenswrapper[4881]: I0121 11:23:26.662526 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/201fb26a-87ca-4563-a6ae-1279da9cf1d9-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"201fb26a-87ca-4563-a6ae-1279da9cf1d9\") " pod="openstack/ceilometer-0"
Jan 21 11:23:26 crc kubenswrapper[4881]: I0121 11:23:26.663420 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/201fb26a-87ca-4563-a6ae-1279da9cf1d9-run-httpd\") pod \"ceilometer-0\" (UID: \"201fb26a-87ca-4563-a6ae-1279da9cf1d9\") " pod="openstack/ceilometer-0"
Jan 21 11:23:26 crc kubenswrapper[4881]: I0121 11:23:26.664587 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/201fb26a-87ca-4563-a6ae-1279da9cf1d9-log-httpd\") pod \"ceilometer-0\" (UID: \"201fb26a-87ca-4563-a6ae-1279da9cf1d9\") " pod="openstack/ceilometer-0"
Jan 21 11:23:26 crc kubenswrapper[4881]: I0121 11:23:26.668576 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/201fb26a-87ca-4563-a6ae-1279da9cf1d9-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"201fb26a-87ca-4563-a6ae-1279da9cf1d9\") " pod="openstack/ceilometer-0"
Jan 21 11:23:26 crc kubenswrapper[4881]: I0121 11:23:26.669277 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/201fb26a-87ca-4563-a6ae-1279da9cf1d9-config-data\") pod \"ceilometer-0\" (UID: \"201fb26a-87ca-4563-a6ae-1279da9cf1d9\") " pod="openstack/ceilometer-0"
Jan 21 11:23:26 crc kubenswrapper[4881]: I0121 11:23:26.671397 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/201fb26a-87ca-4563-a6ae-1279da9cf1d9-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"201fb26a-87ca-4563-a6ae-1279da9cf1d9\") " pod="openstack/ceilometer-0"
Jan 21 11:23:26 crc kubenswrapper[4881]: I0121 11:23:26.672832 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/201fb26a-87ca-4563-a6ae-1279da9cf1d9-scripts\") pod \"ceilometer-0\" (UID: \"201fb26a-87ca-4563-a6ae-1279da9cf1d9\") " pod="openstack/ceilometer-0"
Jan 21 11:23:26 crc kubenswrapper[4881]: I0121 11:23:26.683349 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/201fb26a-87ca-4563-a6ae-1279da9cf1d9-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"201fb26a-87ca-4563-a6ae-1279da9cf1d9\") " pod="openstack/ceilometer-0"
Jan 21 11:23:26 crc kubenswrapper[4881]: I0121 11:23:26.687121 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bc45w\" (UniqueName: \"kubernetes.io/projected/201fb26a-87ca-4563-a6ae-1279da9cf1d9-kube-api-access-bc45w\") pod \"ceilometer-0\" (UID: \"201fb26a-87ca-4563-a6ae-1279da9cf1d9\") " pod="openstack/ceilometer-0"
Jan 21 11:23:26 crc kubenswrapper[4881]: I0121 11:23:26.704801 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0"
(probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Jan 21 11:23:26 crc kubenswrapper[4881]: I0121 11:23:26.743810 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 21 11:23:26 crc kubenswrapper[4881]: I0121 11:23:26.743905 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 21 11:23:26 crc kubenswrapper[4881]: I0121 11:23:26.760492 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 21 11:23:26 crc kubenswrapper[4881]: I0121 11:23:26.765386 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 21 11:23:26 crc kubenswrapper[4881]: I0121 11:23:26.812625 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 21 11:23:27 crc kubenswrapper[4881]: I0121 11:23:27.325699 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20eeb602-9c98-48ed-a9c9-22121156e8cb" path="/var/lib/kubelet/pods/20eeb602-9c98-48ed-a9c9-22121156e8cb/volumes" Jan 21 11:23:27 crc kubenswrapper[4881]: I0121 11:23:27.431144 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 21 11:23:27 crc kubenswrapper[4881]: I0121 11:23:27.762072 4881 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="7e3b0813-d7bc-4e2e-aa18-fe1e00c75f52" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.216:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 21 11:23:27 crc kubenswrapper[4881]: I0121 11:23:27.762712 4881 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="7e3b0813-d7bc-4e2e-aa18-fe1e00c75f52" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.216:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 21 11:23:27 crc kubenswrapper[4881]: I0121 11:23:27.921478 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 21 11:23:28 crc kubenswrapper[4881]: I0121 11:23:28.407095 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"201fb26a-87ca-4563-a6ae-1279da9cf1d9","Type":"ContainerStarted","Data":"66e45f9085cd7aa6bc51a5b18dd439286f856ddcee2ed6d0f6e2f8de173537a4"} Jan 21 11:23:28 crc kubenswrapper[4881]: I0121 11:23:28.409510 4881 generic.go:334] "Generic (PLEG): container finished" podID="eb575609-e27b-438e-b305-754fed7dbd0c" containerID="e929562399ff233dd1a78f425dfd303c1e447dae54c360f17a5f7618c63f02f3" exitCode=0 Jan 21 11:23:28 crc kubenswrapper[4881]: I0121 11:23:28.409846 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-llk4v" event={"ID":"eb575609-e27b-438e-b305-754fed7dbd0c","Type":"ContainerDied","Data":"e929562399ff233dd1a78f425dfd303c1e447dae54c360f17a5f7618c63f02f3"} Jan 21 11:23:28 crc kubenswrapper[4881]: I0121 11:23:28.637874 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 21 11:23:28 crc kubenswrapper[4881]: I0121 11:23:28.637930 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 21 11:23:29 crc kubenswrapper[4881]: I0121 11:23:29.423111 4881 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"201fb26a-87ca-4563-a6ae-1279da9cf1d9","Type":"ContainerStarted","Data":"21e7befe3db09a0933a930666700555026336530bd06628c4d04638027f5dd37"} Jan 21 11:23:29 crc kubenswrapper[4881]: I0121 11:23:29.721009 4881 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="cb8d5e00-825f-4df2-9720-3de7be3e0837" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.218:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 21 11:23:29 crc kubenswrapper[4881]: I0121 11:23:29.721003 4881 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="cb8d5e00-825f-4df2-9720-3de7be3e0837" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.218:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 21 11:23:29 crc kubenswrapper[4881]: I0121 11:23:29.851021 4881 patch_prober.go:28] interesting pod/machine-config-daemon-fb4fr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 11:23:29 crc kubenswrapper[4881]: I0121 11:23:29.851109 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 11:23:32 crc kubenswrapper[4881]: I0121 11:23:32.248618 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-vpxn7"] Jan 21 11:23:32 crc kubenswrapper[4881]: I0121 11:23:32.253399 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-vpxn7" Jan 21 11:23:32 crc kubenswrapper[4881]: I0121 11:23:32.830676 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/52706c95-5c29-44cb-bc9d-2873d3a4d437-utilities\") pod \"community-operators-vpxn7\" (UID: \"52706c95-5c29-44cb-bc9d-2873d3a4d437\") " pod="openshift-marketplace/community-operators-vpxn7" Jan 21 11:23:32 crc kubenswrapper[4881]: I0121 11:23:32.830953 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/52706c95-5c29-44cb-bc9d-2873d3a4d437-catalog-content\") pod \"community-operators-vpxn7\" (UID: \"52706c95-5c29-44cb-bc9d-2873d3a4d437\") " pod="openshift-marketplace/community-operators-vpxn7" Jan 21 11:23:32 crc kubenswrapper[4881]: I0121 11:23:32.830988 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gkwt4\" (UniqueName: \"kubernetes.io/projected/52706c95-5c29-44cb-bc9d-2873d3a4d437-kube-api-access-gkwt4\") pod \"community-operators-vpxn7\" (UID: \"52706c95-5c29-44cb-bc9d-2873d3a4d437\") " pod="openshift-marketplace/community-operators-vpxn7" Jan 21 11:23:32 crc kubenswrapper[4881]: I0121 11:23:32.905666 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vpxn7"] Jan 21 11:23:32 crc kubenswrapper[4881]: I0121 11:23:32.932377 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/52706c95-5c29-44cb-bc9d-2873d3a4d437-utilities\") pod \"community-operators-vpxn7\" (UID: \"52706c95-5c29-44cb-bc9d-2873d3a4d437\") " pod="openshift-marketplace/community-operators-vpxn7" Jan 21 11:23:32 crc kubenswrapper[4881]: I0121 11:23:32.932808 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/52706c95-5c29-44cb-bc9d-2873d3a4d437-catalog-content\") pod \"community-operators-vpxn7\" (UID: \"52706c95-5c29-44cb-bc9d-2873d3a4d437\") " pod="openshift-marketplace/community-operators-vpxn7" Jan 21 11:23:32 crc kubenswrapper[4881]: I0121 11:23:32.932902 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gkwt4\" (UniqueName: \"kubernetes.io/projected/52706c95-5c29-44cb-bc9d-2873d3a4d437-kube-api-access-gkwt4\") pod \"community-operators-vpxn7\" (UID: \"52706c95-5c29-44cb-bc9d-2873d3a4d437\") " pod="openshift-marketplace/community-operators-vpxn7" Jan 21 11:23:32 crc kubenswrapper[4881]: I0121 11:23:32.934019 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/52706c95-5c29-44cb-bc9d-2873d3a4d437-utilities\") pod \"community-operators-vpxn7\" (UID: \"52706c95-5c29-44cb-bc9d-2873d3a4d437\") " pod="openshift-marketplace/community-operators-vpxn7" Jan 21 11:23:32 crc kubenswrapper[4881]: I0121 11:23:32.934636 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/52706c95-5c29-44cb-bc9d-2873d3a4d437-catalog-content\") pod \"community-operators-vpxn7\" (UID: \"52706c95-5c29-44cb-bc9d-2873d3a4d437\") " pod="openshift-marketplace/community-operators-vpxn7" Jan 21 11:23:32 crc kubenswrapper[4881]: I0121 11:23:32.968133 4881 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-gkwt4\" (UniqueName: \"kubernetes.io/projected/52706c95-5c29-44cb-bc9d-2873d3a4d437-kube-api-access-gkwt4\") pod \"community-operators-vpxn7\" (UID: \"52706c95-5c29-44cb-bc9d-2873d3a4d437\") " pod="openshift-marketplace/community-operators-vpxn7" Jan 21 11:23:33 crc kubenswrapper[4881]: I0121 11:23:33.077626 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vpxn7" Jan 21 11:23:33 crc kubenswrapper[4881]: I0121 11:23:33.222007 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-llk4v" event={"ID":"eb575609-e27b-438e-b305-754fed7dbd0c","Type":"ContainerStarted","Data":"2219e8b5d6f5a7a40bd416bfddf08247dd9bb87c1adf182b223943c7ce68d925"} Jan 21 11:23:33 crc kubenswrapper[4881]: I0121 11:23:33.230251 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"201fb26a-87ca-4563-a6ae-1279da9cf1d9","Type":"ContainerStarted","Data":"a06398efdd27167761cc6251bd8384a3c3e25770859f0b77e181cd4905e9a62e"} Jan 21 11:23:33 crc kubenswrapper[4881]: I0121 11:23:33.232010 4881 generic.go:334] "Generic (PLEG): container finished" podID="50ff1a29-d6ee-4911-bb22-165aca6d8605" containerID="9d3665845c2c2c09903d0aa16a7538de5b4dcf05cef7d82865d9c9d446cdaf41" exitCode=137 Jan 21 11:23:33 crc kubenswrapper[4881]: I0121 11:23:33.232055 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"50ff1a29-d6ee-4911-bb22-165aca6d8605","Type":"ContainerDied","Data":"9d3665845c2c2c09903d0aa16a7538de5b4dcf05cef7d82865d9c9d446cdaf41"} Jan 21 11:23:33 crc kubenswrapper[4881]: I0121 11:23:33.232083 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"50ff1a29-d6ee-4911-bb22-165aca6d8605","Type":"ContainerDied","Data":"6aaf4e142828aa790e377df87440347084937144bb74fce4d8edde8de8915f28"} Jan 21 11:23:33 crc kubenswrapper[4881]: I0121 11:23:33.232106 4881 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6aaf4e142828aa790e377df87440347084937144bb74fce4d8edde8de8915f28" Jan 21 11:23:33 crc kubenswrapper[4881]: I0121 11:23:33.281218 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-llk4v" podStartSLOduration=5.528926033 podStartE2EDuration="11.281194276s" podCreationTimestamp="2026-01-21 11:23:22 +0000 UTC" firstStartedPulling="2026-01-21 11:23:25.336073278 +0000 UTC m=+1592.596029747" lastFinishedPulling="2026-01-21 11:23:31.088341521 +0000 UTC m=+1598.348297990" observedRunningTime="2026-01-21 11:23:33.272549324 +0000 UTC m=+1600.532505793" watchObservedRunningTime="2026-01-21 11:23:33.281194276 +0000 UTC m=+1600.541150765" Jan 21 11:23:33 crc kubenswrapper[4881]: I0121 11:23:33.302710 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 21 11:23:33 crc kubenswrapper[4881]: I0121 11:23:33.359381 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/50ff1a29-d6ee-4911-bb22-165aca6d8605-config-data\") pod \"50ff1a29-d6ee-4911-bb22-165aca6d8605\" (UID: \"50ff1a29-d6ee-4911-bb22-165aca6d8605\") " Jan 21 11:23:33 crc kubenswrapper[4881]: I0121 11:23:33.359628 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xz64s\" (UniqueName: \"kubernetes.io/projected/50ff1a29-d6ee-4911-bb22-165aca6d8605-kube-api-access-xz64s\") pod \"50ff1a29-d6ee-4911-bb22-165aca6d8605\" (UID: \"50ff1a29-d6ee-4911-bb22-165aca6d8605\") " Jan 21 11:23:33 crc kubenswrapper[4881]: I0121 11:23:33.359672 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50ff1a29-d6ee-4911-bb22-165aca6d8605-combined-ca-bundle\") pod \"50ff1a29-d6ee-4911-bb22-165aca6d8605\" (UID: \"50ff1a29-d6ee-4911-bb22-165aca6d8605\") " Jan 21 11:23:34 crc kubenswrapper[4881]: I0121 11:23:33.381502 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/50ff1a29-d6ee-4911-bb22-165aca6d8605-kube-api-access-xz64s" (OuterVolumeSpecName: "kube-api-access-xz64s") pod "50ff1a29-d6ee-4911-bb22-165aca6d8605" (UID: "50ff1a29-d6ee-4911-bb22-165aca6d8605"). InnerVolumeSpecName "kube-api-access-xz64s". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:23:34 crc kubenswrapper[4881]: I0121 11:23:33.448694 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/50ff1a29-d6ee-4911-bb22-165aca6d8605-config-data" (OuterVolumeSpecName: "config-data") pod "50ff1a29-d6ee-4911-bb22-165aca6d8605" (UID: "50ff1a29-d6ee-4911-bb22-165aca6d8605"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:23:34 crc kubenswrapper[4881]: I0121 11:23:33.454115 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/50ff1a29-d6ee-4911-bb22-165aca6d8605-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "50ff1a29-d6ee-4911-bb22-165aca6d8605" (UID: "50ff1a29-d6ee-4911-bb22-165aca6d8605"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:23:34 crc kubenswrapper[4881]: I0121 11:23:33.463096 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xz64s\" (UniqueName: \"kubernetes.io/projected/50ff1a29-d6ee-4911-bb22-165aca6d8605-kube-api-access-xz64s\") on node \"crc\" DevicePath \"\"" Jan 21 11:23:34 crc kubenswrapper[4881]: I0121 11:23:33.463126 4881 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50ff1a29-d6ee-4911-bb22-165aca6d8605-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 11:23:34 crc kubenswrapper[4881]: I0121 11:23:33.463136 4881 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/50ff1a29-d6ee-4911-bb22-165aca6d8605-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 11:23:34 crc kubenswrapper[4881]: I0121 11:23:33.552336 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-llk4v" Jan 21 11:23:34 crc kubenswrapper[4881]: I0121 11:23:33.552373 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-llk4v" Jan 21 11:23:34 crc kubenswrapper[4881]: I0121 11:23:33.757237 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vpxn7"] Jan 21 11:23:34 crc kubenswrapper[4881]: I0121 11:23:34.250838 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"201fb26a-87ca-4563-a6ae-1279da9cf1d9","Type":"ContainerStarted","Data":"8c2d9a4ccbe836f11691d18a98cd55c1064fb634fa10ae39a24965732048adf6"} Jan 21 11:23:34 crc kubenswrapper[4881]: I0121 11:23:34.255382 4881 generic.go:334] "Generic (PLEG): container finished" podID="52706c95-5c29-44cb-bc9d-2873d3a4d437" containerID="839be54c5d528613e443040a57965cbb40c5fa31def7b53542cfe13d609474b7" exitCode=0 Jan 21 11:23:34 crc kubenswrapper[4881]: I0121 11:23:34.255542 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 21 11:23:34 crc kubenswrapper[4881]: I0121 11:23:34.255519 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vpxn7" event={"ID":"52706c95-5c29-44cb-bc9d-2873d3a4d437","Type":"ContainerDied","Data":"839be54c5d528613e443040a57965cbb40c5fa31def7b53542cfe13d609474b7"} Jan 21 11:23:34 crc kubenswrapper[4881]: I0121 11:23:34.255613 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vpxn7" event={"ID":"52706c95-5c29-44cb-bc9d-2873d3a4d437","Type":"ContainerStarted","Data":"1de739443c6dfd6b37749b58394c2360dea5377c680c0b8dae6cbb306ba43ef6"} Jan 21 11:23:34 crc kubenswrapper[4881]: I0121 11:23:34.327659 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 21 11:23:34 crc kubenswrapper[4881]: I0121 11:23:34.341660 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 21 11:23:34 crc kubenswrapper[4881]: I0121 11:23:34.362793 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 21 11:23:34 crc kubenswrapper[4881]: E0121 11:23:34.363425 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50ff1a29-d6ee-4911-bb22-165aca6d8605" containerName="nova-cell1-novncproxy-novncproxy" Jan 21 11:23:34 crc kubenswrapper[4881]: I0121 11:23:34.363450 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="50ff1a29-d6ee-4911-bb22-165aca6d8605" containerName="nova-cell1-novncproxy-novncproxy" Jan 21 11:23:34 crc kubenswrapper[4881]: I0121 11:23:34.363732 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="50ff1a29-d6ee-4911-bb22-165aca6d8605" containerName="nova-cell1-novncproxy-novncproxy" Jan 21 11:23:34 crc kubenswrapper[4881]: I0121 11:23:34.364600 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 21 11:23:34 crc kubenswrapper[4881]: I0121 11:23:34.368301 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Jan 21 11:23:34 crc kubenswrapper[4881]: I0121 11:23:34.368850 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Jan 21 11:23:34 crc kubenswrapper[4881]: I0121 11:23:34.368886 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Jan 21 11:23:34 crc kubenswrapper[4881]: I0121 11:23:34.380081 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 21 11:23:34 crc kubenswrapper[4881]: I0121 11:23:34.431351 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bt2fc\" (UniqueName: \"kubernetes.io/projected/b9ce9000-94ef-4f6e-8bc7-97feca616b9e-kube-api-access-bt2fc\") pod \"nova-cell1-novncproxy-0\" (UID: \"b9ce9000-94ef-4f6e-8bc7-97feca616b9e\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 11:23:34 crc kubenswrapper[4881]: I0121 11:23:34.431718 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9ce9000-94ef-4f6e-8bc7-97feca616b9e-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"b9ce9000-94ef-4f6e-8bc7-97feca616b9e\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 11:23:34 crc kubenswrapper[4881]: I0121 11:23:34.431941 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/b9ce9000-94ef-4f6e-8bc7-97feca616b9e-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"b9ce9000-94ef-4f6e-8bc7-97feca616b9e\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 11:23:34 crc kubenswrapper[4881]: I0121 11:23:34.431960 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b9ce9000-94ef-4f6e-8bc7-97feca616b9e-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"b9ce9000-94ef-4f6e-8bc7-97feca616b9e\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 11:23:34 crc kubenswrapper[4881]: I0121 11:23:34.431979 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/b9ce9000-94ef-4f6e-8bc7-97feca616b9e-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"b9ce9000-94ef-4f6e-8bc7-97feca616b9e\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 11:23:34 crc kubenswrapper[4881]: I0121 11:23:34.534676 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/b9ce9000-94ef-4f6e-8bc7-97feca616b9e-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"b9ce9000-94ef-4f6e-8bc7-97feca616b9e\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 11:23:34 crc kubenswrapper[4881]: I0121 11:23:34.535053 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b9ce9000-94ef-4f6e-8bc7-97feca616b9e-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"b9ce9000-94ef-4f6e-8bc7-97feca616b9e\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 
Jan 21 11:23:34 crc kubenswrapper[4881]: I0121 11:23:34.535332 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bt2fc\" (UniqueName: \"kubernetes.io/projected/b9ce9000-94ef-4f6e-8bc7-97feca616b9e-kube-api-access-bt2fc\") pod \"nova-cell1-novncproxy-0\" (UID: \"b9ce9000-94ef-4f6e-8bc7-97feca616b9e\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 21 11:23:34 crc kubenswrapper[4881]: I0121 11:23:34.535539 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9ce9000-94ef-4f6e-8bc7-97feca616b9e-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"b9ce9000-94ef-4f6e-8bc7-97feca616b9e\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 21 11:23:34 crc kubenswrapper[4881]: I0121 11:23:34.544064 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9ce9000-94ef-4f6e-8bc7-97feca616b9e-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"b9ce9000-94ef-4f6e-8bc7-97feca616b9e\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 21 11:23:34 crc kubenswrapper[4881]: I0121 11:23:34.544467 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b9ce9000-94ef-4f6e-8bc7-97feca616b9e-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"b9ce9000-94ef-4f6e-8bc7-97feca616b9e\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 21 11:23:34 crc kubenswrapper[4881]: I0121 11:23:34.546561 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/b9ce9000-94ef-4f6e-8bc7-97feca616b9e-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"b9ce9000-94ef-4f6e-8bc7-97feca616b9e\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 21 11:23:34 crc kubenswrapper[4881]: I0121 11:23:34.549420 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/b9ce9000-94ef-4f6e-8bc7-97feca616b9e-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"b9ce9000-94ef-4f6e-8bc7-97feca616b9e\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 21 11:23:34 crc kubenswrapper[4881]: I0121 11:23:34.555628 4881 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-llk4v" podUID="eb575609-e27b-438e-b305-754fed7dbd0c" containerName="registry-server" probeResult="failure" output=<
Jan 21 11:23:34 crc kubenswrapper[4881]: 	timeout: failed to connect service ":50051" within 1s
Jan 21 11:23:34 crc kubenswrapper[4881]: >
Jan 21 11:23:34 crc kubenswrapper[4881]: I0121 11:23:34.556567 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bt2fc\" (UniqueName: \"kubernetes.io/projected/b9ce9000-94ef-4f6e-8bc7-97feca616b9e-kube-api-access-bt2fc\") pod \"nova-cell1-novncproxy-0\" (UID: \"b9ce9000-94ef-4f6e-8bc7-97feca616b9e\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 21 11:23:34 crc kubenswrapper[4881]: I0121 11:23:34.734674 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
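The registry-server startup-probe failure above uses klog's block syntax for multi-line values (output=< ... >), and the message itself — timeout: failed to connect service ":50051" within 1s — looks like the output of a gRPC health-check probe against the catalog server's port; the pod spec is not in this log, so that reading is an inference. A rough client-side equivalent, assuming google.golang.org/grpc and its standard health protocol:

```go
package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	healthpb "google.golang.org/grpc/health/grpc_health_v1"
)

func main() {
	// "within 1s" matches the probe's apparent connect deadline.
	ctx, cancel := context.WithTimeout(context.Background(), time.Second)
	defer cancel()

	conn, err := grpc.DialContext(ctx, "localhost:50051",
		grpc.WithTransportCredentials(insecure.NewCredentials()), grpc.WithBlock())
	if err != nil {
		fmt.Printf("timeout: failed to connect service %q within 1s\n", ":50051")
		return
	}
	defer conn.Close()

	resp, err := healthpb.NewHealthClient(conn).Check(ctx, &healthpb.HealthCheckRequest{})
	if err != nil {
		fmt.Println("health check failed:", err)
		return
	}
	fmt.Println("status:", resp.GetStatus())
}
```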
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 21 11:23:35 crc kubenswrapper[4881]: I0121 11:23:35.326762 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="50ff1a29-d6ee-4911-bb22-165aca6d8605" path="/var/lib/kubelet/pods/50ff1a29-d6ee-4911-bb22-165aca6d8605/volumes" Jan 21 11:23:35 crc kubenswrapper[4881]: I0121 11:23:35.545797 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 21 11:23:36 crc kubenswrapper[4881]: I0121 11:23:36.289576 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"201fb26a-87ca-4563-a6ae-1279da9cf1d9","Type":"ContainerStarted","Data":"23fdf8bf079c92f8fffad95a39aeec48a0ce6ca5c3d367fd5c481ae6d0630f69"} Jan 21 11:23:36 crc kubenswrapper[4881]: I0121 11:23:36.290596 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 21 11:23:36 crc kubenswrapper[4881]: I0121 11:23:36.291917 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"b9ce9000-94ef-4f6e-8bc7-97feca616b9e","Type":"ContainerStarted","Data":"39179a3f03cf7c0e700dc4ab827a9768bb1a1685b7d25388ec54358da8590f28"} Jan 21 11:23:36 crc kubenswrapper[4881]: I0121 11:23:36.292515 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"b9ce9000-94ef-4f6e-8bc7-97feca616b9e","Type":"ContainerStarted","Data":"2a8a88246eed90b5f605d9f43551dceedbd8321c987cdcb16739add4b22765d2"} Jan 21 11:23:36 crc kubenswrapper[4881]: I0121 11:23:36.294698 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vpxn7" event={"ID":"52706c95-5c29-44cb-bc9d-2873d3a4d437","Type":"ContainerStarted","Data":"fb028c7404b9ff86895c5bd0739f99516ab80f521a872c7d6e2892460b2e7b12"} Jan 21 11:23:36 crc kubenswrapper[4881]: I0121 11:23:36.324982 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.895448176 podStartE2EDuration="10.324958664s" podCreationTimestamp="2026-01-21 11:23:26 +0000 UTC" firstStartedPulling="2026-01-21 11:23:27.916357873 +0000 UTC m=+1595.176314332" lastFinishedPulling="2026-01-21 11:23:35.345868351 +0000 UTC m=+1602.605824820" observedRunningTime="2026-01-21 11:23:36.313738189 +0000 UTC m=+1603.573694658" watchObservedRunningTime="2026-01-21 11:23:36.324958664 +0000 UTC m=+1603.584915133" Jan 21 11:23:36 crc kubenswrapper[4881]: I0121 11:23:36.387623 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.387583278 podStartE2EDuration="2.387583278s" podCreationTimestamp="2026-01-21 11:23:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 11:23:36.356467386 +0000 UTC m=+1603.616423855" watchObservedRunningTime="2026-01-21 11:23:36.387583278 +0000 UTC m=+1603.647539747" Jan 21 11:23:36 crc kubenswrapper[4881]: I0121 11:23:36.751835 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 21 11:23:36 crc kubenswrapper[4881]: I0121 11:23:36.753838 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 21 11:23:36 crc kubenswrapper[4881]: I0121 11:23:36.770420 4881 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 21 11:23:37 crc kubenswrapper[4881]: I0121 11:23:37.308714 4881 generic.go:334] "Generic (PLEG): container finished" podID="52706c95-5c29-44cb-bc9d-2873d3a4d437" containerID="fb028c7404b9ff86895c5bd0739f99516ab80f521a872c7d6e2892460b2e7b12" exitCode=0 Jan 21 11:23:37 crc kubenswrapper[4881]: I0121 11:23:37.308772 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vpxn7" event={"ID":"52706c95-5c29-44cb-bc9d-2873d3a4d437","Type":"ContainerDied","Data":"fb028c7404b9ff86895c5bd0739f99516ab80f521a872c7d6e2892460b2e7b12"} Jan 21 11:23:37 crc kubenswrapper[4881]: I0121 11:23:37.327564 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 21 11:23:38 crc kubenswrapper[4881]: I0121 11:23:38.657304 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 21 11:23:38 crc kubenswrapper[4881]: I0121 11:23:38.659024 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 21 11:23:38 crc kubenswrapper[4881]: I0121 11:23:38.704246 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 21 11:23:38 crc kubenswrapper[4881]: I0121 11:23:38.726553 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 21 11:23:39 crc kubenswrapper[4881]: I0121 11:23:39.351406 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vpxn7" event={"ID":"52706c95-5c29-44cb-bc9d-2873d3a4d437","Type":"ContainerStarted","Data":"c7183c2e116e85a5f629f6e5e3ffe4538c40c34d6b8cd108a955a5b4b864a2c6"} Jan 21 11:23:39 crc kubenswrapper[4881]: I0121 11:23:39.352730 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 21 11:23:39 crc kubenswrapper[4881]: I0121 11:23:39.366869 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 21 11:23:39 crc kubenswrapper[4881]: I0121 11:23:39.393590 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-vpxn7" podStartSLOduration=3.904658118 podStartE2EDuration="7.39356446s" podCreationTimestamp="2026-01-21 11:23:32 +0000 UTC" firstStartedPulling="2026-01-21 11:23:34.257819939 +0000 UTC m=+1601.517776398" lastFinishedPulling="2026-01-21 11:23:37.746726271 +0000 UTC m=+1605.006682740" observedRunningTime="2026-01-21 11:23:39.371327516 +0000 UTC m=+1606.631284015" watchObservedRunningTime="2026-01-21 11:23:39.39356446 +0000 UTC m=+1606.653520939" Jan 21 11:23:39 crc kubenswrapper[4881]: I0121 11:23:39.631672 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6d4b6b54d9-5jzpq"] Jan 21 11:23:39 crc kubenswrapper[4881]: I0121 11:23:39.633602 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6d4b6b54d9-5jzpq" Jan 21 11:23:39 crc kubenswrapper[4881]: I0121 11:23:39.667355 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6d4b6b54d9-5jzpq"] Jan 21 11:23:39 crc kubenswrapper[4881]: I0121 11:23:39.688380 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/81dbec06-59d7-4c42-a558-910811fb3811-dns-swift-storage-0\") pod \"dnsmasq-dns-6d4b6b54d9-5jzpq\" (UID: \"81dbec06-59d7-4c42-a558-910811fb3811\") " pod="openstack/dnsmasq-dns-6d4b6b54d9-5jzpq" Jan 21 11:23:39 crc kubenswrapper[4881]: I0121 11:23:39.688477 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/81dbec06-59d7-4c42-a558-910811fb3811-dns-svc\") pod \"dnsmasq-dns-6d4b6b54d9-5jzpq\" (UID: \"81dbec06-59d7-4c42-a558-910811fb3811\") " pod="openstack/dnsmasq-dns-6d4b6b54d9-5jzpq" Jan 21 11:23:39 crc kubenswrapper[4881]: I0121 11:23:39.688520 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/81dbec06-59d7-4c42-a558-910811fb3811-ovsdbserver-sb\") pod \"dnsmasq-dns-6d4b6b54d9-5jzpq\" (UID: \"81dbec06-59d7-4c42-a558-910811fb3811\") " pod="openstack/dnsmasq-dns-6d4b6b54d9-5jzpq" Jan 21 11:23:39 crc kubenswrapper[4881]: I0121 11:23:39.688546 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lwg4c\" (UniqueName: \"kubernetes.io/projected/81dbec06-59d7-4c42-a558-910811fb3811-kube-api-access-lwg4c\") pod \"dnsmasq-dns-6d4b6b54d9-5jzpq\" (UID: \"81dbec06-59d7-4c42-a558-910811fb3811\") " pod="openstack/dnsmasq-dns-6d4b6b54d9-5jzpq" Jan 21 11:23:39 crc kubenswrapper[4881]: I0121 11:23:39.688570 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/81dbec06-59d7-4c42-a558-910811fb3811-ovsdbserver-nb\") pod \"dnsmasq-dns-6d4b6b54d9-5jzpq\" (UID: \"81dbec06-59d7-4c42-a558-910811fb3811\") " pod="openstack/dnsmasq-dns-6d4b6b54d9-5jzpq" Jan 21 11:23:39 crc kubenswrapper[4881]: I0121 11:23:39.688599 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/81dbec06-59d7-4c42-a558-910811fb3811-config\") pod \"dnsmasq-dns-6d4b6b54d9-5jzpq\" (UID: \"81dbec06-59d7-4c42-a558-910811fb3811\") " pod="openstack/dnsmasq-dns-6d4b6b54d9-5jzpq" Jan 21 11:23:39 crc kubenswrapper[4881]: I0121 11:23:39.735635 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Jan 21 11:23:39 crc kubenswrapper[4881]: I0121 11:23:39.790939 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/81dbec06-59d7-4c42-a558-910811fb3811-dns-swift-storage-0\") pod \"dnsmasq-dns-6d4b6b54d9-5jzpq\" (UID: \"81dbec06-59d7-4c42-a558-910811fb3811\") " pod="openstack/dnsmasq-dns-6d4b6b54d9-5jzpq" Jan 21 11:23:39 crc kubenswrapper[4881]: I0121 11:23:39.791097 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/81dbec06-59d7-4c42-a558-910811fb3811-dns-svc\") pod \"dnsmasq-dns-6d4b6b54d9-5jzpq\" (UID: 
\"81dbec06-59d7-4c42-a558-910811fb3811\") " pod="openstack/dnsmasq-dns-6d4b6b54d9-5jzpq" Jan 21 11:23:39 crc kubenswrapper[4881]: I0121 11:23:39.791165 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/81dbec06-59d7-4c42-a558-910811fb3811-ovsdbserver-sb\") pod \"dnsmasq-dns-6d4b6b54d9-5jzpq\" (UID: \"81dbec06-59d7-4c42-a558-910811fb3811\") " pod="openstack/dnsmasq-dns-6d4b6b54d9-5jzpq" Jan 21 11:23:39 crc kubenswrapper[4881]: I0121 11:23:39.791183 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lwg4c\" (UniqueName: \"kubernetes.io/projected/81dbec06-59d7-4c42-a558-910811fb3811-kube-api-access-lwg4c\") pod \"dnsmasq-dns-6d4b6b54d9-5jzpq\" (UID: \"81dbec06-59d7-4c42-a558-910811fb3811\") " pod="openstack/dnsmasq-dns-6d4b6b54d9-5jzpq" Jan 21 11:23:39 crc kubenswrapper[4881]: I0121 11:23:39.791212 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/81dbec06-59d7-4c42-a558-910811fb3811-ovsdbserver-nb\") pod \"dnsmasq-dns-6d4b6b54d9-5jzpq\" (UID: \"81dbec06-59d7-4c42-a558-910811fb3811\") " pod="openstack/dnsmasq-dns-6d4b6b54d9-5jzpq" Jan 21 11:23:39 crc kubenswrapper[4881]: I0121 11:23:39.791244 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/81dbec06-59d7-4c42-a558-910811fb3811-config\") pod \"dnsmasq-dns-6d4b6b54d9-5jzpq\" (UID: \"81dbec06-59d7-4c42-a558-910811fb3811\") " pod="openstack/dnsmasq-dns-6d4b6b54d9-5jzpq" Jan 21 11:23:39 crc kubenswrapper[4881]: I0121 11:23:39.792043 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/81dbec06-59d7-4c42-a558-910811fb3811-dns-swift-storage-0\") pod \"dnsmasq-dns-6d4b6b54d9-5jzpq\" (UID: \"81dbec06-59d7-4c42-a558-910811fb3811\") " pod="openstack/dnsmasq-dns-6d4b6b54d9-5jzpq" Jan 21 11:23:39 crc kubenswrapper[4881]: I0121 11:23:39.792175 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/81dbec06-59d7-4c42-a558-910811fb3811-dns-svc\") pod \"dnsmasq-dns-6d4b6b54d9-5jzpq\" (UID: \"81dbec06-59d7-4c42-a558-910811fb3811\") " pod="openstack/dnsmasq-dns-6d4b6b54d9-5jzpq" Jan 21 11:23:39 crc kubenswrapper[4881]: I0121 11:23:39.792192 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/81dbec06-59d7-4c42-a558-910811fb3811-ovsdbserver-nb\") pod \"dnsmasq-dns-6d4b6b54d9-5jzpq\" (UID: \"81dbec06-59d7-4c42-a558-910811fb3811\") " pod="openstack/dnsmasq-dns-6d4b6b54d9-5jzpq" Jan 21 11:23:39 crc kubenswrapper[4881]: I0121 11:23:39.792547 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/81dbec06-59d7-4c42-a558-910811fb3811-ovsdbserver-sb\") pod \"dnsmasq-dns-6d4b6b54d9-5jzpq\" (UID: \"81dbec06-59d7-4c42-a558-910811fb3811\") " pod="openstack/dnsmasq-dns-6d4b6b54d9-5jzpq" Jan 21 11:23:39 crc kubenswrapper[4881]: I0121 11:23:39.792574 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/81dbec06-59d7-4c42-a558-910811fb3811-config\") pod \"dnsmasq-dns-6d4b6b54d9-5jzpq\" (UID: \"81dbec06-59d7-4c42-a558-910811fb3811\") " pod="openstack/dnsmasq-dns-6d4b6b54d9-5jzpq" Jan 21 11:23:39 crc 
kubenswrapper[4881]: I0121 11:23:39.824129 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lwg4c\" (UniqueName: \"kubernetes.io/projected/81dbec06-59d7-4c42-a558-910811fb3811-kube-api-access-lwg4c\") pod \"dnsmasq-dns-6d4b6b54d9-5jzpq\" (UID: \"81dbec06-59d7-4c42-a558-910811fb3811\") " pod="openstack/dnsmasq-dns-6d4b6b54d9-5jzpq" Jan 21 11:23:39 crc kubenswrapper[4881]: I0121 11:23:39.986712 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6d4b6b54d9-5jzpq" Jan 21 11:23:40 crc kubenswrapper[4881]: I0121 11:23:40.946701 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6d4b6b54d9-5jzpq"] Jan 21 11:23:41 crc kubenswrapper[4881]: I0121 11:23:41.633761 4881 generic.go:334] "Generic (PLEG): container finished" podID="81dbec06-59d7-4c42-a558-910811fb3811" containerID="7b3d565271b021e09dee5880082bea3cf44364df7d0a06382823cae7b26b1046" exitCode=0 Jan 21 11:23:41 crc kubenswrapper[4881]: I0121 11:23:41.633823 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d4b6b54d9-5jzpq" event={"ID":"81dbec06-59d7-4c42-a558-910811fb3811","Type":"ContainerDied","Data":"7b3d565271b021e09dee5880082bea3cf44364df7d0a06382823cae7b26b1046"} Jan 21 11:23:41 crc kubenswrapper[4881]: I0121 11:23:41.634379 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d4b6b54d9-5jzpq" event={"ID":"81dbec06-59d7-4c42-a558-910811fb3811","Type":"ContainerStarted","Data":"14e34995d6813b59d5fbddbd68a531e00edeb5c9ae370d72d56de9da156f7345"} Jan 21 11:23:42 crc kubenswrapper[4881]: I0121 11:23:42.666470 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d4b6b54d9-5jzpq" event={"ID":"81dbec06-59d7-4c42-a558-910811fb3811","Type":"ContainerStarted","Data":"a807273d95c9864f3ecabade018dc0a91eb28a83bcfcbef9786d9473502a12a5"} Jan 21 11:23:42 crc kubenswrapper[4881]: I0121 11:23:42.666844 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6d4b6b54d9-5jzpq" Jan 21 11:23:42 crc kubenswrapper[4881]: I0121 11:23:42.699534 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6d4b6b54d9-5jzpq" podStartSLOduration=3.69950894 podStartE2EDuration="3.69950894s" podCreationTimestamp="2026-01-21 11:23:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 11:23:42.690569692 +0000 UTC m=+1609.950526171" watchObservedRunningTime="2026-01-21 11:23:42.69950894 +0000 UTC m=+1609.959465409" Jan 21 11:23:43 crc kubenswrapper[4881]: I0121 11:23:43.069046 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 21 11:23:43 crc kubenswrapper[4881]: I0121 11:23:43.069628 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="cb8d5e00-825f-4df2-9720-3de7be3e0837" containerName="nova-api-log" containerID="cri-o://bb359efc78c8172dc142be7dbd66247c577cc9e68e31667efda8eaa45e2b6e87" gracePeriod=30 Jan 21 11:23:43 crc kubenswrapper[4881]: I0121 11:23:43.069716 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="cb8d5e00-825f-4df2-9720-3de7be3e0837" containerName="nova-api-api" containerID="cri-o://2dfa759ad5f3629117201697e51e9070f4706b866df3273a3c40b4948e6b8705" gracePeriod=30 Jan 21 11:23:43 crc kubenswrapper[4881]: I0121 
11:23:43.081818 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-vpxn7" Jan 21 11:23:43 crc kubenswrapper[4881]: I0121 11:23:43.083354 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-vpxn7" Jan 21 11:23:43 crc kubenswrapper[4881]: I0121 11:23:43.524136 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-llk4v" Jan 21 11:23:43 crc kubenswrapper[4881]: I0121 11:23:43.575138 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-llk4v" Jan 21 11:23:43 crc kubenswrapper[4881]: I0121 11:23:43.679611 4881 generic.go:334] "Generic (PLEG): container finished" podID="cb8d5e00-825f-4df2-9720-3de7be3e0837" containerID="bb359efc78c8172dc142be7dbd66247c577cc9e68e31667efda8eaa45e2b6e87" exitCode=143 Jan 21 11:23:43 crc kubenswrapper[4881]: I0121 11:23:43.679651 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"cb8d5e00-825f-4df2-9720-3de7be3e0837","Type":"ContainerDied","Data":"bb359efc78c8172dc142be7dbd66247c577cc9e68e31667efda8eaa45e2b6e87"} Jan 21 11:23:44 crc kubenswrapper[4881]: I0121 11:23:44.191130 4881 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-vpxn7" podUID="52706c95-5c29-44cb-bc9d-2873d3a4d437" containerName="registry-server" probeResult="failure" output=< Jan 21 11:23:44 crc kubenswrapper[4881]: timeout: failed to connect service ":50051" within 1s Jan 21 11:23:44 crc kubenswrapper[4881]: > Jan 21 11:23:44 crc kubenswrapper[4881]: I0121 11:23:44.232601 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-llk4v"] Jan 21 11:23:44 crc kubenswrapper[4881]: I0121 11:23:44.692246 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-llk4v" podUID="eb575609-e27b-438e-b305-754fed7dbd0c" containerName="registry-server" containerID="cri-o://2219e8b5d6f5a7a40bd416bfddf08247dd9bb87c1adf182b223943c7ce68d925" gracePeriod=2 Jan 21 11:23:44 crc kubenswrapper[4881]: I0121 11:23:44.735420 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Jan 21 11:23:45 crc kubenswrapper[4881]: I0121 11:23:45.027967 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Jan 21 11:23:45 crc kubenswrapper[4881]: I0121 11:23:45.356273 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-llk4v" Jan 21 11:23:45 crc kubenswrapper[4881]: I0121 11:23:45.516599 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ctnrj\" (UniqueName: \"kubernetes.io/projected/eb575609-e27b-438e-b305-754fed7dbd0c-kube-api-access-ctnrj\") pod \"eb575609-e27b-438e-b305-754fed7dbd0c\" (UID: \"eb575609-e27b-438e-b305-754fed7dbd0c\") " Jan 21 11:23:45 crc kubenswrapper[4881]: I0121 11:23:45.517312 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eb575609-e27b-438e-b305-754fed7dbd0c-catalog-content\") pod \"eb575609-e27b-438e-b305-754fed7dbd0c\" (UID: \"eb575609-e27b-438e-b305-754fed7dbd0c\") " Jan 21 11:23:45 crc kubenswrapper[4881]: I0121 11:23:45.517456 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eb575609-e27b-438e-b305-754fed7dbd0c-utilities\") pod \"eb575609-e27b-438e-b305-754fed7dbd0c\" (UID: \"eb575609-e27b-438e-b305-754fed7dbd0c\") " Jan 21 11:23:45 crc kubenswrapper[4881]: I0121 11:23:45.517938 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eb575609-e27b-438e-b305-754fed7dbd0c-utilities" (OuterVolumeSpecName: "utilities") pod "eb575609-e27b-438e-b305-754fed7dbd0c" (UID: "eb575609-e27b-438e-b305-754fed7dbd0c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 11:23:45 crc kubenswrapper[4881]: I0121 11:23:45.518872 4881 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eb575609-e27b-438e-b305-754fed7dbd0c-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 11:23:45 crc kubenswrapper[4881]: I0121 11:23:45.528368 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb575609-e27b-438e-b305-754fed7dbd0c-kube-api-access-ctnrj" (OuterVolumeSpecName: "kube-api-access-ctnrj") pod "eb575609-e27b-438e-b305-754fed7dbd0c" (UID: "eb575609-e27b-438e-b305-754fed7dbd0c"). InnerVolumeSpecName "kube-api-access-ctnrj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:23:45 crc kubenswrapper[4881]: I0121 11:23:45.560108 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eb575609-e27b-438e-b305-754fed7dbd0c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "eb575609-e27b-438e-b305-754fed7dbd0c" (UID: "eb575609-e27b-438e-b305-754fed7dbd0c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 11:23:45 crc kubenswrapper[4881]: I0121 11:23:45.620991 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ctnrj\" (UniqueName: \"kubernetes.io/projected/eb575609-e27b-438e-b305-754fed7dbd0c-kube-api-access-ctnrj\") on node \"crc\" DevicePath \"\"" Jan 21 11:23:45 crc kubenswrapper[4881]: I0121 11:23:45.621030 4881 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eb575609-e27b-438e-b305-754fed7dbd0c-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 11:23:45 crc kubenswrapper[4881]: I0121 11:23:45.705209 4881 generic.go:334] "Generic (PLEG): container finished" podID="eb575609-e27b-438e-b305-754fed7dbd0c" containerID="2219e8b5d6f5a7a40bd416bfddf08247dd9bb87c1adf182b223943c7ce68d925" exitCode=0 Jan 21 11:23:45 crc kubenswrapper[4881]: I0121 11:23:45.705278 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-llk4v" Jan 21 11:23:45 crc kubenswrapper[4881]: I0121 11:23:45.705295 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-llk4v" event={"ID":"eb575609-e27b-438e-b305-754fed7dbd0c","Type":"ContainerDied","Data":"2219e8b5d6f5a7a40bd416bfddf08247dd9bb87c1adf182b223943c7ce68d925"} Jan 21 11:23:45 crc kubenswrapper[4881]: I0121 11:23:45.706103 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-llk4v" event={"ID":"eb575609-e27b-438e-b305-754fed7dbd0c","Type":"ContainerDied","Data":"e500de19668bd863773799072a1748fadbbfeb7a569a7019d89d37c178966126"} Jan 21 11:23:45 crc kubenswrapper[4881]: I0121 11:23:45.706153 4881 scope.go:117] "RemoveContainer" containerID="2219e8b5d6f5a7a40bd416bfddf08247dd9bb87c1adf182b223943c7ce68d925" Jan 21 11:23:45 crc kubenswrapper[4881]: I0121 11:23:45.727618 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Jan 21 11:23:45 crc kubenswrapper[4881]: I0121 11:23:45.749560 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-llk4v"] Jan 21 11:23:45 crc kubenswrapper[4881]: I0121 11:23:45.750127 4881 scope.go:117] "RemoveContainer" containerID="e929562399ff233dd1a78f425dfd303c1e447dae54c360f17a5f7618c63f02f3" Jan 21 11:23:45 crc kubenswrapper[4881]: I0121 11:23:45.762103 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 21 11:23:45 crc kubenswrapper[4881]: I0121 11:23:45.762401 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="201fb26a-87ca-4563-a6ae-1279da9cf1d9" containerName="ceilometer-central-agent" containerID="cri-o://21e7befe3db09a0933a930666700555026336530bd06628c4d04638027f5dd37" gracePeriod=30 Jan 21 11:23:45 crc kubenswrapper[4881]: I0121 11:23:45.762549 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="201fb26a-87ca-4563-a6ae-1279da9cf1d9" containerName="proxy-httpd" containerID="cri-o://23fdf8bf079c92f8fffad95a39aeec48a0ce6ca5c3d367fd5c481ae6d0630f69" gracePeriod=30 Jan 21 11:23:45 crc kubenswrapper[4881]: I0121 11:23:45.762602 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="201fb26a-87ca-4563-a6ae-1279da9cf1d9" containerName="sg-core" 
containerID="cri-o://8c2d9a4ccbe836f11691d18a98cd55c1064fb634fa10ae39a24965732048adf6" gracePeriod=30 Jan 21 11:23:45 crc kubenswrapper[4881]: I0121 11:23:45.762639 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="201fb26a-87ca-4563-a6ae-1279da9cf1d9" containerName="ceilometer-notification-agent" containerID="cri-o://a06398efdd27167761cc6251bd8384a3c3e25770859f0b77e181cd4905e9a62e" gracePeriod=30 Jan 21 11:23:45 crc kubenswrapper[4881]: I0121 11:23:45.776017 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-llk4v"] Jan 21 11:23:45 crc kubenswrapper[4881]: I0121 11:23:45.815841 4881 scope.go:117] "RemoveContainer" containerID="1728cee101905ae9b1f39e05752401a8a7ecb94af74ddb10abd60ea126aafa34" Jan 21 11:23:45 crc kubenswrapper[4881]: I0121 11:23:45.866249 4881 scope.go:117] "RemoveContainer" containerID="2219e8b5d6f5a7a40bd416bfddf08247dd9bb87c1adf182b223943c7ce68d925" Jan 21 11:23:45 crc kubenswrapper[4881]: E0121 11:23:45.866812 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2219e8b5d6f5a7a40bd416bfddf08247dd9bb87c1adf182b223943c7ce68d925\": container with ID starting with 2219e8b5d6f5a7a40bd416bfddf08247dd9bb87c1adf182b223943c7ce68d925 not found: ID does not exist" containerID="2219e8b5d6f5a7a40bd416bfddf08247dd9bb87c1adf182b223943c7ce68d925" Jan 21 11:23:45 crc kubenswrapper[4881]: I0121 11:23:45.866915 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2219e8b5d6f5a7a40bd416bfddf08247dd9bb87c1adf182b223943c7ce68d925"} err="failed to get container status \"2219e8b5d6f5a7a40bd416bfddf08247dd9bb87c1adf182b223943c7ce68d925\": rpc error: code = NotFound desc = could not find container \"2219e8b5d6f5a7a40bd416bfddf08247dd9bb87c1adf182b223943c7ce68d925\": container with ID starting with 2219e8b5d6f5a7a40bd416bfddf08247dd9bb87c1adf182b223943c7ce68d925 not found: ID does not exist" Jan 21 11:23:45 crc kubenswrapper[4881]: I0121 11:23:45.867016 4881 scope.go:117] "RemoveContainer" containerID="e929562399ff233dd1a78f425dfd303c1e447dae54c360f17a5f7618c63f02f3" Jan 21 11:23:45 crc kubenswrapper[4881]: E0121 11:23:45.867359 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e929562399ff233dd1a78f425dfd303c1e447dae54c360f17a5f7618c63f02f3\": container with ID starting with e929562399ff233dd1a78f425dfd303c1e447dae54c360f17a5f7618c63f02f3 not found: ID does not exist" containerID="e929562399ff233dd1a78f425dfd303c1e447dae54c360f17a5f7618c63f02f3" Jan 21 11:23:45 crc kubenswrapper[4881]: I0121 11:23:45.867451 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e929562399ff233dd1a78f425dfd303c1e447dae54c360f17a5f7618c63f02f3"} err="failed to get container status \"e929562399ff233dd1a78f425dfd303c1e447dae54c360f17a5f7618c63f02f3\": rpc error: code = NotFound desc = could not find container \"e929562399ff233dd1a78f425dfd303c1e447dae54c360f17a5f7618c63f02f3\": container with ID starting with e929562399ff233dd1a78f425dfd303c1e447dae54c360f17a5f7618c63f02f3 not found: ID does not exist" Jan 21 11:23:45 crc kubenswrapper[4881]: I0121 11:23:45.867528 4881 scope.go:117] "RemoveContainer" containerID="1728cee101905ae9b1f39e05752401a8a7ecb94af74ddb10abd60ea126aafa34" Jan 21 11:23:45 crc kubenswrapper[4881]: E0121 11:23:45.867839 4881 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1728cee101905ae9b1f39e05752401a8a7ecb94af74ddb10abd60ea126aafa34\": container with ID starting with 1728cee101905ae9b1f39e05752401a8a7ecb94af74ddb10abd60ea126aafa34 not found: ID does not exist" containerID="1728cee101905ae9b1f39e05752401a8a7ecb94af74ddb10abd60ea126aafa34" Jan 21 11:23:45 crc kubenswrapper[4881]: I0121 11:23:45.867942 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1728cee101905ae9b1f39e05752401a8a7ecb94af74ddb10abd60ea126aafa34"} err="failed to get container status \"1728cee101905ae9b1f39e05752401a8a7ecb94af74ddb10abd60ea126aafa34\": rpc error: code = NotFound desc = could not find container \"1728cee101905ae9b1f39e05752401a8a7ecb94af74ddb10abd60ea126aafa34\": container with ID starting with 1728cee101905ae9b1f39e05752401a8a7ecb94af74ddb10abd60ea126aafa34 not found: ID does not exist" Jan 21 11:23:45 crc kubenswrapper[4881]: I0121 11:23:45.999753 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-bdc49"] Jan 21 11:23:46 crc kubenswrapper[4881]: E0121 11:23:46.000323 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb575609-e27b-438e-b305-754fed7dbd0c" containerName="extract-content" Jan 21 11:23:46 crc kubenswrapper[4881]: I0121 11:23:46.000341 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb575609-e27b-438e-b305-754fed7dbd0c" containerName="extract-content" Jan 21 11:23:46 crc kubenswrapper[4881]: E0121 11:23:46.000369 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb575609-e27b-438e-b305-754fed7dbd0c" containerName="registry-server" Jan 21 11:23:46 crc kubenswrapper[4881]: I0121 11:23:46.000376 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb575609-e27b-438e-b305-754fed7dbd0c" containerName="registry-server" Jan 21 11:23:46 crc kubenswrapper[4881]: E0121 11:23:46.000386 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb575609-e27b-438e-b305-754fed7dbd0c" containerName="extract-utilities" Jan 21 11:23:46 crc kubenswrapper[4881]: I0121 11:23:46.000392 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb575609-e27b-438e-b305-754fed7dbd0c" containerName="extract-utilities" Jan 21 11:23:46 crc kubenswrapper[4881]: I0121 11:23:46.000598 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb575609-e27b-438e-b305-754fed7dbd0c" containerName="registry-server" Jan 21 11:23:46 crc kubenswrapper[4881]: I0121 11:23:46.001384 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-bdc49" Jan 21 11:23:46 crc kubenswrapper[4881]: I0121 11:23:46.004326 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Jan 21 11:23:46 crc kubenswrapper[4881]: I0121 11:23:46.004545 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Jan 21 11:23:46 crc kubenswrapper[4881]: I0121 11:23:46.011839 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-bdc49"] Jan 21 11:23:46 crc kubenswrapper[4881]: I0121 11:23:46.148430 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d8ffc48-6b0f-48d1-b13d-8a766f5b604a-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-bdc49\" (UID: \"3d8ffc48-6b0f-48d1-b13d-8a766f5b604a\") " pod="openstack/nova-cell1-cell-mapping-bdc49" Jan 21 11:23:46 crc kubenswrapper[4881]: I0121 11:23:46.148491 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qvcfc\" (UniqueName: \"kubernetes.io/projected/3d8ffc48-6b0f-48d1-b13d-8a766f5b604a-kube-api-access-qvcfc\") pod \"nova-cell1-cell-mapping-bdc49\" (UID: \"3d8ffc48-6b0f-48d1-b13d-8a766f5b604a\") " pod="openstack/nova-cell1-cell-mapping-bdc49" Jan 21 11:23:46 crc kubenswrapper[4881]: I0121 11:23:46.148516 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d8ffc48-6b0f-48d1-b13d-8a766f5b604a-config-data\") pod \"nova-cell1-cell-mapping-bdc49\" (UID: \"3d8ffc48-6b0f-48d1-b13d-8a766f5b604a\") " pod="openstack/nova-cell1-cell-mapping-bdc49" Jan 21 11:23:46 crc kubenswrapper[4881]: I0121 11:23:46.148619 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3d8ffc48-6b0f-48d1-b13d-8a766f5b604a-scripts\") pod \"nova-cell1-cell-mapping-bdc49\" (UID: \"3d8ffc48-6b0f-48d1-b13d-8a766f5b604a\") " pod="openstack/nova-cell1-cell-mapping-bdc49" Jan 21 11:23:46 crc kubenswrapper[4881]: I0121 11:23:46.250208 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d8ffc48-6b0f-48d1-b13d-8a766f5b604a-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-bdc49\" (UID: \"3d8ffc48-6b0f-48d1-b13d-8a766f5b604a\") " pod="openstack/nova-cell1-cell-mapping-bdc49" Jan 21 11:23:46 crc kubenswrapper[4881]: I0121 11:23:46.250491 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qvcfc\" (UniqueName: \"kubernetes.io/projected/3d8ffc48-6b0f-48d1-b13d-8a766f5b604a-kube-api-access-qvcfc\") pod \"nova-cell1-cell-mapping-bdc49\" (UID: \"3d8ffc48-6b0f-48d1-b13d-8a766f5b604a\") " pod="openstack/nova-cell1-cell-mapping-bdc49" Jan 21 11:23:46 crc kubenswrapper[4881]: I0121 11:23:46.250514 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d8ffc48-6b0f-48d1-b13d-8a766f5b604a-config-data\") pod \"nova-cell1-cell-mapping-bdc49\" (UID: \"3d8ffc48-6b0f-48d1-b13d-8a766f5b604a\") " pod="openstack/nova-cell1-cell-mapping-bdc49" Jan 21 11:23:46 crc kubenswrapper[4881]: I0121 11:23:46.250579 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/3d8ffc48-6b0f-48d1-b13d-8a766f5b604a-scripts\") pod \"nova-cell1-cell-mapping-bdc49\" (UID: \"3d8ffc48-6b0f-48d1-b13d-8a766f5b604a\") " pod="openstack/nova-cell1-cell-mapping-bdc49" Jan 21 11:23:46 crc kubenswrapper[4881]: I0121 11:23:46.257933 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3d8ffc48-6b0f-48d1-b13d-8a766f5b604a-scripts\") pod \"nova-cell1-cell-mapping-bdc49\" (UID: \"3d8ffc48-6b0f-48d1-b13d-8a766f5b604a\") " pod="openstack/nova-cell1-cell-mapping-bdc49" Jan 21 11:23:46 crc kubenswrapper[4881]: I0121 11:23:46.263894 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d8ffc48-6b0f-48d1-b13d-8a766f5b604a-config-data\") pod \"nova-cell1-cell-mapping-bdc49\" (UID: \"3d8ffc48-6b0f-48d1-b13d-8a766f5b604a\") " pod="openstack/nova-cell1-cell-mapping-bdc49" Jan 21 11:23:46 crc kubenswrapper[4881]: I0121 11:23:46.270707 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d8ffc48-6b0f-48d1-b13d-8a766f5b604a-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-bdc49\" (UID: \"3d8ffc48-6b0f-48d1-b13d-8a766f5b604a\") " pod="openstack/nova-cell1-cell-mapping-bdc49" Jan 21 11:23:46 crc kubenswrapper[4881]: I0121 11:23:46.275406 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qvcfc\" (UniqueName: \"kubernetes.io/projected/3d8ffc48-6b0f-48d1-b13d-8a766f5b604a-kube-api-access-qvcfc\") pod \"nova-cell1-cell-mapping-bdc49\" (UID: \"3d8ffc48-6b0f-48d1-b13d-8a766f5b604a\") " pod="openstack/nova-cell1-cell-mapping-bdc49" Jan 21 11:23:46 crc kubenswrapper[4881]: I0121 11:23:46.340798 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-bdc49" Jan 21 11:23:46 crc kubenswrapper[4881]: I0121 11:23:46.997417 4881 generic.go:334] "Generic (PLEG): container finished" podID="cb8d5e00-825f-4df2-9720-3de7be3e0837" containerID="2dfa759ad5f3629117201697e51e9070f4706b866df3273a3c40b4948e6b8705" exitCode=0 Jan 21 11:23:46 crc kubenswrapper[4881]: I0121 11:23:46.997689 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"cb8d5e00-825f-4df2-9720-3de7be3e0837","Type":"ContainerDied","Data":"2dfa759ad5f3629117201697e51e9070f4706b866df3273a3c40b4948e6b8705"} Jan 21 11:23:47 crc kubenswrapper[4881]: I0121 11:23:47.004027 4881 generic.go:334] "Generic (PLEG): container finished" podID="201fb26a-87ca-4563-a6ae-1279da9cf1d9" containerID="23fdf8bf079c92f8fffad95a39aeec48a0ce6ca5c3d367fd5c481ae6d0630f69" exitCode=0 Jan 21 11:23:47 crc kubenswrapper[4881]: I0121 11:23:47.004054 4881 generic.go:334] "Generic (PLEG): container finished" podID="201fb26a-87ca-4563-a6ae-1279da9cf1d9" containerID="8c2d9a4ccbe836f11691d18a98cd55c1064fb634fa10ae39a24965732048adf6" exitCode=2 Jan 21 11:23:47 crc kubenswrapper[4881]: I0121 11:23:47.004062 4881 generic.go:334] "Generic (PLEG): container finished" podID="201fb26a-87ca-4563-a6ae-1279da9cf1d9" containerID="21e7befe3db09a0933a930666700555026336530bd06628c4d04638027f5dd37" exitCode=0 Jan 21 11:23:47 crc kubenswrapper[4881]: I0121 11:23:47.006175 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"201fb26a-87ca-4563-a6ae-1279da9cf1d9","Type":"ContainerDied","Data":"23fdf8bf079c92f8fffad95a39aeec48a0ce6ca5c3d367fd5c481ae6d0630f69"} Jan 21 11:23:47 crc kubenswrapper[4881]: I0121 11:23:47.007031 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"201fb26a-87ca-4563-a6ae-1279da9cf1d9","Type":"ContainerDied","Data":"8c2d9a4ccbe836f11691d18a98cd55c1064fb634fa10ae39a24965732048adf6"} Jan 21 11:23:47 crc kubenswrapper[4881]: I0121 11:23:47.007062 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"201fb26a-87ca-4563-a6ae-1279da9cf1d9","Type":"ContainerDied","Data":"21e7befe3db09a0933a930666700555026336530bd06628c4d04638027f5dd37"} Jan 21 11:23:47 crc kubenswrapper[4881]: I0121 11:23:47.052733 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-bdc49"] Jan 21 11:23:47 crc kubenswrapper[4881]: W0121 11:23:47.095042 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3d8ffc48_6b0f_48d1_b13d_8a766f5b604a.slice/crio-319a1c0ca170ca90fa0753a5c20774856788050e89dac7393a9beb4d1a3b2bec WatchSource:0}: Error finding container 319a1c0ca170ca90fa0753a5c20774856788050e89dac7393a9beb4d1a3b2bec: Status 404 returned error can't find the container with id 319a1c0ca170ca90fa0753a5c20774856788050e89dac7393a9beb4d1a3b2bec Jan 21 11:23:47 crc kubenswrapper[4881]: I0121 11:23:47.334921 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eb575609-e27b-438e-b305-754fed7dbd0c" path="/var/lib/kubelet/pods/eb575609-e27b-438e-b305-754fed7dbd0c/volumes" Jan 21 11:23:47 crc kubenswrapper[4881]: I0121 11:23:47.606615 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 21 11:23:47 crc kubenswrapper[4881]: I0121 11:23:47.727195 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7klwj\" (UniqueName: \"kubernetes.io/projected/cb8d5e00-825f-4df2-9720-3de7be3e0837-kube-api-access-7klwj\") pod \"cb8d5e00-825f-4df2-9720-3de7be3e0837\" (UID: \"cb8d5e00-825f-4df2-9720-3de7be3e0837\") " Jan 21 11:23:47 crc kubenswrapper[4881]: I0121 11:23:47.727281 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb8d5e00-825f-4df2-9720-3de7be3e0837-combined-ca-bundle\") pod \"cb8d5e00-825f-4df2-9720-3de7be3e0837\" (UID: \"cb8d5e00-825f-4df2-9720-3de7be3e0837\") " Jan 21 11:23:47 crc kubenswrapper[4881]: I0121 11:23:47.727372 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cb8d5e00-825f-4df2-9720-3de7be3e0837-config-data\") pod \"cb8d5e00-825f-4df2-9720-3de7be3e0837\" (UID: \"cb8d5e00-825f-4df2-9720-3de7be3e0837\") " Jan 21 11:23:47 crc kubenswrapper[4881]: I0121 11:23:47.727415 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cb8d5e00-825f-4df2-9720-3de7be3e0837-logs\") pod \"cb8d5e00-825f-4df2-9720-3de7be3e0837\" (UID: \"cb8d5e00-825f-4df2-9720-3de7be3e0837\") " Jan 21 11:23:47 crc kubenswrapper[4881]: I0121 11:23:47.728584 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cb8d5e00-825f-4df2-9720-3de7be3e0837-logs" (OuterVolumeSpecName: "logs") pod "cb8d5e00-825f-4df2-9720-3de7be3e0837" (UID: "cb8d5e00-825f-4df2-9720-3de7be3e0837"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 11:23:47 crc kubenswrapper[4881]: I0121 11:23:47.735471 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cb8d5e00-825f-4df2-9720-3de7be3e0837-kube-api-access-7klwj" (OuterVolumeSpecName: "kube-api-access-7klwj") pod "cb8d5e00-825f-4df2-9720-3de7be3e0837" (UID: "cb8d5e00-825f-4df2-9720-3de7be3e0837"). InnerVolumeSpecName "kube-api-access-7klwj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:23:47 crc kubenswrapper[4881]: I0121 11:23:47.782358 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cb8d5e00-825f-4df2-9720-3de7be3e0837-config-data" (OuterVolumeSpecName: "config-data") pod "cb8d5e00-825f-4df2-9720-3de7be3e0837" (UID: "cb8d5e00-825f-4df2-9720-3de7be3e0837"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:23:47 crc kubenswrapper[4881]: I0121 11:23:47.798551 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cb8d5e00-825f-4df2-9720-3de7be3e0837-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cb8d5e00-825f-4df2-9720-3de7be3e0837" (UID: "cb8d5e00-825f-4df2-9720-3de7be3e0837"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:23:47 crc kubenswrapper[4881]: I0121 11:23:47.820015 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 21 11:23:47 crc kubenswrapper[4881]: I0121 11:23:47.830859 4881 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cb8d5e00-825f-4df2-9720-3de7be3e0837-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 11:23:47 crc kubenswrapper[4881]: I0121 11:23:47.831219 4881 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cb8d5e00-825f-4df2-9720-3de7be3e0837-logs\") on node \"crc\" DevicePath \"\"" Jan 21 11:23:47 crc kubenswrapper[4881]: I0121 11:23:47.831234 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7klwj\" (UniqueName: \"kubernetes.io/projected/cb8d5e00-825f-4df2-9720-3de7be3e0837-kube-api-access-7klwj\") on node \"crc\" DevicePath \"\"" Jan 21 11:23:47 crc kubenswrapper[4881]: I0121 11:23:47.831249 4881 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb8d5e00-825f-4df2-9720-3de7be3e0837-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 11:23:47 crc kubenswrapper[4881]: I0121 11:23:47.932476 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/201fb26a-87ca-4563-a6ae-1279da9cf1d9-sg-core-conf-yaml\") pod \"201fb26a-87ca-4563-a6ae-1279da9cf1d9\" (UID: \"201fb26a-87ca-4563-a6ae-1279da9cf1d9\") " Jan 21 11:23:47 crc kubenswrapper[4881]: I0121 11:23:47.932607 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/201fb26a-87ca-4563-a6ae-1279da9cf1d9-run-httpd\") pod \"201fb26a-87ca-4563-a6ae-1279da9cf1d9\" (UID: \"201fb26a-87ca-4563-a6ae-1279da9cf1d9\") " Jan 21 11:23:47 crc kubenswrapper[4881]: I0121 11:23:47.932666 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/201fb26a-87ca-4563-a6ae-1279da9cf1d9-log-httpd\") pod \"201fb26a-87ca-4563-a6ae-1279da9cf1d9\" (UID: \"201fb26a-87ca-4563-a6ae-1279da9cf1d9\") " Jan 21 11:23:47 crc kubenswrapper[4881]: I0121 11:23:47.932703 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/201fb26a-87ca-4563-a6ae-1279da9cf1d9-config-data\") pod \"201fb26a-87ca-4563-a6ae-1279da9cf1d9\" (UID: \"201fb26a-87ca-4563-a6ae-1279da9cf1d9\") " Jan 21 11:23:47 crc kubenswrapper[4881]: I0121 11:23:47.932732 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/201fb26a-87ca-4563-a6ae-1279da9cf1d9-ceilometer-tls-certs\") pod \"201fb26a-87ca-4563-a6ae-1279da9cf1d9\" (UID: \"201fb26a-87ca-4563-a6ae-1279da9cf1d9\") " Jan 21 11:23:47 crc kubenswrapper[4881]: I0121 11:23:47.932809 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/201fb26a-87ca-4563-a6ae-1279da9cf1d9-combined-ca-bundle\") pod \"201fb26a-87ca-4563-a6ae-1279da9cf1d9\" (UID: \"201fb26a-87ca-4563-a6ae-1279da9cf1d9\") " Jan 21 11:23:47 crc kubenswrapper[4881]: I0121 11:23:47.932874 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/201fb26a-87ca-4563-a6ae-1279da9cf1d9-scripts\") pod \"201fb26a-87ca-4563-a6ae-1279da9cf1d9\" (UID: 
\"201fb26a-87ca-4563-a6ae-1279da9cf1d9\") " Jan 21 11:23:47 crc kubenswrapper[4881]: I0121 11:23:47.932915 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bc45w\" (UniqueName: \"kubernetes.io/projected/201fb26a-87ca-4563-a6ae-1279da9cf1d9-kube-api-access-bc45w\") pod \"201fb26a-87ca-4563-a6ae-1279da9cf1d9\" (UID: \"201fb26a-87ca-4563-a6ae-1279da9cf1d9\") " Jan 21 11:23:47 crc kubenswrapper[4881]: I0121 11:23:47.934913 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/201fb26a-87ca-4563-a6ae-1279da9cf1d9-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "201fb26a-87ca-4563-a6ae-1279da9cf1d9" (UID: "201fb26a-87ca-4563-a6ae-1279da9cf1d9"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 11:23:47 crc kubenswrapper[4881]: I0121 11:23:47.935624 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/201fb26a-87ca-4563-a6ae-1279da9cf1d9-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "201fb26a-87ca-4563-a6ae-1279da9cf1d9" (UID: "201fb26a-87ca-4563-a6ae-1279da9cf1d9"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 11:23:47 crc kubenswrapper[4881]: I0121 11:23:47.953435 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/201fb26a-87ca-4563-a6ae-1279da9cf1d9-kube-api-access-bc45w" (OuterVolumeSpecName: "kube-api-access-bc45w") pod "201fb26a-87ca-4563-a6ae-1279da9cf1d9" (UID: "201fb26a-87ca-4563-a6ae-1279da9cf1d9"). InnerVolumeSpecName "kube-api-access-bc45w". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:23:47 crc kubenswrapper[4881]: I0121 11:23:47.953932 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/201fb26a-87ca-4563-a6ae-1279da9cf1d9-scripts" (OuterVolumeSpecName: "scripts") pod "201fb26a-87ca-4563-a6ae-1279da9cf1d9" (UID: "201fb26a-87ca-4563-a6ae-1279da9cf1d9"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:23:48 crc kubenswrapper[4881]: I0121 11:23:48.041235 4881 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/201fb26a-87ca-4563-a6ae-1279da9cf1d9-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 21 11:23:48 crc kubenswrapper[4881]: I0121 11:23:48.041261 4881 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/201fb26a-87ca-4563-a6ae-1279da9cf1d9-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 21 11:23:48 crc kubenswrapper[4881]: I0121 11:23:48.041270 4881 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/201fb26a-87ca-4563-a6ae-1279da9cf1d9-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 11:23:48 crc kubenswrapper[4881]: I0121 11:23:48.041279 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bc45w\" (UniqueName: \"kubernetes.io/projected/201fb26a-87ca-4563-a6ae-1279da9cf1d9-kube-api-access-bc45w\") on node \"crc\" DevicePath \"\"" Jan 21 11:23:48 crc kubenswrapper[4881]: I0121 11:23:48.043599 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-bdc49" event={"ID":"3d8ffc48-6b0f-48d1-b13d-8a766f5b604a","Type":"ContainerStarted","Data":"62b5fd9972946ab2305558cba9c0d54f5b29b725654cb25337e61434a431d9ea"} Jan 21 11:23:48 crc kubenswrapper[4881]: I0121 11:23:48.043654 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-bdc49" event={"ID":"3d8ffc48-6b0f-48d1-b13d-8a766f5b604a","Type":"ContainerStarted","Data":"319a1c0ca170ca90fa0753a5c20774856788050e89dac7393a9beb4d1a3b2bec"} Jan 21 11:23:48 crc kubenswrapper[4881]: I0121 11:23:48.049685 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/201fb26a-87ca-4563-a6ae-1279da9cf1d9-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "201fb26a-87ca-4563-a6ae-1279da9cf1d9" (UID: "201fb26a-87ca-4563-a6ae-1279da9cf1d9"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:23:48 crc kubenswrapper[4881]: I0121 11:23:48.058346 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"cb8d5e00-825f-4df2-9720-3de7be3e0837","Type":"ContainerDied","Data":"9b384c1c04b091d7070db9b5be692cbf3307b83743e8c28c7fc7e9002650814f"} Jan 21 11:23:48 crc kubenswrapper[4881]: I0121 11:23:48.058406 4881 scope.go:117] "RemoveContainer" containerID="2dfa759ad5f3629117201697e51e9070f4706b866df3273a3c40b4948e6b8705" Jan 21 11:23:48 crc kubenswrapper[4881]: I0121 11:23:48.058590 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 21 11:23:48 crc kubenswrapper[4881]: I0121 11:23:48.074054 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-bdc49" podStartSLOduration=3.074029371 podStartE2EDuration="3.074029371s" podCreationTimestamp="2026-01-21 11:23:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 11:23:48.065251205 +0000 UTC m=+1615.325207694" watchObservedRunningTime="2026-01-21 11:23:48.074029371 +0000 UTC m=+1615.333985840" Jan 21 11:23:48 crc kubenswrapper[4881]: I0121 11:23:48.075565 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/201fb26a-87ca-4563-a6ae-1279da9cf1d9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "201fb26a-87ca-4563-a6ae-1279da9cf1d9" (UID: "201fb26a-87ca-4563-a6ae-1279da9cf1d9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:23:48 crc kubenswrapper[4881]: I0121 11:23:48.091642 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/201fb26a-87ca-4563-a6ae-1279da9cf1d9-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "201fb26a-87ca-4563-a6ae-1279da9cf1d9" (UID: "201fb26a-87ca-4563-a6ae-1279da9cf1d9"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:23:48 crc kubenswrapper[4881]: I0121 11:23:48.092988 4881 generic.go:334] "Generic (PLEG): container finished" podID="201fb26a-87ca-4563-a6ae-1279da9cf1d9" containerID="a06398efdd27167761cc6251bd8384a3c3e25770859f0b77e181cd4905e9a62e" exitCode=0 Jan 21 11:23:48 crc kubenswrapper[4881]: I0121 11:23:48.093038 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"201fb26a-87ca-4563-a6ae-1279da9cf1d9","Type":"ContainerDied","Data":"a06398efdd27167761cc6251bd8384a3c3e25770859f0b77e181cd4905e9a62e"} Jan 21 11:23:48 crc kubenswrapper[4881]: I0121 11:23:48.093067 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"201fb26a-87ca-4563-a6ae-1279da9cf1d9","Type":"ContainerDied","Data":"66e45f9085cd7aa6bc51a5b18dd439286f856ddcee2ed6d0f6e2f8de173537a4"} Jan 21 11:23:48 crc kubenswrapper[4881]: I0121 11:23:48.093168 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 21 11:23:48 crc kubenswrapper[4881]: I0121 11:23:48.114893 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/201fb26a-87ca-4563-a6ae-1279da9cf1d9-config-data" (OuterVolumeSpecName: "config-data") pod "201fb26a-87ca-4563-a6ae-1279da9cf1d9" (UID: "201fb26a-87ca-4563-a6ae-1279da9cf1d9"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:23:48 crc kubenswrapper[4881]: I0121 11:23:48.143046 4881 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/201fb26a-87ca-4563-a6ae-1279da9cf1d9-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 21 11:23:48 crc kubenswrapper[4881]: I0121 11:23:48.143321 4881 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/201fb26a-87ca-4563-a6ae-1279da9cf1d9-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 11:23:48 crc kubenswrapper[4881]: I0121 11:23:48.143454 4881 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/201fb26a-87ca-4563-a6ae-1279da9cf1d9-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 11:23:48 crc kubenswrapper[4881]: I0121 11:23:48.143563 4881 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/201fb26a-87ca-4563-a6ae-1279da9cf1d9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 11:23:48 crc kubenswrapper[4881]: I0121 11:23:48.191084 4881 scope.go:117] "RemoveContainer" containerID="bb359efc78c8172dc142be7dbd66247c577cc9e68e31667efda8eaa45e2b6e87" Jan 21 11:23:48 crc kubenswrapper[4881]: I0121 11:23:48.216764 4881 scope.go:117] "RemoveContainer" containerID="23fdf8bf079c92f8fffad95a39aeec48a0ce6ca5c3d367fd5c481ae6d0630f69" Jan 21 11:23:48 crc kubenswrapper[4881]: I0121 11:23:48.219389 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 21 11:23:48 crc kubenswrapper[4881]: I0121 11:23:48.240887 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 21 11:23:48 crc kubenswrapper[4881]: I0121 11:23:48.253702 4881 scope.go:117] "RemoveContainer" containerID="8c2d9a4ccbe836f11691d18a98cd55c1064fb634fa10ae39a24965732048adf6" Jan 21 11:23:48 crc kubenswrapper[4881]: I0121 11:23:48.267410 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 21 11:23:48 crc kubenswrapper[4881]: E0121 11:23:48.284349 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="201fb26a-87ca-4563-a6ae-1279da9cf1d9" containerName="ceilometer-notification-agent" Jan 21 11:23:48 crc kubenswrapper[4881]: I0121 11:23:48.284385 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="201fb26a-87ca-4563-a6ae-1279da9cf1d9" containerName="ceilometer-notification-agent" Jan 21 11:23:48 crc kubenswrapper[4881]: E0121 11:23:48.284403 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="201fb26a-87ca-4563-a6ae-1279da9cf1d9" containerName="proxy-httpd" Jan 21 11:23:48 crc kubenswrapper[4881]: I0121 11:23:48.284409 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="201fb26a-87ca-4563-a6ae-1279da9cf1d9" containerName="proxy-httpd" Jan 21 11:23:48 crc kubenswrapper[4881]: E0121 11:23:48.284426 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb8d5e00-825f-4df2-9720-3de7be3e0837" containerName="nova-api-api" Jan 21 11:23:48 crc kubenswrapper[4881]: I0121 11:23:48.284433 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb8d5e00-825f-4df2-9720-3de7be3e0837" containerName="nova-api-api" Jan 21 11:23:48 crc kubenswrapper[4881]: E0121 11:23:48.284464 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="201fb26a-87ca-4563-a6ae-1279da9cf1d9" containerName="sg-core" Jan 21 11:23:48 crc kubenswrapper[4881]: I0121 
11:23:48.284473 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="201fb26a-87ca-4563-a6ae-1279da9cf1d9" containerName="sg-core" Jan 21 11:23:48 crc kubenswrapper[4881]: E0121 11:23:48.284487 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb8d5e00-825f-4df2-9720-3de7be3e0837" containerName="nova-api-log" Jan 21 11:23:48 crc kubenswrapper[4881]: I0121 11:23:48.284494 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb8d5e00-825f-4df2-9720-3de7be3e0837" containerName="nova-api-log" Jan 21 11:23:48 crc kubenswrapper[4881]: E0121 11:23:48.284512 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="201fb26a-87ca-4563-a6ae-1279da9cf1d9" containerName="ceilometer-central-agent" Jan 21 11:23:48 crc kubenswrapper[4881]: I0121 11:23:48.284520 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="201fb26a-87ca-4563-a6ae-1279da9cf1d9" containerName="ceilometer-central-agent" Jan 21 11:23:48 crc kubenswrapper[4881]: I0121 11:23:48.284774 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="201fb26a-87ca-4563-a6ae-1279da9cf1d9" containerName="proxy-httpd" Jan 21 11:23:48 crc kubenswrapper[4881]: I0121 11:23:48.284810 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="201fb26a-87ca-4563-a6ae-1279da9cf1d9" containerName="ceilometer-central-agent" Jan 21 11:23:48 crc kubenswrapper[4881]: I0121 11:23:48.284822 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="201fb26a-87ca-4563-a6ae-1279da9cf1d9" containerName="ceilometer-notification-agent" Jan 21 11:23:48 crc kubenswrapper[4881]: I0121 11:23:48.284830 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="cb8d5e00-825f-4df2-9720-3de7be3e0837" containerName="nova-api-log" Jan 21 11:23:48 crc kubenswrapper[4881]: I0121 11:23:48.284840 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="cb8d5e00-825f-4df2-9720-3de7be3e0837" containerName="nova-api-api" Jan 21 11:23:48 crc kubenswrapper[4881]: I0121 11:23:48.284846 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="201fb26a-87ca-4563-a6ae-1279da9cf1d9" containerName="sg-core" Jan 21 11:23:48 crc kubenswrapper[4881]: I0121 11:23:48.286089 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 21 11:23:48 crc kubenswrapper[4881]: I0121 11:23:48.302982 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Jan 21 11:23:48 crc kubenswrapper[4881]: I0121 11:23:48.303395 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 21 11:23:48 crc kubenswrapper[4881]: I0121 11:23:48.303527 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Jan 21 11:23:48 crc kubenswrapper[4881]: I0121 11:23:48.307068 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 21 11:23:48 crc kubenswrapper[4881]: I0121 11:23:48.358116 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-57mlh\" (UniqueName: \"kubernetes.io/projected/da2439be-4ed2-43a2-adbe-dd4afaa012f3-kube-api-access-57mlh\") pod \"nova-api-0\" (UID: \"da2439be-4ed2-43a2-adbe-dd4afaa012f3\") " pod="openstack/nova-api-0" Jan 21 11:23:48 crc kubenswrapper[4881]: I0121 11:23:48.358180 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/da2439be-4ed2-43a2-adbe-dd4afaa012f3-logs\") pod \"nova-api-0\" (UID: \"da2439be-4ed2-43a2-adbe-dd4afaa012f3\") " pod="openstack/nova-api-0" Jan 21 11:23:48 crc kubenswrapper[4881]: I0121 11:23:48.358214 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da2439be-4ed2-43a2-adbe-dd4afaa012f3-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"da2439be-4ed2-43a2-adbe-dd4afaa012f3\") " pod="openstack/nova-api-0" Jan 21 11:23:48 crc kubenswrapper[4881]: I0121 11:23:48.358239 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/da2439be-4ed2-43a2-adbe-dd4afaa012f3-config-data\") pod \"nova-api-0\" (UID: \"da2439be-4ed2-43a2-adbe-dd4afaa012f3\") " pod="openstack/nova-api-0" Jan 21 11:23:48 crc kubenswrapper[4881]: I0121 11:23:48.358277 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/da2439be-4ed2-43a2-adbe-dd4afaa012f3-public-tls-certs\") pod \"nova-api-0\" (UID: \"da2439be-4ed2-43a2-adbe-dd4afaa012f3\") " pod="openstack/nova-api-0" Jan 21 11:23:48 crc kubenswrapper[4881]: I0121 11:23:48.358320 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/da2439be-4ed2-43a2-adbe-dd4afaa012f3-internal-tls-certs\") pod \"nova-api-0\" (UID: \"da2439be-4ed2-43a2-adbe-dd4afaa012f3\") " pod="openstack/nova-api-0" Jan 21 11:23:48 crc kubenswrapper[4881]: I0121 11:23:48.391967 4881 scope.go:117] "RemoveContainer" containerID="a06398efdd27167761cc6251bd8384a3c3e25770859f0b77e181cd4905e9a62e" Jan 21 11:23:48 crc kubenswrapper[4881]: I0121 11:23:48.460144 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/da2439be-4ed2-43a2-adbe-dd4afaa012f3-public-tls-certs\") pod \"nova-api-0\" (UID: \"da2439be-4ed2-43a2-adbe-dd4afaa012f3\") " pod="openstack/nova-api-0" Jan 21 11:23:48 crc kubenswrapper[4881]: I0121 11:23:48.460246 4881 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/da2439be-4ed2-43a2-adbe-dd4afaa012f3-internal-tls-certs\") pod \"nova-api-0\" (UID: \"da2439be-4ed2-43a2-adbe-dd4afaa012f3\") " pod="openstack/nova-api-0" Jan 21 11:23:48 crc kubenswrapper[4881]: I0121 11:23:48.460377 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-57mlh\" (UniqueName: \"kubernetes.io/projected/da2439be-4ed2-43a2-adbe-dd4afaa012f3-kube-api-access-57mlh\") pod \"nova-api-0\" (UID: \"da2439be-4ed2-43a2-adbe-dd4afaa012f3\") " pod="openstack/nova-api-0" Jan 21 11:23:48 crc kubenswrapper[4881]: I0121 11:23:48.460445 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/da2439be-4ed2-43a2-adbe-dd4afaa012f3-logs\") pod \"nova-api-0\" (UID: \"da2439be-4ed2-43a2-adbe-dd4afaa012f3\") " pod="openstack/nova-api-0" Jan 21 11:23:48 crc kubenswrapper[4881]: I0121 11:23:48.460495 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da2439be-4ed2-43a2-adbe-dd4afaa012f3-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"da2439be-4ed2-43a2-adbe-dd4afaa012f3\") " pod="openstack/nova-api-0" Jan 21 11:23:48 crc kubenswrapper[4881]: I0121 11:23:48.460527 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/da2439be-4ed2-43a2-adbe-dd4afaa012f3-config-data\") pod \"nova-api-0\" (UID: \"da2439be-4ed2-43a2-adbe-dd4afaa012f3\") " pod="openstack/nova-api-0" Jan 21 11:23:48 crc kubenswrapper[4881]: I0121 11:23:48.466310 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/da2439be-4ed2-43a2-adbe-dd4afaa012f3-logs\") pod \"nova-api-0\" (UID: \"da2439be-4ed2-43a2-adbe-dd4afaa012f3\") " pod="openstack/nova-api-0" Jan 21 11:23:48 crc kubenswrapper[4881]: I0121 11:23:48.477171 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/da2439be-4ed2-43a2-adbe-dd4afaa012f3-config-data\") pod \"nova-api-0\" (UID: \"da2439be-4ed2-43a2-adbe-dd4afaa012f3\") " pod="openstack/nova-api-0" Jan 21 11:23:48 crc kubenswrapper[4881]: I0121 11:23:48.477584 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/da2439be-4ed2-43a2-adbe-dd4afaa012f3-internal-tls-certs\") pod \"nova-api-0\" (UID: \"da2439be-4ed2-43a2-adbe-dd4afaa012f3\") " pod="openstack/nova-api-0" Jan 21 11:23:48 crc kubenswrapper[4881]: I0121 11:23:48.479266 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/da2439be-4ed2-43a2-adbe-dd4afaa012f3-public-tls-certs\") pod \"nova-api-0\" (UID: \"da2439be-4ed2-43a2-adbe-dd4afaa012f3\") " pod="openstack/nova-api-0" Jan 21 11:23:48 crc kubenswrapper[4881]: I0121 11:23:48.482766 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da2439be-4ed2-43a2-adbe-dd4afaa012f3-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"da2439be-4ed2-43a2-adbe-dd4afaa012f3\") " pod="openstack/nova-api-0" Jan 21 11:23:48 crc kubenswrapper[4881]: I0121 11:23:48.485950 4881 scope.go:117] "RemoveContainer" containerID="21e7befe3db09a0933a930666700555026336530bd06628c4d04638027f5dd37" 
Jan 21 11:23:48 crc kubenswrapper[4881]: I0121 11:23:48.502879 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 21 11:23:48 crc kubenswrapper[4881]: I0121 11:23:48.508376 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-57mlh\" (UniqueName: \"kubernetes.io/projected/da2439be-4ed2-43a2-adbe-dd4afaa012f3-kube-api-access-57mlh\") pod \"nova-api-0\" (UID: \"da2439be-4ed2-43a2-adbe-dd4afaa012f3\") " pod="openstack/nova-api-0"
Jan 21 11:23:48 crc kubenswrapper[4881]: I0121 11:23:48.536379 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Jan 21 11:23:48 crc kubenswrapper[4881]: I0121 11:23:48.548846 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Jan 21 11:23:48 crc kubenswrapper[4881]: I0121 11:23:48.551665 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 21 11:23:48 crc kubenswrapper[4881]: I0121 11:23:48.552573 4881 scope.go:117] "RemoveContainer" containerID="23fdf8bf079c92f8fffad95a39aeec48a0ce6ca5c3d367fd5c481ae6d0630f69"
Jan 21 11:23:48 crc kubenswrapper[4881]: E0121 11:23:48.560995 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"23fdf8bf079c92f8fffad95a39aeec48a0ce6ca5c3d367fd5c481ae6d0630f69\": container with ID starting with 23fdf8bf079c92f8fffad95a39aeec48a0ce6ca5c3d367fd5c481ae6d0630f69 not found: ID does not exist" containerID="23fdf8bf079c92f8fffad95a39aeec48a0ce6ca5c3d367fd5c481ae6d0630f69"
Jan 21 11:23:48 crc kubenswrapper[4881]: I0121 11:23:48.561074 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"23fdf8bf079c92f8fffad95a39aeec48a0ce6ca5c3d367fd5c481ae6d0630f69"} err="failed to get container status \"23fdf8bf079c92f8fffad95a39aeec48a0ce6ca5c3d367fd5c481ae6d0630f69\": rpc error: code = NotFound desc = could not find container \"23fdf8bf079c92f8fffad95a39aeec48a0ce6ca5c3d367fd5c481ae6d0630f69\": container with ID starting with 23fdf8bf079c92f8fffad95a39aeec48a0ce6ca5c3d367fd5c481ae6d0630f69 not found: ID does not exist"
Jan 21 11:23:48 crc kubenswrapper[4881]: I0121 11:23:48.561126 4881 scope.go:117] "RemoveContainer" containerID="8c2d9a4ccbe836f11691d18a98cd55c1064fb634fa10ae39a24965732048adf6"
Jan 21 11:23:48 crc kubenswrapper[4881]: I0121 11:23:48.561194 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Jan 21 11:23:48 crc kubenswrapper[4881]: I0121 11:23:48.561443 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Jan 21 11:23:48 crc kubenswrapper[4881]: I0121 11:23:48.561557 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc"
Jan 21 11:23:48 crc kubenswrapper[4881]: E0121 11:23:48.569156 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8c2d9a4ccbe836f11691d18a98cd55c1064fb634fa10ae39a24965732048adf6\": container with ID starting with 8c2d9a4ccbe836f11691d18a98cd55c1064fb634fa10ae39a24965732048adf6 not found: ID does not exist" containerID="8c2d9a4ccbe836f11691d18a98cd55c1064fb634fa10ae39a24965732048adf6"
Jan 21 11:23:48 crc kubenswrapper[4881]: I0121 11:23:48.569489 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8c2d9a4ccbe836f11691d18a98cd55c1064fb634fa10ae39a24965732048adf6"} err="failed to get container status \"8c2d9a4ccbe836f11691d18a98cd55c1064fb634fa10ae39a24965732048adf6\": rpc error: code = NotFound desc = could not find container \"8c2d9a4ccbe836f11691d18a98cd55c1064fb634fa10ae39a24965732048adf6\": container with ID starting with 8c2d9a4ccbe836f11691d18a98cd55c1064fb634fa10ae39a24965732048adf6 not found: ID does not exist"
Jan 21 11:23:48 crc kubenswrapper[4881]: I0121 11:23:48.569530 4881 scope.go:117] "RemoveContainer" containerID="a06398efdd27167761cc6251bd8384a3c3e25770859f0b77e181cd4905e9a62e"
Jan 21 11:23:48 crc kubenswrapper[4881]: E0121 11:23:48.570325 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a06398efdd27167761cc6251bd8384a3c3e25770859f0b77e181cd4905e9a62e\": container with ID starting with a06398efdd27167761cc6251bd8384a3c3e25770859f0b77e181cd4905e9a62e not found: ID does not exist" containerID="a06398efdd27167761cc6251bd8384a3c3e25770859f0b77e181cd4905e9a62e"
Jan 21 11:23:48 crc kubenswrapper[4881]: I0121 11:23:48.570369 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a06398efdd27167761cc6251bd8384a3c3e25770859f0b77e181cd4905e9a62e"} err="failed to get container status \"a06398efdd27167761cc6251bd8384a3c3e25770859f0b77e181cd4905e9a62e\": rpc error: code = NotFound desc = could not find container \"a06398efdd27167761cc6251bd8384a3c3e25770859f0b77e181cd4905e9a62e\": container with ID starting with a06398efdd27167761cc6251bd8384a3c3e25770859f0b77e181cd4905e9a62e not found: ID does not exist"
Jan 21 11:23:48 crc kubenswrapper[4881]: I0121 11:23:48.570399 4881 scope.go:117] "RemoveContainer" containerID="21e7befe3db09a0933a930666700555026336530bd06628c4d04638027f5dd37"
Jan 21 11:23:48 crc kubenswrapper[4881]: E0121 11:23:48.570746 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"21e7befe3db09a0933a930666700555026336530bd06628c4d04638027f5dd37\": container with ID starting with 21e7befe3db09a0933a930666700555026336530bd06628c4d04638027f5dd37 not found: ID does not exist" containerID="21e7befe3db09a0933a930666700555026336530bd06628c4d04638027f5dd37"
Jan 21 11:23:48 crc kubenswrapper[4881]: I0121 11:23:48.570773 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"21e7befe3db09a0933a930666700555026336530bd06628c4d04638027f5dd37"} err="failed to get container status \"21e7befe3db09a0933a930666700555026336530bd06628c4d04638027f5dd37\": rpc error: code = NotFound desc = could not find container \"21e7befe3db09a0933a930666700555026336530bd06628c4d04638027f5dd37\": container with ID starting with 21e7befe3db09a0933a930666700555026336530bd06628c4d04638027f5dd37 not found: ID does not exist"
Jan 21 11:23:48 crc kubenswrapper[4881]: I0121 11:23:48.581173 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 21 11:23:48 crc kubenswrapper[4881]: I0121 11:23:48.633156 4881 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/nova-api-0" Jan 21 11:23:48 crc kubenswrapper[4881]: I0121 11:23:48.671679 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5926a818-11da-4b6b-bae0-79e6d9e10728-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5926a818-11da-4b6b-bae0-79e6d9e10728\") " pod="openstack/ceilometer-0" Jan 21 11:23:48 crc kubenswrapper[4881]: I0121 11:23:48.671847 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/5926a818-11da-4b6b-bae0-79e6d9e10728-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"5926a818-11da-4b6b-bae0-79e6d9e10728\") " pod="openstack/ceilometer-0" Jan 21 11:23:48 crc kubenswrapper[4881]: I0121 11:23:48.671961 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5926a818-11da-4b6b-bae0-79e6d9e10728-log-httpd\") pod \"ceilometer-0\" (UID: \"5926a818-11da-4b6b-bae0-79e6d9e10728\") " pod="openstack/ceilometer-0" Jan 21 11:23:48 crc kubenswrapper[4881]: I0121 11:23:48.672012 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5926a818-11da-4b6b-bae0-79e6d9e10728-config-data\") pod \"ceilometer-0\" (UID: \"5926a818-11da-4b6b-bae0-79e6d9e10728\") " pod="openstack/ceilometer-0" Jan 21 11:23:48 crc kubenswrapper[4881]: I0121 11:23:48.672035 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5926a818-11da-4b6b-bae0-79e6d9e10728-scripts\") pod \"ceilometer-0\" (UID: \"5926a818-11da-4b6b-bae0-79e6d9e10728\") " pod="openstack/ceilometer-0" Jan 21 11:23:48 crc kubenswrapper[4881]: I0121 11:23:48.672082 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n6dvp\" (UniqueName: \"kubernetes.io/projected/5926a818-11da-4b6b-bae0-79e6d9e10728-kube-api-access-n6dvp\") pod \"ceilometer-0\" (UID: \"5926a818-11da-4b6b-bae0-79e6d9e10728\") " pod="openstack/ceilometer-0" Jan 21 11:23:48 crc kubenswrapper[4881]: I0121 11:23:48.672155 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5926a818-11da-4b6b-bae0-79e6d9e10728-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5926a818-11da-4b6b-bae0-79e6d9e10728\") " pod="openstack/ceilometer-0" Jan 21 11:23:48 crc kubenswrapper[4881]: I0121 11:23:48.672279 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5926a818-11da-4b6b-bae0-79e6d9e10728-run-httpd\") pod \"ceilometer-0\" (UID: \"5926a818-11da-4b6b-bae0-79e6d9e10728\") " pod="openstack/ceilometer-0" Jan 21 11:23:48 crc kubenswrapper[4881]: I0121 11:23:48.773933 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5926a818-11da-4b6b-bae0-79e6d9e10728-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5926a818-11da-4b6b-bae0-79e6d9e10728\") " pod="openstack/ceilometer-0" Jan 21 11:23:48 crc kubenswrapper[4881]: I0121 11:23:48.779076 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5926a818-11da-4b6b-bae0-79e6d9e10728-run-httpd\") pod \"ceilometer-0\" (UID: \"5926a818-11da-4b6b-bae0-79e6d9e10728\") " pod="openstack/ceilometer-0" Jan 21 11:23:48 crc kubenswrapper[4881]: I0121 11:23:48.779299 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5926a818-11da-4b6b-bae0-79e6d9e10728-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5926a818-11da-4b6b-bae0-79e6d9e10728\") " pod="openstack/ceilometer-0" Jan 21 11:23:48 crc kubenswrapper[4881]: I0121 11:23:48.779487 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/5926a818-11da-4b6b-bae0-79e6d9e10728-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"5926a818-11da-4b6b-bae0-79e6d9e10728\") " pod="openstack/ceilometer-0" Jan 21 11:23:48 crc kubenswrapper[4881]: I0121 11:23:48.779630 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5926a818-11da-4b6b-bae0-79e6d9e10728-run-httpd\") pod \"ceilometer-0\" (UID: \"5926a818-11da-4b6b-bae0-79e6d9e10728\") " pod="openstack/ceilometer-0" Jan 21 11:23:48 crc kubenswrapper[4881]: I0121 11:23:48.779799 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5926a818-11da-4b6b-bae0-79e6d9e10728-log-httpd\") pod \"ceilometer-0\" (UID: \"5926a818-11da-4b6b-bae0-79e6d9e10728\") " pod="openstack/ceilometer-0" Jan 21 11:23:48 crc kubenswrapper[4881]: I0121 11:23:48.779928 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5926a818-11da-4b6b-bae0-79e6d9e10728-config-data\") pod \"ceilometer-0\" (UID: \"5926a818-11da-4b6b-bae0-79e6d9e10728\") " pod="openstack/ceilometer-0" Jan 21 11:23:48 crc kubenswrapper[4881]: I0121 11:23:48.779961 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5926a818-11da-4b6b-bae0-79e6d9e10728-scripts\") pod \"ceilometer-0\" (UID: \"5926a818-11da-4b6b-bae0-79e6d9e10728\") " pod="openstack/ceilometer-0" Jan 21 11:23:48 crc kubenswrapper[4881]: I0121 11:23:48.780082 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n6dvp\" (UniqueName: \"kubernetes.io/projected/5926a818-11da-4b6b-bae0-79e6d9e10728-kube-api-access-n6dvp\") pod \"ceilometer-0\" (UID: \"5926a818-11da-4b6b-bae0-79e6d9e10728\") " pod="openstack/ceilometer-0" Jan 21 11:23:48 crc kubenswrapper[4881]: I0121 11:23:48.784444 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5926a818-11da-4b6b-bae0-79e6d9e10728-scripts\") pod \"ceilometer-0\" (UID: \"5926a818-11da-4b6b-bae0-79e6d9e10728\") " pod="openstack/ceilometer-0" Jan 21 11:23:48 crc kubenswrapper[4881]: I0121 11:23:48.784760 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5926a818-11da-4b6b-bae0-79e6d9e10728-log-httpd\") pod \"ceilometer-0\" (UID: \"5926a818-11da-4b6b-bae0-79e6d9e10728\") " pod="openstack/ceilometer-0" Jan 21 11:23:48 crc kubenswrapper[4881]: I0121 11:23:48.784883 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/5926a818-11da-4b6b-bae0-79e6d9e10728-config-data\") pod \"ceilometer-0\" (UID: \"5926a818-11da-4b6b-bae0-79e6d9e10728\") " pod="openstack/ceilometer-0" Jan 21 11:23:48 crc kubenswrapper[4881]: I0121 11:23:48.785711 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5926a818-11da-4b6b-bae0-79e6d9e10728-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5926a818-11da-4b6b-bae0-79e6d9e10728\") " pod="openstack/ceilometer-0" Jan 21 11:23:48 crc kubenswrapper[4881]: I0121 11:23:48.786169 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5926a818-11da-4b6b-bae0-79e6d9e10728-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5926a818-11da-4b6b-bae0-79e6d9e10728\") " pod="openstack/ceilometer-0" Jan 21 11:23:48 crc kubenswrapper[4881]: I0121 11:23:48.789390 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/5926a818-11da-4b6b-bae0-79e6d9e10728-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"5926a818-11da-4b6b-bae0-79e6d9e10728\") " pod="openstack/ceilometer-0" Jan 21 11:23:48 crc kubenswrapper[4881]: I0121 11:23:48.805166 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n6dvp\" (UniqueName: \"kubernetes.io/projected/5926a818-11da-4b6b-bae0-79e6d9e10728-kube-api-access-n6dvp\") pod \"ceilometer-0\" (UID: \"5926a818-11da-4b6b-bae0-79e6d9e10728\") " pod="openstack/ceilometer-0" Jan 21 11:23:48 crc kubenswrapper[4881]: I0121 11:23:48.886699 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 21 11:23:49 crc kubenswrapper[4881]: I0121 11:23:49.200122 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 21 11:23:49 crc kubenswrapper[4881]: W0121 11:23:49.205672 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podda2439be_4ed2_43a2_adbe_dd4afaa012f3.slice/crio-78fa7e5c3484fc7a90c022f360abd4837962f6679c1a08c1b9fdb22f193c9f13 WatchSource:0}: Error finding container 78fa7e5c3484fc7a90c022f360abd4837962f6679c1a08c1b9fdb22f193c9f13: Status 404 returned error can't find the container with id 78fa7e5c3484fc7a90c022f360abd4837962f6679c1a08c1b9fdb22f193c9f13 Jan 21 11:23:49 crc kubenswrapper[4881]: I0121 11:23:49.337149 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="201fb26a-87ca-4563-a6ae-1279da9cf1d9" path="/var/lib/kubelet/pods/201fb26a-87ca-4563-a6ae-1279da9cf1d9/volumes" Jan 21 11:23:49 crc kubenswrapper[4881]: I0121 11:23:49.339038 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cb8d5e00-825f-4df2-9720-3de7be3e0837" path="/var/lib/kubelet/pods/cb8d5e00-825f-4df2-9720-3de7be3e0837/volumes" Jan 21 11:23:49 crc kubenswrapper[4881]: W0121 11:23:49.852090 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5926a818_11da_4b6b_bae0_79e6d9e10728.slice/crio-213d22bfba6e2e90a4613f8839c8270b703a1296c47a8cbf11e9134711d81ca7 WatchSource:0}: Error finding container 213d22bfba6e2e90a4613f8839c8270b703a1296c47a8cbf11e9134711d81ca7: Status 404 returned error can't find the container with id 213d22bfba6e2e90a4613f8839c8270b703a1296c47a8cbf11e9134711d81ca7 Jan 21 11:23:49 crc kubenswrapper[4881]: I0121 
11:23:49.852845 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 21 11:23:49 crc kubenswrapper[4881]: I0121 11:23:49.987685 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6d4b6b54d9-5jzpq" Jan 21 11:23:50 crc kubenswrapper[4881]: I0121 11:23:50.115213 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-9f55bccdc-ghvhg"] Jan 21 11:23:50 crc kubenswrapper[4881]: I0121 11:23:50.115508 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-9f55bccdc-ghvhg" podUID="859758f9-0dc2-4397-a75a-b098eaabe613" containerName="dnsmasq-dns" containerID="cri-o://ddbf5564e0fec706a2bc3be62fec290ad0d5c0dccb7ad63e5048139ac59265e0" gracePeriod=10 Jan 21 11:23:50 crc kubenswrapper[4881]: I0121 11:23:50.150092 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5926a818-11da-4b6b-bae0-79e6d9e10728","Type":"ContainerStarted","Data":"213d22bfba6e2e90a4613f8839c8270b703a1296c47a8cbf11e9134711d81ca7"} Jan 21 11:23:50 crc kubenswrapper[4881]: I0121 11:23:50.159177 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"da2439be-4ed2-43a2-adbe-dd4afaa012f3","Type":"ContainerStarted","Data":"80e209a06fa6ebe24f14a7a5f19b6ec4b9439abda270d225d2c57b6f4688cd25"} Jan 21 11:23:50 crc kubenswrapper[4881]: I0121 11:23:50.159240 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"da2439be-4ed2-43a2-adbe-dd4afaa012f3","Type":"ContainerStarted","Data":"78fa7e5c3484fc7a90c022f360abd4837962f6679c1a08c1b9fdb22f193c9f13"} Jan 21 11:23:50 crc kubenswrapper[4881]: I0121 11:23:50.758816 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-9f55bccdc-ghvhg" Jan 21 11:23:50 crc kubenswrapper[4881]: I0121 11:23:50.808974 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/859758f9-0dc2-4397-a75a-b098eaabe613-config\") pod \"859758f9-0dc2-4397-a75a-b098eaabe613\" (UID: \"859758f9-0dc2-4397-a75a-b098eaabe613\") " Jan 21 11:23:50 crc kubenswrapper[4881]: I0121 11:23:50.809933 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/859758f9-0dc2-4397-a75a-b098eaabe613-ovsdbserver-sb\") pod \"859758f9-0dc2-4397-a75a-b098eaabe613\" (UID: \"859758f9-0dc2-4397-a75a-b098eaabe613\") " Jan 21 11:23:50 crc kubenswrapper[4881]: I0121 11:23:50.810117 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/859758f9-0dc2-4397-a75a-b098eaabe613-dns-swift-storage-0\") pod \"859758f9-0dc2-4397-a75a-b098eaabe613\" (UID: \"859758f9-0dc2-4397-a75a-b098eaabe613\") " Jan 21 11:23:50 crc kubenswrapper[4881]: I0121 11:23:50.810534 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/859758f9-0dc2-4397-a75a-b098eaabe613-dns-svc\") pod \"859758f9-0dc2-4397-a75a-b098eaabe613\" (UID: \"859758f9-0dc2-4397-a75a-b098eaabe613\") " Jan 21 11:23:50 crc kubenswrapper[4881]: I0121 11:23:50.810949 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-prhq6\" (UniqueName: \"kubernetes.io/projected/859758f9-0dc2-4397-a75a-b098eaabe613-kube-api-access-prhq6\") pod \"859758f9-0dc2-4397-a75a-b098eaabe613\" (UID: \"859758f9-0dc2-4397-a75a-b098eaabe613\") " Jan 21 11:23:50 crc kubenswrapper[4881]: I0121 11:23:50.811298 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/859758f9-0dc2-4397-a75a-b098eaabe613-ovsdbserver-nb\") pod \"859758f9-0dc2-4397-a75a-b098eaabe613\" (UID: \"859758f9-0dc2-4397-a75a-b098eaabe613\") " Jan 21 11:23:50 crc kubenswrapper[4881]: I0121 11:23:50.835153 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/859758f9-0dc2-4397-a75a-b098eaabe613-kube-api-access-prhq6" (OuterVolumeSpecName: "kube-api-access-prhq6") pod "859758f9-0dc2-4397-a75a-b098eaabe613" (UID: "859758f9-0dc2-4397-a75a-b098eaabe613"). InnerVolumeSpecName "kube-api-access-prhq6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:23:50 crc kubenswrapper[4881]: I0121 11:23:50.901436 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/859758f9-0dc2-4397-a75a-b098eaabe613-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "859758f9-0dc2-4397-a75a-b098eaabe613" (UID: "859758f9-0dc2-4397-a75a-b098eaabe613"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:23:50 crc kubenswrapper[4881]: I0121 11:23:50.915096 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-prhq6\" (UniqueName: \"kubernetes.io/projected/859758f9-0dc2-4397-a75a-b098eaabe613-kube-api-access-prhq6\") on node \"crc\" DevicePath \"\"" Jan 21 11:23:50 crc kubenswrapper[4881]: I0121 11:23:50.915128 4881 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/859758f9-0dc2-4397-a75a-b098eaabe613-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 21 11:23:50 crc kubenswrapper[4881]: I0121 11:23:50.915940 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/859758f9-0dc2-4397-a75a-b098eaabe613-config" (OuterVolumeSpecName: "config") pod "859758f9-0dc2-4397-a75a-b098eaabe613" (UID: "859758f9-0dc2-4397-a75a-b098eaabe613"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:23:50 crc kubenswrapper[4881]: I0121 11:23:50.937828 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/859758f9-0dc2-4397-a75a-b098eaabe613-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "859758f9-0dc2-4397-a75a-b098eaabe613" (UID: "859758f9-0dc2-4397-a75a-b098eaabe613"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:23:50 crc kubenswrapper[4881]: I0121 11:23:50.949287 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/859758f9-0dc2-4397-a75a-b098eaabe613-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "859758f9-0dc2-4397-a75a-b098eaabe613" (UID: "859758f9-0dc2-4397-a75a-b098eaabe613"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:23:50 crc kubenswrapper[4881]: I0121 11:23:50.956360 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/859758f9-0dc2-4397-a75a-b098eaabe613-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "859758f9-0dc2-4397-a75a-b098eaabe613" (UID: "859758f9-0dc2-4397-a75a-b098eaabe613"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:23:51 crc kubenswrapper[4881]: I0121 11:23:51.017017 4881 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/859758f9-0dc2-4397-a75a-b098eaabe613-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 21 11:23:51 crc kubenswrapper[4881]: I0121 11:23:51.017462 4881 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/859758f9-0dc2-4397-a75a-b098eaabe613-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 21 11:23:51 crc kubenswrapper[4881]: I0121 11:23:51.017532 4881 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/859758f9-0dc2-4397-a75a-b098eaabe613-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 21 11:23:51 crc kubenswrapper[4881]: I0121 11:23:51.017588 4881 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/859758f9-0dc2-4397-a75a-b098eaabe613-config\") on node \"crc\" DevicePath \"\"" Jan 21 11:23:51 crc kubenswrapper[4881]: I0121 11:23:51.171721 4881 generic.go:334] "Generic (PLEG): container finished" podID="859758f9-0dc2-4397-a75a-b098eaabe613" containerID="ddbf5564e0fec706a2bc3be62fec290ad0d5c0dccb7ad63e5048139ac59265e0" exitCode=0 Jan 21 11:23:51 crc kubenswrapper[4881]: I0121 11:23:51.171803 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-9f55bccdc-ghvhg" Jan 21 11:23:51 crc kubenswrapper[4881]: I0121 11:23:51.171840 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9f55bccdc-ghvhg" event={"ID":"859758f9-0dc2-4397-a75a-b098eaabe613","Type":"ContainerDied","Data":"ddbf5564e0fec706a2bc3be62fec290ad0d5c0dccb7ad63e5048139ac59265e0"} Jan 21 11:23:51 crc kubenswrapper[4881]: I0121 11:23:51.171935 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9f55bccdc-ghvhg" event={"ID":"859758f9-0dc2-4397-a75a-b098eaabe613","Type":"ContainerDied","Data":"f75b793fa7a8fa638c746656a34aafcf67f449119cc5beb64d5b0d6054ef7320"} Jan 21 11:23:51 crc kubenswrapper[4881]: I0121 11:23:51.171969 4881 scope.go:117] "RemoveContainer" containerID="ddbf5564e0fec706a2bc3be62fec290ad0d5c0dccb7ad63e5048139ac59265e0" Jan 21 11:23:51 crc kubenswrapper[4881]: I0121 11:23:51.180113 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5926a818-11da-4b6b-bae0-79e6d9e10728","Type":"ContainerStarted","Data":"bc4b878932d74665a9e8184d3f6d1985e6b6477d872a1d17b86a4fcb8439604e"} Jan 21 11:23:51 crc kubenswrapper[4881]: I0121 11:23:51.180770 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5926a818-11da-4b6b-bae0-79e6d9e10728","Type":"ContainerStarted","Data":"2e7b045b897dc331a89c4051f48a735168a1a248aad4092aef521f1e6ac87e3c"} Jan 21 11:23:51 crc kubenswrapper[4881]: I0121 11:23:51.184321 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"da2439be-4ed2-43a2-adbe-dd4afaa012f3","Type":"ContainerStarted","Data":"5f6a607787d7e9e1eada9a9f91e574513eb5ba0e4548b904cb79b64f1f85f516"} Jan 21 11:23:51 crc kubenswrapper[4881]: I0121 11:23:51.198614 4881 scope.go:117] "RemoveContainer" containerID="14c1d2dd7151297f216e34923a28ce4dc55ea08298e597088fec945419be539f" Jan 21 11:23:51 crc kubenswrapper[4881]: I0121 11:23:51.217664 4881 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.217629894 podStartE2EDuration="3.217629894s" podCreationTimestamp="2026-01-21 11:23:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 11:23:51.211233427 +0000 UTC m=+1618.471189916" watchObservedRunningTime="2026-01-21 11:23:51.217629894 +0000 UTC m=+1618.477586363" Jan 21 11:23:51 crc kubenswrapper[4881]: I0121 11:23:51.237474 4881 scope.go:117] "RemoveContainer" containerID="ddbf5564e0fec706a2bc3be62fec290ad0d5c0dccb7ad63e5048139ac59265e0" Jan 21 11:23:51 crc kubenswrapper[4881]: E0121 11:23:51.240942 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ddbf5564e0fec706a2bc3be62fec290ad0d5c0dccb7ad63e5048139ac59265e0\": container with ID starting with ddbf5564e0fec706a2bc3be62fec290ad0d5c0dccb7ad63e5048139ac59265e0 not found: ID does not exist" containerID="ddbf5564e0fec706a2bc3be62fec290ad0d5c0dccb7ad63e5048139ac59265e0" Jan 21 11:23:51 crc kubenswrapper[4881]: I0121 11:23:51.241145 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ddbf5564e0fec706a2bc3be62fec290ad0d5c0dccb7ad63e5048139ac59265e0"} err="failed to get container status \"ddbf5564e0fec706a2bc3be62fec290ad0d5c0dccb7ad63e5048139ac59265e0\": rpc error: code = NotFound desc = could not find container \"ddbf5564e0fec706a2bc3be62fec290ad0d5c0dccb7ad63e5048139ac59265e0\": container with ID starting with ddbf5564e0fec706a2bc3be62fec290ad0d5c0dccb7ad63e5048139ac59265e0 not found: ID does not exist" Jan 21 11:23:51 crc kubenswrapper[4881]: I0121 11:23:51.241277 4881 scope.go:117] "RemoveContainer" containerID="14c1d2dd7151297f216e34923a28ce4dc55ea08298e597088fec945419be539f" Jan 21 11:23:51 crc kubenswrapper[4881]: E0121 11:23:51.241664 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"14c1d2dd7151297f216e34923a28ce4dc55ea08298e597088fec945419be539f\": container with ID starting with 14c1d2dd7151297f216e34923a28ce4dc55ea08298e597088fec945419be539f not found: ID does not exist" containerID="14c1d2dd7151297f216e34923a28ce4dc55ea08298e597088fec945419be539f" Jan 21 11:23:51 crc kubenswrapper[4881]: I0121 11:23:51.241693 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"14c1d2dd7151297f216e34923a28ce4dc55ea08298e597088fec945419be539f"} err="failed to get container status \"14c1d2dd7151297f216e34923a28ce4dc55ea08298e597088fec945419be539f\": rpc error: code = NotFound desc = could not find container \"14c1d2dd7151297f216e34923a28ce4dc55ea08298e597088fec945419be539f\": container with ID starting with 14c1d2dd7151297f216e34923a28ce4dc55ea08298e597088fec945419be539f not found: ID does not exist" Jan 21 11:23:51 crc kubenswrapper[4881]: I0121 11:23:51.277066 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-9f55bccdc-ghvhg"] Jan 21 11:23:51 crc kubenswrapper[4881]: I0121 11:23:51.289670 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-9f55bccdc-ghvhg"] Jan 21 11:23:51 crc kubenswrapper[4881]: I0121 11:23:51.352328 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="859758f9-0dc2-4397-a75a-b098eaabe613" path="/var/lib/kubelet/pods/859758f9-0dc2-4397-a75a-b098eaabe613/volumes" Jan 21 11:23:52 crc kubenswrapper[4881]: I0121 
11:23:52.400862 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5926a818-11da-4b6b-bae0-79e6d9e10728","Type":"ContainerStarted","Data":"e94b267bf0b2818197fb779d251f358e2b25747ccdf47395bec37b9e7404205b"} Jan 21 11:23:53 crc kubenswrapper[4881]: I0121 11:23:53.170351 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-vpxn7" Jan 21 11:23:53 crc kubenswrapper[4881]: I0121 11:23:53.246559 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-vpxn7" Jan 21 11:23:53 crc kubenswrapper[4881]: I0121 11:23:53.413109 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5926a818-11da-4b6b-bae0-79e6d9e10728","Type":"ContainerStarted","Data":"f81308efcf994beb460b7755557a1bb954ff571ad24313dfef76a4e4edac553f"} Jan 21 11:23:53 crc kubenswrapper[4881]: I0121 11:23:53.444768 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.3433578 podStartE2EDuration="5.444742199s" podCreationTimestamp="2026-01-21 11:23:48 +0000 UTC" firstStartedPulling="2026-01-21 11:23:49.875114309 +0000 UTC m=+1617.135070768" lastFinishedPulling="2026-01-21 11:23:52.976498698 +0000 UTC m=+1620.236455167" observedRunningTime="2026-01-21 11:23:53.434583889 +0000 UTC m=+1620.694540378" watchObservedRunningTime="2026-01-21 11:23:53.444742199 +0000 UTC m=+1620.704698668" Jan 21 11:23:53 crc kubenswrapper[4881]: I0121 11:23:53.819145 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-vpxn7"] Jan 21 11:23:54 crc kubenswrapper[4881]: I0121 11:23:54.422646 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-vpxn7" podUID="52706c95-5c29-44cb-bc9d-2873d3a4d437" containerName="registry-server" containerID="cri-o://c7183c2e116e85a5f629f6e5e3ffe4538c40c34d6b8cd108a955a5b4b864a2c6" gracePeriod=2 Jan 21 11:23:54 crc kubenswrapper[4881]: I0121 11:23:54.423060 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 21 11:23:55 crc kubenswrapper[4881]: I0121 11:23:55.489446 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-vpxn7" Jan 21 11:23:55 crc kubenswrapper[4881]: I0121 11:23:55.498937 4881 generic.go:334] "Generic (PLEG): container finished" podID="52706c95-5c29-44cb-bc9d-2873d3a4d437" containerID="c7183c2e116e85a5f629f6e5e3ffe4538c40c34d6b8cd108a955a5b4b864a2c6" exitCode=0 Jan 21 11:23:55 crc kubenswrapper[4881]: I0121 11:23:55.500296 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vpxn7" event={"ID":"52706c95-5c29-44cb-bc9d-2873d3a4d437","Type":"ContainerDied","Data":"c7183c2e116e85a5f629f6e5e3ffe4538c40c34d6b8cd108a955a5b4b864a2c6"} Jan 21 11:23:55 crc kubenswrapper[4881]: I0121 11:23:55.500337 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vpxn7" event={"ID":"52706c95-5c29-44cb-bc9d-2873d3a4d437","Type":"ContainerDied","Data":"1de739443c6dfd6b37749b58394c2360dea5377c680c0b8dae6cbb306ba43ef6"} Jan 21 11:23:55 crc kubenswrapper[4881]: I0121 11:23:55.500362 4881 scope.go:117] "RemoveContainer" containerID="c7183c2e116e85a5f629f6e5e3ffe4538c40c34d6b8cd108a955a5b4b864a2c6" Jan 21 11:23:55 crc kubenswrapper[4881]: I0121 11:23:55.539985 4881 scope.go:117] "RemoveContainer" containerID="fb028c7404b9ff86895c5bd0739f99516ab80f521a872c7d6e2892460b2e7b12" Jan 21 11:23:55 crc kubenswrapper[4881]: I0121 11:23:55.584807 4881 scope.go:117] "RemoveContainer" containerID="839be54c5d528613e443040a57965cbb40c5fa31def7b53542cfe13d609474b7" Jan 21 11:23:55 crc kubenswrapper[4881]: I0121 11:23:55.593888 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/52706c95-5c29-44cb-bc9d-2873d3a4d437-utilities\") pod \"52706c95-5c29-44cb-bc9d-2873d3a4d437\" (UID: \"52706c95-5c29-44cb-bc9d-2873d3a4d437\") " Jan 21 11:23:55 crc kubenswrapper[4881]: I0121 11:23:55.594059 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/52706c95-5c29-44cb-bc9d-2873d3a4d437-catalog-content\") pod \"52706c95-5c29-44cb-bc9d-2873d3a4d437\" (UID: \"52706c95-5c29-44cb-bc9d-2873d3a4d437\") " Jan 21 11:23:55 crc kubenswrapper[4881]: I0121 11:23:55.594123 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gkwt4\" (UniqueName: \"kubernetes.io/projected/52706c95-5c29-44cb-bc9d-2873d3a4d437-kube-api-access-gkwt4\") pod \"52706c95-5c29-44cb-bc9d-2873d3a4d437\" (UID: \"52706c95-5c29-44cb-bc9d-2873d3a4d437\") " Jan 21 11:23:55 crc kubenswrapper[4881]: I0121 11:23:55.595159 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/52706c95-5c29-44cb-bc9d-2873d3a4d437-utilities" (OuterVolumeSpecName: "utilities") pod "52706c95-5c29-44cb-bc9d-2873d3a4d437" (UID: "52706c95-5c29-44cb-bc9d-2873d3a4d437"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 11:23:55 crc kubenswrapper[4881]: I0121 11:23:55.624113 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/52706c95-5c29-44cb-bc9d-2873d3a4d437-kube-api-access-gkwt4" (OuterVolumeSpecName: "kube-api-access-gkwt4") pod "52706c95-5c29-44cb-bc9d-2873d3a4d437" (UID: "52706c95-5c29-44cb-bc9d-2873d3a4d437"). InnerVolumeSpecName "kube-api-access-gkwt4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:23:55 crc kubenswrapper[4881]: I0121 11:23:55.629954 4881 scope.go:117] "RemoveContainer" containerID="c7183c2e116e85a5f629f6e5e3ffe4538c40c34d6b8cd108a955a5b4b864a2c6" Jan 21 11:23:55 crc kubenswrapper[4881]: E0121 11:23:55.631412 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c7183c2e116e85a5f629f6e5e3ffe4538c40c34d6b8cd108a955a5b4b864a2c6\": container with ID starting with c7183c2e116e85a5f629f6e5e3ffe4538c40c34d6b8cd108a955a5b4b864a2c6 not found: ID does not exist" containerID="c7183c2e116e85a5f629f6e5e3ffe4538c40c34d6b8cd108a955a5b4b864a2c6" Jan 21 11:23:55 crc kubenswrapper[4881]: I0121 11:23:55.631455 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c7183c2e116e85a5f629f6e5e3ffe4538c40c34d6b8cd108a955a5b4b864a2c6"} err="failed to get container status \"c7183c2e116e85a5f629f6e5e3ffe4538c40c34d6b8cd108a955a5b4b864a2c6\": rpc error: code = NotFound desc = could not find container \"c7183c2e116e85a5f629f6e5e3ffe4538c40c34d6b8cd108a955a5b4b864a2c6\": container with ID starting with c7183c2e116e85a5f629f6e5e3ffe4538c40c34d6b8cd108a955a5b4b864a2c6 not found: ID does not exist" Jan 21 11:23:55 crc kubenswrapper[4881]: I0121 11:23:55.631481 4881 scope.go:117] "RemoveContainer" containerID="fb028c7404b9ff86895c5bd0739f99516ab80f521a872c7d6e2892460b2e7b12" Jan 21 11:23:55 crc kubenswrapper[4881]: E0121 11:23:55.632984 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fb028c7404b9ff86895c5bd0739f99516ab80f521a872c7d6e2892460b2e7b12\": container with ID starting with fb028c7404b9ff86895c5bd0739f99516ab80f521a872c7d6e2892460b2e7b12 not found: ID does not exist" containerID="fb028c7404b9ff86895c5bd0739f99516ab80f521a872c7d6e2892460b2e7b12" Jan 21 11:23:55 crc kubenswrapper[4881]: I0121 11:23:55.633009 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fb028c7404b9ff86895c5bd0739f99516ab80f521a872c7d6e2892460b2e7b12"} err="failed to get container status \"fb028c7404b9ff86895c5bd0739f99516ab80f521a872c7d6e2892460b2e7b12\": rpc error: code = NotFound desc = could not find container \"fb028c7404b9ff86895c5bd0739f99516ab80f521a872c7d6e2892460b2e7b12\": container with ID starting with fb028c7404b9ff86895c5bd0739f99516ab80f521a872c7d6e2892460b2e7b12 not found: ID does not exist" Jan 21 11:23:55 crc kubenswrapper[4881]: I0121 11:23:55.633024 4881 scope.go:117] "RemoveContainer" containerID="839be54c5d528613e443040a57965cbb40c5fa31def7b53542cfe13d609474b7" Jan 21 11:23:55 crc kubenswrapper[4881]: E0121 11:23:55.636946 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"839be54c5d528613e443040a57965cbb40c5fa31def7b53542cfe13d609474b7\": container with ID starting with 839be54c5d528613e443040a57965cbb40c5fa31def7b53542cfe13d609474b7 not found: ID does not exist" containerID="839be54c5d528613e443040a57965cbb40c5fa31def7b53542cfe13d609474b7" Jan 21 11:23:55 crc kubenswrapper[4881]: I0121 11:23:55.636999 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"839be54c5d528613e443040a57965cbb40c5fa31def7b53542cfe13d609474b7"} err="failed to get container status \"839be54c5d528613e443040a57965cbb40c5fa31def7b53542cfe13d609474b7\": rpc error: code = NotFound desc = could not 
find container \"839be54c5d528613e443040a57965cbb40c5fa31def7b53542cfe13d609474b7\": container with ID starting with 839be54c5d528613e443040a57965cbb40c5fa31def7b53542cfe13d609474b7 not found: ID does not exist" Jan 21 11:23:55 crc kubenswrapper[4881]: I0121 11:23:55.652269 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/52706c95-5c29-44cb-bc9d-2873d3a4d437-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "52706c95-5c29-44cb-bc9d-2873d3a4d437" (UID: "52706c95-5c29-44cb-bc9d-2873d3a4d437"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 11:23:55 crc kubenswrapper[4881]: I0121 11:23:55.697621 4881 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/52706c95-5c29-44cb-bc9d-2873d3a4d437-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 11:23:55 crc kubenswrapper[4881]: I0121 11:23:55.697695 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gkwt4\" (UniqueName: \"kubernetes.io/projected/52706c95-5c29-44cb-bc9d-2873d3a4d437-kube-api-access-gkwt4\") on node \"crc\" DevicePath \"\"" Jan 21 11:23:55 crc kubenswrapper[4881]: I0121 11:23:55.697719 4881 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/52706c95-5c29-44cb-bc9d-2873d3a4d437-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 11:23:56 crc kubenswrapper[4881]: I0121 11:23:56.510457 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vpxn7" Jan 21 11:23:56 crc kubenswrapper[4881]: I0121 11:23:56.513051 4881 generic.go:334] "Generic (PLEG): container finished" podID="3d8ffc48-6b0f-48d1-b13d-8a766f5b604a" containerID="62b5fd9972946ab2305558cba9c0d54f5b29b725654cb25337e61434a431d9ea" exitCode=0 Jan 21 11:23:56 crc kubenswrapper[4881]: I0121 11:23:56.513094 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-bdc49" event={"ID":"3d8ffc48-6b0f-48d1-b13d-8a766f5b604a","Type":"ContainerDied","Data":"62b5fd9972946ab2305558cba9c0d54f5b29b725654cb25337e61434a431d9ea"} Jan 21 11:23:56 crc kubenswrapper[4881]: I0121 11:23:56.553155 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-vpxn7"] Jan 21 11:23:56 crc kubenswrapper[4881]: I0121 11:23:56.561896 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-vpxn7"] Jan 21 11:23:57 crc kubenswrapper[4881]: I0121 11:23:57.342926 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="52706c95-5c29-44cb-bc9d-2873d3a4d437" path="/var/lib/kubelet/pods/52706c95-5c29-44cb-bc9d-2873d3a4d437/volumes" Jan 21 11:23:58 crc kubenswrapper[4881]: I0121 11:23:58.095056 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-bdc49" Jan 21 11:23:58 crc kubenswrapper[4881]: I0121 11:23:58.162292 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3d8ffc48-6b0f-48d1-b13d-8a766f5b604a-scripts\") pod \"3d8ffc48-6b0f-48d1-b13d-8a766f5b604a\" (UID: \"3d8ffc48-6b0f-48d1-b13d-8a766f5b604a\") " Jan 21 11:23:58 crc kubenswrapper[4881]: I0121 11:23:58.162553 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d8ffc48-6b0f-48d1-b13d-8a766f5b604a-combined-ca-bundle\") pod \"3d8ffc48-6b0f-48d1-b13d-8a766f5b604a\" (UID: \"3d8ffc48-6b0f-48d1-b13d-8a766f5b604a\") " Jan 21 11:23:58 crc kubenswrapper[4881]: I0121 11:23:58.162649 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qvcfc\" (UniqueName: \"kubernetes.io/projected/3d8ffc48-6b0f-48d1-b13d-8a766f5b604a-kube-api-access-qvcfc\") pod \"3d8ffc48-6b0f-48d1-b13d-8a766f5b604a\" (UID: \"3d8ffc48-6b0f-48d1-b13d-8a766f5b604a\") " Jan 21 11:23:58 crc kubenswrapper[4881]: I0121 11:23:58.162850 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d8ffc48-6b0f-48d1-b13d-8a766f5b604a-config-data\") pod \"3d8ffc48-6b0f-48d1-b13d-8a766f5b604a\" (UID: \"3d8ffc48-6b0f-48d1-b13d-8a766f5b604a\") " Jan 21 11:23:58 crc kubenswrapper[4881]: I0121 11:23:58.174137 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d8ffc48-6b0f-48d1-b13d-8a766f5b604a-kube-api-access-qvcfc" (OuterVolumeSpecName: "kube-api-access-qvcfc") pod "3d8ffc48-6b0f-48d1-b13d-8a766f5b604a" (UID: "3d8ffc48-6b0f-48d1-b13d-8a766f5b604a"). InnerVolumeSpecName "kube-api-access-qvcfc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:23:58 crc kubenswrapper[4881]: I0121 11:23:58.182372 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d8ffc48-6b0f-48d1-b13d-8a766f5b604a-scripts" (OuterVolumeSpecName: "scripts") pod "3d8ffc48-6b0f-48d1-b13d-8a766f5b604a" (UID: "3d8ffc48-6b0f-48d1-b13d-8a766f5b604a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:23:58 crc kubenswrapper[4881]: I0121 11:23:58.199027 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d8ffc48-6b0f-48d1-b13d-8a766f5b604a-config-data" (OuterVolumeSpecName: "config-data") pod "3d8ffc48-6b0f-48d1-b13d-8a766f5b604a" (UID: "3d8ffc48-6b0f-48d1-b13d-8a766f5b604a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:23:58 crc kubenswrapper[4881]: I0121 11:23:58.203943 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d8ffc48-6b0f-48d1-b13d-8a766f5b604a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3d8ffc48-6b0f-48d1-b13d-8a766f5b604a" (UID: "3d8ffc48-6b0f-48d1-b13d-8a766f5b604a"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:23:58 crc kubenswrapper[4881]: I0121 11:23:58.264400 4881 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d8ffc48-6b0f-48d1-b13d-8a766f5b604a-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 11:23:58 crc kubenswrapper[4881]: I0121 11:23:58.264448 4881 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3d8ffc48-6b0f-48d1-b13d-8a766f5b604a-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 11:23:58 crc kubenswrapper[4881]: I0121 11:23:58.264462 4881 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d8ffc48-6b0f-48d1-b13d-8a766f5b604a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 11:23:58 crc kubenswrapper[4881]: I0121 11:23:58.264476 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qvcfc\" (UniqueName: \"kubernetes.io/projected/3d8ffc48-6b0f-48d1-b13d-8a766f5b604a-kube-api-access-qvcfc\") on node \"crc\" DevicePath \"\"" Jan 21 11:23:58 crc kubenswrapper[4881]: I0121 11:23:58.634774 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 21 11:23:58 crc kubenswrapper[4881]: I0121 11:23:58.634861 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 21 11:23:58 crc kubenswrapper[4881]: I0121 11:23:58.682909 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-bdc49" event={"ID":"3d8ffc48-6b0f-48d1-b13d-8a766f5b604a","Type":"ContainerDied","Data":"319a1c0ca170ca90fa0753a5c20774856788050e89dac7393a9beb4d1a3b2bec"} Jan 21 11:23:58 crc kubenswrapper[4881]: I0121 11:23:58.682956 4881 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="319a1c0ca170ca90fa0753a5c20774856788050e89dac7393a9beb4d1a3b2bec" Jan 21 11:23:58 crc kubenswrapper[4881]: I0121 11:23:58.683023 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-bdc49" Jan 21 11:23:58 crc kubenswrapper[4881]: I0121 11:23:58.847414 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 21 11:23:58 crc kubenswrapper[4881]: I0121 11:23:58.847677 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="da2439be-4ed2-43a2-adbe-dd4afaa012f3" containerName="nova-api-log" containerID="cri-o://80e209a06fa6ebe24f14a7a5f19b6ec4b9439abda270d225d2c57b6f4688cd25" gracePeriod=30 Jan 21 11:23:58 crc kubenswrapper[4881]: I0121 11:23:58.847810 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="da2439be-4ed2-43a2-adbe-dd4afaa012f3" containerName="nova-api-api" containerID="cri-o://5f6a607787d7e9e1eada9a9f91e574513eb5ba0e4548b904cb79b64f1f85f516" gracePeriod=30 Jan 21 11:23:58 crc kubenswrapper[4881]: I0121 11:23:58.853169 4881 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="da2439be-4ed2-43a2-adbe-dd4afaa012f3" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.225:8774/\": EOF" Jan 21 11:23:58 crc kubenswrapper[4881]: I0121 11:23:58.853243 4881 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="da2439be-4ed2-43a2-adbe-dd4afaa012f3" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.225:8774/\": EOF" Jan 21 11:23:58 crc kubenswrapper[4881]: I0121 11:23:58.870878 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 21 11:23:58 crc kubenswrapper[4881]: I0121 11:23:58.871114 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="0f1fb00c-903a-48c9-95e5-8ad34c731f41" containerName="nova-scheduler-scheduler" containerID="cri-o://e52d14e5b47ceff7047b0b43cb94e03af0a112544f5fe0cee4d41a4bd236c070" gracePeriod=30 Jan 21 11:23:58 crc kubenswrapper[4881]: I0121 11:23:58.939638 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 21 11:23:58 crc kubenswrapper[4881]: I0121 11:23:58.940032 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="7e3b0813-d7bc-4e2e-aa18-fe1e00c75f52" containerName="nova-metadata-log" containerID="cri-o://5317a19a5c6fd411002c22415e0ba75ced188c533ac4cf93ad9bafb7600cfba0" gracePeriod=30 Jan 21 11:23:58 crc kubenswrapper[4881]: I0121 11:23:58.940045 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="7e3b0813-d7bc-4e2e-aa18-fe1e00c75f52" containerName="nova-metadata-metadata" containerID="cri-o://77ab3e90f4bd352be1d58beb21ac3b7c5b6ccdc4776384b4fd7529acffc8aa21" gracePeriod=30 Jan 21 11:23:59 crc kubenswrapper[4881]: E0121 11:23:59.275365 4881 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7e3b0813_d7bc_4e2e_aa18_fe1e00c75f52.slice/crio-conmon-5317a19a5c6fd411002c22415e0ba75ced188c533ac4cf93ad9bafb7600cfba0.scope\": RecentStats: unable to find data in memory cache]" Jan 21 11:23:59 crc kubenswrapper[4881]: I0121 11:23:59.694440 4881 generic.go:334] "Generic (PLEG): container finished" podID="da2439be-4ed2-43a2-adbe-dd4afaa012f3" containerID="80e209a06fa6ebe24f14a7a5f19b6ec4b9439abda270d225d2c57b6f4688cd25" exitCode=143 Jan 21 11:23:59 crc 
Jan 21 11:23:59 crc kubenswrapper[4881]: I0121 11:23:59.694538 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"da2439be-4ed2-43a2-adbe-dd4afaa012f3","Type":"ContainerDied","Data":"80e209a06fa6ebe24f14a7a5f19b6ec4b9439abda270d225d2c57b6f4688cd25"}
Jan 21 11:23:59 crc kubenswrapper[4881]: I0121 11:23:59.697699 4881 generic.go:334] "Generic (PLEG): container finished" podID="7e3b0813-d7bc-4e2e-aa18-fe1e00c75f52" containerID="5317a19a5c6fd411002c22415e0ba75ced188c533ac4cf93ad9bafb7600cfba0" exitCode=143
Jan 21 11:23:59 crc kubenswrapper[4881]: I0121 11:23:59.697750 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7e3b0813-d7bc-4e2e-aa18-fe1e00c75f52","Type":"ContainerDied","Data":"5317a19a5c6fd411002c22415e0ba75ced188c533ac4cf93ad9bafb7600cfba0"}
Jan 21 11:23:59 crc kubenswrapper[4881]: I0121 11:23:59.851011 4881 patch_prober.go:28] interesting pod/machine-config-daemon-fb4fr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 21 11:23:59 crc kubenswrapper[4881]: I0121 11:23:59.851070 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
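The "Probe failed" entries here and above are the kubelet's prober doing plain HTTP(S) GETs against the target (the startup probes against https://10.217.0.225:8774/ answered EOF mid-shutdown; the liveness probe against 127.0.0.1:8798 was refused outright). Any transport error, or a status outside 200-399, counts as a failure. A small sketch of one such check in Go (the URL is a stand-in; the real prober also handles TLS options, headers, and failure thresholds):

    package main

    import (
    	"fmt"
    	"net/http"
    	"time"
    )

    // probeOnce performs one HTTP GET with a short timeout. EOF, connection
    // refused, and client timeouts all surface as err != nil, matching the
    // failure outputs recorded in the log.
    func probeOnce(url string) error {
    	client := &http.Client{Timeout: 1 * time.Second}
    	resp, err := client.Get(url)
    	if err != nil {
    		return fmt.Errorf("probe failed: %w", err)
    	}
    	defer resp.Body.Close()
    	if resp.StatusCode < 200 || resp.StatusCode >= 400 {
    		return fmt.Errorf("probe failed: status %d", resp.StatusCode)
    	}
    	return nil
    }

    func main() {
    	fmt.Println(probeOnce("http://127.0.0.1:8798/health"))
    }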
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 21 11:24:01 crc kubenswrapper[4881]: I0121 11:24:01.709160 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/7e3b0813-d7bc-4e2e-aa18-fe1e00c75f52-nova-metadata-tls-certs\") pod \"7e3b0813-d7bc-4e2e-aa18-fe1e00c75f52\" (UID: \"7e3b0813-d7bc-4e2e-aa18-fe1e00c75f52\") " Jan 21 11:24:01 crc kubenswrapper[4881]: I0121 11:24:01.709272 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e3b0813-d7bc-4e2e-aa18-fe1e00c75f52-combined-ca-bundle\") pod \"7e3b0813-d7bc-4e2e-aa18-fe1e00c75f52\" (UID: \"7e3b0813-d7bc-4e2e-aa18-fe1e00c75f52\") " Jan 21 11:24:01 crc kubenswrapper[4881]: I0121 11:24:01.709351 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e3b0813-d7bc-4e2e-aa18-fe1e00c75f52-config-data\") pod \"7e3b0813-d7bc-4e2e-aa18-fe1e00c75f52\" (UID: \"7e3b0813-d7bc-4e2e-aa18-fe1e00c75f52\") " Jan 21 11:24:01 crc kubenswrapper[4881]: I0121 11:24:01.709495 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7e3b0813-d7bc-4e2e-aa18-fe1e00c75f52-logs\") pod \"7e3b0813-d7bc-4e2e-aa18-fe1e00c75f52\" (UID: \"7e3b0813-d7bc-4e2e-aa18-fe1e00c75f52\") " Jan 21 11:24:01 crc kubenswrapper[4881]: I0121 11:24:01.709895 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xmm59\" (UniqueName: \"kubernetes.io/projected/7e3b0813-d7bc-4e2e-aa18-fe1e00c75f52-kube-api-access-xmm59\") pod \"7e3b0813-d7bc-4e2e-aa18-fe1e00c75f52\" (UID: \"7e3b0813-d7bc-4e2e-aa18-fe1e00c75f52\") " Jan 21 11:24:01 crc kubenswrapper[4881]: I0121 11:24:01.712015 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7e3b0813-d7bc-4e2e-aa18-fe1e00c75f52-logs" (OuterVolumeSpecName: "logs") pod "7e3b0813-d7bc-4e2e-aa18-fe1e00c75f52" (UID: "7e3b0813-d7bc-4e2e-aa18-fe1e00c75f52"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 11:24:01 crc kubenswrapper[4881]: I0121 11:24:01.719288 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7e3b0813-d7bc-4e2e-aa18-fe1e00c75f52-kube-api-access-xmm59" (OuterVolumeSpecName: "kube-api-access-xmm59") pod "7e3b0813-d7bc-4e2e-aa18-fe1e00c75f52" (UID: "7e3b0813-d7bc-4e2e-aa18-fe1e00c75f52"). InnerVolumeSpecName "kube-api-access-xmm59". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:24:01 crc kubenswrapper[4881]: I0121 11:24:01.720226 4881 generic.go:334] "Generic (PLEG): container finished" podID="0f1fb00c-903a-48c9-95e5-8ad34c731f41" containerID="e52d14e5b47ceff7047b0b43cb94e03af0a112544f5fe0cee4d41a4bd236c070" exitCode=0 Jan 21 11:24:01 crc kubenswrapper[4881]: I0121 11:24:01.720329 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"0f1fb00c-903a-48c9-95e5-8ad34c731f41","Type":"ContainerDied","Data":"e52d14e5b47ceff7047b0b43cb94e03af0a112544f5fe0cee4d41a4bd236c070"} Jan 21 11:24:01 crc kubenswrapper[4881]: I0121 11:24:01.720366 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"0f1fb00c-903a-48c9-95e5-8ad34c731f41","Type":"ContainerDied","Data":"b3157e678fa44dfdf1c50a29c3af5b7c20661b982fcfdccdd420bdba43c8cf36"} Jan 21 11:24:01 crc kubenswrapper[4881]: I0121 11:24:01.720389 4881 scope.go:117] "RemoveContainer" containerID="e52d14e5b47ceff7047b0b43cb94e03af0a112544f5fe0cee4d41a4bd236c070" Jan 21 11:24:01 crc kubenswrapper[4881]: I0121 11:24:01.720423 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 21 11:24:01 crc kubenswrapper[4881]: I0121 11:24:01.732709 4881 generic.go:334] "Generic (PLEG): container finished" podID="7e3b0813-d7bc-4e2e-aa18-fe1e00c75f52" containerID="77ab3e90f4bd352be1d58beb21ac3b7c5b6ccdc4776384b4fd7529acffc8aa21" exitCode=0 Jan 21 11:24:01 crc kubenswrapper[4881]: I0121 11:24:01.732759 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7e3b0813-d7bc-4e2e-aa18-fe1e00c75f52","Type":"ContainerDied","Data":"77ab3e90f4bd352be1d58beb21ac3b7c5b6ccdc4776384b4fd7529acffc8aa21"} Jan 21 11:24:01 crc kubenswrapper[4881]: I0121 11:24:01.732812 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7e3b0813-d7bc-4e2e-aa18-fe1e00c75f52","Type":"ContainerDied","Data":"94be8c422811e4e8ba1078eb2e0e3d71d40e6f5e6c07d283df8a7544b7b7a114"} Jan 21 11:24:01 crc kubenswrapper[4881]: I0121 11:24:01.732881 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 21 11:24:01 crc kubenswrapper[4881]: I0121 11:24:01.785970 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e3b0813-d7bc-4e2e-aa18-fe1e00c75f52-config-data" (OuterVolumeSpecName: "config-data") pod "7e3b0813-d7bc-4e2e-aa18-fe1e00c75f52" (UID: "7e3b0813-d7bc-4e2e-aa18-fe1e00c75f52"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:24:01 crc kubenswrapper[4881]: I0121 11:24:01.787243 4881 scope.go:117] "RemoveContainer" containerID="e52d14e5b47ceff7047b0b43cb94e03af0a112544f5fe0cee4d41a4bd236c070" Jan 21 11:24:01 crc kubenswrapper[4881]: E0121 11:24:01.788271 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e52d14e5b47ceff7047b0b43cb94e03af0a112544f5fe0cee4d41a4bd236c070\": container with ID starting with e52d14e5b47ceff7047b0b43cb94e03af0a112544f5fe0cee4d41a4bd236c070 not found: ID does not exist" containerID="e52d14e5b47ceff7047b0b43cb94e03af0a112544f5fe0cee4d41a4bd236c070" Jan 21 11:24:01 crc kubenswrapper[4881]: I0121 11:24:01.788335 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e52d14e5b47ceff7047b0b43cb94e03af0a112544f5fe0cee4d41a4bd236c070"} err="failed to get container status \"e52d14e5b47ceff7047b0b43cb94e03af0a112544f5fe0cee4d41a4bd236c070\": rpc error: code = NotFound desc = could not find container \"e52d14e5b47ceff7047b0b43cb94e03af0a112544f5fe0cee4d41a4bd236c070\": container with ID starting with e52d14e5b47ceff7047b0b43cb94e03af0a112544f5fe0cee4d41a4bd236c070 not found: ID does not exist" Jan 21 11:24:01 crc kubenswrapper[4881]: I0121 11:24:01.788358 4881 scope.go:117] "RemoveContainer" containerID="77ab3e90f4bd352be1d58beb21ac3b7c5b6ccdc4776384b4fd7529acffc8aa21" Jan 21 11:24:01 crc kubenswrapper[4881]: I0121 11:24:01.792986 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e3b0813-d7bc-4e2e-aa18-fe1e00c75f52-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7e3b0813-d7bc-4e2e-aa18-fe1e00c75f52" (UID: "7e3b0813-d7bc-4e2e-aa18-fe1e00c75f52"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:24:01 crc kubenswrapper[4881]: I0121 11:24:01.812476 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f1fb00c-903a-48c9-95e5-8ad34c731f41-combined-ca-bundle\") pod \"0f1fb00c-903a-48c9-95e5-8ad34c731f41\" (UID: \"0f1fb00c-903a-48c9-95e5-8ad34c731f41\") " Jan 21 11:24:01 crc kubenswrapper[4881]: I0121 11:24:01.812607 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zpprt\" (UniqueName: \"kubernetes.io/projected/0f1fb00c-903a-48c9-95e5-8ad34c731f41-kube-api-access-zpprt\") pod \"0f1fb00c-903a-48c9-95e5-8ad34c731f41\" (UID: \"0f1fb00c-903a-48c9-95e5-8ad34c731f41\") " Jan 21 11:24:01 crc kubenswrapper[4881]: I0121 11:24:01.812734 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f1fb00c-903a-48c9-95e5-8ad34c731f41-config-data\") pod \"0f1fb00c-903a-48c9-95e5-8ad34c731f41\" (UID: \"0f1fb00c-903a-48c9-95e5-8ad34c731f41\") " Jan 21 11:24:01 crc kubenswrapper[4881]: I0121 11:24:01.813378 4881 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e3b0813-d7bc-4e2e-aa18-fe1e00c75f52-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 11:24:01 crc kubenswrapper[4881]: I0121 11:24:01.813413 4881 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e3b0813-d7bc-4e2e-aa18-fe1e00c75f52-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 11:24:01 crc kubenswrapper[4881]: I0121 11:24:01.813430 4881 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7e3b0813-d7bc-4e2e-aa18-fe1e00c75f52-logs\") on node \"crc\" DevicePath \"\"" Jan 21 11:24:01 crc kubenswrapper[4881]: I0121 11:24:01.813527 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xmm59\" (UniqueName: \"kubernetes.io/projected/7e3b0813-d7bc-4e2e-aa18-fe1e00c75f52-kube-api-access-xmm59\") on node \"crc\" DevicePath \"\"" Jan 21 11:24:01 crc kubenswrapper[4881]: I0121 11:24:01.818008 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0f1fb00c-903a-48c9-95e5-8ad34c731f41-kube-api-access-zpprt" (OuterVolumeSpecName: "kube-api-access-zpprt") pod "0f1fb00c-903a-48c9-95e5-8ad34c731f41" (UID: "0f1fb00c-903a-48c9-95e5-8ad34c731f41"). InnerVolumeSpecName "kube-api-access-zpprt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:24:01 crc kubenswrapper[4881]: I0121 11:24:01.820619 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e3b0813-d7bc-4e2e-aa18-fe1e00c75f52-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "7e3b0813-d7bc-4e2e-aa18-fe1e00c75f52" (UID: "7e3b0813-d7bc-4e2e-aa18-fe1e00c75f52"). InnerVolumeSpecName "nova-metadata-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:24:01 crc kubenswrapper[4881]: I0121 11:24:01.832277 4881 scope.go:117] "RemoveContainer" containerID="5317a19a5c6fd411002c22415e0ba75ced188c533ac4cf93ad9bafb7600cfba0" Jan 21 11:24:01 crc kubenswrapper[4881]: I0121 11:24:01.859197 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0f1fb00c-903a-48c9-95e5-8ad34c731f41-config-data" (OuterVolumeSpecName: "config-data") pod "0f1fb00c-903a-48c9-95e5-8ad34c731f41" (UID: "0f1fb00c-903a-48c9-95e5-8ad34c731f41"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:24:01 crc kubenswrapper[4881]: I0121 11:24:01.871134 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0f1fb00c-903a-48c9-95e5-8ad34c731f41-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0f1fb00c-903a-48c9-95e5-8ad34c731f41" (UID: "0f1fb00c-903a-48c9-95e5-8ad34c731f41"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:24:01 crc kubenswrapper[4881]: I0121 11:24:01.875118 4881 scope.go:117] "RemoveContainer" containerID="77ab3e90f4bd352be1d58beb21ac3b7c5b6ccdc4776384b4fd7529acffc8aa21" Jan 21 11:24:01 crc kubenswrapper[4881]: E0121 11:24:01.875726 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"77ab3e90f4bd352be1d58beb21ac3b7c5b6ccdc4776384b4fd7529acffc8aa21\": container with ID starting with 77ab3e90f4bd352be1d58beb21ac3b7c5b6ccdc4776384b4fd7529acffc8aa21 not found: ID does not exist" containerID="77ab3e90f4bd352be1d58beb21ac3b7c5b6ccdc4776384b4fd7529acffc8aa21" Jan 21 11:24:01 crc kubenswrapper[4881]: I0121 11:24:01.875768 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"77ab3e90f4bd352be1d58beb21ac3b7c5b6ccdc4776384b4fd7529acffc8aa21"} err="failed to get container status \"77ab3e90f4bd352be1d58beb21ac3b7c5b6ccdc4776384b4fd7529acffc8aa21\": rpc error: code = NotFound desc = could not find container \"77ab3e90f4bd352be1d58beb21ac3b7c5b6ccdc4776384b4fd7529acffc8aa21\": container with ID starting with 77ab3e90f4bd352be1d58beb21ac3b7c5b6ccdc4776384b4fd7529acffc8aa21 not found: ID does not exist" Jan 21 11:24:01 crc kubenswrapper[4881]: I0121 11:24:01.875817 4881 scope.go:117] "RemoveContainer" containerID="5317a19a5c6fd411002c22415e0ba75ced188c533ac4cf93ad9bafb7600cfba0" Jan 21 11:24:01 crc kubenswrapper[4881]: E0121 11:24:01.879668 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5317a19a5c6fd411002c22415e0ba75ced188c533ac4cf93ad9bafb7600cfba0\": container with ID starting with 5317a19a5c6fd411002c22415e0ba75ced188c533ac4cf93ad9bafb7600cfba0 not found: ID does not exist" containerID="5317a19a5c6fd411002c22415e0ba75ced188c533ac4cf93ad9bafb7600cfba0" Jan 21 11:24:01 crc kubenswrapper[4881]: I0121 11:24:01.879925 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5317a19a5c6fd411002c22415e0ba75ced188c533ac4cf93ad9bafb7600cfba0"} err="failed to get container status \"5317a19a5c6fd411002c22415e0ba75ced188c533ac4cf93ad9bafb7600cfba0\": rpc error: code = NotFound desc = could not find container \"5317a19a5c6fd411002c22415e0ba75ced188c533ac4cf93ad9bafb7600cfba0\": container with ID starting with 5317a19a5c6fd411002c22415e0ba75ced188c533ac4cf93ad9bafb7600cfba0 not 
found: ID does not exist" Jan 21 11:24:01 crc kubenswrapper[4881]: I0121 11:24:01.915839 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zpprt\" (UniqueName: \"kubernetes.io/projected/0f1fb00c-903a-48c9-95e5-8ad34c731f41-kube-api-access-zpprt\") on node \"crc\" DevicePath \"\"" Jan 21 11:24:01 crc kubenswrapper[4881]: I0121 11:24:01.915886 4881 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f1fb00c-903a-48c9-95e5-8ad34c731f41-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 11:24:01 crc kubenswrapper[4881]: I0121 11:24:01.915905 4881 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/7e3b0813-d7bc-4e2e-aa18-fe1e00c75f52-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 11:24:01 crc kubenswrapper[4881]: I0121 11:24:01.915916 4881 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f1fb00c-903a-48c9-95e5-8ad34c731f41-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 11:24:02 crc kubenswrapper[4881]: I0121 11:24:02.068374 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 21 11:24:02 crc kubenswrapper[4881]: I0121 11:24:02.082026 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 21 11:24:02 crc kubenswrapper[4881]: I0121 11:24:02.100077 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 21 11:24:02 crc kubenswrapper[4881]: E0121 11:24:02.139252 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52706c95-5c29-44cb-bc9d-2873d3a4d437" containerName="extract-content" Jan 21 11:24:02 crc kubenswrapper[4881]: I0121 11:24:02.139442 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="52706c95-5c29-44cb-bc9d-2873d3a4d437" containerName="extract-content" Jan 21 11:24:02 crc kubenswrapper[4881]: E0121 11:24:02.139548 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="859758f9-0dc2-4397-a75a-b098eaabe613" containerName="dnsmasq-dns" Jan 21 11:24:02 crc kubenswrapper[4881]: I0121 11:24:02.139612 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="859758f9-0dc2-4397-a75a-b098eaabe613" containerName="dnsmasq-dns" Jan 21 11:24:02 crc kubenswrapper[4881]: E0121 11:24:02.139723 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e3b0813-d7bc-4e2e-aa18-fe1e00c75f52" containerName="nova-metadata-log" Jan 21 11:24:02 crc kubenswrapper[4881]: I0121 11:24:02.139815 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e3b0813-d7bc-4e2e-aa18-fe1e00c75f52" containerName="nova-metadata-log" Jan 21 11:24:02 crc kubenswrapper[4881]: E0121 11:24:02.139908 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52706c95-5c29-44cb-bc9d-2873d3a4d437" containerName="registry-server" Jan 21 11:24:02 crc kubenswrapper[4881]: I0121 11:24:02.139984 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="52706c95-5c29-44cb-bc9d-2873d3a4d437" containerName="registry-server" Jan 21 11:24:02 crc kubenswrapper[4881]: E0121 11:24:02.140301 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d8ffc48-6b0f-48d1-b13d-8a766f5b604a" containerName="nova-manage" Jan 21 11:24:02 crc kubenswrapper[4881]: I0121 11:24:02.140391 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d8ffc48-6b0f-48d1-b13d-8a766f5b604a" containerName="nova-manage" Jan 
21 11:24:02 crc kubenswrapper[4881]: E0121 11:24:02.140471 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52706c95-5c29-44cb-bc9d-2873d3a4d437" containerName="extract-utilities" Jan 21 11:24:02 crc kubenswrapper[4881]: I0121 11:24:02.140557 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="52706c95-5c29-44cb-bc9d-2873d3a4d437" containerName="extract-utilities" Jan 21 11:24:02 crc kubenswrapper[4881]: E0121 11:24:02.140662 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f1fb00c-903a-48c9-95e5-8ad34c731f41" containerName="nova-scheduler-scheduler" Jan 21 11:24:02 crc kubenswrapper[4881]: I0121 11:24:02.140723 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f1fb00c-903a-48c9-95e5-8ad34c731f41" containerName="nova-scheduler-scheduler" Jan 21 11:24:02 crc kubenswrapper[4881]: E0121 11:24:02.140825 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="859758f9-0dc2-4397-a75a-b098eaabe613" containerName="init" Jan 21 11:24:02 crc kubenswrapper[4881]: I0121 11:24:02.145157 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="859758f9-0dc2-4397-a75a-b098eaabe613" containerName="init" Jan 21 11:24:02 crc kubenswrapper[4881]: E0121 11:24:02.145532 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e3b0813-d7bc-4e2e-aa18-fe1e00c75f52" containerName="nova-metadata-metadata" Jan 21 11:24:02 crc kubenswrapper[4881]: I0121 11:24:02.145628 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e3b0813-d7bc-4e2e-aa18-fe1e00c75f52" containerName="nova-metadata-metadata" Jan 21 11:24:02 crc kubenswrapper[4881]: I0121 11:24:02.148357 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="52706c95-5c29-44cb-bc9d-2873d3a4d437" containerName="registry-server" Jan 21 11:24:02 crc kubenswrapper[4881]: I0121 11:24:02.149073 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d8ffc48-6b0f-48d1-b13d-8a766f5b604a" containerName="nova-manage" Jan 21 11:24:02 crc kubenswrapper[4881]: I0121 11:24:02.149261 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="0f1fb00c-903a-48c9-95e5-8ad34c731f41" containerName="nova-scheduler-scheduler" Jan 21 11:24:02 crc kubenswrapper[4881]: I0121 11:24:02.149365 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="859758f9-0dc2-4397-a75a-b098eaabe613" containerName="dnsmasq-dns" Jan 21 11:24:02 crc kubenswrapper[4881]: I0121 11:24:02.149462 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="7e3b0813-d7bc-4e2e-aa18-fe1e00c75f52" containerName="nova-metadata-metadata" Jan 21 11:24:02 crc kubenswrapper[4881]: I0121 11:24:02.149552 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="7e3b0813-d7bc-4e2e-aa18-fe1e00c75f52" containerName="nova-metadata-log" Jan 21 11:24:02 crc kubenswrapper[4881]: I0121 11:24:02.152023 4881 util.go:30] "No sandbox for pod can be found. 
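The cpu_manager/memory_manager "RemoveStaleState" burst fires when a new pod is admitted: before assigning resources, the managers sweep their checkpointed per-container state and drop entries whose pods no longer exist. The same idea over a plain map (types and names here are illustrative, not kubelet internals):

    package main

    import "fmt"

    type key struct{ podUID, container string }

    // removeStale deletes every assignment whose pod is not in the active
    // set, mirroring the paired "RemoveStaleState: removing container" /
    // "Deleted CPUSet assignment" log lines above.
    func removeStale(assignments map[key]string, active map[string]bool) {
    	for k := range assignments {
    		if !active[k.podUID] {
    			fmt.Printf("removing stale state for container %q of pod %s\n", k.container, k.podUID)
    			delete(assignments, k)
    		}
    	}
    }

    func main() {
    	a := map[key]string{
    		{"7e3b0813", "nova-metadata-log"}: "cpus 0-1",
    		{"6f6e9d1b", "nova-scheduler"}:    "cpus 2-3",
    	}
    	removeStale(a, map[string]bool{"6f6e9d1b": true})
    	fmt.Println(len(a)) // 1: only the live pod's entry survives
    }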
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 21 11:24:02 crc kubenswrapper[4881]: I0121 11:24:02.155640 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 21 11:24:02 crc kubenswrapper[4881]: I0121 11:24:02.166918 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 21 11:24:02 crc kubenswrapper[4881]: I0121 11:24:02.206403 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 21 11:24:02 crc kubenswrapper[4881]: I0121 11:24:02.263721 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 21 11:24:02 crc kubenswrapper[4881]: I0121 11:24:02.347859 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vthnm\" (UniqueName: \"kubernetes.io/projected/6f6e9d1b-902e-450b-8202-337c04c265ba-kube-api-access-vthnm\") pod \"nova-scheduler-0\" (UID: \"6f6e9d1b-902e-450b-8202-337c04c265ba\") " pod="openstack/nova-scheduler-0" Jan 21 11:24:02 crc kubenswrapper[4881]: I0121 11:24:02.347985 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f6e9d1b-902e-450b-8202-337c04c265ba-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"6f6e9d1b-902e-450b-8202-337c04c265ba\") " pod="openstack/nova-scheduler-0" Jan 21 11:24:02 crc kubenswrapper[4881]: I0121 11:24:02.348071 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f6e9d1b-902e-450b-8202-337c04c265ba-config-data\") pod \"nova-scheduler-0\" (UID: \"6f6e9d1b-902e-450b-8202-337c04c265ba\") " pod="openstack/nova-scheduler-0" Jan 21 11:24:02 crc kubenswrapper[4881]: I0121 11:24:02.636713 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vthnm\" (UniqueName: \"kubernetes.io/projected/6f6e9d1b-902e-450b-8202-337c04c265ba-kube-api-access-vthnm\") pod \"nova-scheduler-0\" (UID: \"6f6e9d1b-902e-450b-8202-337c04c265ba\") " pod="openstack/nova-scheduler-0" Jan 21 11:24:02 crc kubenswrapper[4881]: I0121 11:24:02.636855 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f6e9d1b-902e-450b-8202-337c04c265ba-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"6f6e9d1b-902e-450b-8202-337c04c265ba\") " pod="openstack/nova-scheduler-0" Jan 21 11:24:02 crc kubenswrapper[4881]: I0121 11:24:02.636932 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f6e9d1b-902e-450b-8202-337c04c265ba-config-data\") pod \"nova-scheduler-0\" (UID: \"6f6e9d1b-902e-450b-8202-337c04c265ba\") " pod="openstack/nova-scheduler-0" Jan 21 11:24:02 crc kubenswrapper[4881]: I0121 11:24:02.656067 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 21 11:24:02 crc kubenswrapper[4881]: I0121 11:24:02.657184 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f6e9d1b-902e-450b-8202-337c04c265ba-config-data\") pod \"nova-scheduler-0\" (UID: \"6f6e9d1b-902e-450b-8202-337c04c265ba\") " pod="openstack/nova-scheduler-0" Jan 21 11:24:02 crc kubenswrapper[4881]: I0121 11:24:02.668109 4881 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 21 11:24:02 crc kubenswrapper[4881]: I0121 11:24:02.670654 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f6e9d1b-902e-450b-8202-337c04c265ba-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"6f6e9d1b-902e-450b-8202-337c04c265ba\") " pod="openstack/nova-scheduler-0" Jan 21 11:24:02 crc kubenswrapper[4881]: I0121 11:24:02.675630 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 21 11:24:02 crc kubenswrapper[4881]: I0121 11:24:02.678602 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 21 11:24:02 crc kubenswrapper[4881]: I0121 11:24:02.679055 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 21 11:24:02 crc kubenswrapper[4881]: I0121 11:24:02.709272 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vthnm\" (UniqueName: \"kubernetes.io/projected/6f6e9d1b-902e-450b-8202-337c04c265ba-kube-api-access-vthnm\") pod \"nova-scheduler-0\" (UID: \"6f6e9d1b-902e-450b-8202-337c04c265ba\") " pod="openstack/nova-scheduler-0" Jan 21 11:24:02 crc kubenswrapper[4881]: I0121 11:24:02.742374 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ba03e9fe-3ad6-4c52-bde7-bd41fca63834-logs\") pod \"nova-metadata-0\" (UID: \"ba03e9fe-3ad6-4c52-bde7-bd41fca63834\") " pod="openstack/nova-metadata-0" Jan 21 11:24:02 crc kubenswrapper[4881]: I0121 11:24:02.742616 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba03e9fe-3ad6-4c52-bde7-bd41fca63834-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"ba03e9fe-3ad6-4c52-bde7-bd41fca63834\") " pod="openstack/nova-metadata-0" Jan 21 11:24:02 crc kubenswrapper[4881]: I0121 11:24:02.742693 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ba03e9fe-3ad6-4c52-bde7-bd41fca63834-config-data\") pod \"nova-metadata-0\" (UID: \"ba03e9fe-3ad6-4c52-bde7-bd41fca63834\") " pod="openstack/nova-metadata-0" Jan 21 11:24:02 crc kubenswrapper[4881]: I0121 11:24:02.742814 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/ba03e9fe-3ad6-4c52-bde7-bd41fca63834-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"ba03e9fe-3ad6-4c52-bde7-bd41fca63834\") " pod="openstack/nova-metadata-0" Jan 21 11:24:02 crc kubenswrapper[4881]: I0121 11:24:02.742939 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7xjtc\" (UniqueName: \"kubernetes.io/projected/ba03e9fe-3ad6-4c52-bde7-bd41fca63834-kube-api-access-7xjtc\") pod \"nova-metadata-0\" (UID: \"ba03e9fe-3ad6-4c52-bde7-bd41fca63834\") " pod="openstack/nova-metadata-0" Jan 21 11:24:02 crc kubenswrapper[4881]: I0121 11:24:02.814966 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 21 11:24:02 crc kubenswrapper[4881]: I0121 11:24:02.849297 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7xjtc\" (UniqueName: \"kubernetes.io/projected/ba03e9fe-3ad6-4c52-bde7-bd41fca63834-kube-api-access-7xjtc\") pod \"nova-metadata-0\" (UID: \"ba03e9fe-3ad6-4c52-bde7-bd41fca63834\") " pod="openstack/nova-metadata-0" Jan 21 11:24:02 crc kubenswrapper[4881]: I0121 11:24:02.849421 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ba03e9fe-3ad6-4c52-bde7-bd41fca63834-logs\") pod \"nova-metadata-0\" (UID: \"ba03e9fe-3ad6-4c52-bde7-bd41fca63834\") " pod="openstack/nova-metadata-0" Jan 21 11:24:02 crc kubenswrapper[4881]: I0121 11:24:02.849516 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba03e9fe-3ad6-4c52-bde7-bd41fca63834-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"ba03e9fe-3ad6-4c52-bde7-bd41fca63834\") " pod="openstack/nova-metadata-0" Jan 21 11:24:02 crc kubenswrapper[4881]: I0121 11:24:02.849555 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ba03e9fe-3ad6-4c52-bde7-bd41fca63834-config-data\") pod \"nova-metadata-0\" (UID: \"ba03e9fe-3ad6-4c52-bde7-bd41fca63834\") " pod="openstack/nova-metadata-0" Jan 21 11:24:02 crc kubenswrapper[4881]: I0121 11:24:02.849604 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/ba03e9fe-3ad6-4c52-bde7-bd41fca63834-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"ba03e9fe-3ad6-4c52-bde7-bd41fca63834\") " pod="openstack/nova-metadata-0" Jan 21 11:24:02 crc kubenswrapper[4881]: I0121 11:24:02.851034 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ba03e9fe-3ad6-4c52-bde7-bd41fca63834-logs\") pod \"nova-metadata-0\" (UID: \"ba03e9fe-3ad6-4c52-bde7-bd41fca63834\") " pod="openstack/nova-metadata-0" Jan 21 11:24:02 crc kubenswrapper[4881]: I0121 11:24:02.860449 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/ba03e9fe-3ad6-4c52-bde7-bd41fca63834-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"ba03e9fe-3ad6-4c52-bde7-bd41fca63834\") " pod="openstack/nova-metadata-0" Jan 21 11:24:02 crc kubenswrapper[4881]: I0121 11:24:02.863709 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba03e9fe-3ad6-4c52-bde7-bd41fca63834-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"ba03e9fe-3ad6-4c52-bde7-bd41fca63834\") " pod="openstack/nova-metadata-0" Jan 21 11:24:02 crc kubenswrapper[4881]: I0121 11:24:02.885550 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ba03e9fe-3ad6-4c52-bde7-bd41fca63834-config-data\") pod \"nova-metadata-0\" (UID: \"ba03e9fe-3ad6-4c52-bde7-bd41fca63834\") " pod="openstack/nova-metadata-0" Jan 21 11:24:02 crc kubenswrapper[4881]: I0121 11:24:02.898694 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7xjtc\" (UniqueName: \"kubernetes.io/projected/ba03e9fe-3ad6-4c52-bde7-bd41fca63834-kube-api-access-7xjtc\") 
pod \"nova-metadata-0\" (UID: \"ba03e9fe-3ad6-4c52-bde7-bd41fca63834\") " pod="openstack/nova-metadata-0" Jan 21 11:24:03 crc kubenswrapper[4881]: I0121 11:24:03.103091 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 21 11:24:03 crc kubenswrapper[4881]: I0121 11:24:03.356569 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0f1fb00c-903a-48c9-95e5-8ad34c731f41" path="/var/lib/kubelet/pods/0f1fb00c-903a-48c9-95e5-8ad34c731f41/volumes" Jan 21 11:24:03 crc kubenswrapper[4881]: I0121 11:24:03.357703 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7e3b0813-d7bc-4e2e-aa18-fe1e00c75f52" path="/var/lib/kubelet/pods/7e3b0813-d7bc-4e2e-aa18-fe1e00c75f52/volumes" Jan 21 11:24:03 crc kubenswrapper[4881]: I0121 11:24:03.572278 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 21 11:24:03 crc kubenswrapper[4881]: I0121 11:24:03.709166 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 21 11:24:03 crc kubenswrapper[4881]: I0121 11:24:03.797674 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ba03e9fe-3ad6-4c52-bde7-bd41fca63834","Type":"ContainerStarted","Data":"fe53cb2b73cf131ba87702f82293ec55e430d03c07c71539649567f45f53874f"} Jan 21 11:24:03 crc kubenswrapper[4881]: I0121 11:24:03.799339 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"6f6e9d1b-902e-450b-8202-337c04c265ba","Type":"ContainerStarted","Data":"cd96095a65ce65b2d4398d0e24880f414fefff1c1599cbf11f9f33b12e6a1147"} Jan 21 11:24:05 crc kubenswrapper[4881]: I0121 11:24:05.140166 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ba03e9fe-3ad6-4c52-bde7-bd41fca63834","Type":"ContainerStarted","Data":"63d974af0b35962ab93c677bcb1af29aa9625d09e0c3792308c7143381283bc1"} Jan 21 11:24:05 crc kubenswrapper[4881]: I0121 11:24:05.143826 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"6f6e9d1b-902e-450b-8202-337c04c265ba","Type":"ContainerStarted","Data":"6e4c353ef2b04f1523052293a5ef253ea031d72f0dc74ed199971d7c3de6e601"} Jan 21 11:24:05 crc kubenswrapper[4881]: I0121 11:24:05.176576 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=3.176539222 podStartE2EDuration="3.176539222s" podCreationTimestamp="2026-01-21 11:24:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 11:24:05.161932194 +0000 UTC m=+1632.421888673" watchObservedRunningTime="2026-01-21 11:24:05.176539222 +0000 UTC m=+1632.436495701" Jan 21 11:24:06 crc kubenswrapper[4881]: I0121 11:24:06.922354 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 21 11:24:07 crc kubenswrapper[4881]: I0121 11:24:07.080120 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/da2439be-4ed2-43a2-adbe-dd4afaa012f3-logs\") pod \"da2439be-4ed2-43a2-adbe-dd4afaa012f3\" (UID: \"da2439be-4ed2-43a2-adbe-dd4afaa012f3\") " Jan 21 11:24:07 crc kubenswrapper[4881]: I0121 11:24:07.080218 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-57mlh\" (UniqueName: \"kubernetes.io/projected/da2439be-4ed2-43a2-adbe-dd4afaa012f3-kube-api-access-57mlh\") pod \"da2439be-4ed2-43a2-adbe-dd4afaa012f3\" (UID: \"da2439be-4ed2-43a2-adbe-dd4afaa012f3\") " Jan 21 11:24:07 crc kubenswrapper[4881]: I0121 11:24:07.080336 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/da2439be-4ed2-43a2-adbe-dd4afaa012f3-public-tls-certs\") pod \"da2439be-4ed2-43a2-adbe-dd4afaa012f3\" (UID: \"da2439be-4ed2-43a2-adbe-dd4afaa012f3\") " Jan 21 11:24:07 crc kubenswrapper[4881]: I0121 11:24:07.080361 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/da2439be-4ed2-43a2-adbe-dd4afaa012f3-config-data\") pod \"da2439be-4ed2-43a2-adbe-dd4afaa012f3\" (UID: \"da2439be-4ed2-43a2-adbe-dd4afaa012f3\") " Jan 21 11:24:07 crc kubenswrapper[4881]: I0121 11:24:07.080419 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da2439be-4ed2-43a2-adbe-dd4afaa012f3-combined-ca-bundle\") pod \"da2439be-4ed2-43a2-adbe-dd4afaa012f3\" (UID: \"da2439be-4ed2-43a2-adbe-dd4afaa012f3\") " Jan 21 11:24:07 crc kubenswrapper[4881]: I0121 11:24:07.080448 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/da2439be-4ed2-43a2-adbe-dd4afaa012f3-internal-tls-certs\") pod \"da2439be-4ed2-43a2-adbe-dd4afaa012f3\" (UID: \"da2439be-4ed2-43a2-adbe-dd4afaa012f3\") " Jan 21 11:24:07 crc kubenswrapper[4881]: I0121 11:24:07.081578 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/da2439be-4ed2-43a2-adbe-dd4afaa012f3-logs" (OuterVolumeSpecName: "logs") pod "da2439be-4ed2-43a2-adbe-dd4afaa012f3" (UID: "da2439be-4ed2-43a2-adbe-dd4afaa012f3"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 11:24:07 crc kubenswrapper[4881]: I0121 11:24:07.104232 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/da2439be-4ed2-43a2-adbe-dd4afaa012f3-kube-api-access-57mlh" (OuterVolumeSpecName: "kube-api-access-57mlh") pod "da2439be-4ed2-43a2-adbe-dd4afaa012f3" (UID: "da2439be-4ed2-43a2-adbe-dd4afaa012f3"). InnerVolumeSpecName "kube-api-access-57mlh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:24:07 crc kubenswrapper[4881]: I0121 11:24:07.178962 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da2439be-4ed2-43a2-adbe-dd4afaa012f3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "da2439be-4ed2-43a2-adbe-dd4afaa012f3" (UID: "da2439be-4ed2-43a2-adbe-dd4afaa012f3"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:24:07 crc kubenswrapper[4881]: I0121 11:24:07.186497 4881 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da2439be-4ed2-43a2-adbe-dd4afaa012f3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 11:24:07 crc kubenswrapper[4881]: I0121 11:24:07.186537 4881 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/da2439be-4ed2-43a2-adbe-dd4afaa012f3-logs\") on node \"crc\" DevicePath \"\"" Jan 21 11:24:07 crc kubenswrapper[4881]: I0121 11:24:07.186547 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-57mlh\" (UniqueName: \"kubernetes.io/projected/da2439be-4ed2-43a2-adbe-dd4afaa012f3-kube-api-access-57mlh\") on node \"crc\" DevicePath \"\"" Jan 21 11:24:07 crc kubenswrapper[4881]: I0121 11:24:07.200096 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da2439be-4ed2-43a2-adbe-dd4afaa012f3-config-data" (OuterVolumeSpecName: "config-data") pod "da2439be-4ed2-43a2-adbe-dd4afaa012f3" (UID: "da2439be-4ed2-43a2-adbe-dd4afaa012f3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:24:07 crc kubenswrapper[4881]: I0121 11:24:07.200285 4881 generic.go:334] "Generic (PLEG): container finished" podID="da2439be-4ed2-43a2-adbe-dd4afaa012f3" containerID="5f6a607787d7e9e1eada9a9f91e574513eb5ba0e4548b904cb79b64f1f85f516" exitCode=0 Jan 21 11:24:07 crc kubenswrapper[4881]: I0121 11:24:07.200421 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 21 11:24:07 crc kubenswrapper[4881]: I0121 11:24:07.200554 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"da2439be-4ed2-43a2-adbe-dd4afaa012f3","Type":"ContainerDied","Data":"5f6a607787d7e9e1eada9a9f91e574513eb5ba0e4548b904cb79b64f1f85f516"} Jan 21 11:24:07 crc kubenswrapper[4881]: I0121 11:24:07.200613 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"da2439be-4ed2-43a2-adbe-dd4afaa012f3","Type":"ContainerDied","Data":"78fa7e5c3484fc7a90c022f360abd4837962f6679c1a08c1b9fdb22f193c9f13"} Jan 21 11:24:07 crc kubenswrapper[4881]: I0121 11:24:07.200636 4881 scope.go:117] "RemoveContainer" containerID="5f6a607787d7e9e1eada9a9f91e574513eb5ba0e4548b904cb79b64f1f85f516" Jan 21 11:24:07 crc kubenswrapper[4881]: I0121 11:24:07.200611 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da2439be-4ed2-43a2-adbe-dd4afaa012f3-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "da2439be-4ed2-43a2-adbe-dd4afaa012f3" (UID: "da2439be-4ed2-43a2-adbe-dd4afaa012f3"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:24:07 crc kubenswrapper[4881]: I0121 11:24:07.226932 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da2439be-4ed2-43a2-adbe-dd4afaa012f3-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "da2439be-4ed2-43a2-adbe-dd4afaa012f3" (UID: "da2439be-4ed2-43a2-adbe-dd4afaa012f3"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:24:07 crc kubenswrapper[4881]: I0121 11:24:07.274428 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ba03e9fe-3ad6-4c52-bde7-bd41fca63834","Type":"ContainerStarted","Data":"331e9ee82d9defd168492b00a085b92acc44c562d368709fdd82fedce4f5fc8b"} Jan 21 11:24:07 crc kubenswrapper[4881]: I0121 11:24:07.300632 4881 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/da2439be-4ed2-43a2-adbe-dd4afaa012f3-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 11:24:07 crc kubenswrapper[4881]: I0121 11:24:07.300675 4881 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/da2439be-4ed2-43a2-adbe-dd4afaa012f3-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 11:24:07 crc kubenswrapper[4881]: I0121 11:24:07.300692 4881 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/da2439be-4ed2-43a2-adbe-dd4afaa012f3-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 11:24:07 crc kubenswrapper[4881]: I0121 11:24:07.794777 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=5.794751427 podStartE2EDuration="5.794751427s" podCreationTimestamp="2026-01-21 11:24:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 11:24:07.759363439 +0000 UTC m=+1635.019319918" watchObservedRunningTime="2026-01-21 11:24:07.794751427 +0000 UTC m=+1635.054707896" Jan 21 11:24:07 crc kubenswrapper[4881]: I0121 11:24:07.824457 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 21 11:24:07 crc kubenswrapper[4881]: I0121 11:24:07.824500 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 21 11:24:07 crc kubenswrapper[4881]: I0121 11:24:07.824515 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 21 11:24:07 crc kubenswrapper[4881]: I0121 11:24:07.867897 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 21 11:24:07 crc kubenswrapper[4881]: E0121 11:24:07.868394 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da2439be-4ed2-43a2-adbe-dd4afaa012f3" containerName="nova-api-log" Jan 21 11:24:07 crc kubenswrapper[4881]: I0121 11:24:07.868417 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="da2439be-4ed2-43a2-adbe-dd4afaa012f3" containerName="nova-api-log" Jan 21 11:24:07 crc kubenswrapper[4881]: E0121 11:24:07.868444 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da2439be-4ed2-43a2-adbe-dd4afaa012f3" containerName="nova-api-api" Jan 21 11:24:07 crc kubenswrapper[4881]: I0121 11:24:07.868450 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="da2439be-4ed2-43a2-adbe-dd4afaa012f3" containerName="nova-api-api" Jan 21 11:24:07 crc kubenswrapper[4881]: I0121 11:24:07.868654 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="da2439be-4ed2-43a2-adbe-dd4afaa012f3" containerName="nova-api-log" Jan 21 11:24:07 crc kubenswrapper[4881]: I0121 11:24:07.868678 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="da2439be-4ed2-43a2-adbe-dd4afaa012f3" containerName="nova-api-api" Jan 21 11:24:07 crc kubenswrapper[4881]: I0121 
11:24:07.869946 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 21 11:24:07 crc kubenswrapper[4881]: I0121 11:24:07.875221 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 21 11:24:07 crc kubenswrapper[4881]: I0121 11:24:07.875442 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Jan 21 11:24:07 crc kubenswrapper[4881]: I0121 11:24:07.875596 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Jan 21 11:24:07 crc kubenswrapper[4881]: I0121 11:24:07.887269 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 21 11:24:07 crc kubenswrapper[4881]: I0121 11:24:07.916885 4881 scope.go:117] "RemoveContainer" containerID="80e209a06fa6ebe24f14a7a5f19b6ec4b9439abda270d225d2c57b6f4688cd25" Jan 21 11:24:07 crc kubenswrapper[4881]: I0121 11:24:07.965811 4881 scope.go:117] "RemoveContainer" containerID="5f6a607787d7e9e1eada9a9f91e574513eb5ba0e4548b904cb79b64f1f85f516" Jan 21 11:24:07 crc kubenswrapper[4881]: E0121 11:24:07.966630 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5f6a607787d7e9e1eada9a9f91e574513eb5ba0e4548b904cb79b64f1f85f516\": container with ID starting with 5f6a607787d7e9e1eada9a9f91e574513eb5ba0e4548b904cb79b64f1f85f516 not found: ID does not exist" containerID="5f6a607787d7e9e1eada9a9f91e574513eb5ba0e4548b904cb79b64f1f85f516" Jan 21 11:24:07 crc kubenswrapper[4881]: I0121 11:24:07.966671 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5f6a607787d7e9e1eada9a9f91e574513eb5ba0e4548b904cb79b64f1f85f516"} err="failed to get container status \"5f6a607787d7e9e1eada9a9f91e574513eb5ba0e4548b904cb79b64f1f85f516\": rpc error: code = NotFound desc = could not find container \"5f6a607787d7e9e1eada9a9f91e574513eb5ba0e4548b904cb79b64f1f85f516\": container with ID starting with 5f6a607787d7e9e1eada9a9f91e574513eb5ba0e4548b904cb79b64f1f85f516 not found: ID does not exist" Jan 21 11:24:07 crc kubenswrapper[4881]: I0121 11:24:07.966815 4881 scope.go:117] "RemoveContainer" containerID="80e209a06fa6ebe24f14a7a5f19b6ec4b9439abda270d225d2c57b6f4688cd25" Jan 21 11:24:07 crc kubenswrapper[4881]: E0121 11:24:07.967056 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"80e209a06fa6ebe24f14a7a5f19b6ec4b9439abda270d225d2c57b6f4688cd25\": container with ID starting with 80e209a06fa6ebe24f14a7a5f19b6ec4b9439abda270d225d2c57b6f4688cd25 not found: ID does not exist" containerID="80e209a06fa6ebe24f14a7a5f19b6ec4b9439abda270d225d2c57b6f4688cd25" Jan 21 11:24:07 crc kubenswrapper[4881]: I0121 11:24:07.967084 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"80e209a06fa6ebe24f14a7a5f19b6ec4b9439abda270d225d2c57b6f4688cd25"} err="failed to get container status \"80e209a06fa6ebe24f14a7a5f19b6ec4b9439abda270d225d2c57b6f4688cd25\": rpc error: code = NotFound desc = could not find container \"80e209a06fa6ebe24f14a7a5f19b6ec4b9439abda270d225d2c57b6f4688cd25\": container with ID starting with 80e209a06fa6ebe24f14a7a5f19b6ec4b9439abda270d225d2c57b6f4688cd25 not found: ID does not exist" Jan 21 11:24:08 crc kubenswrapper[4881]: I0121 11:24:08.022131 4881 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1188227a-462c-4c61-ae6e-96b55ffacd71-config-data\") pod \"nova-api-0\" (UID: \"1188227a-462c-4c61-ae6e-96b55ffacd71\") " pod="openstack/nova-api-0" Jan 21 11:24:08 crc kubenswrapper[4881]: I0121 11:24:08.022460 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q9g7s\" (UniqueName: \"kubernetes.io/projected/1188227a-462c-4c61-ae6e-96b55ffacd71-kube-api-access-q9g7s\") pod \"nova-api-0\" (UID: \"1188227a-462c-4c61-ae6e-96b55ffacd71\") " pod="openstack/nova-api-0" Jan 21 11:24:08 crc kubenswrapper[4881]: I0121 11:24:08.022515 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1188227a-462c-4c61-ae6e-96b55ffacd71-public-tls-certs\") pod \"nova-api-0\" (UID: \"1188227a-462c-4c61-ae6e-96b55ffacd71\") " pod="openstack/nova-api-0" Jan 21 11:24:08 crc kubenswrapper[4881]: I0121 11:24:08.022549 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1188227a-462c-4c61-ae6e-96b55ffacd71-logs\") pod \"nova-api-0\" (UID: \"1188227a-462c-4c61-ae6e-96b55ffacd71\") " pod="openstack/nova-api-0" Jan 21 11:24:08 crc kubenswrapper[4881]: I0121 11:24:08.022605 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1188227a-462c-4c61-ae6e-96b55ffacd71-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"1188227a-462c-4c61-ae6e-96b55ffacd71\") " pod="openstack/nova-api-0" Jan 21 11:24:08 crc kubenswrapper[4881]: I0121 11:24:08.022681 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1188227a-462c-4c61-ae6e-96b55ffacd71-internal-tls-certs\") pod \"nova-api-0\" (UID: \"1188227a-462c-4c61-ae6e-96b55ffacd71\") " pod="openstack/nova-api-0" Jan 21 11:24:08 crc kubenswrapper[4881]: I0121 11:24:08.103944 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 21 11:24:08 crc kubenswrapper[4881]: I0121 11:24:08.104006 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 21 11:24:08 crc kubenswrapper[4881]: I0121 11:24:08.125206 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q9g7s\" (UniqueName: \"kubernetes.io/projected/1188227a-462c-4c61-ae6e-96b55ffacd71-kube-api-access-q9g7s\") pod \"nova-api-0\" (UID: \"1188227a-462c-4c61-ae6e-96b55ffacd71\") " pod="openstack/nova-api-0" Jan 21 11:24:08 crc kubenswrapper[4881]: I0121 11:24:08.125273 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1188227a-462c-4c61-ae6e-96b55ffacd71-public-tls-certs\") pod \"nova-api-0\" (UID: \"1188227a-462c-4c61-ae6e-96b55ffacd71\") " pod="openstack/nova-api-0" Jan 21 11:24:08 crc kubenswrapper[4881]: I0121 11:24:08.125295 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1188227a-462c-4c61-ae6e-96b55ffacd71-logs\") pod \"nova-api-0\" (UID: \"1188227a-462c-4c61-ae6e-96b55ffacd71\") " pod="openstack/nova-api-0" Jan 21 11:24:08 crc 
kubenswrapper[4881]: I0121 11:24:08.125326 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1188227a-462c-4c61-ae6e-96b55ffacd71-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"1188227a-462c-4c61-ae6e-96b55ffacd71\") " pod="openstack/nova-api-0" Jan 21 11:24:08 crc kubenswrapper[4881]: I0121 11:24:08.125363 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1188227a-462c-4c61-ae6e-96b55ffacd71-internal-tls-certs\") pod \"nova-api-0\" (UID: \"1188227a-462c-4c61-ae6e-96b55ffacd71\") " pod="openstack/nova-api-0" Jan 21 11:24:08 crc kubenswrapper[4881]: I0121 11:24:08.125394 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1188227a-462c-4c61-ae6e-96b55ffacd71-config-data\") pod \"nova-api-0\" (UID: \"1188227a-462c-4c61-ae6e-96b55ffacd71\") " pod="openstack/nova-api-0" Jan 21 11:24:08 crc kubenswrapper[4881]: I0121 11:24:08.126531 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1188227a-462c-4c61-ae6e-96b55ffacd71-logs\") pod \"nova-api-0\" (UID: \"1188227a-462c-4c61-ae6e-96b55ffacd71\") " pod="openstack/nova-api-0" Jan 21 11:24:08 crc kubenswrapper[4881]: I0121 11:24:08.130500 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1188227a-462c-4c61-ae6e-96b55ffacd71-config-data\") pod \"nova-api-0\" (UID: \"1188227a-462c-4c61-ae6e-96b55ffacd71\") " pod="openstack/nova-api-0" Jan 21 11:24:08 crc kubenswrapper[4881]: I0121 11:24:08.138339 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1188227a-462c-4c61-ae6e-96b55ffacd71-public-tls-certs\") pod \"nova-api-0\" (UID: \"1188227a-462c-4c61-ae6e-96b55ffacd71\") " pod="openstack/nova-api-0" Jan 21 11:24:08 crc kubenswrapper[4881]: I0121 11:24:08.138457 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1188227a-462c-4c61-ae6e-96b55ffacd71-internal-tls-certs\") pod \"nova-api-0\" (UID: \"1188227a-462c-4c61-ae6e-96b55ffacd71\") " pod="openstack/nova-api-0" Jan 21 11:24:08 crc kubenswrapper[4881]: I0121 11:24:08.139939 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1188227a-462c-4c61-ae6e-96b55ffacd71-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"1188227a-462c-4c61-ae6e-96b55ffacd71\") " pod="openstack/nova-api-0" Jan 21 11:24:08 crc kubenswrapper[4881]: I0121 11:24:08.145918 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q9g7s\" (UniqueName: \"kubernetes.io/projected/1188227a-462c-4c61-ae6e-96b55ffacd71-kube-api-access-q9g7s\") pod \"nova-api-0\" (UID: \"1188227a-462c-4c61-ae6e-96b55ffacd71\") " pod="openstack/nova-api-0" Jan 21 11:24:08 crc kubenswrapper[4881]: I0121 11:24:08.206201 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 21 11:24:08 crc kubenswrapper[4881]: I0121 11:24:08.713583 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 21 11:24:09 crc kubenswrapper[4881]: I0121 11:24:09.327601 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="da2439be-4ed2-43a2-adbe-dd4afaa012f3" path="/var/lib/kubelet/pods/da2439be-4ed2-43a2-adbe-dd4afaa012f3/volumes" Jan 21 11:24:09 crc kubenswrapper[4881]: I0121 11:24:09.340570 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1188227a-462c-4c61-ae6e-96b55ffacd71","Type":"ContainerStarted","Data":"187947b5be610e4479183060e11dc95bdb009b7bf23c7effe8224cce0ad8dde2"} Jan 21 11:24:09 crc kubenswrapper[4881]: I0121 11:24:09.340666 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1188227a-462c-4c61-ae6e-96b55ffacd71","Type":"ContainerStarted","Data":"cdb49b5096b541660e1071519ec1a626dc191064b2d9b0bfbd67bf05ca6786b2"} Jan 21 11:24:09 crc kubenswrapper[4881]: I0121 11:24:09.340687 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1188227a-462c-4c61-ae6e-96b55ffacd71","Type":"ContainerStarted","Data":"43a8184c7c3fcc42843ef748dac7eaa2aeb72b11c53692db3d99c6d69892dd0a"} Jan 21 11:24:09 crc kubenswrapper[4881]: I0121 11:24:09.397900 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.397876045 podStartE2EDuration="2.397876045s" podCreationTimestamp="2026-01-21 11:24:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 11:24:09.367896461 +0000 UTC m=+1636.627852950" watchObservedRunningTime="2026-01-21 11:24:09.397876045 +0000 UTC m=+1636.657832514" Jan 21 11:24:12 crc kubenswrapper[4881]: I0121 11:24:12.816625 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 21 11:24:12 crc kubenswrapper[4881]: I0121 11:24:12.847512 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 21 11:24:13 crc kubenswrapper[4881]: I0121 11:24:13.104211 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 21 11:24:13 crc kubenswrapper[4881]: I0121 11:24:13.104289 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 21 11:24:13 crc kubenswrapper[4881]: I0121 11:24:13.581764 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 21 11:24:14 crc kubenswrapper[4881]: I0121 11:24:14.120020 4881 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="ba03e9fe-3ad6-4c52-bde7-bd41fca63834" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.228:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 21 11:24:14 crc kubenswrapper[4881]: I0121 11:24:14.120020 4881 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="ba03e9fe-3ad6-4c52-bde7-bd41fca63834" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.228:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 21 11:24:18 crc kubenswrapper[4881]: I0121 
11:24:18.206963 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 21 11:24:18 crc kubenswrapper[4881]: I0121 11:24:18.207488 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 21 11:24:18 crc kubenswrapper[4881]: I0121 11:24:18.902167 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 21 11:24:19 crc kubenswrapper[4881]: I0121 11:24:19.219081 4881 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="1188227a-462c-4c61-ae6e-96b55ffacd71" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.229:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 21 11:24:19 crc kubenswrapper[4881]: I0121 11:24:19.219108 4881 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="1188227a-462c-4c61-ae6e-96b55ffacd71" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.229:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 21 11:24:23 crc kubenswrapper[4881]: I0121 11:24:23.110518 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 21 11:24:23 crc kubenswrapper[4881]: I0121 11:24:23.111416 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 21 11:24:23 crc kubenswrapper[4881]: I0121 11:24:23.117077 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 21 11:24:23 crc kubenswrapper[4881]: I0121 11:24:23.663473 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 21 11:24:27 crc kubenswrapper[4881]: I0121 11:24:27.500264 4881 scope.go:117] "RemoveContainer" containerID="4b32abc6871e628e297cbe463288501e5adf49f03da08854de77bfb91714eedb" Jan 21 11:24:27 crc kubenswrapper[4881]: I0121 11:24:27.537850 4881 scope.go:117] "RemoveContainer" containerID="ce6a2cc0cc6379a9f8ed18cfa5d64954b4b7fdd11d37db77a73b2856418b87db" Jan 21 11:24:28 crc kubenswrapper[4881]: I0121 11:24:28.216495 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 21 11:24:28 crc kubenswrapper[4881]: I0121 11:24:28.217011 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 21 11:24:28 crc kubenswrapper[4881]: I0121 11:24:28.220464 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 21 11:24:28 crc kubenswrapper[4881]: I0121 11:24:28.227622 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 21 11:24:28 crc kubenswrapper[4881]: I0121 11:24:28.733520 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 21 11:24:28 crc kubenswrapper[4881]: I0121 11:24:28.743225 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 21 11:24:29 crc kubenswrapper[4881]: I0121 11:24:29.851267 4881 patch_prober.go:28] interesting pod/machine-config-daemon-fb4fr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 11:24:29 
crc kubenswrapper[4881]: I0121 11:24:29.852096 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 11:24:29 crc kubenswrapper[4881]: I0121 11:24:29.852177 4881 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" Jan 21 11:24:29 crc kubenswrapper[4881]: I0121 11:24:29.853485 4881 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"8d01a71cc3cebcfb692e7385a1f123f20c56f33df75b2fdeed7ba4c65dcb43ca"} pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 11:24:29 crc kubenswrapper[4881]: I0121 11:24:29.853570 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" containerName="machine-config-daemon" containerID="cri-o://8d01a71cc3cebcfb692e7385a1f123f20c56f33df75b2fdeed7ba4c65dcb43ca" gracePeriod=600 Jan 21 11:24:29 crc kubenswrapper[4881]: E0121 11:24:29.985495 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 11:24:30 crc kubenswrapper[4881]: I0121 11:24:30.757577 4881 generic.go:334] "Generic (PLEG): container finished" podID="3687b313-1df2-4274-80db-8c758b51bf2d" containerID="8d01a71cc3cebcfb692e7385a1f123f20c56f33df75b2fdeed7ba4c65dcb43ca" exitCode=0 Jan 21 11:24:30 crc kubenswrapper[4881]: I0121 11:24:30.757665 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" event={"ID":"3687b313-1df2-4274-80db-8c758b51bf2d","Type":"ContainerDied","Data":"8d01a71cc3cebcfb692e7385a1f123f20c56f33df75b2fdeed7ba4c65dcb43ca"} Jan 21 11:24:30 crc kubenswrapper[4881]: I0121 11:24:30.757732 4881 scope.go:117] "RemoveContainer" containerID="7331cbf4e5c1ebad90ff508798581f83536e17ac3c1ee9a79afc3f65f6e8ad1a" Jan 21 11:24:30 crc kubenswrapper[4881]: I0121 11:24:30.758619 4881 scope.go:117] "RemoveContainer" containerID="8d01a71cc3cebcfb692e7385a1f123f20c56f33df75b2fdeed7ba4c65dcb43ca" Jan 21 11:24:30 crc kubenswrapper[4881]: E0121 11:24:30.758874 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 11:24:38 crc kubenswrapper[4881]: I0121 11:24:38.325420 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 21 11:24:40 crc kubenswrapper[4881]: I0121 
11:24:40.222871 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 21 11:24:42 crc kubenswrapper[4881]: I0121 11:24:42.196370 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="f7e90972-9be1-4d3e-852e-e7f7df6e6623" containerName="rabbitmq" containerID="cri-o://8a0e4e5a99ef920688a0d7a6463ea9c0a7db6ff987fcbf667df0b4f98b3356bf" gracePeriod=604797 Jan 21 11:24:43 crc kubenswrapper[4881]: I0121 11:24:43.625139 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="078c2368-b247-49d4-8723-fd93918e99b1" containerName="rabbitmq" containerID="cri-o://023f57aba22657f38c9822a9fcfbabd9eb5513e10f1d131208e251a7df31b2a0" gracePeriod=604797 Jan 21 11:24:43 crc kubenswrapper[4881]: I0121 11:24:43.949806 4881 generic.go:334] "Generic (PLEG): container finished" podID="f7e90972-9be1-4d3e-852e-e7f7df6e6623" containerID="8a0e4e5a99ef920688a0d7a6463ea9c0a7db6ff987fcbf667df0b4f98b3356bf" exitCode=0 Jan 21 11:24:43 crc kubenswrapper[4881]: I0121 11:24:43.949888 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"f7e90972-9be1-4d3e-852e-e7f7df6e6623","Type":"ContainerDied","Data":"8a0e4e5a99ef920688a0d7a6463ea9c0a7db6ff987fcbf667df0b4f98b3356bf"} Jan 21 11:24:44 crc kubenswrapper[4881]: I0121 11:24:44.091148 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 21 11:24:44 crc kubenswrapper[4881]: I0121 11:24:44.399427 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f7e90972-9be1-4d3e-852e-e7f7df6e6623-rabbitmq-erlang-cookie\") pod \"f7e90972-9be1-4d3e-852e-e7f7df6e6623\" (UID: \"f7e90972-9be1-4d3e-852e-e7f7df6e6623\") " Jan 21 11:24:44 crc kubenswrapper[4881]: I0121 11:24:44.399588 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"f7e90972-9be1-4d3e-852e-e7f7df6e6623\" (UID: \"f7e90972-9be1-4d3e-852e-e7f7df6e6623\") " Jan 21 11:24:44 crc kubenswrapper[4881]: I0121 11:24:44.399716 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f7e90972-9be1-4d3e-852e-e7f7df6e6623-pod-info\") pod \"f7e90972-9be1-4d3e-852e-e7f7df6e6623\" (UID: \"f7e90972-9be1-4d3e-852e-e7f7df6e6623\") " Jan 21 11:24:44 crc kubenswrapper[4881]: I0121 11:24:44.400578 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f7e90972-9be1-4d3e-852e-e7f7df6e6623-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "f7e90972-9be1-4d3e-852e-e7f7df6e6623" (UID: "f7e90972-9be1-4d3e-852e-e7f7df6e6623"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 11:24:44 crc kubenswrapper[4881]: I0121 11:24:44.414876 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage02-crc" (OuterVolumeSpecName: "persistence") pod "f7e90972-9be1-4d3e-852e-e7f7df6e6623" (UID: "f7e90972-9be1-4d3e-852e-e7f7df6e6623"). InnerVolumeSpecName "local-storage02-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 21 11:24:44 crc kubenswrapper[4881]: I0121 11:24:44.415199 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f7e90972-9be1-4d3e-852e-e7f7df6e6623-server-conf\") pod \"f7e90972-9be1-4d3e-852e-e7f7df6e6623\" (UID: \"f7e90972-9be1-4d3e-852e-e7f7df6e6623\") " Jan 21 11:24:44 crc kubenswrapper[4881]: I0121 11:24:44.415260 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f7e90972-9be1-4d3e-852e-e7f7df6e6623-plugins-conf\") pod \"f7e90972-9be1-4d3e-852e-e7f7df6e6623\" (UID: \"f7e90972-9be1-4d3e-852e-e7f7df6e6623\") " Jan 21 11:24:44 crc kubenswrapper[4881]: I0121 11:24:44.415359 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tjgnd\" (UniqueName: \"kubernetes.io/projected/f7e90972-9be1-4d3e-852e-e7f7df6e6623-kube-api-access-tjgnd\") pod \"f7e90972-9be1-4d3e-852e-e7f7df6e6623\" (UID: \"f7e90972-9be1-4d3e-852e-e7f7df6e6623\") " Jan 21 11:24:44 crc kubenswrapper[4881]: I0121 11:24:44.415384 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f7e90972-9be1-4d3e-852e-e7f7df6e6623-rabbitmq-confd\") pod \"f7e90972-9be1-4d3e-852e-e7f7df6e6623\" (UID: \"f7e90972-9be1-4d3e-852e-e7f7df6e6623\") " Jan 21 11:24:44 crc kubenswrapper[4881]: I0121 11:24:44.415492 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f7e90972-9be1-4d3e-852e-e7f7df6e6623-rabbitmq-tls\") pod \"f7e90972-9be1-4d3e-852e-e7f7df6e6623\" (UID: \"f7e90972-9be1-4d3e-852e-e7f7df6e6623\") " Jan 21 11:24:44 crc kubenswrapper[4881]: I0121 11:24:44.415559 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f7e90972-9be1-4d3e-852e-e7f7df6e6623-config-data\") pod \"f7e90972-9be1-4d3e-852e-e7f7df6e6623\" (UID: \"f7e90972-9be1-4d3e-852e-e7f7df6e6623\") " Jan 21 11:24:44 crc kubenswrapper[4881]: I0121 11:24:44.415595 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f7e90972-9be1-4d3e-852e-e7f7df6e6623-rabbitmq-plugins\") pod \"f7e90972-9be1-4d3e-852e-e7f7df6e6623\" (UID: \"f7e90972-9be1-4d3e-852e-e7f7df6e6623\") " Jan 21 11:24:44 crc kubenswrapper[4881]: I0121 11:24:44.415636 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f7e90972-9be1-4d3e-852e-e7f7df6e6623-erlang-cookie-secret\") pod \"f7e90972-9be1-4d3e-852e-e7f7df6e6623\" (UID: \"f7e90972-9be1-4d3e-852e-e7f7df6e6623\") " Jan 21 11:24:44 crc kubenswrapper[4881]: I0121 11:24:44.416571 4881 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f7e90972-9be1-4d3e-852e-e7f7df6e6623-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 21 11:24:44 crc kubenswrapper[4881]: I0121 11:24:44.416596 4881 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" " Jan 21 11:24:44 crc kubenswrapper[4881]: I0121 11:24:44.419057 4881 operation_generator.go:803] UnmountVolume.TearDown 
succeeded for volume "kubernetes.io/downward-api/f7e90972-9be1-4d3e-852e-e7f7df6e6623-pod-info" (OuterVolumeSpecName: "pod-info") pod "f7e90972-9be1-4d3e-852e-e7f7df6e6623" (UID: "f7e90972-9be1-4d3e-852e-e7f7df6e6623"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 21 11:24:44 crc kubenswrapper[4881]: I0121 11:24:44.422615 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f7e90972-9be1-4d3e-852e-e7f7df6e6623-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "f7e90972-9be1-4d3e-852e-e7f7df6e6623" (UID: "f7e90972-9be1-4d3e-852e-e7f7df6e6623"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 11:24:44 crc kubenswrapper[4881]: I0121 11:24:44.423957 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f7e90972-9be1-4d3e-852e-e7f7df6e6623-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "f7e90972-9be1-4d3e-852e-e7f7df6e6623" (UID: "f7e90972-9be1-4d3e-852e-e7f7df6e6623"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:24:44 crc kubenswrapper[4881]: I0121 11:24:44.450287 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7e90972-9be1-4d3e-852e-e7f7df6e6623-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "f7e90972-9be1-4d3e-852e-e7f7df6e6623" (UID: "f7e90972-9be1-4d3e-852e-e7f7df6e6623"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:24:44 crc kubenswrapper[4881]: I0121 11:24:44.452568 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7e90972-9be1-4d3e-852e-e7f7df6e6623-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "f7e90972-9be1-4d3e-852e-e7f7df6e6623" (UID: "f7e90972-9be1-4d3e-852e-e7f7df6e6623"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:24:44 crc kubenswrapper[4881]: I0121 11:24:44.470614 4881 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage02-crc" (UniqueName: "kubernetes.io/local-volume/local-storage02-crc") on node "crc" Jan 21 11:24:44 crc kubenswrapper[4881]: I0121 11:24:44.471697 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7e90972-9be1-4d3e-852e-e7f7df6e6623-kube-api-access-tjgnd" (OuterVolumeSpecName: "kube-api-access-tjgnd") pod "f7e90972-9be1-4d3e-852e-e7f7df6e6623" (UID: "f7e90972-9be1-4d3e-852e-e7f7df6e6623"). InnerVolumeSpecName "kube-api-access-tjgnd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:24:44 crc kubenswrapper[4881]: I0121 11:24:44.507027 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f7e90972-9be1-4d3e-852e-e7f7df6e6623-config-data" (OuterVolumeSpecName: "config-data") pod "f7e90972-9be1-4d3e-852e-e7f7df6e6623" (UID: "f7e90972-9be1-4d3e-852e-e7f7df6e6623"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:24:44 crc kubenswrapper[4881]: I0121 11:24:44.519846 4881 reconciler_common.go:293] "Volume detached for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" DevicePath \"\"" Jan 21 11:24:44 crc kubenswrapper[4881]: I0121 11:24:44.519881 4881 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f7e90972-9be1-4d3e-852e-e7f7df6e6623-pod-info\") on node \"crc\" DevicePath \"\"" Jan 21 11:24:44 crc kubenswrapper[4881]: I0121 11:24:44.519895 4881 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f7e90972-9be1-4d3e-852e-e7f7df6e6623-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 21 11:24:44 crc kubenswrapper[4881]: I0121 11:24:44.519908 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tjgnd\" (UniqueName: \"kubernetes.io/projected/f7e90972-9be1-4d3e-852e-e7f7df6e6623-kube-api-access-tjgnd\") on node \"crc\" DevicePath \"\"" Jan 21 11:24:44 crc kubenswrapper[4881]: I0121 11:24:44.519917 4881 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f7e90972-9be1-4d3e-852e-e7f7df6e6623-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Jan 21 11:24:44 crc kubenswrapper[4881]: I0121 11:24:44.519925 4881 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f7e90972-9be1-4d3e-852e-e7f7df6e6623-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 11:24:44 crc kubenswrapper[4881]: I0121 11:24:44.519934 4881 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f7e90972-9be1-4d3e-852e-e7f7df6e6623-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 21 11:24:44 crc kubenswrapper[4881]: I0121 11:24:44.519942 4881 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f7e90972-9be1-4d3e-852e-e7f7df6e6623-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 21 11:24:44 crc kubenswrapper[4881]: I0121 11:24:44.610892 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f7e90972-9be1-4d3e-852e-e7f7df6e6623-server-conf" (OuterVolumeSpecName: "server-conf") pod "f7e90972-9be1-4d3e-852e-e7f7df6e6623" (UID: "f7e90972-9be1-4d3e-852e-e7f7df6e6623"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:24:44 crc kubenswrapper[4881]: I0121 11:24:44.622520 4881 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f7e90972-9be1-4d3e-852e-e7f7df6e6623-server-conf\") on node \"crc\" DevicePath \"\"" Jan 21 11:24:44 crc kubenswrapper[4881]: I0121 11:24:44.630344 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7e90972-9be1-4d3e-852e-e7f7df6e6623-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "f7e90972-9be1-4d3e-852e-e7f7df6e6623" (UID: "f7e90972-9be1-4d3e-852e-e7f7df6e6623"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:24:44 crc kubenswrapper[4881]: I0121 11:24:44.724159 4881 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f7e90972-9be1-4d3e-852e-e7f7df6e6623-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 21 11:24:44 crc kubenswrapper[4881]: I0121 11:24:44.997611 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 21 11:24:44 crc kubenswrapper[4881]: I0121 11:24:44.998089 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"f7e90972-9be1-4d3e-852e-e7f7df6e6623","Type":"ContainerDied","Data":"0407be0eb8897677e11cb341e14b52b133b745f624185504d845fdccc7ff50c4"} Jan 21 11:24:44 crc kubenswrapper[4881]: I0121 11:24:44.998129 4881 scope.go:117] "RemoveContainer" containerID="8a0e4e5a99ef920688a0d7a6463ea9c0a7db6ff987fcbf667df0b4f98b3356bf" Jan 21 11:24:45 crc kubenswrapper[4881]: I0121 11:24:45.005651 4881 generic.go:334] "Generic (PLEG): container finished" podID="078c2368-b247-49d4-8723-fd93918e99b1" containerID="023f57aba22657f38c9822a9fcfbabd9eb5513e10f1d131208e251a7df31b2a0" exitCode=0 Jan 21 11:24:45 crc kubenswrapper[4881]: I0121 11:24:45.005704 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"078c2368-b247-49d4-8723-fd93918e99b1","Type":"ContainerDied","Data":"023f57aba22657f38c9822a9fcfbabd9eb5513e10f1d131208e251a7df31b2a0"} Jan 21 11:24:45 crc kubenswrapper[4881]: I0121 11:24:45.046521 4881 scope.go:117] "RemoveContainer" containerID="b30e547e2506fcebf2f8ac627808ad3f0382510a160b2079a570164ee838adfc" Jan 21 11:24:45 crc kubenswrapper[4881]: I0121 11:24:45.061965 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 21 11:24:45 crc kubenswrapper[4881]: I0121 11:24:45.089711 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 21 11:24:45 crc kubenswrapper[4881]: I0121 11:24:45.110419 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Jan 21 11:24:45 crc kubenswrapper[4881]: E0121 11:24:45.111048 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7e90972-9be1-4d3e-852e-e7f7df6e6623" containerName="setup-container" Jan 21 11:24:45 crc kubenswrapper[4881]: I0121 11:24:45.111068 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7e90972-9be1-4d3e-852e-e7f7df6e6623" containerName="setup-container" Jan 21 11:24:45 crc kubenswrapper[4881]: E0121 11:24:45.111092 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7e90972-9be1-4d3e-852e-e7f7df6e6623" containerName="rabbitmq" Jan 21 11:24:45 crc kubenswrapper[4881]: I0121 11:24:45.111098 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7e90972-9be1-4d3e-852e-e7f7df6e6623" containerName="rabbitmq" Jan 21 11:24:45 crc kubenswrapper[4881]: I0121 11:24:45.111324 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7e90972-9be1-4d3e-852e-e7f7df6e6623" containerName="rabbitmq" Jan 21 11:24:45 crc kubenswrapper[4881]: I0121 11:24:45.112503 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 21 11:24:45 crc kubenswrapper[4881]: I0121 11:24:45.116095 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Jan 21 11:24:45 crc kubenswrapper[4881]: I0121 11:24:45.116403 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Jan 21 11:24:45 crc kubenswrapper[4881]: I0121 11:24:45.116449 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-x9qrf" Jan 21 11:24:45 crc kubenswrapper[4881]: I0121 11:24:45.116517 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Jan 21 11:24:45 crc kubenswrapper[4881]: I0121 11:24:45.116449 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Jan 21 11:24:45 crc kubenswrapper[4881]: I0121 11:24:45.116860 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Jan 21 11:24:45 crc kubenswrapper[4881]: I0121 11:24:45.117018 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Jan 21 11:24:45 crc kubenswrapper[4881]: I0121 11:24:45.123765 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 21 11:24:45 crc kubenswrapper[4881]: I0121 11:24:45.236243 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/35a19b99-eed0-4383-bea5-cf43d84a5a3e-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"35a19b99-eed0-4383-bea5-cf43d84a5a3e\") " pod="openstack/rabbitmq-server-0" Jan 21 11:24:45 crc kubenswrapper[4881]: I0121 11:24:45.236298 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-server-0\" (UID: \"35a19b99-eed0-4383-bea5-cf43d84a5a3e\") " pod="openstack/rabbitmq-server-0" Jan 21 11:24:45 crc kubenswrapper[4881]: I0121 11:24:45.236323 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/35a19b99-eed0-4383-bea5-cf43d84a5a3e-pod-info\") pod \"rabbitmq-server-0\" (UID: \"35a19b99-eed0-4383-bea5-cf43d84a5a3e\") " pod="openstack/rabbitmq-server-0" Jan 21 11:24:45 crc kubenswrapper[4881]: I0121 11:24:45.236496 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p6mrv\" (UniqueName: \"kubernetes.io/projected/35a19b99-eed0-4383-bea5-cf43d84a5a3e-kube-api-access-p6mrv\") pod \"rabbitmq-server-0\" (UID: \"35a19b99-eed0-4383-bea5-cf43d84a5a3e\") " pod="openstack/rabbitmq-server-0" Jan 21 11:24:45 crc kubenswrapper[4881]: I0121 11:24:45.236808 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/35a19b99-eed0-4383-bea5-cf43d84a5a3e-config-data\") pod \"rabbitmq-server-0\" (UID: \"35a19b99-eed0-4383-bea5-cf43d84a5a3e\") " pod="openstack/rabbitmq-server-0" Jan 21 11:24:45 crc kubenswrapper[4881]: I0121 11:24:45.236947 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: 
\"kubernetes.io/empty-dir/35a19b99-eed0-4383-bea5-cf43d84a5a3e-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"35a19b99-eed0-4383-bea5-cf43d84a5a3e\") " pod="openstack/rabbitmq-server-0" Jan 21 11:24:45 crc kubenswrapper[4881]: I0121 11:24:45.237013 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/35a19b99-eed0-4383-bea5-cf43d84a5a3e-server-conf\") pod \"rabbitmq-server-0\" (UID: \"35a19b99-eed0-4383-bea5-cf43d84a5a3e\") " pod="openstack/rabbitmq-server-0" Jan 21 11:24:45 crc kubenswrapper[4881]: I0121 11:24:45.237049 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/35a19b99-eed0-4383-bea5-cf43d84a5a3e-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"35a19b99-eed0-4383-bea5-cf43d84a5a3e\") " pod="openstack/rabbitmq-server-0" Jan 21 11:24:45 crc kubenswrapper[4881]: I0121 11:24:45.237077 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/35a19b99-eed0-4383-bea5-cf43d84a5a3e-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"35a19b99-eed0-4383-bea5-cf43d84a5a3e\") " pod="openstack/rabbitmq-server-0" Jan 21 11:24:45 crc kubenswrapper[4881]: I0121 11:24:45.237106 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/35a19b99-eed0-4383-bea5-cf43d84a5a3e-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"35a19b99-eed0-4383-bea5-cf43d84a5a3e\") " pod="openstack/rabbitmq-server-0" Jan 21 11:24:45 crc kubenswrapper[4881]: I0121 11:24:45.237138 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/35a19b99-eed0-4383-bea5-cf43d84a5a3e-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"35a19b99-eed0-4383-bea5-cf43d84a5a3e\") " pod="openstack/rabbitmq-server-0" Jan 21 11:24:45 crc kubenswrapper[4881]: I0121 11:24:45.315903 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 21 11:24:45 crc kubenswrapper[4881]: I0121 11:24:45.321988 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7e90972-9be1-4d3e-852e-e7f7df6e6623" path="/var/lib/kubelet/pods/f7e90972-9be1-4d3e-852e-e7f7df6e6623/volumes" Jan 21 11:24:45 crc kubenswrapper[4881]: I0121 11:24:45.342126 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/35a19b99-eed0-4383-bea5-cf43d84a5a3e-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"35a19b99-eed0-4383-bea5-cf43d84a5a3e\") " pod="openstack/rabbitmq-server-0" Jan 21 11:24:45 crc kubenswrapper[4881]: I0121 11:24:45.342226 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/35a19b99-eed0-4383-bea5-cf43d84a5a3e-server-conf\") pod \"rabbitmq-server-0\" (UID: \"35a19b99-eed0-4383-bea5-cf43d84a5a3e\") " pod="openstack/rabbitmq-server-0" Jan 21 11:24:45 crc kubenswrapper[4881]: I0121 11:24:45.342270 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/35a19b99-eed0-4383-bea5-cf43d84a5a3e-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"35a19b99-eed0-4383-bea5-cf43d84a5a3e\") " pod="openstack/rabbitmq-server-0" Jan 21 11:24:45 crc kubenswrapper[4881]: I0121 11:24:45.342298 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/35a19b99-eed0-4383-bea5-cf43d84a5a3e-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"35a19b99-eed0-4383-bea5-cf43d84a5a3e\") " pod="openstack/rabbitmq-server-0" Jan 21 11:24:45 crc kubenswrapper[4881]: I0121 11:24:45.342331 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/35a19b99-eed0-4383-bea5-cf43d84a5a3e-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"35a19b99-eed0-4383-bea5-cf43d84a5a3e\") " pod="openstack/rabbitmq-server-0" Jan 21 11:24:45 crc kubenswrapper[4881]: I0121 11:24:45.342369 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/35a19b99-eed0-4383-bea5-cf43d84a5a3e-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"35a19b99-eed0-4383-bea5-cf43d84a5a3e\") " pod="openstack/rabbitmq-server-0" Jan 21 11:24:45 crc kubenswrapper[4881]: I0121 11:24:45.342436 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/35a19b99-eed0-4383-bea5-cf43d84a5a3e-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"35a19b99-eed0-4383-bea5-cf43d84a5a3e\") " pod="openstack/rabbitmq-server-0" Jan 21 11:24:45 crc kubenswrapper[4881]: I0121 11:24:45.342474 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-server-0\" (UID: \"35a19b99-eed0-4383-bea5-cf43d84a5a3e\") " pod="openstack/rabbitmq-server-0" Jan 21 11:24:45 crc kubenswrapper[4881]: I0121 11:24:45.342500 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/35a19b99-eed0-4383-bea5-cf43d84a5a3e-pod-info\") pod \"rabbitmq-server-0\" (UID: 
\"35a19b99-eed0-4383-bea5-cf43d84a5a3e\") " pod="openstack/rabbitmq-server-0" Jan 21 11:24:45 crc kubenswrapper[4881]: I0121 11:24:45.342537 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p6mrv\" (UniqueName: \"kubernetes.io/projected/35a19b99-eed0-4383-bea5-cf43d84a5a3e-kube-api-access-p6mrv\") pod \"rabbitmq-server-0\" (UID: \"35a19b99-eed0-4383-bea5-cf43d84a5a3e\") " pod="openstack/rabbitmq-server-0" Jan 21 11:24:45 crc kubenswrapper[4881]: I0121 11:24:45.342688 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/35a19b99-eed0-4383-bea5-cf43d84a5a3e-config-data\") pod \"rabbitmq-server-0\" (UID: \"35a19b99-eed0-4383-bea5-cf43d84a5a3e\") " pod="openstack/rabbitmq-server-0" Jan 21 11:24:45 crc kubenswrapper[4881]: I0121 11:24:45.342798 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/35a19b99-eed0-4383-bea5-cf43d84a5a3e-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"35a19b99-eed0-4383-bea5-cf43d84a5a3e\") " pod="openstack/rabbitmq-server-0" Jan 21 11:24:45 crc kubenswrapper[4881]: I0121 11:24:45.344116 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/35a19b99-eed0-4383-bea5-cf43d84a5a3e-config-data\") pod \"rabbitmq-server-0\" (UID: \"35a19b99-eed0-4383-bea5-cf43d84a5a3e\") " pod="openstack/rabbitmq-server-0" Jan 21 11:24:45 crc kubenswrapper[4881]: I0121 11:24:45.344780 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/35a19b99-eed0-4383-bea5-cf43d84a5a3e-server-conf\") pod \"rabbitmq-server-0\" (UID: \"35a19b99-eed0-4383-bea5-cf43d84a5a3e\") " pod="openstack/rabbitmq-server-0" Jan 21 11:24:45 crc kubenswrapper[4881]: I0121 11:24:45.345693 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/35a19b99-eed0-4383-bea5-cf43d84a5a3e-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"35a19b99-eed0-4383-bea5-cf43d84a5a3e\") " pod="openstack/rabbitmq-server-0" Jan 21 11:24:45 crc kubenswrapper[4881]: I0121 11:24:45.348642 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/35a19b99-eed0-4383-bea5-cf43d84a5a3e-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"35a19b99-eed0-4383-bea5-cf43d84a5a3e\") " pod="openstack/rabbitmq-server-0" Jan 21 11:24:45 crc kubenswrapper[4881]: I0121 11:24:45.348891 4881 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-server-0\" (UID: \"35a19b99-eed0-4383-bea5-cf43d84a5a3e\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/rabbitmq-server-0" Jan 21 11:24:45 crc kubenswrapper[4881]: I0121 11:24:45.349460 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/35a19b99-eed0-4383-bea5-cf43d84a5a3e-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"35a19b99-eed0-4383-bea5-cf43d84a5a3e\") " pod="openstack/rabbitmq-server-0" Jan 21 11:24:45 crc kubenswrapper[4881]: I0121 11:24:45.350515 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: 
\"kubernetes.io/projected/35a19b99-eed0-4383-bea5-cf43d84a5a3e-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"35a19b99-eed0-4383-bea5-cf43d84a5a3e\") " pod="openstack/rabbitmq-server-0" Jan 21 11:24:45 crc kubenswrapper[4881]: I0121 11:24:45.350934 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/35a19b99-eed0-4383-bea5-cf43d84a5a3e-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"35a19b99-eed0-4383-bea5-cf43d84a5a3e\") " pod="openstack/rabbitmq-server-0" Jan 21 11:24:45 crc kubenswrapper[4881]: I0121 11:24:45.362094 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/35a19b99-eed0-4383-bea5-cf43d84a5a3e-pod-info\") pod \"rabbitmq-server-0\" (UID: \"35a19b99-eed0-4383-bea5-cf43d84a5a3e\") " pod="openstack/rabbitmq-server-0" Jan 21 11:24:45 crc kubenswrapper[4881]: I0121 11:24:45.403104 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p6mrv\" (UniqueName: \"kubernetes.io/projected/35a19b99-eed0-4383-bea5-cf43d84a5a3e-kube-api-access-p6mrv\") pod \"rabbitmq-server-0\" (UID: \"35a19b99-eed0-4383-bea5-cf43d84a5a3e\") " pod="openstack/rabbitmq-server-0" Jan 21 11:24:45 crc kubenswrapper[4881]: I0121 11:24:45.442105 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-server-0\" (UID: \"35a19b99-eed0-4383-bea5-cf43d84a5a3e\") " pod="openstack/rabbitmq-server-0" Jan 21 11:24:45 crc kubenswrapper[4881]: I0121 11:24:45.444930 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/078c2368-b247-49d4-8723-fd93918e99b1-rabbitmq-erlang-cookie\") pod \"078c2368-b247-49d4-8723-fd93918e99b1\" (UID: \"078c2368-b247-49d4-8723-fd93918e99b1\") " Jan 21 11:24:45 crc kubenswrapper[4881]: I0121 11:24:45.445077 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/078c2368-b247-49d4-8723-fd93918e99b1-pod-info\") pod \"078c2368-b247-49d4-8723-fd93918e99b1\" (UID: \"078c2368-b247-49d4-8723-fd93918e99b1\") " Jan 21 11:24:45 crc kubenswrapper[4881]: I0121 11:24:45.445203 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/078c2368-b247-49d4-8723-fd93918e99b1-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "078c2368-b247-49d4-8723-fd93918e99b1" (UID: "078c2368-b247-49d4-8723-fd93918e99b1"). InnerVolumeSpecName "rabbitmq-erlang-cookie". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 11:24:45 crc kubenswrapper[4881]: I0121 11:24:45.445553 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"078c2368-b247-49d4-8723-fd93918e99b1\" (UID: \"078c2368-b247-49d4-8723-fd93918e99b1\") " Jan 21 11:24:45 crc kubenswrapper[4881]: I0121 11:24:45.446486 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bmd5s\" (UniqueName: \"kubernetes.io/projected/078c2368-b247-49d4-8723-fd93918e99b1-kube-api-access-bmd5s\") pod \"078c2368-b247-49d4-8723-fd93918e99b1\" (UID: \"078c2368-b247-49d4-8723-fd93918e99b1\") " Jan 21 11:24:45 crc kubenswrapper[4881]: I0121 11:24:45.446563 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/078c2368-b247-49d4-8723-fd93918e99b1-server-conf\") pod \"078c2368-b247-49d4-8723-fd93918e99b1\" (UID: \"078c2368-b247-49d4-8723-fd93918e99b1\") " Jan 21 11:24:45 crc kubenswrapper[4881]: I0121 11:24:45.447433 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/078c2368-b247-49d4-8723-fd93918e99b1-plugins-conf\") pod \"078c2368-b247-49d4-8723-fd93918e99b1\" (UID: \"078c2368-b247-49d4-8723-fd93918e99b1\") " Jan 21 11:24:45 crc kubenswrapper[4881]: I0121 11:24:45.447551 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/078c2368-b247-49d4-8723-fd93918e99b1-config-data\") pod \"078c2368-b247-49d4-8723-fd93918e99b1\" (UID: \"078c2368-b247-49d4-8723-fd93918e99b1\") " Jan 21 11:24:45 crc kubenswrapper[4881]: I0121 11:24:45.447583 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/078c2368-b247-49d4-8723-fd93918e99b1-rabbitmq-plugins\") pod \"078c2368-b247-49d4-8723-fd93918e99b1\" (UID: \"078c2368-b247-49d4-8723-fd93918e99b1\") " Jan 21 11:24:45 crc kubenswrapper[4881]: I0121 11:24:45.447626 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/078c2368-b247-49d4-8723-fd93918e99b1-rabbitmq-confd\") pod \"078c2368-b247-49d4-8723-fd93918e99b1\" (UID: \"078c2368-b247-49d4-8723-fd93918e99b1\") " Jan 21 11:24:45 crc kubenswrapper[4881]: I0121 11:24:45.447655 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/078c2368-b247-49d4-8723-fd93918e99b1-rabbitmq-tls\") pod \"078c2368-b247-49d4-8723-fd93918e99b1\" (UID: \"078c2368-b247-49d4-8723-fd93918e99b1\") " Jan 21 11:24:45 crc kubenswrapper[4881]: I0121 11:24:45.447682 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/078c2368-b247-49d4-8723-fd93918e99b1-erlang-cookie-secret\") pod \"078c2368-b247-49d4-8723-fd93918e99b1\" (UID: \"078c2368-b247-49d4-8723-fd93918e99b1\") " Jan 21 11:24:45 crc kubenswrapper[4881]: I0121 11:24:45.448288 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/078c2368-b247-49d4-8723-fd93918e99b1-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "078c2368-b247-49d4-8723-fd93918e99b1" (UID: 
"078c2368-b247-49d4-8723-fd93918e99b1"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 11:24:45 crc kubenswrapper[4881]: I0121 11:24:45.448307 4881 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/078c2368-b247-49d4-8723-fd93918e99b1-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 21 11:24:45 crc kubenswrapper[4881]: I0121 11:24:45.449166 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/078c2368-b247-49d4-8723-fd93918e99b1-pod-info" (OuterVolumeSpecName: "pod-info") pod "078c2368-b247-49d4-8723-fd93918e99b1" (UID: "078c2368-b247-49d4-8723-fd93918e99b1"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 21 11:24:45 crc kubenswrapper[4881]: I0121 11:24:45.449688 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/078c2368-b247-49d4-8723-fd93918e99b1-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "078c2368-b247-49d4-8723-fd93918e99b1" (UID: "078c2368-b247-49d4-8723-fd93918e99b1"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:24:45 crc kubenswrapper[4881]: I0121 11:24:45.453844 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/078c2368-b247-49d4-8723-fd93918e99b1-kube-api-access-bmd5s" (OuterVolumeSpecName: "kube-api-access-bmd5s") pod "078c2368-b247-49d4-8723-fd93918e99b1" (UID: "078c2368-b247-49d4-8723-fd93918e99b1"). InnerVolumeSpecName "kube-api-access-bmd5s". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:24:45 crc kubenswrapper[4881]: I0121 11:24:45.456619 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage04-crc" (OuterVolumeSpecName: "persistence") pod "078c2368-b247-49d4-8723-fd93918e99b1" (UID: "078c2368-b247-49d4-8723-fd93918e99b1"). InnerVolumeSpecName "local-storage04-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 21 11:24:45 crc kubenswrapper[4881]: I0121 11:24:45.459464 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/078c2368-b247-49d4-8723-fd93918e99b1-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "078c2368-b247-49d4-8723-fd93918e99b1" (UID: "078c2368-b247-49d4-8723-fd93918e99b1"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:24:45 crc kubenswrapper[4881]: I0121 11:24:45.461668 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/078c2368-b247-49d4-8723-fd93918e99b1-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "078c2368-b247-49d4-8723-fd93918e99b1" (UID: "078c2368-b247-49d4-8723-fd93918e99b1"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:24:45 crc kubenswrapper[4881]: I0121 11:24:45.461996 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 21 11:24:45 crc kubenswrapper[4881]: I0121 11:24:45.497819 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/078c2368-b247-49d4-8723-fd93918e99b1-config-data" (OuterVolumeSpecName: "config-data") pod "078c2368-b247-49d4-8723-fd93918e99b1" (UID: "078c2368-b247-49d4-8723-fd93918e99b1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:24:45 crc kubenswrapper[4881]: I0121 11:24:45.517434 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/078c2368-b247-49d4-8723-fd93918e99b1-server-conf" (OuterVolumeSpecName: "server-conf") pod "078c2368-b247-49d4-8723-fd93918e99b1" (UID: "078c2368-b247-49d4-8723-fd93918e99b1"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:24:45 crc kubenswrapper[4881]: I0121 11:24:45.549543 4881 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/078c2368-b247-49d4-8723-fd93918e99b1-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 21 11:24:45 crc kubenswrapper[4881]: I0121 11:24:45.549572 4881 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/078c2368-b247-49d4-8723-fd93918e99b1-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 11:24:45 crc kubenswrapper[4881]: I0121 11:24:45.549581 4881 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/078c2368-b247-49d4-8723-fd93918e99b1-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 21 11:24:45 crc kubenswrapper[4881]: I0121 11:24:45.549589 4881 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/078c2368-b247-49d4-8723-fd93918e99b1-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Jan 21 11:24:45 crc kubenswrapper[4881]: I0121 11:24:45.549598 4881 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/078c2368-b247-49d4-8723-fd93918e99b1-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 21 11:24:45 crc kubenswrapper[4881]: I0121 11:24:45.549675 4881 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/078c2368-b247-49d4-8723-fd93918e99b1-pod-info\") on node \"crc\" DevicePath \"\"" Jan 21 11:24:45 crc kubenswrapper[4881]: I0121 11:24:45.549699 4881 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" " Jan 21 11:24:45 crc kubenswrapper[4881]: I0121 11:24:45.549709 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bmd5s\" (UniqueName: \"kubernetes.io/projected/078c2368-b247-49d4-8723-fd93918e99b1-kube-api-access-bmd5s\") on node \"crc\" DevicePath \"\"" Jan 21 11:24:45 crc kubenswrapper[4881]: I0121 11:24:45.549718 4881 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/078c2368-b247-49d4-8723-fd93918e99b1-server-conf\") on node \"crc\" DevicePath \"\"" Jan 21 11:24:45 crc kubenswrapper[4881]: I0121 11:24:45.581493 4881 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage04-crc" (UniqueName: "kubernetes.io/local-volume/local-storage04-crc") on node "crc" Jan 
21 11:24:45 crc kubenswrapper[4881]: I0121 11:24:45.605072 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/078c2368-b247-49d4-8723-fd93918e99b1-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "078c2368-b247-49d4-8723-fd93918e99b1" (UID: "078c2368-b247-49d4-8723-fd93918e99b1"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:24:45 crc kubenswrapper[4881]: I0121 11:24:45.653324 4881 reconciler_common.go:293] "Volume detached for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" DevicePath \"\"" Jan 21 11:24:45 crc kubenswrapper[4881]: I0121 11:24:45.653375 4881 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/078c2368-b247-49d4-8723-fd93918e99b1-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 21 11:24:45 crc kubenswrapper[4881]: I0121 11:24:45.938548 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 21 11:24:46 crc kubenswrapper[4881]: I0121 11:24:46.018861 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"35a19b99-eed0-4383-bea5-cf43d84a5a3e","Type":"ContainerStarted","Data":"b61ec4ecd31391566c8185e90cc9bde05f33548160425c605a2a9789abeeafd4"} Jan 21 11:24:46 crc kubenswrapper[4881]: I0121 11:24:46.021778 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"078c2368-b247-49d4-8723-fd93918e99b1","Type":"ContainerDied","Data":"cb426b0ea6a917959cdcac6b6915e9a598cb2f51672af4e37994bc672acc84c9"} Jan 21 11:24:46 crc kubenswrapper[4881]: I0121 11:24:46.021855 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 21 11:24:46 crc kubenswrapper[4881]: I0121 11:24:46.022104 4881 scope.go:117] "RemoveContainer" containerID="023f57aba22657f38c9822a9fcfbabd9eb5513e10f1d131208e251a7df31b2a0" Jan 21 11:24:46 crc kubenswrapper[4881]: I0121 11:24:46.111051 4881 scope.go:117] "RemoveContainer" containerID="26f697deade0e9783aed3c09129f2f0589fbb10b53e3501c212b7fcc5f5b5d86" Jan 21 11:24:46 crc kubenswrapper[4881]: I0121 11:24:46.143236 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 21 11:24:46 crc kubenswrapper[4881]: I0121 11:24:46.157714 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 21 11:24:46 crc kubenswrapper[4881]: I0121 11:24:46.175967 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 21 11:24:46 crc kubenswrapper[4881]: E0121 11:24:46.177045 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="078c2368-b247-49d4-8723-fd93918e99b1" containerName="rabbitmq" Jan 21 11:24:46 crc kubenswrapper[4881]: I0121 11:24:46.177180 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="078c2368-b247-49d4-8723-fd93918e99b1" containerName="rabbitmq" Jan 21 11:24:46 crc kubenswrapper[4881]: E0121 11:24:46.177314 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="078c2368-b247-49d4-8723-fd93918e99b1" containerName="setup-container" Jan 21 11:24:46 crc kubenswrapper[4881]: I0121 11:24:46.177411 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="078c2368-b247-49d4-8723-fd93918e99b1" containerName="setup-container" Jan 21 11:24:46 crc kubenswrapper[4881]: I0121 11:24:46.177811 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="078c2368-b247-49d4-8723-fd93918e99b1" containerName="rabbitmq" Jan 21 11:24:46 crc kubenswrapper[4881]: I0121 11:24:46.183004 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 21 11:24:46 crc kubenswrapper[4881]: I0121 11:24:46.189228 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Jan 21 11:24:46 crc kubenswrapper[4881]: I0121 11:24:46.189307 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Jan 21 11:24:46 crc kubenswrapper[4881]: I0121 11:24:46.189522 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Jan 21 11:24:46 crc kubenswrapper[4881]: I0121 11:24:46.189635 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Jan 21 11:24:46 crc kubenswrapper[4881]: I0121 11:24:46.189749 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Jan 21 11:24:46 crc kubenswrapper[4881]: I0121 11:24:46.189893 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Jan 21 11:24:46 crc kubenswrapper[4881]: I0121 11:24:46.190489 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-tt7xn" Jan 21 11:24:46 crc kubenswrapper[4881]: I0121 11:24:46.201159 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 21 11:24:46 crc kubenswrapper[4881]: I0121 11:24:46.311807 4881 scope.go:117] "RemoveContainer" containerID="8d01a71cc3cebcfb692e7385a1f123f20c56f33df75b2fdeed7ba4c65dcb43ca" Jan 21 11:24:46 crc kubenswrapper[4881]: E0121 11:24:46.312183 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 11:24:46 crc kubenswrapper[4881]: I0121 11:24:46.674204 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/de7ea801-d184-48cf-a602-c82ff20892ff-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"de7ea801-d184-48cf-a602-c82ff20892ff\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 11:24:46 crc kubenswrapper[4881]: I0121 11:24:46.674307 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/de7ea801-d184-48cf-a602-c82ff20892ff-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"de7ea801-d184-48cf-a602-c82ff20892ff\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 11:24:46 crc kubenswrapper[4881]: I0121 11:24:46.674353 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/de7ea801-d184-48cf-a602-c82ff20892ff-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"de7ea801-d184-48cf-a602-c82ff20892ff\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 11:24:46 crc kubenswrapper[4881]: I0121 11:24:46.674376 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: 
\"kubernetes.io/projected/de7ea801-d184-48cf-a602-c82ff20892ff-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"de7ea801-d184-48cf-a602-c82ff20892ff\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 11:24:46 crc kubenswrapper[4881]: I0121 11:24:46.674447 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/de7ea801-d184-48cf-a602-c82ff20892ff-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"de7ea801-d184-48cf-a602-c82ff20892ff\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 11:24:46 crc kubenswrapper[4881]: I0121 11:24:46.674464 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/de7ea801-d184-48cf-a602-c82ff20892ff-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"de7ea801-d184-48cf-a602-c82ff20892ff\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 11:24:46 crc kubenswrapper[4881]: I0121 11:24:46.674513 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"de7ea801-d184-48cf-a602-c82ff20892ff\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 11:24:46 crc kubenswrapper[4881]: I0121 11:24:46.674535 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6cftw\" (UniqueName: \"kubernetes.io/projected/de7ea801-d184-48cf-a602-c82ff20892ff-kube-api-access-6cftw\") pod \"rabbitmq-cell1-server-0\" (UID: \"de7ea801-d184-48cf-a602-c82ff20892ff\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 11:24:46 crc kubenswrapper[4881]: I0121 11:24:46.674602 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/de7ea801-d184-48cf-a602-c82ff20892ff-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"de7ea801-d184-48cf-a602-c82ff20892ff\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 11:24:46 crc kubenswrapper[4881]: I0121 11:24:46.674677 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/de7ea801-d184-48cf-a602-c82ff20892ff-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"de7ea801-d184-48cf-a602-c82ff20892ff\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 11:24:46 crc kubenswrapper[4881]: I0121 11:24:46.674708 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/de7ea801-d184-48cf-a602-c82ff20892ff-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"de7ea801-d184-48cf-a602-c82ff20892ff\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 11:24:46 crc kubenswrapper[4881]: I0121 11:24:46.776566 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/de7ea801-d184-48cf-a602-c82ff20892ff-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"de7ea801-d184-48cf-a602-c82ff20892ff\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 11:24:46 crc kubenswrapper[4881]: I0121 11:24:46.776673 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/de7ea801-d184-48cf-a602-c82ff20892ff-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"de7ea801-d184-48cf-a602-c82ff20892ff\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 11:24:46 crc kubenswrapper[4881]: I0121 11:24:46.777722 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/de7ea801-d184-48cf-a602-c82ff20892ff-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"de7ea801-d184-48cf-a602-c82ff20892ff\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 11:24:46 crc kubenswrapper[4881]: I0121 11:24:46.777754 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/de7ea801-d184-48cf-a602-c82ff20892ff-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"de7ea801-d184-48cf-a602-c82ff20892ff\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 11:24:46 crc kubenswrapper[4881]: I0121 11:24:46.777922 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/de7ea801-d184-48cf-a602-c82ff20892ff-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"de7ea801-d184-48cf-a602-c82ff20892ff\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 11:24:46 crc kubenswrapper[4881]: I0121 11:24:46.778769 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/de7ea801-d184-48cf-a602-c82ff20892ff-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"de7ea801-d184-48cf-a602-c82ff20892ff\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 11:24:46 crc kubenswrapper[4881]: I0121 11:24:46.778936 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/de7ea801-d184-48cf-a602-c82ff20892ff-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"de7ea801-d184-48cf-a602-c82ff20892ff\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 11:24:46 crc kubenswrapper[4881]: I0121 11:24:46.778987 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"de7ea801-d184-48cf-a602-c82ff20892ff\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 11:24:46 crc kubenswrapper[4881]: I0121 11:24:46.779022 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6cftw\" (UniqueName: \"kubernetes.io/projected/de7ea801-d184-48cf-a602-c82ff20892ff-kube-api-access-6cftw\") pod \"rabbitmq-cell1-server-0\" (UID: \"de7ea801-d184-48cf-a602-c82ff20892ff\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 11:24:46 crc kubenswrapper[4881]: I0121 11:24:46.779192 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/de7ea801-d184-48cf-a602-c82ff20892ff-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"de7ea801-d184-48cf-a602-c82ff20892ff\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 11:24:46 crc kubenswrapper[4881]: I0121 11:24:46.779383 4881 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"de7ea801-d184-48cf-a602-c82ff20892ff\") device mount path \"/mnt/openstack/pv04\"" 
pod="openstack/rabbitmq-cell1-server-0" Jan 21 11:24:46 crc kubenswrapper[4881]: I0121 11:24:46.780024 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/de7ea801-d184-48cf-a602-c82ff20892ff-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"de7ea801-d184-48cf-a602-c82ff20892ff\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 11:24:46 crc kubenswrapper[4881]: I0121 11:24:46.780149 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/de7ea801-d184-48cf-a602-c82ff20892ff-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"de7ea801-d184-48cf-a602-c82ff20892ff\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 11:24:46 crc kubenswrapper[4881]: I0121 11:24:46.780210 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/de7ea801-d184-48cf-a602-c82ff20892ff-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"de7ea801-d184-48cf-a602-c82ff20892ff\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 11:24:46 crc kubenswrapper[4881]: I0121 11:24:46.780574 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/de7ea801-d184-48cf-a602-c82ff20892ff-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"de7ea801-d184-48cf-a602-c82ff20892ff\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 11:24:46 crc kubenswrapper[4881]: I0121 11:24:46.780600 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/de7ea801-d184-48cf-a602-c82ff20892ff-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"de7ea801-d184-48cf-a602-c82ff20892ff\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 11:24:46 crc kubenswrapper[4881]: I0121 11:24:46.780923 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/de7ea801-d184-48cf-a602-c82ff20892ff-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"de7ea801-d184-48cf-a602-c82ff20892ff\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 11:24:46 crc kubenswrapper[4881]: I0121 11:24:46.784389 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/de7ea801-d184-48cf-a602-c82ff20892ff-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"de7ea801-d184-48cf-a602-c82ff20892ff\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 11:24:46 crc kubenswrapper[4881]: I0121 11:24:46.784774 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/de7ea801-d184-48cf-a602-c82ff20892ff-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"de7ea801-d184-48cf-a602-c82ff20892ff\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 11:24:46 crc kubenswrapper[4881]: I0121 11:24:46.785293 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/de7ea801-d184-48cf-a602-c82ff20892ff-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"de7ea801-d184-48cf-a602-c82ff20892ff\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 11:24:46 crc kubenswrapper[4881]: I0121 11:24:46.793158 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/de7ea801-d184-48cf-a602-c82ff20892ff-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"de7ea801-d184-48cf-a602-c82ff20892ff\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 11:24:46 crc kubenswrapper[4881]: I0121 11:24:46.798454 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6cftw\" (UniqueName: \"kubernetes.io/projected/de7ea801-d184-48cf-a602-c82ff20892ff-kube-api-access-6cftw\") pod \"rabbitmq-cell1-server-0\" (UID: \"de7ea801-d184-48cf-a602-c82ff20892ff\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 11:24:46 crc kubenswrapper[4881]: I0121 11:24:46.824717 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"de7ea801-d184-48cf-a602-c82ff20892ff\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 11:24:47 crc kubenswrapper[4881]: I0121 11:24:47.120564 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 21 11:24:47 crc kubenswrapper[4881]: I0121 11:24:47.336079 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="078c2368-b247-49d4-8723-fd93918e99b1" path="/var/lib/kubelet/pods/078c2368-b247-49d4-8723-fd93918e99b1/volumes" Jan 21 11:24:47 crc kubenswrapper[4881]: I0121 11:24:47.614161 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 21 11:24:48 crc kubenswrapper[4881]: I0121 11:24:48.055247 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"de7ea801-d184-48cf-a602-c82ff20892ff","Type":"ContainerStarted","Data":"dc4b5e0e4224dd4ec733e65a2e91278b819f3625a2a848cb9582dcac2e68f27e"} Jan 21 11:24:49 crc kubenswrapper[4881]: I0121 11:24:49.068063 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"35a19b99-eed0-4383-bea5-cf43d84a5a3e","Type":"ContainerStarted","Data":"634428d31431025fdccf3934e18d58dc33fc9e53d8e3c10e3fc62735d4af9040"} Jan 21 11:24:56 crc kubenswrapper[4881]: I0121 11:24:56.135930 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-8676bcc57f-wp596"] Jan 21 11:24:56 crc kubenswrapper[4881]: I0121 11:24:56.139099 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8676bcc57f-wp596" Jan 21 11:24:56 crc kubenswrapper[4881]: I0121 11:24:56.141321 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Jan 21 11:24:56 crc kubenswrapper[4881]: I0121 11:24:56.156947 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8676bcc57f-wp596"] Jan 21 11:24:56 crc kubenswrapper[4881]: I0121 11:24:56.273381 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/ec2fab32-4eac-4a26-9ddb-40132e94976f-openstack-edpm-ipam\") pod \"dnsmasq-dns-8676bcc57f-wp596\" (UID: \"ec2fab32-4eac-4a26-9ddb-40132e94976f\") " pod="openstack/dnsmasq-dns-8676bcc57f-wp596" Jan 21 11:24:56 crc kubenswrapper[4881]: I0121 11:24:56.273775 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ec2fab32-4eac-4a26-9ddb-40132e94976f-ovsdbserver-nb\") pod \"dnsmasq-dns-8676bcc57f-wp596\" (UID: \"ec2fab32-4eac-4a26-9ddb-40132e94976f\") " pod="openstack/dnsmasq-dns-8676bcc57f-wp596" Jan 21 11:24:56 crc kubenswrapper[4881]: I0121 11:24:56.273866 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ec2fab32-4eac-4a26-9ddb-40132e94976f-ovsdbserver-sb\") pod \"dnsmasq-dns-8676bcc57f-wp596\" (UID: \"ec2fab32-4eac-4a26-9ddb-40132e94976f\") " pod="openstack/dnsmasq-dns-8676bcc57f-wp596" Jan 21 11:24:56 crc kubenswrapper[4881]: I0121 11:24:56.273940 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ec2fab32-4eac-4a26-9ddb-40132e94976f-dns-swift-storage-0\") pod \"dnsmasq-dns-8676bcc57f-wp596\" (UID: \"ec2fab32-4eac-4a26-9ddb-40132e94976f\") " pod="openstack/dnsmasq-dns-8676bcc57f-wp596" Jan 21 11:24:56 crc kubenswrapper[4881]: I0121 11:24:56.273979 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ec2fab32-4eac-4a26-9ddb-40132e94976f-dns-svc\") pod \"dnsmasq-dns-8676bcc57f-wp596\" (UID: \"ec2fab32-4eac-4a26-9ddb-40132e94976f\") " pod="openstack/dnsmasq-dns-8676bcc57f-wp596" Jan 21 11:24:56 crc kubenswrapper[4881]: I0121 11:24:56.274057 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec2fab32-4eac-4a26-9ddb-40132e94976f-config\") pod \"dnsmasq-dns-8676bcc57f-wp596\" (UID: \"ec2fab32-4eac-4a26-9ddb-40132e94976f\") " pod="openstack/dnsmasq-dns-8676bcc57f-wp596" Jan 21 11:24:56 crc kubenswrapper[4881]: I0121 11:24:56.274112 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bvnw9\" (UniqueName: \"kubernetes.io/projected/ec2fab32-4eac-4a26-9ddb-40132e94976f-kube-api-access-bvnw9\") pod \"dnsmasq-dns-8676bcc57f-wp596\" (UID: \"ec2fab32-4eac-4a26-9ddb-40132e94976f\") " pod="openstack/dnsmasq-dns-8676bcc57f-wp596" Jan 21 11:24:56 crc kubenswrapper[4881]: I0121 11:24:56.376216 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ec2fab32-4eac-4a26-9ddb-40132e94976f-ovsdbserver-nb\") pod \"dnsmasq-dns-8676bcc57f-wp596\" 
(UID: \"ec2fab32-4eac-4a26-9ddb-40132e94976f\") " pod="openstack/dnsmasq-dns-8676bcc57f-wp596" Jan 21 11:24:56 crc kubenswrapper[4881]: I0121 11:24:56.376372 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ec2fab32-4eac-4a26-9ddb-40132e94976f-ovsdbserver-sb\") pod \"dnsmasq-dns-8676bcc57f-wp596\" (UID: \"ec2fab32-4eac-4a26-9ddb-40132e94976f\") " pod="openstack/dnsmasq-dns-8676bcc57f-wp596" Jan 21 11:24:56 crc kubenswrapper[4881]: I0121 11:24:56.376465 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ec2fab32-4eac-4a26-9ddb-40132e94976f-dns-swift-storage-0\") pod \"dnsmasq-dns-8676bcc57f-wp596\" (UID: \"ec2fab32-4eac-4a26-9ddb-40132e94976f\") " pod="openstack/dnsmasq-dns-8676bcc57f-wp596" Jan 21 11:24:56 crc kubenswrapper[4881]: I0121 11:24:56.376510 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ec2fab32-4eac-4a26-9ddb-40132e94976f-dns-svc\") pod \"dnsmasq-dns-8676bcc57f-wp596\" (UID: \"ec2fab32-4eac-4a26-9ddb-40132e94976f\") " pod="openstack/dnsmasq-dns-8676bcc57f-wp596" Jan 21 11:24:56 crc kubenswrapper[4881]: I0121 11:24:56.376596 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec2fab32-4eac-4a26-9ddb-40132e94976f-config\") pod \"dnsmasq-dns-8676bcc57f-wp596\" (UID: \"ec2fab32-4eac-4a26-9ddb-40132e94976f\") " pod="openstack/dnsmasq-dns-8676bcc57f-wp596" Jan 21 11:24:56 crc kubenswrapper[4881]: I0121 11:24:56.376652 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bvnw9\" (UniqueName: \"kubernetes.io/projected/ec2fab32-4eac-4a26-9ddb-40132e94976f-kube-api-access-bvnw9\") pod \"dnsmasq-dns-8676bcc57f-wp596\" (UID: \"ec2fab32-4eac-4a26-9ddb-40132e94976f\") " pod="openstack/dnsmasq-dns-8676bcc57f-wp596" Jan 21 11:24:56 crc kubenswrapper[4881]: I0121 11:24:56.377925 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ec2fab32-4eac-4a26-9ddb-40132e94976f-dns-svc\") pod \"dnsmasq-dns-8676bcc57f-wp596\" (UID: \"ec2fab32-4eac-4a26-9ddb-40132e94976f\") " pod="openstack/dnsmasq-dns-8676bcc57f-wp596" Jan 21 11:24:56 crc kubenswrapper[4881]: I0121 11:24:56.378008 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ec2fab32-4eac-4a26-9ddb-40132e94976f-dns-swift-storage-0\") pod \"dnsmasq-dns-8676bcc57f-wp596\" (UID: \"ec2fab32-4eac-4a26-9ddb-40132e94976f\") " pod="openstack/dnsmasq-dns-8676bcc57f-wp596" Jan 21 11:24:56 crc kubenswrapper[4881]: I0121 11:24:56.378513 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec2fab32-4eac-4a26-9ddb-40132e94976f-config\") pod \"dnsmasq-dns-8676bcc57f-wp596\" (UID: \"ec2fab32-4eac-4a26-9ddb-40132e94976f\") " pod="openstack/dnsmasq-dns-8676bcc57f-wp596" Jan 21 11:24:56 crc kubenswrapper[4881]: I0121 11:24:56.378851 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ec2fab32-4eac-4a26-9ddb-40132e94976f-ovsdbserver-sb\") pod \"dnsmasq-dns-8676bcc57f-wp596\" (UID: \"ec2fab32-4eac-4a26-9ddb-40132e94976f\") " pod="openstack/dnsmasq-dns-8676bcc57f-wp596" Jan 21 11:24:56 
crc kubenswrapper[4881]: I0121 11:24:56.379201 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ec2fab32-4eac-4a26-9ddb-40132e94976f-ovsdbserver-nb\") pod \"dnsmasq-dns-8676bcc57f-wp596\" (UID: \"ec2fab32-4eac-4a26-9ddb-40132e94976f\") " pod="openstack/dnsmasq-dns-8676bcc57f-wp596" Jan 21 11:24:56 crc kubenswrapper[4881]: I0121 11:24:56.379250 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/ec2fab32-4eac-4a26-9ddb-40132e94976f-openstack-edpm-ipam\") pod \"dnsmasq-dns-8676bcc57f-wp596\" (UID: \"ec2fab32-4eac-4a26-9ddb-40132e94976f\") " pod="openstack/dnsmasq-dns-8676bcc57f-wp596" Jan 21 11:24:56 crc kubenswrapper[4881]: I0121 11:24:56.380123 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/ec2fab32-4eac-4a26-9ddb-40132e94976f-openstack-edpm-ipam\") pod \"dnsmasq-dns-8676bcc57f-wp596\" (UID: \"ec2fab32-4eac-4a26-9ddb-40132e94976f\") " pod="openstack/dnsmasq-dns-8676bcc57f-wp596" Jan 21 11:24:56 crc kubenswrapper[4881]: I0121 11:24:56.403358 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bvnw9\" (UniqueName: \"kubernetes.io/projected/ec2fab32-4eac-4a26-9ddb-40132e94976f-kube-api-access-bvnw9\") pod \"dnsmasq-dns-8676bcc57f-wp596\" (UID: \"ec2fab32-4eac-4a26-9ddb-40132e94976f\") " pod="openstack/dnsmasq-dns-8676bcc57f-wp596" Jan 21 11:24:56 crc kubenswrapper[4881]: I0121 11:24:56.461625 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8676bcc57f-wp596" Jan 21 11:24:57 crc kubenswrapper[4881]: I0121 11:24:57.186251 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"de7ea801-d184-48cf-a602-c82ff20892ff","Type":"ContainerStarted","Data":"8e68b25c764b9e0b867a6f82b7e2e448c02c2d37267bc95d906ed96df4996747"} Jan 21 11:24:57 crc kubenswrapper[4881]: I0121 11:24:57.228162 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8676bcc57f-wp596"] Jan 21 11:24:58 crc kubenswrapper[4881]: I0121 11:24:58.195653 4881 generic.go:334] "Generic (PLEG): container finished" podID="ec2fab32-4eac-4a26-9ddb-40132e94976f" containerID="65cfd2dd1128a88bc70d491463496b79cfb2dcc5abc049d917dd83ad5f45761a" exitCode=0 Jan 21 11:24:58 crc kubenswrapper[4881]: I0121 11:24:58.195703 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8676bcc57f-wp596" event={"ID":"ec2fab32-4eac-4a26-9ddb-40132e94976f","Type":"ContainerDied","Data":"65cfd2dd1128a88bc70d491463496b79cfb2dcc5abc049d917dd83ad5f45761a"} Jan 21 11:24:58 crc kubenswrapper[4881]: I0121 11:24:58.196216 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8676bcc57f-wp596" event={"ID":"ec2fab32-4eac-4a26-9ddb-40132e94976f","Type":"ContainerStarted","Data":"81049c0c5e8e6d15434e36288df117ccffe86a12005f731fb0b39ecb31197cdc"} Jan 21 11:24:59 crc kubenswrapper[4881]: I0121 11:24:59.207846 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8676bcc57f-wp596" event={"ID":"ec2fab32-4eac-4a26-9ddb-40132e94976f","Type":"ContainerStarted","Data":"3ebc49c540ff95bec9f3779f43c3effaa601aed7e73346317b526874af0e6390"} Jan 21 11:24:59 crc kubenswrapper[4881]: I0121 11:24:59.208153 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/dnsmasq-dns-8676bcc57f-wp596" Jan 21 11:24:59 crc kubenswrapper[4881]: I0121 11:24:59.227705 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-8676bcc57f-wp596" podStartSLOduration=3.227685072 podStartE2EDuration="3.227685072s" podCreationTimestamp="2026-01-21 11:24:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 11:24:59.227081776 +0000 UTC m=+1686.487038265" watchObservedRunningTime="2026-01-21 11:24:59.227685072 +0000 UTC m=+1686.487641541" Jan 21 11:25:01 crc kubenswrapper[4881]: I0121 11:25:01.311440 4881 scope.go:117] "RemoveContainer" containerID="8d01a71cc3cebcfb692e7385a1f123f20c56f33df75b2fdeed7ba4c65dcb43ca" Jan 21 11:25:01 crc kubenswrapper[4881]: E0121 11:25:01.313090 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 11:25:06 crc kubenswrapper[4881]: I0121 11:25:06.463995 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-8676bcc57f-wp596" Jan 21 11:25:06 crc kubenswrapper[4881]: I0121 11:25:06.531984 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6d4b6b54d9-5jzpq"] Jan 21 11:25:06 crc kubenswrapper[4881]: I0121 11:25:06.532289 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6d4b6b54d9-5jzpq" podUID="81dbec06-59d7-4c42-a558-910811fb3811" containerName="dnsmasq-dns" containerID="cri-o://a807273d95c9864f3ecabade018dc0a91eb28a83bcfcbef9786d9473502a12a5" gracePeriod=10 Jan 21 11:25:06 crc kubenswrapper[4881]: I0121 11:25:06.698092 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-59596cff49-cpxcq"] Jan 21 11:25:06 crc kubenswrapper[4881]: I0121 11:25:06.706068 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-59596cff49-cpxcq" Jan 21 11:25:06 crc kubenswrapper[4881]: I0121 11:25:06.729351 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-59596cff49-cpxcq"] Jan 21 11:25:06 crc kubenswrapper[4881]: I0121 11:25:06.797961 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/a08dbd57-125f-4ca2-b166-434068ee9432-openstack-edpm-ipam\") pod \"dnsmasq-dns-59596cff49-cpxcq\" (UID: \"a08dbd57-125f-4ca2-b166-434068ee9432\") " pod="openstack/dnsmasq-dns-59596cff49-cpxcq" Jan 21 11:25:06 crc kubenswrapper[4881]: I0121 11:25:06.798020 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a08dbd57-125f-4ca2-b166-434068ee9432-dns-svc\") pod \"dnsmasq-dns-59596cff49-cpxcq\" (UID: \"a08dbd57-125f-4ca2-b166-434068ee9432\") " pod="openstack/dnsmasq-dns-59596cff49-cpxcq" Jan 21 11:25:06 crc kubenswrapper[4881]: I0121 11:25:06.798089 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a08dbd57-125f-4ca2-b166-434068ee9432-config\") pod \"dnsmasq-dns-59596cff49-cpxcq\" (UID: \"a08dbd57-125f-4ca2-b166-434068ee9432\") " pod="openstack/dnsmasq-dns-59596cff49-cpxcq" Jan 21 11:25:06 crc kubenswrapper[4881]: I0121 11:25:06.798133 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a08dbd57-125f-4ca2-b166-434068ee9432-ovsdbserver-nb\") pod \"dnsmasq-dns-59596cff49-cpxcq\" (UID: \"a08dbd57-125f-4ca2-b166-434068ee9432\") " pod="openstack/dnsmasq-dns-59596cff49-cpxcq" Jan 21 11:25:06 crc kubenswrapper[4881]: I0121 11:25:06.798168 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a08dbd57-125f-4ca2-b166-434068ee9432-ovsdbserver-sb\") pod \"dnsmasq-dns-59596cff49-cpxcq\" (UID: \"a08dbd57-125f-4ca2-b166-434068ee9432\") " pod="openstack/dnsmasq-dns-59596cff49-cpxcq" Jan 21 11:25:06 crc kubenswrapper[4881]: I0121 11:25:06.798239 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g4x2d\" (UniqueName: \"kubernetes.io/projected/a08dbd57-125f-4ca2-b166-434068ee9432-kube-api-access-g4x2d\") pod \"dnsmasq-dns-59596cff49-cpxcq\" (UID: \"a08dbd57-125f-4ca2-b166-434068ee9432\") " pod="openstack/dnsmasq-dns-59596cff49-cpxcq" Jan 21 11:25:06 crc kubenswrapper[4881]: I0121 11:25:06.798281 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a08dbd57-125f-4ca2-b166-434068ee9432-dns-swift-storage-0\") pod \"dnsmasq-dns-59596cff49-cpxcq\" (UID: \"a08dbd57-125f-4ca2-b166-434068ee9432\") " pod="openstack/dnsmasq-dns-59596cff49-cpxcq" Jan 21 11:25:06 crc kubenswrapper[4881]: I0121 11:25:06.900860 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a08dbd57-125f-4ca2-b166-434068ee9432-ovsdbserver-sb\") pod \"dnsmasq-dns-59596cff49-cpxcq\" (UID: \"a08dbd57-125f-4ca2-b166-434068ee9432\") " pod="openstack/dnsmasq-dns-59596cff49-cpxcq" Jan 21 11:25:06 crc kubenswrapper[4881]: I0121 11:25:06.900972 4881 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g4x2d\" (UniqueName: \"kubernetes.io/projected/a08dbd57-125f-4ca2-b166-434068ee9432-kube-api-access-g4x2d\") pod \"dnsmasq-dns-59596cff49-cpxcq\" (UID: \"a08dbd57-125f-4ca2-b166-434068ee9432\") " pod="openstack/dnsmasq-dns-59596cff49-cpxcq" Jan 21 11:25:06 crc kubenswrapper[4881]: I0121 11:25:06.900993 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a08dbd57-125f-4ca2-b166-434068ee9432-dns-swift-storage-0\") pod \"dnsmasq-dns-59596cff49-cpxcq\" (UID: \"a08dbd57-125f-4ca2-b166-434068ee9432\") " pod="openstack/dnsmasq-dns-59596cff49-cpxcq" Jan 21 11:25:06 crc kubenswrapper[4881]: I0121 11:25:06.901034 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/a08dbd57-125f-4ca2-b166-434068ee9432-openstack-edpm-ipam\") pod \"dnsmasq-dns-59596cff49-cpxcq\" (UID: \"a08dbd57-125f-4ca2-b166-434068ee9432\") " pod="openstack/dnsmasq-dns-59596cff49-cpxcq" Jan 21 11:25:06 crc kubenswrapper[4881]: I0121 11:25:06.901058 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a08dbd57-125f-4ca2-b166-434068ee9432-dns-svc\") pod \"dnsmasq-dns-59596cff49-cpxcq\" (UID: \"a08dbd57-125f-4ca2-b166-434068ee9432\") " pod="openstack/dnsmasq-dns-59596cff49-cpxcq" Jan 21 11:25:06 crc kubenswrapper[4881]: I0121 11:25:06.901116 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a08dbd57-125f-4ca2-b166-434068ee9432-config\") pod \"dnsmasq-dns-59596cff49-cpxcq\" (UID: \"a08dbd57-125f-4ca2-b166-434068ee9432\") " pod="openstack/dnsmasq-dns-59596cff49-cpxcq" Jan 21 11:25:06 crc kubenswrapper[4881]: I0121 11:25:06.901158 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a08dbd57-125f-4ca2-b166-434068ee9432-ovsdbserver-nb\") pod \"dnsmasq-dns-59596cff49-cpxcq\" (UID: \"a08dbd57-125f-4ca2-b166-434068ee9432\") " pod="openstack/dnsmasq-dns-59596cff49-cpxcq" Jan 21 11:25:06 crc kubenswrapper[4881]: I0121 11:25:06.902285 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a08dbd57-125f-4ca2-b166-434068ee9432-ovsdbserver-sb\") pod \"dnsmasq-dns-59596cff49-cpxcq\" (UID: \"a08dbd57-125f-4ca2-b166-434068ee9432\") " pod="openstack/dnsmasq-dns-59596cff49-cpxcq" Jan 21 11:25:06 crc kubenswrapper[4881]: I0121 11:25:06.902545 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a08dbd57-125f-4ca2-b166-434068ee9432-ovsdbserver-nb\") pod \"dnsmasq-dns-59596cff49-cpxcq\" (UID: \"a08dbd57-125f-4ca2-b166-434068ee9432\") " pod="openstack/dnsmasq-dns-59596cff49-cpxcq" Jan 21 11:25:06 crc kubenswrapper[4881]: I0121 11:25:06.902607 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/a08dbd57-125f-4ca2-b166-434068ee9432-openstack-edpm-ipam\") pod \"dnsmasq-dns-59596cff49-cpxcq\" (UID: \"a08dbd57-125f-4ca2-b166-434068ee9432\") " pod="openstack/dnsmasq-dns-59596cff49-cpxcq" Jan 21 11:25:06 crc kubenswrapper[4881]: I0121 11:25:06.902684 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a08dbd57-125f-4ca2-b166-434068ee9432-dns-svc\") pod \"dnsmasq-dns-59596cff49-cpxcq\" (UID: \"a08dbd57-125f-4ca2-b166-434068ee9432\") " pod="openstack/dnsmasq-dns-59596cff49-cpxcq" Jan 21 11:25:06 crc kubenswrapper[4881]: I0121 11:25:06.902770 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a08dbd57-125f-4ca2-b166-434068ee9432-config\") pod \"dnsmasq-dns-59596cff49-cpxcq\" (UID: \"a08dbd57-125f-4ca2-b166-434068ee9432\") " pod="openstack/dnsmasq-dns-59596cff49-cpxcq" Jan 21 11:25:06 crc kubenswrapper[4881]: I0121 11:25:06.902774 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a08dbd57-125f-4ca2-b166-434068ee9432-dns-swift-storage-0\") pod \"dnsmasq-dns-59596cff49-cpxcq\" (UID: \"a08dbd57-125f-4ca2-b166-434068ee9432\") " pod="openstack/dnsmasq-dns-59596cff49-cpxcq" Jan 21 11:25:06 crc kubenswrapper[4881]: I0121 11:25:06.926499 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g4x2d\" (UniqueName: \"kubernetes.io/projected/a08dbd57-125f-4ca2-b166-434068ee9432-kube-api-access-g4x2d\") pod \"dnsmasq-dns-59596cff49-cpxcq\" (UID: \"a08dbd57-125f-4ca2-b166-434068ee9432\") " pod="openstack/dnsmasq-dns-59596cff49-cpxcq" Jan 21 11:25:07 crc kubenswrapper[4881]: I0121 11:25:07.046633 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-59596cff49-cpxcq" Jan 21 11:25:07 crc kubenswrapper[4881]: I0121 11:25:07.372360 4881 generic.go:334] "Generic (PLEG): container finished" podID="81dbec06-59d7-4c42-a558-910811fb3811" containerID="a807273d95c9864f3ecabade018dc0a91eb28a83bcfcbef9786d9473502a12a5" exitCode=0 Jan 21 11:25:07 crc kubenswrapper[4881]: I0121 11:25:07.395109 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d4b6b54d9-5jzpq" event={"ID":"81dbec06-59d7-4c42-a558-910811fb3811","Type":"ContainerDied","Data":"a807273d95c9864f3ecabade018dc0a91eb28a83bcfcbef9786d9473502a12a5"} Jan 21 11:25:07 crc kubenswrapper[4881]: I0121 11:25:07.395159 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d4b6b54d9-5jzpq" event={"ID":"81dbec06-59d7-4c42-a558-910811fb3811","Type":"ContainerDied","Data":"14e34995d6813b59d5fbddbd68a531e00edeb5c9ae370d72d56de9da156f7345"} Jan 21 11:25:07 crc kubenswrapper[4881]: I0121 11:25:07.395172 4881 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="14e34995d6813b59d5fbddbd68a531e00edeb5c9ae370d72d56de9da156f7345" Jan 21 11:25:07 crc kubenswrapper[4881]: I0121 11:25:07.396633 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6d4b6b54d9-5jzpq" Jan 21 11:25:07 crc kubenswrapper[4881]: I0121 11:25:07.546602 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/81dbec06-59d7-4c42-a558-910811fb3811-ovsdbserver-nb\") pod \"81dbec06-59d7-4c42-a558-910811fb3811\" (UID: \"81dbec06-59d7-4c42-a558-910811fb3811\") " Jan 21 11:25:07 crc kubenswrapper[4881]: I0121 11:25:07.547493 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lwg4c\" (UniqueName: \"kubernetes.io/projected/81dbec06-59d7-4c42-a558-910811fb3811-kube-api-access-lwg4c\") pod \"81dbec06-59d7-4c42-a558-910811fb3811\" (UID: \"81dbec06-59d7-4c42-a558-910811fb3811\") " Jan 21 11:25:07 crc kubenswrapper[4881]: I0121 11:25:07.547714 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/81dbec06-59d7-4c42-a558-910811fb3811-dns-swift-storage-0\") pod \"81dbec06-59d7-4c42-a558-910811fb3811\" (UID: \"81dbec06-59d7-4c42-a558-910811fb3811\") " Jan 21 11:25:07 crc kubenswrapper[4881]: I0121 11:25:07.547821 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/81dbec06-59d7-4c42-a558-910811fb3811-config\") pod \"81dbec06-59d7-4c42-a558-910811fb3811\" (UID: \"81dbec06-59d7-4c42-a558-910811fb3811\") " Jan 21 11:25:07 crc kubenswrapper[4881]: I0121 11:25:07.547875 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/81dbec06-59d7-4c42-a558-910811fb3811-dns-svc\") pod \"81dbec06-59d7-4c42-a558-910811fb3811\" (UID: \"81dbec06-59d7-4c42-a558-910811fb3811\") " Jan 21 11:25:07 crc kubenswrapper[4881]: I0121 11:25:07.547927 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/81dbec06-59d7-4c42-a558-910811fb3811-ovsdbserver-sb\") pod \"81dbec06-59d7-4c42-a558-910811fb3811\" (UID: \"81dbec06-59d7-4c42-a558-910811fb3811\") " Jan 21 11:25:07 crc kubenswrapper[4881]: I0121 11:25:07.574682 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81dbec06-59d7-4c42-a558-910811fb3811-kube-api-access-lwg4c" (OuterVolumeSpecName: "kube-api-access-lwg4c") pod "81dbec06-59d7-4c42-a558-910811fb3811" (UID: "81dbec06-59d7-4c42-a558-910811fb3811"). InnerVolumeSpecName "kube-api-access-lwg4c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:25:07 crc kubenswrapper[4881]: I0121 11:25:07.617631 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81dbec06-59d7-4c42-a558-910811fb3811-config" (OuterVolumeSpecName: "config") pod "81dbec06-59d7-4c42-a558-910811fb3811" (UID: "81dbec06-59d7-4c42-a558-910811fb3811"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:25:07 crc kubenswrapper[4881]: I0121 11:25:07.623136 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81dbec06-59d7-4c42-a558-910811fb3811-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "81dbec06-59d7-4c42-a558-910811fb3811" (UID: "81dbec06-59d7-4c42-a558-910811fb3811"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:25:07 crc kubenswrapper[4881]: I0121 11:25:07.637679 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81dbec06-59d7-4c42-a558-910811fb3811-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "81dbec06-59d7-4c42-a558-910811fb3811" (UID: "81dbec06-59d7-4c42-a558-910811fb3811"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:25:07 crc kubenswrapper[4881]: I0121 11:25:07.638904 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81dbec06-59d7-4c42-a558-910811fb3811-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "81dbec06-59d7-4c42-a558-910811fb3811" (UID: "81dbec06-59d7-4c42-a558-910811fb3811"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:25:07 crc kubenswrapper[4881]: I0121 11:25:07.651247 4881 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/81dbec06-59d7-4c42-a558-910811fb3811-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 21 11:25:07 crc kubenswrapper[4881]: I0121 11:25:07.651295 4881 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/81dbec06-59d7-4c42-a558-910811fb3811-config\") on node \"crc\" DevicePath \"\"" Jan 21 11:25:07 crc kubenswrapper[4881]: I0121 11:25:07.651312 4881 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/81dbec06-59d7-4c42-a558-910811fb3811-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 21 11:25:07 crc kubenswrapper[4881]: I0121 11:25:07.651324 4881 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/81dbec06-59d7-4c42-a558-910811fb3811-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 21 11:25:07 crc kubenswrapper[4881]: I0121 11:25:07.651336 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lwg4c\" (UniqueName: \"kubernetes.io/projected/81dbec06-59d7-4c42-a558-910811fb3811-kube-api-access-lwg4c\") on node \"crc\" DevicePath \"\"" Jan 21 11:25:07 crc kubenswrapper[4881]: I0121 11:25:07.683474 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81dbec06-59d7-4c42-a558-910811fb3811-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "81dbec06-59d7-4c42-a558-910811fb3811" (UID: "81dbec06-59d7-4c42-a558-910811fb3811"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:25:07 crc kubenswrapper[4881]: I0121 11:25:07.753458 4881 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/81dbec06-59d7-4c42-a558-910811fb3811-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 21 11:25:07 crc kubenswrapper[4881]: I0121 11:25:07.798821 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-59596cff49-cpxcq"] Jan 21 11:25:07 crc kubenswrapper[4881]: W0121 11:25:07.803617 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda08dbd57_125f_4ca2_b166_434068ee9432.slice/crio-963f766e0019513a258fc50bf0d251df7fbc1e6635d9d8cab51e022c49eee27b WatchSource:0}: Error finding container 963f766e0019513a258fc50bf0d251df7fbc1e6635d9d8cab51e022c49eee27b: Status 404 returned error can't find the container with id 963f766e0019513a258fc50bf0d251df7fbc1e6635d9d8cab51e022c49eee27b Jan 21 11:25:08 crc kubenswrapper[4881]: I0121 11:25:08.406987 4881 generic.go:334] "Generic (PLEG): container finished" podID="a08dbd57-125f-4ca2-b166-434068ee9432" containerID="bba85260be07f097ed4444f9ead41161f18f05f9b642a209ac057f05e683cd36" exitCode=0 Jan 21 11:25:08 crc kubenswrapper[4881]: I0121 11:25:08.407529 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6d4b6b54d9-5jzpq" Jan 21 11:25:08 crc kubenswrapper[4881]: I0121 11:25:08.407239 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59596cff49-cpxcq" event={"ID":"a08dbd57-125f-4ca2-b166-434068ee9432","Type":"ContainerDied","Data":"bba85260be07f097ed4444f9ead41161f18f05f9b642a209ac057f05e683cd36"} Jan 21 11:25:08 crc kubenswrapper[4881]: I0121 11:25:08.409412 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59596cff49-cpxcq" event={"ID":"a08dbd57-125f-4ca2-b166-434068ee9432","Type":"ContainerStarted","Data":"963f766e0019513a258fc50bf0d251df7fbc1e6635d9d8cab51e022c49eee27b"} Jan 21 11:25:08 crc kubenswrapper[4881]: I0121 11:25:08.505035 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6d4b6b54d9-5jzpq"] Jan 21 11:25:08 crc kubenswrapper[4881]: I0121 11:25:08.515911 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6d4b6b54d9-5jzpq"] Jan 21 11:25:09 crc kubenswrapper[4881]: I0121 11:25:09.325003 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="81dbec06-59d7-4c42-a558-910811fb3811" path="/var/lib/kubelet/pods/81dbec06-59d7-4c42-a558-910811fb3811/volumes" Jan 21 11:25:09 crc kubenswrapper[4881]: I0121 11:25:09.423200 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59596cff49-cpxcq" event={"ID":"a08dbd57-125f-4ca2-b166-434068ee9432","Type":"ContainerStarted","Data":"818f72e3c5f9d0f5c6e8c41d19fec30d6ec474a92c13b5e8032090ea9a66c126"} Jan 21 11:25:09 crc kubenswrapper[4881]: I0121 11:25:09.423620 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-59596cff49-cpxcq" Jan 21 11:25:09 crc kubenswrapper[4881]: I0121 11:25:09.467747 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-59596cff49-cpxcq" podStartSLOduration=3.467713663 podStartE2EDuration="3.467713663s" podCreationTimestamp="2026-01-21 11:25:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 11:25:09.456726164 +0000 UTC m=+1696.716682633" watchObservedRunningTime="2026-01-21 11:25:09.467713663 +0000 UTC m=+1696.727670152" Jan 21 11:25:13 crc kubenswrapper[4881]: I0121 11:25:13.317993 4881 scope.go:117] "RemoveContainer" containerID="8d01a71cc3cebcfb692e7385a1f123f20c56f33df75b2fdeed7ba4c65dcb43ca" Jan 21 11:25:13 crc kubenswrapper[4881]: E0121 11:25:13.318728 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 11:25:17 crc kubenswrapper[4881]: I0121 11:25:17.048017 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-59596cff49-cpxcq" Jan 21 11:25:17 crc kubenswrapper[4881]: I0121 11:25:17.134906 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8676bcc57f-wp596"] Jan 21 11:25:17 crc kubenswrapper[4881]: I0121 11:25:17.135191 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-8676bcc57f-wp596" podUID="ec2fab32-4eac-4a26-9ddb-40132e94976f" containerName="dnsmasq-dns" containerID="cri-o://3ebc49c540ff95bec9f3779f43c3effaa601aed7e73346317b526874af0e6390" gracePeriod=10 Jan 21 11:25:17 crc kubenswrapper[4881]: I0121 11:25:17.558900 4881 generic.go:334] "Generic (PLEG): container finished" podID="ec2fab32-4eac-4a26-9ddb-40132e94976f" containerID="3ebc49c540ff95bec9f3779f43c3effaa601aed7e73346317b526874af0e6390" exitCode=0 Jan 21 11:25:17 crc kubenswrapper[4881]: I0121 11:25:17.558981 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8676bcc57f-wp596" event={"ID":"ec2fab32-4eac-4a26-9ddb-40132e94976f","Type":"ContainerDied","Data":"3ebc49c540ff95bec9f3779f43c3effaa601aed7e73346317b526874af0e6390"} Jan 21 11:25:17 crc kubenswrapper[4881]: I0121 11:25:17.644395 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8676bcc57f-wp596" Jan 21 11:25:17 crc kubenswrapper[4881]: I0121 11:25:17.707665 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ec2fab32-4eac-4a26-9ddb-40132e94976f-dns-swift-storage-0\") pod \"ec2fab32-4eac-4a26-9ddb-40132e94976f\" (UID: \"ec2fab32-4eac-4a26-9ddb-40132e94976f\") " Jan 21 11:25:17 crc kubenswrapper[4881]: I0121 11:25:17.707981 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ec2fab32-4eac-4a26-9ddb-40132e94976f-ovsdbserver-nb\") pod \"ec2fab32-4eac-4a26-9ddb-40132e94976f\" (UID: \"ec2fab32-4eac-4a26-9ddb-40132e94976f\") " Jan 21 11:25:17 crc kubenswrapper[4881]: I0121 11:25:17.708028 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ec2fab32-4eac-4a26-9ddb-40132e94976f-dns-svc\") pod \"ec2fab32-4eac-4a26-9ddb-40132e94976f\" (UID: \"ec2fab32-4eac-4a26-9ddb-40132e94976f\") " Jan 21 11:25:17 crc kubenswrapper[4881]: I0121 11:25:17.708109 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/ec2fab32-4eac-4a26-9ddb-40132e94976f-openstack-edpm-ipam\") pod \"ec2fab32-4eac-4a26-9ddb-40132e94976f\" (UID: \"ec2fab32-4eac-4a26-9ddb-40132e94976f\") " Jan 21 11:25:17 crc kubenswrapper[4881]: I0121 11:25:17.708206 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bvnw9\" (UniqueName: \"kubernetes.io/projected/ec2fab32-4eac-4a26-9ddb-40132e94976f-kube-api-access-bvnw9\") pod \"ec2fab32-4eac-4a26-9ddb-40132e94976f\" (UID: \"ec2fab32-4eac-4a26-9ddb-40132e94976f\") " Jan 21 11:25:17 crc kubenswrapper[4881]: I0121 11:25:17.708306 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ec2fab32-4eac-4a26-9ddb-40132e94976f-ovsdbserver-sb\") pod \"ec2fab32-4eac-4a26-9ddb-40132e94976f\" (UID: \"ec2fab32-4eac-4a26-9ddb-40132e94976f\") " Jan 21 11:25:17 crc kubenswrapper[4881]: I0121 11:25:17.708338 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec2fab32-4eac-4a26-9ddb-40132e94976f-config\") pod \"ec2fab32-4eac-4a26-9ddb-40132e94976f\" (UID: \"ec2fab32-4eac-4a26-9ddb-40132e94976f\") " Jan 21 11:25:17 crc kubenswrapper[4881]: I0121 11:25:17.715184 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ec2fab32-4eac-4a26-9ddb-40132e94976f-kube-api-access-bvnw9" (OuterVolumeSpecName: "kube-api-access-bvnw9") pod "ec2fab32-4eac-4a26-9ddb-40132e94976f" (UID: "ec2fab32-4eac-4a26-9ddb-40132e94976f"). InnerVolumeSpecName "kube-api-access-bvnw9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:25:17 crc kubenswrapper[4881]: I0121 11:25:17.779227 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ec2fab32-4eac-4a26-9ddb-40132e94976f-config" (OuterVolumeSpecName: "config") pod "ec2fab32-4eac-4a26-9ddb-40132e94976f" (UID: "ec2fab32-4eac-4a26-9ddb-40132e94976f"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:25:17 crc kubenswrapper[4881]: I0121 11:25:17.781161 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ec2fab32-4eac-4a26-9ddb-40132e94976f-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "ec2fab32-4eac-4a26-9ddb-40132e94976f" (UID: "ec2fab32-4eac-4a26-9ddb-40132e94976f"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:25:17 crc kubenswrapper[4881]: I0121 11:25:17.802041 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ec2fab32-4eac-4a26-9ddb-40132e94976f-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "ec2fab32-4eac-4a26-9ddb-40132e94976f" (UID: "ec2fab32-4eac-4a26-9ddb-40132e94976f"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:25:17 crc kubenswrapper[4881]: I0121 11:25:17.802081 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ec2fab32-4eac-4a26-9ddb-40132e94976f-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "ec2fab32-4eac-4a26-9ddb-40132e94976f" (UID: "ec2fab32-4eac-4a26-9ddb-40132e94976f"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:25:17 crc kubenswrapper[4881]: I0121 11:25:17.806806 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ec2fab32-4eac-4a26-9ddb-40132e94976f-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ec2fab32-4eac-4a26-9ddb-40132e94976f" (UID: "ec2fab32-4eac-4a26-9ddb-40132e94976f"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:25:17 crc kubenswrapper[4881]: I0121 11:25:17.811360 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bvnw9\" (UniqueName: \"kubernetes.io/projected/ec2fab32-4eac-4a26-9ddb-40132e94976f-kube-api-access-bvnw9\") on node \"crc\" DevicePath \"\"" Jan 21 11:25:17 crc kubenswrapper[4881]: I0121 11:25:17.811408 4881 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ec2fab32-4eac-4a26-9ddb-40132e94976f-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 21 11:25:17 crc kubenswrapper[4881]: I0121 11:25:17.811421 4881 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec2fab32-4eac-4a26-9ddb-40132e94976f-config\") on node \"crc\" DevicePath \"\"" Jan 21 11:25:17 crc kubenswrapper[4881]: I0121 11:25:17.811433 4881 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ec2fab32-4eac-4a26-9ddb-40132e94976f-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 21 11:25:17 crc kubenswrapper[4881]: I0121 11:25:17.811444 4881 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ec2fab32-4eac-4a26-9ddb-40132e94976f-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 21 11:25:17 crc kubenswrapper[4881]: I0121 11:25:17.811455 4881 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ec2fab32-4eac-4a26-9ddb-40132e94976f-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 21 11:25:17 crc kubenswrapper[4881]: I0121 11:25:17.829480 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/configmap/ec2fab32-4eac-4a26-9ddb-40132e94976f-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "ec2fab32-4eac-4a26-9ddb-40132e94976f" (UID: "ec2fab32-4eac-4a26-9ddb-40132e94976f"). InnerVolumeSpecName "openstack-edpm-ipam". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:25:17 crc kubenswrapper[4881]: I0121 11:25:17.914133 4881 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/ec2fab32-4eac-4a26-9ddb-40132e94976f-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 21 11:25:18 crc kubenswrapper[4881]: I0121 11:25:18.574381 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8676bcc57f-wp596" event={"ID":"ec2fab32-4eac-4a26-9ddb-40132e94976f","Type":"ContainerDied","Data":"81049c0c5e8e6d15434e36288df117ccffe86a12005f731fb0b39ecb31197cdc"} Jan 21 11:25:18 crc kubenswrapper[4881]: I0121 11:25:18.574713 4881 scope.go:117] "RemoveContainer" containerID="3ebc49c540ff95bec9f3779f43c3effaa601aed7e73346317b526874af0e6390" Jan 21 11:25:18 crc kubenswrapper[4881]: I0121 11:25:18.574440 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8676bcc57f-wp596" Jan 21 11:25:18 crc kubenswrapper[4881]: I0121 11:25:18.607309 4881 scope.go:117] "RemoveContainer" containerID="65cfd2dd1128a88bc70d491463496b79cfb2dcc5abc049d917dd83ad5f45761a" Jan 21 11:25:18 crc kubenswrapper[4881]: I0121 11:25:18.613427 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8676bcc57f-wp596"] Jan 21 11:25:18 crc kubenswrapper[4881]: I0121 11:25:18.624007 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-8676bcc57f-wp596"] Jan 21 11:25:19 crc kubenswrapper[4881]: I0121 11:25:19.328601 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ec2fab32-4eac-4a26-9ddb-40132e94976f" path="/var/lib/kubelet/pods/ec2fab32-4eac-4a26-9ddb-40132e94976f/volumes" Jan 21 11:25:20 crc kubenswrapper[4881]: I0121 11:25:20.599347 4881 generic.go:334] "Generic (PLEG): container finished" podID="35a19b99-eed0-4383-bea5-cf43d84a5a3e" containerID="634428d31431025fdccf3934e18d58dc33fc9e53d8e3c10e3fc62735d4af9040" exitCode=0 Jan 21 11:25:20 crc kubenswrapper[4881]: I0121 11:25:20.599490 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"35a19b99-eed0-4383-bea5-cf43d84a5a3e","Type":"ContainerDied","Data":"634428d31431025fdccf3934e18d58dc33fc9e53d8e3c10e3fc62735d4af9040"} Jan 21 11:25:21 crc kubenswrapper[4881]: I0121 11:25:21.614816 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"35a19b99-eed0-4383-bea5-cf43d84a5a3e","Type":"ContainerStarted","Data":"fe68fbf9120089c1e7cd6dc6a3d745261c371e91187628d27a7621185c38f5cd"} Jan 21 11:25:21 crc kubenswrapper[4881]: I0121 11:25:21.615309 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Jan 21 11:25:21 crc kubenswrapper[4881]: I0121 11:25:21.647750 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=36.647723535 podStartE2EDuration="36.647723535s" podCreationTimestamp="2026-01-21 11:24:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 11:25:21.637674779 +0000 UTC m=+1708.897631248" 
watchObservedRunningTime="2026-01-21 11:25:21.647723535 +0000 UTC m=+1708.907680004" Jan 21 11:25:26 crc kubenswrapper[4881]: I0121 11:25:26.311427 4881 scope.go:117] "RemoveContainer" containerID="8d01a71cc3cebcfb692e7385a1f123f20c56f33df75b2fdeed7ba4c65dcb43ca" Jan 21 11:25:26 crc kubenswrapper[4881]: E0121 11:25:26.312381 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 11:25:29 crc kubenswrapper[4881]: I0121 11:25:29.690981 4881 generic.go:334] "Generic (PLEG): container finished" podID="de7ea801-d184-48cf-a602-c82ff20892ff" containerID="8e68b25c764b9e0b867a6f82b7e2e448c02c2d37267bc95d906ed96df4996747" exitCode=0 Jan 21 11:25:29 crc kubenswrapper[4881]: I0121 11:25:29.691272 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"de7ea801-d184-48cf-a602-c82ff20892ff","Type":"ContainerDied","Data":"8e68b25c764b9e0b867a6f82b7e2e448c02c2d37267bc95d906ed96df4996747"} Jan 21 11:25:30 crc kubenswrapper[4881]: I0121 11:25:30.704993 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"de7ea801-d184-48cf-a602-c82ff20892ff","Type":"ContainerStarted","Data":"ec707c548b6f8c2a6983970dd435a8fafbf0658a06bfa5f5b4657e3f98f9908d"} Jan 21 11:25:30 crc kubenswrapper[4881]: I0121 11:25:30.705512 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Jan 21 11:25:30 crc kubenswrapper[4881]: I0121 11:25:30.739863 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=44.73984155 podStartE2EDuration="44.73984155s" podCreationTimestamp="2026-01-21 11:24:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 11:25:30.732592923 +0000 UTC m=+1717.992549452" watchObservedRunningTime="2026-01-21 11:25:30.73984155 +0000 UTC m=+1717.999798029" Jan 21 11:25:35 crc kubenswrapper[4881]: I0121 11:25:35.467139 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Jan 21 11:25:35 crc kubenswrapper[4881]: I0121 11:25:35.962663 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-whk2c"] Jan 21 11:25:35 crc kubenswrapper[4881]: E0121 11:25:35.963166 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec2fab32-4eac-4a26-9ddb-40132e94976f" containerName="init" Jan 21 11:25:35 crc kubenswrapper[4881]: I0121 11:25:35.963182 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec2fab32-4eac-4a26-9ddb-40132e94976f" containerName="init" Jan 21 11:25:35 crc kubenswrapper[4881]: E0121 11:25:35.963194 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81dbec06-59d7-4c42-a558-910811fb3811" containerName="init" Jan 21 11:25:35 crc kubenswrapper[4881]: I0121 11:25:35.963201 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="81dbec06-59d7-4c42-a558-910811fb3811" containerName="init" Jan 21 11:25:35 crc kubenswrapper[4881]: E0121 11:25:35.963223 4881 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec2fab32-4eac-4a26-9ddb-40132e94976f" containerName="dnsmasq-dns" Jan 21 11:25:35 crc kubenswrapper[4881]: I0121 11:25:35.963229 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec2fab32-4eac-4a26-9ddb-40132e94976f" containerName="dnsmasq-dns" Jan 21 11:25:35 crc kubenswrapper[4881]: E0121 11:25:35.963244 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81dbec06-59d7-4c42-a558-910811fb3811" containerName="dnsmasq-dns" Jan 21 11:25:35 crc kubenswrapper[4881]: I0121 11:25:35.963249 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="81dbec06-59d7-4c42-a558-910811fb3811" containerName="dnsmasq-dns" Jan 21 11:25:35 crc kubenswrapper[4881]: I0121 11:25:35.963425 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec2fab32-4eac-4a26-9ddb-40132e94976f" containerName="dnsmasq-dns" Jan 21 11:25:35 crc kubenswrapper[4881]: I0121 11:25:35.963445 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="81dbec06-59d7-4c42-a558-910811fb3811" containerName="dnsmasq-dns" Jan 21 11:25:35 crc kubenswrapper[4881]: I0121 11:25:35.964186 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-whk2c" Jan 21 11:25:35 crc kubenswrapper[4881]: I0121 11:25:35.973651 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 21 11:25:35 crc kubenswrapper[4881]: I0121 11:25:35.973848 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-fd7zg" Jan 21 11:25:35 crc kubenswrapper[4881]: I0121 11:25:35.974182 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 21 11:25:35 crc kubenswrapper[4881]: I0121 11:25:35.974315 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 21 11:25:35 crc kubenswrapper[4881]: I0121 11:25:35.988463 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-whk2c"] Jan 21 11:25:36 crc kubenswrapper[4881]: I0121 11:25:36.073296 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a9e212c-bc4b-4dae-9c97-cbc48686c8fc-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-whk2c\" (UID: \"4a9e212c-bc4b-4dae-9c97-cbc48686c8fc\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-whk2c" Jan 21 11:25:36 crc kubenswrapper[4881]: I0121 11:25:36.075175 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4a9e212c-bc4b-4dae-9c97-cbc48686c8fc-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-whk2c\" (UID: \"4a9e212c-bc4b-4dae-9c97-cbc48686c8fc\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-whk2c" Jan 21 11:25:36 crc kubenswrapper[4881]: I0121 11:25:36.075252 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9l5pl\" (UniqueName: \"kubernetes.io/projected/4a9e212c-bc4b-4dae-9c97-cbc48686c8fc-kube-api-access-9l5pl\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-whk2c\" (UID: \"4a9e212c-bc4b-4dae-9c97-cbc48686c8fc\") " 
pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-whk2c" Jan 21 11:25:36 crc kubenswrapper[4881]: I0121 11:25:36.075330 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4a9e212c-bc4b-4dae-9c97-cbc48686c8fc-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-whk2c\" (UID: \"4a9e212c-bc4b-4dae-9c97-cbc48686c8fc\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-whk2c" Jan 21 11:25:36 crc kubenswrapper[4881]: I0121 11:25:36.177785 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a9e212c-bc4b-4dae-9c97-cbc48686c8fc-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-whk2c\" (UID: \"4a9e212c-bc4b-4dae-9c97-cbc48686c8fc\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-whk2c" Jan 21 11:25:36 crc kubenswrapper[4881]: I0121 11:25:36.178064 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4a9e212c-bc4b-4dae-9c97-cbc48686c8fc-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-whk2c\" (UID: \"4a9e212c-bc4b-4dae-9c97-cbc48686c8fc\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-whk2c" Jan 21 11:25:36 crc kubenswrapper[4881]: I0121 11:25:36.178104 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9l5pl\" (UniqueName: \"kubernetes.io/projected/4a9e212c-bc4b-4dae-9c97-cbc48686c8fc-kube-api-access-9l5pl\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-whk2c\" (UID: \"4a9e212c-bc4b-4dae-9c97-cbc48686c8fc\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-whk2c" Jan 21 11:25:36 crc kubenswrapper[4881]: I0121 11:25:36.178138 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4a9e212c-bc4b-4dae-9c97-cbc48686c8fc-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-whk2c\" (UID: \"4a9e212c-bc4b-4dae-9c97-cbc48686c8fc\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-whk2c" Jan 21 11:25:36 crc kubenswrapper[4881]: I0121 11:25:36.187496 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4a9e212c-bc4b-4dae-9c97-cbc48686c8fc-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-whk2c\" (UID: \"4a9e212c-bc4b-4dae-9c97-cbc48686c8fc\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-whk2c" Jan 21 11:25:36 crc kubenswrapper[4881]: I0121 11:25:36.187541 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4a9e212c-bc4b-4dae-9c97-cbc48686c8fc-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-whk2c\" (UID: \"4a9e212c-bc4b-4dae-9c97-cbc48686c8fc\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-whk2c" Jan 21 11:25:36 crc kubenswrapper[4881]: I0121 11:25:36.189129 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a9e212c-bc4b-4dae-9c97-cbc48686c8fc-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-whk2c\" 
(UID: \"4a9e212c-bc4b-4dae-9c97-cbc48686c8fc\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-whk2c" Jan 21 11:25:36 crc kubenswrapper[4881]: I0121 11:25:36.200364 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9l5pl\" (UniqueName: \"kubernetes.io/projected/4a9e212c-bc4b-4dae-9c97-cbc48686c8fc-kube-api-access-9l5pl\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-whk2c\" (UID: \"4a9e212c-bc4b-4dae-9c97-cbc48686c8fc\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-whk2c" Jan 21 11:25:36 crc kubenswrapper[4881]: I0121 11:25:36.293226 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-whk2c" Jan 21 11:25:36 crc kubenswrapper[4881]: I0121 11:25:36.966431 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-whk2c"] Jan 21 11:25:36 crc kubenswrapper[4881]: W0121 11:25:36.968561 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4a9e212c_bc4b_4dae_9c97_cbc48686c8fc.slice/crio-e7b289017d9a64d186168fcb4d0e1368afa9ea9c6525c60f59a683b8fdfe939a WatchSource:0}: Error finding container e7b289017d9a64d186168fcb4d0e1368afa9ea9c6525c60f59a683b8fdfe939a: Status 404 returned error can't find the container with id e7b289017d9a64d186168fcb4d0e1368afa9ea9c6525c60f59a683b8fdfe939a Jan 21 11:25:36 crc kubenswrapper[4881]: I0121 11:25:36.972890 4881 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 11:25:37 crc kubenswrapper[4881]: I0121 11:25:37.778818 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-whk2c" event={"ID":"4a9e212c-bc4b-4dae-9c97-cbc48686c8fc","Type":"ContainerStarted","Data":"e7b289017d9a64d186168fcb4d0e1368afa9ea9c6525c60f59a683b8fdfe939a"} Jan 21 11:25:40 crc kubenswrapper[4881]: I0121 11:25:40.311240 4881 scope.go:117] "RemoveContainer" containerID="8d01a71cc3cebcfb692e7385a1f123f20c56f33df75b2fdeed7ba4c65dcb43ca" Jan 21 11:25:40 crc kubenswrapper[4881]: E0121 11:25:40.311987 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 11:25:47 crc kubenswrapper[4881]: I0121 11:25:47.124987 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Jan 21 11:25:49 crc kubenswrapper[4881]: I0121 11:25:49.957717 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-whk2c" event={"ID":"4a9e212c-bc4b-4dae-9c97-cbc48686c8fc","Type":"ContainerStarted","Data":"45f878b3ab9ad3bdced1034ce00243ffdba515159045ff6c402974179b384bcb"} Jan 21 11:25:49 crc kubenswrapper[4881]: I0121 11:25:49.978763 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-whk2c" podStartSLOduration=3.065983895 podStartE2EDuration="14.978742501s" podCreationTimestamp="2026-01-21 11:25:35 +0000 UTC" firstStartedPulling="2026-01-21 
11:25:36.972590422 +0000 UTC m=+1724.232546891" lastFinishedPulling="2026-01-21 11:25:48.885349028 +0000 UTC m=+1736.145305497" observedRunningTime="2026-01-21 11:25:49.97215908 +0000 UTC m=+1737.232115559" watchObservedRunningTime="2026-01-21 11:25:49.978742501 +0000 UTC m=+1737.238698970" Jan 21 11:25:53 crc kubenswrapper[4881]: I0121 11:25:53.310431 4881 scope.go:117] "RemoveContainer" containerID="8d01a71cc3cebcfb692e7385a1f123f20c56f33df75b2fdeed7ba4c65dcb43ca" Jan 21 11:25:53 crc kubenswrapper[4881]: E0121 11:25:53.312327 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 11:26:03 crc kubenswrapper[4881]: I0121 11:26:03.104036 4881 generic.go:334] "Generic (PLEG): container finished" podID="4a9e212c-bc4b-4dae-9c97-cbc48686c8fc" containerID="45f878b3ab9ad3bdced1034ce00243ffdba515159045ff6c402974179b384bcb" exitCode=0 Jan 21 11:26:03 crc kubenswrapper[4881]: I0121 11:26:03.104208 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-whk2c" event={"ID":"4a9e212c-bc4b-4dae-9c97-cbc48686c8fc","Type":"ContainerDied","Data":"45f878b3ab9ad3bdced1034ce00243ffdba515159045ff6c402974179b384bcb"} Jan 21 11:26:04 crc kubenswrapper[4881]: I0121 11:26:04.637430 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-whk2c" Jan 21 11:26:04 crc kubenswrapper[4881]: I0121 11:26:04.786021 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4a9e212c-bc4b-4dae-9c97-cbc48686c8fc-inventory\") pod \"4a9e212c-bc4b-4dae-9c97-cbc48686c8fc\" (UID: \"4a9e212c-bc4b-4dae-9c97-cbc48686c8fc\") " Jan 21 11:26:04 crc kubenswrapper[4881]: I0121 11:26:04.786149 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4a9e212c-bc4b-4dae-9c97-cbc48686c8fc-ssh-key-openstack-edpm-ipam\") pod \"4a9e212c-bc4b-4dae-9c97-cbc48686c8fc\" (UID: \"4a9e212c-bc4b-4dae-9c97-cbc48686c8fc\") " Jan 21 11:26:04 crc kubenswrapper[4881]: I0121 11:26:04.786247 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9l5pl\" (UniqueName: \"kubernetes.io/projected/4a9e212c-bc4b-4dae-9c97-cbc48686c8fc-kube-api-access-9l5pl\") pod \"4a9e212c-bc4b-4dae-9c97-cbc48686c8fc\" (UID: \"4a9e212c-bc4b-4dae-9c97-cbc48686c8fc\") " Jan 21 11:26:04 crc kubenswrapper[4881]: I0121 11:26:04.786397 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a9e212c-bc4b-4dae-9c97-cbc48686c8fc-repo-setup-combined-ca-bundle\") pod \"4a9e212c-bc4b-4dae-9c97-cbc48686c8fc\" (UID: \"4a9e212c-bc4b-4dae-9c97-cbc48686c8fc\") " Jan 21 11:26:04 crc kubenswrapper[4881]: I0121 11:26:04.792117 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a9e212c-bc4b-4dae-9c97-cbc48686c8fc-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod 
"4a9e212c-bc4b-4dae-9c97-cbc48686c8fc" (UID: "4a9e212c-bc4b-4dae-9c97-cbc48686c8fc"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:26:04 crc kubenswrapper[4881]: I0121 11:26:04.793006 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4a9e212c-bc4b-4dae-9c97-cbc48686c8fc-kube-api-access-9l5pl" (OuterVolumeSpecName: "kube-api-access-9l5pl") pod "4a9e212c-bc4b-4dae-9c97-cbc48686c8fc" (UID: "4a9e212c-bc4b-4dae-9c97-cbc48686c8fc"). InnerVolumeSpecName "kube-api-access-9l5pl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:26:04 crc kubenswrapper[4881]: I0121 11:26:04.818719 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a9e212c-bc4b-4dae-9c97-cbc48686c8fc-inventory" (OuterVolumeSpecName: "inventory") pod "4a9e212c-bc4b-4dae-9c97-cbc48686c8fc" (UID: "4a9e212c-bc4b-4dae-9c97-cbc48686c8fc"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:26:04 crc kubenswrapper[4881]: I0121 11:26:04.889898 4881 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a9e212c-bc4b-4dae-9c97-cbc48686c8fc-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 11:26:04 crc kubenswrapper[4881]: I0121 11:26:04.889935 4881 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4a9e212c-bc4b-4dae-9c97-cbc48686c8fc-inventory\") on node \"crc\" DevicePath \"\"" Jan 21 11:26:04 crc kubenswrapper[4881]: I0121 11:26:04.889968 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9l5pl\" (UniqueName: \"kubernetes.io/projected/4a9e212c-bc4b-4dae-9c97-cbc48686c8fc-kube-api-access-9l5pl\") on node \"crc\" DevicePath \"\"" Jan 21 11:26:04 crc kubenswrapper[4881]: I0121 11:26:04.893075 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a9e212c-bc4b-4dae-9c97-cbc48686c8fc-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "4a9e212c-bc4b-4dae-9c97-cbc48686c8fc" (UID: "4a9e212c-bc4b-4dae-9c97-cbc48686c8fc"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:26:04 crc kubenswrapper[4881]: I0121 11:26:04.991117 4881 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4a9e212c-bc4b-4dae-9c97-cbc48686c8fc-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 21 11:26:05 crc kubenswrapper[4881]: I0121 11:26:05.147346 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-whk2c" event={"ID":"4a9e212c-bc4b-4dae-9c97-cbc48686c8fc","Type":"ContainerDied","Data":"e7b289017d9a64d186168fcb4d0e1368afa9ea9c6525c60f59a683b8fdfe939a"} Jan 21 11:26:05 crc kubenswrapper[4881]: I0121 11:26:05.147385 4881 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e7b289017d9a64d186168fcb4d0e1368afa9ea9c6525c60f59a683b8fdfe939a" Jan 21 11:26:05 crc kubenswrapper[4881]: I0121 11:26:05.147410 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-whk2c" Jan 21 11:26:05 crc kubenswrapper[4881]: I0121 11:26:05.217589 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-vqzdk"] Jan 21 11:26:05 crc kubenswrapper[4881]: E0121 11:26:05.218097 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a9e212c-bc4b-4dae-9c97-cbc48686c8fc" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 21 11:26:05 crc kubenswrapper[4881]: I0121 11:26:05.218120 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a9e212c-bc4b-4dae-9c97-cbc48686c8fc" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 21 11:26:05 crc kubenswrapper[4881]: I0121 11:26:05.218323 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="4a9e212c-bc4b-4dae-9c97-cbc48686c8fc" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 21 11:26:05 crc kubenswrapper[4881]: I0121 11:26:05.219059 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-vqzdk" Jan 21 11:26:05 crc kubenswrapper[4881]: I0121 11:26:05.221359 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 21 11:26:05 crc kubenswrapper[4881]: I0121 11:26:05.222027 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 21 11:26:05 crc kubenswrapper[4881]: I0121 11:26:05.222771 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-fd7zg" Jan 21 11:26:05 crc kubenswrapper[4881]: I0121 11:26:05.223919 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 21 11:26:05 crc kubenswrapper[4881]: I0121 11:26:05.235226 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-vqzdk"] Jan 21 11:26:05 crc kubenswrapper[4881]: I0121 11:26:05.299280 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dd495475-04cc-47b2-ad0e-7e3b83917ece-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-vqzdk\" (UID: \"dd495475-04cc-47b2-ad0e-7e3b83917ece\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-vqzdk" Jan 21 11:26:05 crc kubenswrapper[4881]: I0121 11:26:05.300117 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-drqnn\" (UniqueName: \"kubernetes.io/projected/dd495475-04cc-47b2-ad0e-7e3b83917ece-kube-api-access-drqnn\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-vqzdk\" (UID: \"dd495475-04cc-47b2-ad0e-7e3b83917ece\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-vqzdk" Jan 21 11:26:05 crc kubenswrapper[4881]: I0121 11:26:05.300325 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/dd495475-04cc-47b2-ad0e-7e3b83917ece-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-vqzdk\" (UID: \"dd495475-04cc-47b2-ad0e-7e3b83917ece\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-vqzdk" Jan 21 11:26:05 crc kubenswrapper[4881]: I0121 11:26:05.403256 4881 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-drqnn\" (UniqueName: \"kubernetes.io/projected/dd495475-04cc-47b2-ad0e-7e3b83917ece-kube-api-access-drqnn\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-vqzdk\" (UID: \"dd495475-04cc-47b2-ad0e-7e3b83917ece\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-vqzdk" Jan 21 11:26:05 crc kubenswrapper[4881]: I0121 11:26:05.403381 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/dd495475-04cc-47b2-ad0e-7e3b83917ece-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-vqzdk\" (UID: \"dd495475-04cc-47b2-ad0e-7e3b83917ece\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-vqzdk" Jan 21 11:26:05 crc kubenswrapper[4881]: I0121 11:26:05.403499 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dd495475-04cc-47b2-ad0e-7e3b83917ece-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-vqzdk\" (UID: \"dd495475-04cc-47b2-ad0e-7e3b83917ece\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-vqzdk" Jan 21 11:26:05 crc kubenswrapper[4881]: I0121 11:26:05.410086 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/dd495475-04cc-47b2-ad0e-7e3b83917ece-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-vqzdk\" (UID: \"dd495475-04cc-47b2-ad0e-7e3b83917ece\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-vqzdk" Jan 21 11:26:05 crc kubenswrapper[4881]: I0121 11:26:05.410525 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dd495475-04cc-47b2-ad0e-7e3b83917ece-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-vqzdk\" (UID: \"dd495475-04cc-47b2-ad0e-7e3b83917ece\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-vqzdk" Jan 21 11:26:05 crc kubenswrapper[4881]: I0121 11:26:05.422995 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-drqnn\" (UniqueName: \"kubernetes.io/projected/dd495475-04cc-47b2-ad0e-7e3b83917ece-kube-api-access-drqnn\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-vqzdk\" (UID: \"dd495475-04cc-47b2-ad0e-7e3b83917ece\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-vqzdk" Jan 21 11:26:05 crc kubenswrapper[4881]: I0121 11:26:05.538966 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-vqzdk" Jan 21 11:26:06 crc kubenswrapper[4881]: I0121 11:26:06.165613 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-vqzdk"] Jan 21 11:26:07 crc kubenswrapper[4881]: I0121 11:26:07.169381 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-vqzdk" event={"ID":"dd495475-04cc-47b2-ad0e-7e3b83917ece","Type":"ContainerStarted","Data":"6a245fb772e4935c1de8be83ad0500624a0c81034e16a0c1338a7e61426ac137"} Jan 21 11:26:07 crc kubenswrapper[4881]: I0121 11:26:07.169431 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-vqzdk" event={"ID":"dd495475-04cc-47b2-ad0e-7e3b83917ece","Type":"ContainerStarted","Data":"e47b110be76f9e83fffaaa8ac4df5ba04674f85999916283750b5ea0d29b4303"} Jan 21 11:26:07 crc kubenswrapper[4881]: I0121 11:26:07.193380 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-vqzdk" podStartSLOduration=1.715362059 podStartE2EDuration="2.193354958s" podCreationTimestamp="2026-01-21 11:26:05 +0000 UTC" firstStartedPulling="2026-01-21 11:26:06.160918558 +0000 UTC m=+1753.420875027" lastFinishedPulling="2026-01-21 11:26:06.638911447 +0000 UTC m=+1753.898867926" observedRunningTime="2026-01-21 11:26:07.184604923 +0000 UTC m=+1754.444561432" watchObservedRunningTime="2026-01-21 11:26:07.193354958 +0000 UTC m=+1754.453311437" Jan 21 11:26:07 crc kubenswrapper[4881]: I0121 11:26:07.313344 4881 scope.go:117] "RemoveContainer" containerID="8d01a71cc3cebcfb692e7385a1f123f20c56f33df75b2fdeed7ba4c65dcb43ca" Jan 21 11:26:07 crc kubenswrapper[4881]: E0121 11:26:07.313673 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 11:26:10 crc kubenswrapper[4881]: I0121 11:26:10.201615 4881 generic.go:334] "Generic (PLEG): container finished" podID="dd495475-04cc-47b2-ad0e-7e3b83917ece" containerID="6a245fb772e4935c1de8be83ad0500624a0c81034e16a0c1338a7e61426ac137" exitCode=0 Jan 21 11:26:10 crc kubenswrapper[4881]: I0121 11:26:10.201685 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-vqzdk" event={"ID":"dd495475-04cc-47b2-ad0e-7e3b83917ece","Type":"ContainerDied","Data":"6a245fb772e4935c1de8be83ad0500624a0c81034e16a0c1338a7e61426ac137"} Jan 21 11:26:11 crc kubenswrapper[4881]: I0121 11:26:11.658942 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-vqzdk" Jan 21 11:26:11 crc kubenswrapper[4881]: I0121 11:26:11.751965 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/dd495475-04cc-47b2-ad0e-7e3b83917ece-ssh-key-openstack-edpm-ipam\") pod \"dd495475-04cc-47b2-ad0e-7e3b83917ece\" (UID: \"dd495475-04cc-47b2-ad0e-7e3b83917ece\") " Jan 21 11:26:11 crc kubenswrapper[4881]: I0121 11:26:11.752115 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dd495475-04cc-47b2-ad0e-7e3b83917ece-inventory\") pod \"dd495475-04cc-47b2-ad0e-7e3b83917ece\" (UID: \"dd495475-04cc-47b2-ad0e-7e3b83917ece\") " Jan 21 11:26:11 crc kubenswrapper[4881]: I0121 11:26:11.752265 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-drqnn\" (UniqueName: \"kubernetes.io/projected/dd495475-04cc-47b2-ad0e-7e3b83917ece-kube-api-access-drqnn\") pod \"dd495475-04cc-47b2-ad0e-7e3b83917ece\" (UID: \"dd495475-04cc-47b2-ad0e-7e3b83917ece\") " Jan 21 11:26:11 crc kubenswrapper[4881]: I0121 11:26:11.764039 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd495475-04cc-47b2-ad0e-7e3b83917ece-kube-api-access-drqnn" (OuterVolumeSpecName: "kube-api-access-drqnn") pod "dd495475-04cc-47b2-ad0e-7e3b83917ece" (UID: "dd495475-04cc-47b2-ad0e-7e3b83917ece"). InnerVolumeSpecName "kube-api-access-drqnn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:26:11 crc kubenswrapper[4881]: I0121 11:26:11.780839 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd495475-04cc-47b2-ad0e-7e3b83917ece-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "dd495475-04cc-47b2-ad0e-7e3b83917ece" (UID: "dd495475-04cc-47b2-ad0e-7e3b83917ece"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:26:11 crc kubenswrapper[4881]: I0121 11:26:11.801418 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd495475-04cc-47b2-ad0e-7e3b83917ece-inventory" (OuterVolumeSpecName: "inventory") pod "dd495475-04cc-47b2-ad0e-7e3b83917ece" (UID: "dd495475-04cc-47b2-ad0e-7e3b83917ece"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:26:11 crc kubenswrapper[4881]: I0121 11:26:11.854696 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-drqnn\" (UniqueName: \"kubernetes.io/projected/dd495475-04cc-47b2-ad0e-7e3b83917ece-kube-api-access-drqnn\") on node \"crc\" DevicePath \"\"" Jan 21 11:26:11 crc kubenswrapper[4881]: I0121 11:26:11.854746 4881 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/dd495475-04cc-47b2-ad0e-7e3b83917ece-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 21 11:26:11 crc kubenswrapper[4881]: I0121 11:26:11.854762 4881 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dd495475-04cc-47b2-ad0e-7e3b83917ece-inventory\") on node \"crc\" DevicePath \"\"" Jan 21 11:26:12 crc kubenswrapper[4881]: I0121 11:26:12.226661 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-vqzdk" event={"ID":"dd495475-04cc-47b2-ad0e-7e3b83917ece","Type":"ContainerDied","Data":"e47b110be76f9e83fffaaa8ac4df5ba04674f85999916283750b5ea0d29b4303"} Jan 21 11:26:12 crc kubenswrapper[4881]: I0121 11:26:12.226716 4881 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e47b110be76f9e83fffaaa8ac4df5ba04674f85999916283750b5ea0d29b4303" Jan 21 11:26:12 crc kubenswrapper[4881]: I0121 11:26:12.226758 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-vqzdk" Jan 21 11:26:12 crc kubenswrapper[4881]: I0121 11:26:12.314023 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xscl5"] Jan 21 11:26:12 crc kubenswrapper[4881]: E0121 11:26:12.315084 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd495475-04cc-47b2-ad0e-7e3b83917ece" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Jan 21 11:26:12 crc kubenswrapper[4881]: I0121 11:26:12.315114 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd495475-04cc-47b2-ad0e-7e3b83917ece" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Jan 21 11:26:12 crc kubenswrapper[4881]: I0121 11:26:12.315457 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd495475-04cc-47b2-ad0e-7e3b83917ece" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Jan 21 11:26:12 crc kubenswrapper[4881]: I0121 11:26:12.316628 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xscl5" Jan 21 11:26:12 crc kubenswrapper[4881]: I0121 11:26:12.319847 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 21 11:26:12 crc kubenswrapper[4881]: I0121 11:26:12.320198 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 21 11:26:12 crc kubenswrapper[4881]: I0121 11:26:12.322406 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 21 11:26:12 crc kubenswrapper[4881]: I0121 11:26:12.322475 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-fd7zg" Jan 21 11:26:12 crc kubenswrapper[4881]: I0121 11:26:12.332676 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xscl5"] Jan 21 11:26:12 crc kubenswrapper[4881]: I0121 11:26:12.467889 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5930ee4f-c104-4ac5-9440-2a24d110fae5-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-xscl5\" (UID: \"5930ee4f-c104-4ac5-9440-2a24d110fae5\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xscl5" Jan 21 11:26:12 crc kubenswrapper[4881]: I0121 11:26:12.468188 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5930ee4f-c104-4ac5-9440-2a24d110fae5-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-xscl5\" (UID: \"5930ee4f-c104-4ac5-9440-2a24d110fae5\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xscl5" Jan 21 11:26:12 crc kubenswrapper[4881]: I0121 11:26:12.468295 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q6r5w\" (UniqueName: \"kubernetes.io/projected/5930ee4f-c104-4ac5-9440-2a24d110fae5-kube-api-access-q6r5w\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-xscl5\" (UID: \"5930ee4f-c104-4ac5-9440-2a24d110fae5\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xscl5" Jan 21 11:26:12 crc kubenswrapper[4881]: I0121 11:26:12.468532 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5930ee4f-c104-4ac5-9440-2a24d110fae5-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-xscl5\" (UID: \"5930ee4f-c104-4ac5-9440-2a24d110fae5\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xscl5" Jan 21 11:26:12 crc kubenswrapper[4881]: I0121 11:26:12.572507 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5930ee4f-c104-4ac5-9440-2a24d110fae5-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-xscl5\" (UID: \"5930ee4f-c104-4ac5-9440-2a24d110fae5\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xscl5" Jan 21 11:26:12 crc kubenswrapper[4881]: I0121 11:26:12.572877 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q6r5w\" (UniqueName: 
\"kubernetes.io/projected/5930ee4f-c104-4ac5-9440-2a24d110fae5-kube-api-access-q6r5w\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-xscl5\" (UID: \"5930ee4f-c104-4ac5-9440-2a24d110fae5\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xscl5" Jan 21 11:26:12 crc kubenswrapper[4881]: I0121 11:26:12.572952 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5930ee4f-c104-4ac5-9440-2a24d110fae5-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-xscl5\" (UID: \"5930ee4f-c104-4ac5-9440-2a24d110fae5\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xscl5" Jan 21 11:26:12 crc kubenswrapper[4881]: I0121 11:26:12.573124 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5930ee4f-c104-4ac5-9440-2a24d110fae5-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-xscl5\" (UID: \"5930ee4f-c104-4ac5-9440-2a24d110fae5\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xscl5" Jan 21 11:26:12 crc kubenswrapper[4881]: I0121 11:26:12.576777 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5930ee4f-c104-4ac5-9440-2a24d110fae5-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-xscl5\" (UID: \"5930ee4f-c104-4ac5-9440-2a24d110fae5\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xscl5" Jan 21 11:26:12 crc kubenswrapper[4881]: I0121 11:26:12.579992 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5930ee4f-c104-4ac5-9440-2a24d110fae5-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-xscl5\" (UID: \"5930ee4f-c104-4ac5-9440-2a24d110fae5\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xscl5" Jan 21 11:26:12 crc kubenswrapper[4881]: I0121 11:26:12.580810 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5930ee4f-c104-4ac5-9440-2a24d110fae5-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-xscl5\" (UID: \"5930ee4f-c104-4ac5-9440-2a24d110fae5\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xscl5" Jan 21 11:26:12 crc kubenswrapper[4881]: I0121 11:26:12.592524 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q6r5w\" (UniqueName: \"kubernetes.io/projected/5930ee4f-c104-4ac5-9440-2a24d110fae5-kube-api-access-q6r5w\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-xscl5\" (UID: \"5930ee4f-c104-4ac5-9440-2a24d110fae5\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xscl5" Jan 21 11:26:12 crc kubenswrapper[4881]: I0121 11:26:12.651283 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xscl5" Jan 21 11:26:13 crc kubenswrapper[4881]: I0121 11:26:13.203112 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xscl5"] Jan 21 11:26:13 crc kubenswrapper[4881]: I0121 11:26:13.235972 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xscl5" event={"ID":"5930ee4f-c104-4ac5-9440-2a24d110fae5","Type":"ContainerStarted","Data":"bde4706bcd913ba3323d2b1125ba1ee7475a762ce3f9d0c4ef8b30b43d404e6b"} Jan 21 11:26:14 crc kubenswrapper[4881]: I0121 11:26:14.247864 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xscl5" event={"ID":"5930ee4f-c104-4ac5-9440-2a24d110fae5","Type":"ContainerStarted","Data":"670294433e01fe33af9fd85b65d810eef8d3617ee467e8afa32a1e27221cc5ca"} Jan 21 11:26:14 crc kubenswrapper[4881]: I0121 11:26:14.272207 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xscl5" podStartSLOduration=1.758139683 podStartE2EDuration="2.272185034s" podCreationTimestamp="2026-01-21 11:26:12 +0000 UTC" firstStartedPulling="2026-01-21 11:26:13.204032991 +0000 UTC m=+1760.463989460" lastFinishedPulling="2026-01-21 11:26:13.718078342 +0000 UTC m=+1760.978034811" observedRunningTime="2026-01-21 11:26:14.270465623 +0000 UTC m=+1761.530422092" watchObservedRunningTime="2026-01-21 11:26:14.272185034 +0000 UTC m=+1761.532141503" Jan 21 11:26:19 crc kubenswrapper[4881]: I0121 11:26:19.311632 4881 scope.go:117] "RemoveContainer" containerID="8d01a71cc3cebcfb692e7385a1f123f20c56f33df75b2fdeed7ba4c65dcb43ca" Jan 21 11:26:19 crc kubenswrapper[4881]: E0121 11:26:19.312452 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 11:26:28 crc kubenswrapper[4881]: I0121 11:26:28.041082 4881 scope.go:117] "RemoveContainer" containerID="20252506bf2921633b620e12ae73d258d135c6a818c92bcf4d604ddbc1f5e46d" Jan 21 11:26:31 crc kubenswrapper[4881]: I0121 11:26:31.311751 4881 scope.go:117] "RemoveContainer" containerID="8d01a71cc3cebcfb692e7385a1f123f20c56f33df75b2fdeed7ba4c65dcb43ca" Jan 21 11:26:31 crc kubenswrapper[4881]: E0121 11:26:31.312695 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 11:26:43 crc kubenswrapper[4881]: I0121 11:26:43.317964 4881 scope.go:117] "RemoveContainer" containerID="8d01a71cc3cebcfb692e7385a1f123f20c56f33df75b2fdeed7ba4c65dcb43ca" Jan 21 11:26:43 crc kubenswrapper[4881]: E0121 11:26:43.318641 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 11:26:56 crc kubenswrapper[4881]: I0121 11:26:56.310847 4881 scope.go:117] "RemoveContainer" containerID="8d01a71cc3cebcfb692e7385a1f123f20c56f33df75b2fdeed7ba4c65dcb43ca" Jan 21 11:26:56 crc kubenswrapper[4881]: E0121 11:26:56.311627 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 11:27:11 crc kubenswrapper[4881]: I0121 11:27:11.310938 4881 scope.go:117] "RemoveContainer" containerID="8d01a71cc3cebcfb692e7385a1f123f20c56f33df75b2fdeed7ba4c65dcb43ca" Jan 21 11:27:11 crc kubenswrapper[4881]: E0121 11:27:11.312032 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 11:27:25 crc kubenswrapper[4881]: I0121 11:27:25.315820 4881 scope.go:117] "RemoveContainer" containerID="8d01a71cc3cebcfb692e7385a1f123f20c56f33df75b2fdeed7ba4c65dcb43ca" Jan 21 11:27:25 crc kubenswrapper[4881]: E0121 11:27:25.319388 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 11:27:28 crc kubenswrapper[4881]: I0121 11:27:28.136411 4881 scope.go:117] "RemoveContainer" containerID="c7d5411076516ac1067feb6fa2326814efce9d04ded39d593fa3f53c461d73dc" Jan 21 11:27:28 crc kubenswrapper[4881]: I0121 11:27:28.164487 4881 scope.go:117] "RemoveContainer" containerID="243391ce37046a98efbd843bc1e6f28fda173bffe3ce05b733b63f613224e766" Jan 21 11:27:36 crc kubenswrapper[4881]: I0121 11:27:36.310904 4881 scope.go:117] "RemoveContainer" containerID="8d01a71cc3cebcfb692e7385a1f123f20c56f33df75b2fdeed7ba4c65dcb43ca" Jan 21 11:27:36 crc kubenswrapper[4881]: E0121 11:27:36.311558 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 11:27:49 crc kubenswrapper[4881]: I0121 11:27:49.312841 4881 scope.go:117] "RemoveContainer" containerID="8d01a71cc3cebcfb692e7385a1f123f20c56f33df75b2fdeed7ba4c65dcb43ca" Jan 21 11:27:49 crc kubenswrapper[4881]: E0121 11:27:49.313683 4881 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 11:28:02 crc kubenswrapper[4881]: I0121 11:28:02.311500 4881 scope.go:117] "RemoveContainer" containerID="8d01a71cc3cebcfb692e7385a1f123f20c56f33df75b2fdeed7ba4c65dcb43ca" Jan 21 11:28:02 crc kubenswrapper[4881]: E0121 11:28:02.312356 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 11:28:16 crc kubenswrapper[4881]: I0121 11:28:16.311367 4881 scope.go:117] "RemoveContainer" containerID="8d01a71cc3cebcfb692e7385a1f123f20c56f33df75b2fdeed7ba4c65dcb43ca" Jan 21 11:28:16 crc kubenswrapper[4881]: E0121 11:28:16.312728 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 11:28:18 crc kubenswrapper[4881]: I0121 11:28:18.047354 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-a34b-account-create-update-hm56c"] Jan 21 11:28:18 crc kubenswrapper[4881]: I0121 11:28:18.059483 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-b4bf-account-create-update-6p74j"] Jan 21 11:28:18 crc kubenswrapper[4881]: I0121 11:28:18.076878 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-db-create-gc2qj"] Jan 21 11:28:18 crc kubenswrapper[4881]: I0121 11:28:18.087648 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-nv8vf"] Jan 21 11:28:18 crc kubenswrapper[4881]: I0121 11:28:18.098947 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-smj4g"] Jan 21 11:28:18 crc kubenswrapper[4881]: I0121 11:28:18.108602 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-8d4c-account-create-update-f29tp"] Jan 21 11:28:18 crc kubenswrapper[4881]: I0121 11:28:18.118177 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-db-create-gc2qj"] Jan 21 11:28:18 crc kubenswrapper[4881]: I0121 11:28:18.127476 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-nv8vf"] Jan 21 11:28:18 crc kubenswrapper[4881]: I0121 11:28:18.136023 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-8d4c-account-create-update-f29tp"] Jan 21 11:28:18 crc kubenswrapper[4881]: I0121 11:28:18.147619 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-b4bf-account-create-update-6p74j"] Jan 21 11:28:18 crc kubenswrapper[4881]: I0121 11:28:18.157134 4881 kubelet.go:2431] "SyncLoop REMOVE" 
source="api" pods=["openstack/placement-a34b-account-create-update-hm56c"] Jan 21 11:28:18 crc kubenswrapper[4881]: I0121 11:28:18.170491 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-smj4g"] Jan 21 11:28:19 crc kubenswrapper[4881]: I0121 11:28:19.335285 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="13ea4f5c-fa1d-485c-80b3-a260d8725e81" path="/var/lib/kubelet/pods/13ea4f5c-fa1d-485c-80b3-a260d8725e81/volumes" Jan 21 11:28:19 crc kubenswrapper[4881]: I0121 11:28:19.336413 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1c4be317-c914-45c5-8da4-1fe7d647db7e" path="/var/lib/kubelet/pods/1c4be317-c914-45c5-8da4-1fe7d647db7e/volumes" Jan 21 11:28:19 crc kubenswrapper[4881]: I0121 11:28:19.337467 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="317bbc59-5154-4c0e-920a-3227d1ec4982" path="/var/lib/kubelet/pods/317bbc59-5154-4c0e-920a-3227d1ec4982/volumes" Jan 21 11:28:19 crc kubenswrapper[4881]: I0121 11:28:19.338202 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="331fda3a-4e64-4824-abd7-42eaef7b9b4f" path="/var/lib/kubelet/pods/331fda3a-4e64-4824-abd7-42eaef7b9b4f/volumes" Jan 21 11:28:19 crc kubenswrapper[4881]: I0121 11:28:19.339399 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5ecc1262-3ebf-4a17-bc42-507ce55f6d7e" path="/var/lib/kubelet/pods/5ecc1262-3ebf-4a17-bc42-507ce55f6d7e/volumes" Jan 21 11:28:19 crc kubenswrapper[4881]: I0121 11:28:19.340330 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6a422f0-bb4b-442c-a2d7-96ac90ffde83" path="/var/lib/kubelet/pods/b6a422f0-bb4b-442c-a2d7-96ac90ffde83/volumes" Jan 21 11:28:27 crc kubenswrapper[4881]: I0121 11:28:27.310557 4881 scope.go:117] "RemoveContainer" containerID="8d01a71cc3cebcfb692e7385a1f123f20c56f33df75b2fdeed7ba4c65dcb43ca" Jan 21 11:28:27 crc kubenswrapper[4881]: E0121 11:28:27.311309 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 11:28:28 crc kubenswrapper[4881]: I0121 11:28:28.231443 4881 scope.go:117] "RemoveContainer" containerID="08a0b7dafd2179b30f57680020c59d606fe75966918c8bb86686a6dacf5de9ff" Jan 21 11:28:28 crc kubenswrapper[4881]: I0121 11:28:28.275876 4881 scope.go:117] "RemoveContainer" containerID="d8dd72ec74cb8c65a23a4d5b59b35333d8b4f0429542fb48634decd408b21787" Jan 21 11:28:28 crc kubenswrapper[4881]: I0121 11:28:28.322256 4881 scope.go:117] "RemoveContainer" containerID="8b53d4f0258b883730ea2ab9cbc22ea1275e34223ca52f3ff089755ba0514b17" Jan 21 11:28:28 crc kubenswrapper[4881]: I0121 11:28:28.372221 4881 scope.go:117] "RemoveContainer" containerID="5dc89d3192dccc5bebeec553b9ca36f3b56735830fa2f8fae09494c5f8979443" Jan 21 11:28:28 crc kubenswrapper[4881]: I0121 11:28:28.422174 4881 scope.go:117] "RemoveContainer" containerID="8e69c6e6b0d6f76b9304a07ebd26d806a9e9908cc09c50913b96d416ca2b1454" Jan 21 11:28:28 crc kubenswrapper[4881]: I0121 11:28:28.495676 4881 scope.go:117] "RemoveContainer" containerID="9ae9aa24bb02508282163c868da5d6ab7a85e49192dbd35ecea2bbccdab0b150" Jan 21 11:28:32 crc kubenswrapper[4881]: I0121 11:28:32.033059 4881 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-n9992"] Jan 21 11:28:32 crc kubenswrapper[4881]: I0121 11:28:32.044923 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-n9992"] Jan 21 11:28:33 crc kubenswrapper[4881]: I0121 11:28:33.333685 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="70a2b37a-049a-45a1-aeb5-6b7d5515dd69" path="/var/lib/kubelet/pods/70a2b37a-049a-45a1-aeb5-6b7d5515dd69/volumes" Jan 21 11:28:38 crc kubenswrapper[4881]: I0121 11:28:38.311218 4881 scope.go:117] "RemoveContainer" containerID="8d01a71cc3cebcfb692e7385a1f123f20c56f33df75b2fdeed7ba4c65dcb43ca" Jan 21 11:28:38 crc kubenswrapper[4881]: E0121 11:28:38.312525 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 11:28:52 crc kubenswrapper[4881]: I0121 11:28:52.311524 4881 scope.go:117] "RemoveContainer" containerID="8d01a71cc3cebcfb692e7385a1f123f20c56f33df75b2fdeed7ba4c65dcb43ca" Jan 21 11:28:52 crc kubenswrapper[4881]: E0121 11:28:52.312478 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 11:29:04 crc kubenswrapper[4881]: I0121 11:29:04.311649 4881 scope.go:117] "RemoveContainer" containerID="8d01a71cc3cebcfb692e7385a1f123f20c56f33df75b2fdeed7ba4c65dcb43ca" Jan 21 11:29:04 crc kubenswrapper[4881]: E0121 11:29:04.312605 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 11:29:10 crc kubenswrapper[4881]: I0121 11:29:10.206242 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-r9r4z"] Jan 21 11:29:10 crc kubenswrapper[4881]: I0121 11:29:10.223233 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-170f-account-create-update-8bt4l"] Jan 21 11:29:10 crc kubenswrapper[4881]: I0121 11:29:10.232722 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-b544m"] Jan 21 11:29:10 crc kubenswrapper[4881]: I0121 11:29:10.241848 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-r9r4z"] Jan 21 11:29:10 crc kubenswrapper[4881]: I0121 11:29:10.251024 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-b544m"] Jan 21 11:29:10 crc kubenswrapper[4881]: I0121 11:29:10.270415 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-170f-account-create-update-8bt4l"] Jan 21 11:29:11 crc 
kubenswrapper[4881]: I0121 11:29:11.325888 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="760e8dbf-d827-42ef-969c-1c7409f7ac20" path="/var/lib/kubelet/pods/760e8dbf-d827-42ef-969c-1c7409f7ac20/volumes" Jan 21 11:29:11 crc kubenswrapper[4881]: I0121 11:29:11.327654 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c837cab9-43a5-4b84-a0bd-d915bca31600" path="/var/lib/kubelet/pods/c837cab9-43a5-4b84-a0bd-d915bca31600/volumes" Jan 21 11:29:11 crc kubenswrapper[4881]: I0121 11:29:11.328768 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c8cfe009-eba2-4713-b50f-cc334b4ca691" path="/var/lib/kubelet/pods/c8cfe009-eba2-4713-b50f-cc334b4ca691/volumes" Jan 21 11:29:16 crc kubenswrapper[4881]: I0121 11:29:16.045997 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-a5aa-account-create-update-j2nc8"] Jan 21 11:29:16 crc kubenswrapper[4881]: I0121 11:29:16.057219 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-3649-account-create-update-pqj5m"] Jan 21 11:29:16 crc kubenswrapper[4881]: I0121 11:29:16.067676 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-3649-account-create-update-pqj5m"] Jan 21 11:29:16 crc kubenswrapper[4881]: I0121 11:29:16.093376 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-ktp2w"] Jan 21 11:29:16 crc kubenswrapper[4881]: I0121 11:29:16.102860 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-82x9l"] Jan 21 11:29:16 crc kubenswrapper[4881]: I0121 11:29:16.112073 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-c7b7-account-create-update-dcz9r"] Jan 21 11:29:16 crc kubenswrapper[4881]: I0121 11:29:16.120311 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-a5aa-account-create-update-j2nc8"] Jan 21 11:29:16 crc kubenswrapper[4881]: I0121 11:29:16.129650 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-ktp2w"] Jan 21 11:29:16 crc kubenswrapper[4881]: I0121 11:29:16.137377 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-82x9l"] Jan 21 11:29:16 crc kubenswrapper[4881]: I0121 11:29:16.145194 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-c7b7-account-create-update-dcz9r"] Jan 21 11:29:16 crc kubenswrapper[4881]: I0121 11:29:16.310925 4881 scope.go:117] "RemoveContainer" containerID="8d01a71cc3cebcfb692e7385a1f123f20c56f33df75b2fdeed7ba4c65dcb43ca" Jan 21 11:29:16 crc kubenswrapper[4881]: E0121 11:29:16.311287 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 11:29:17 crc kubenswrapper[4881]: I0121 11:29:17.328513 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0145b8f9-5452-4f0e-819c-61fbb8badffb" path="/var/lib/kubelet/pods/0145b8f9-5452-4f0e-819c-61fbb8badffb/volumes" Jan 21 11:29:17 crc kubenswrapper[4881]: I0121 11:29:17.329975 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5d72ab14-b1c2-4382-847a-00eb254ac958" 
path="/var/lib/kubelet/pods/5d72ab14-b1c2-4382-847a-00eb254ac958/volumes" Jan 21 11:29:17 crc kubenswrapper[4881]: I0121 11:29:17.332135 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6f6f337c-95ec-448f-ab58-e7e7fe7abfd4" path="/var/lib/kubelet/pods/6f6f337c-95ec-448f-ab58-e7e7fe7abfd4/volumes" Jan 21 11:29:17 crc kubenswrapper[4881]: I0121 11:29:17.333112 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b4b2b4e9-304c-47ae-939a-9d938d012b90" path="/var/lib/kubelet/pods/b4b2b4e9-304c-47ae-939a-9d938d012b90/volumes" Jan 21 11:29:17 crc kubenswrapper[4881]: I0121 11:29:17.335494 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ec3ba10e-2cbd-4350-9014-27a92932849f" path="/var/lib/kubelet/pods/ec3ba10e-2cbd-4350-9014-27a92932849f/volumes" Jan 21 11:29:28 crc kubenswrapper[4881]: I0121 11:29:28.647901 4881 scope.go:117] "RemoveContainer" containerID="000840a5458dc374424237a1e0edaa7bc61f3e5c2c1a3524dfdcefbcaa258c53" Jan 21 11:29:28 crc kubenswrapper[4881]: I0121 11:29:28.674447 4881 scope.go:117] "RemoveContainer" containerID="9183c1ea9a3472251b9a9872ac196a0371d8a3a960cf0876e3244bf2dc5fc313" Jan 21 11:29:28 crc kubenswrapper[4881]: I0121 11:29:28.698500 4881 scope.go:117] "RemoveContainer" containerID="19837216e672b1d70dcee3db6a9cc2dfe6a6a6ac2f0ef6c6a1c9729e5d023d0f" Jan 21 11:29:28 crc kubenswrapper[4881]: I0121 11:29:28.766757 4881 scope.go:117] "RemoveContainer" containerID="0287622c020081ba9c95095872909db810663fe9347d92c3e84d5f5ddca8090f" Jan 21 11:29:29 crc kubenswrapper[4881]: I0121 11:29:29.116391 4881 scope.go:117] "RemoveContainer" containerID="9d3665845c2c2c09903d0aa16a7538de5b4dcf05cef7d82865d9c9d446cdaf41" Jan 21 11:29:29 crc kubenswrapper[4881]: I0121 11:29:29.141477 4881 scope.go:117] "RemoveContainer" containerID="842c407700548966028d06c2f685224af9199aeb260a3fcbe49b13c5d2308449" Jan 21 11:29:29 crc kubenswrapper[4881]: I0121 11:29:29.178563 4881 scope.go:117] "RemoveContainer" containerID="8fede96a0f0891ea2a0beeea55c81b92d1d136a372295efbbbb9fb60c32a400b" Jan 21 11:29:29 crc kubenswrapper[4881]: I0121 11:29:29.212707 4881 scope.go:117] "RemoveContainer" containerID="475d11a1d0ffe3143569c01c096587097abd1f5b648c8d0d1064b5b35157b3c4" Jan 21 11:29:29 crc kubenswrapper[4881]: I0121 11:29:29.255696 4881 scope.go:117] "RemoveContainer" containerID="23d18cc60c7d47249b61d06b5e22cae5297e1e798a824f42c26b13569f6185c2" Jan 21 11:29:29 crc kubenswrapper[4881]: I0121 11:29:29.278585 4881 scope.go:117] "RemoveContainer" containerID="68b28d1f90d946399d23686118aca2c39b038f12760a90f94c3980be0fdb6b45" Jan 21 11:29:29 crc kubenswrapper[4881]: I0121 11:29:29.304940 4881 scope.go:117] "RemoveContainer" containerID="d5d6be9da18cdb336cad44c85f030f31c3a241f6234a1b668281031e8ffb56ec" Jan 21 11:29:29 crc kubenswrapper[4881]: I0121 11:29:29.310593 4881 scope.go:117] "RemoveContainer" containerID="8d01a71cc3cebcfb692e7385a1f123f20c56f33df75b2fdeed7ba4c65dcb43ca" Jan 21 11:29:29 crc kubenswrapper[4881]: E0121 11:29:29.310894 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 11:29:29 crc kubenswrapper[4881]: I0121 11:29:29.336025 4881 scope.go:117] 
"RemoveContainer" containerID="4830c420695532fe361ac3eb65ee53d659da36dd7a4d7c07a18532e51115b820" Jan 21 11:29:43 crc kubenswrapper[4881]: I0121 11:29:43.320719 4881 scope.go:117] "RemoveContainer" containerID="8d01a71cc3cebcfb692e7385a1f123f20c56f33df75b2fdeed7ba4c65dcb43ca" Jan 21 11:29:43 crc kubenswrapper[4881]: I0121 11:29:43.592877 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" event={"ID":"3687b313-1df2-4274-80db-8c758b51bf2d","Type":"ContainerStarted","Data":"ef39ee7cfe761ce9a9728441eb10e70a161b503ea812b7dfbf273e44506d3274"} Jan 21 11:29:44 crc kubenswrapper[4881]: I0121 11:29:44.070797 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-44pdb"] Jan 21 11:29:44 crc kubenswrapper[4881]: I0121 11:29:44.083684 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-44pdb"] Jan 21 11:29:45 crc kubenswrapper[4881]: I0121 11:29:45.324776 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="34efcb76-01fb-490b-88c0-a4ee1363a01e" path="/var/lib/kubelet/pods/34efcb76-01fb-490b-88c0-a4ee1363a01e/volumes" Jan 21 11:29:58 crc kubenswrapper[4881]: I0121 11:29:58.032428 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-db-sync-t4mx7"] Jan 21 11:29:58 crc kubenswrapper[4881]: I0121 11:29:58.042610 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-db-sync-t4mx7"] Jan 21 11:29:59 crc kubenswrapper[4881]: I0121 11:29:59.075256 4881 generic.go:334] "Generic (PLEG): container finished" podID="5930ee4f-c104-4ac5-9440-2a24d110fae5" containerID="670294433e01fe33af9fd85b65d810eef8d3617ee467e8afa32a1e27221cc5ca" exitCode=0 Jan 21 11:29:59 crc kubenswrapper[4881]: I0121 11:29:59.075333 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xscl5" event={"ID":"5930ee4f-c104-4ac5-9440-2a24d110fae5","Type":"ContainerDied","Data":"670294433e01fe33af9fd85b65d810eef8d3617ee467e8afa32a1e27221cc5ca"} Jan 21 11:29:59 crc kubenswrapper[4881]: I0121 11:29:59.323442 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc7e598c-b449-4e8c-9214-44e27cb45e53" path="/var/lib/kubelet/pods/bc7e598c-b449-4e8c-9214-44e27cb45e53/volumes" Jan 21 11:30:00 crc kubenswrapper[4881]: I0121 11:30:00.191366 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483250-hpz5k"] Jan 21 11:30:00 crc kubenswrapper[4881]: I0121 11:30:00.385573 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483250-hpz5k" Jan 21 11:30:00 crc kubenswrapper[4881]: I0121 11:30:00.388767 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 21 11:30:00 crc kubenswrapper[4881]: I0121 11:30:00.390118 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 21 11:30:00 crc kubenswrapper[4881]: I0121 11:30:00.411279 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483250-hpz5k"] Jan 21 11:30:00 crc kubenswrapper[4881]: I0121 11:30:00.485744 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6pftj\" (UniqueName: \"kubernetes.io/projected/0563880c-563e-4cc5-93a0-c2af095788cb-kube-api-access-6pftj\") pod \"collect-profiles-29483250-hpz5k\" (UID: \"0563880c-563e-4cc5-93a0-c2af095788cb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483250-hpz5k" Jan 21 11:30:00 crc kubenswrapper[4881]: I0121 11:30:00.485955 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0563880c-563e-4cc5-93a0-c2af095788cb-secret-volume\") pod \"collect-profiles-29483250-hpz5k\" (UID: \"0563880c-563e-4cc5-93a0-c2af095788cb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483250-hpz5k" Jan 21 11:30:00 crc kubenswrapper[4881]: I0121 11:30:00.486090 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0563880c-563e-4cc5-93a0-c2af095788cb-config-volume\") pod \"collect-profiles-29483250-hpz5k\" (UID: \"0563880c-563e-4cc5-93a0-c2af095788cb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483250-hpz5k" Jan 21 11:30:00 crc kubenswrapper[4881]: I0121 11:30:00.587432 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0563880c-563e-4cc5-93a0-c2af095788cb-config-volume\") pod \"collect-profiles-29483250-hpz5k\" (UID: \"0563880c-563e-4cc5-93a0-c2af095788cb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483250-hpz5k" Jan 21 11:30:00 crc kubenswrapper[4881]: I0121 11:30:00.587607 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6pftj\" (UniqueName: \"kubernetes.io/projected/0563880c-563e-4cc5-93a0-c2af095788cb-kube-api-access-6pftj\") pod \"collect-profiles-29483250-hpz5k\" (UID: \"0563880c-563e-4cc5-93a0-c2af095788cb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483250-hpz5k" Jan 21 11:30:00 crc kubenswrapper[4881]: I0121 11:30:00.587696 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0563880c-563e-4cc5-93a0-c2af095788cb-secret-volume\") pod \"collect-profiles-29483250-hpz5k\" (UID: \"0563880c-563e-4cc5-93a0-c2af095788cb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483250-hpz5k" Jan 21 11:30:00 crc kubenswrapper[4881]: I0121 11:30:00.588610 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0563880c-563e-4cc5-93a0-c2af095788cb-config-volume\") pod 
\"collect-profiles-29483250-hpz5k\" (UID: \"0563880c-563e-4cc5-93a0-c2af095788cb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483250-hpz5k" Jan 21 11:30:00 crc kubenswrapper[4881]: I0121 11:30:00.595175 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0563880c-563e-4cc5-93a0-c2af095788cb-secret-volume\") pod \"collect-profiles-29483250-hpz5k\" (UID: \"0563880c-563e-4cc5-93a0-c2af095788cb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483250-hpz5k" Jan 21 11:30:00 crc kubenswrapper[4881]: I0121 11:30:00.606150 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6pftj\" (UniqueName: \"kubernetes.io/projected/0563880c-563e-4cc5-93a0-c2af095788cb-kube-api-access-6pftj\") pod \"collect-profiles-29483250-hpz5k\" (UID: \"0563880c-563e-4cc5-93a0-c2af095788cb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483250-hpz5k" Jan 21 11:30:00 crc kubenswrapper[4881]: I0121 11:30:00.720964 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483250-hpz5k" Jan 21 11:30:00 crc kubenswrapper[4881]: I0121 11:30:00.877369 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xscl5" Jan 21 11:30:00 crc kubenswrapper[4881]: I0121 11:30:00.896690 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5930ee4f-c104-4ac5-9440-2a24d110fae5-ssh-key-openstack-edpm-ipam\") pod \"5930ee4f-c104-4ac5-9440-2a24d110fae5\" (UID: \"5930ee4f-c104-4ac5-9440-2a24d110fae5\") " Jan 21 11:30:00 crc kubenswrapper[4881]: I0121 11:30:00.896812 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q6r5w\" (UniqueName: \"kubernetes.io/projected/5930ee4f-c104-4ac5-9440-2a24d110fae5-kube-api-access-q6r5w\") pod \"5930ee4f-c104-4ac5-9440-2a24d110fae5\" (UID: \"5930ee4f-c104-4ac5-9440-2a24d110fae5\") " Jan 21 11:30:00 crc kubenswrapper[4881]: I0121 11:30:00.896886 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5930ee4f-c104-4ac5-9440-2a24d110fae5-inventory\") pod \"5930ee4f-c104-4ac5-9440-2a24d110fae5\" (UID: \"5930ee4f-c104-4ac5-9440-2a24d110fae5\") " Jan 21 11:30:00 crc kubenswrapper[4881]: I0121 11:30:00.896909 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5930ee4f-c104-4ac5-9440-2a24d110fae5-bootstrap-combined-ca-bundle\") pod \"5930ee4f-c104-4ac5-9440-2a24d110fae5\" (UID: \"5930ee4f-c104-4ac5-9440-2a24d110fae5\") " Jan 21 11:30:00 crc kubenswrapper[4881]: I0121 11:30:00.905704 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5930ee4f-c104-4ac5-9440-2a24d110fae5-kube-api-access-q6r5w" (OuterVolumeSpecName: "kube-api-access-q6r5w") pod "5930ee4f-c104-4ac5-9440-2a24d110fae5" (UID: "5930ee4f-c104-4ac5-9440-2a24d110fae5"). InnerVolumeSpecName "kube-api-access-q6r5w". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:30:00 crc kubenswrapper[4881]: I0121 11:30:00.918965 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5930ee4f-c104-4ac5-9440-2a24d110fae5-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "5930ee4f-c104-4ac5-9440-2a24d110fae5" (UID: "5930ee4f-c104-4ac5-9440-2a24d110fae5"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:30:00 crc kubenswrapper[4881]: I0121 11:30:00.928996 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5930ee4f-c104-4ac5-9440-2a24d110fae5-inventory" (OuterVolumeSpecName: "inventory") pod "5930ee4f-c104-4ac5-9440-2a24d110fae5" (UID: "5930ee4f-c104-4ac5-9440-2a24d110fae5"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:30:00 crc kubenswrapper[4881]: I0121 11:30:00.932522 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5930ee4f-c104-4ac5-9440-2a24d110fae5-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "5930ee4f-c104-4ac5-9440-2a24d110fae5" (UID: "5930ee4f-c104-4ac5-9440-2a24d110fae5"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:30:01 crc kubenswrapper[4881]: I0121 11:30:01.006846 4881 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5930ee4f-c104-4ac5-9440-2a24d110fae5-inventory\") on node \"crc\" DevicePath \"\"" Jan 21 11:30:01 crc kubenswrapper[4881]: I0121 11:30:01.006897 4881 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5930ee4f-c104-4ac5-9440-2a24d110fae5-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 11:30:01 crc kubenswrapper[4881]: I0121 11:30:01.006915 4881 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5930ee4f-c104-4ac5-9440-2a24d110fae5-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 21 11:30:01 crc kubenswrapper[4881]: I0121 11:30:01.006934 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q6r5w\" (UniqueName: \"kubernetes.io/projected/5930ee4f-c104-4ac5-9440-2a24d110fae5-kube-api-access-q6r5w\") on node \"crc\" DevicePath \"\"" Jan 21 11:30:01 crc kubenswrapper[4881]: I0121 11:30:01.096621 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xscl5" event={"ID":"5930ee4f-c104-4ac5-9440-2a24d110fae5","Type":"ContainerDied","Data":"bde4706bcd913ba3323d2b1125ba1ee7475a762ce3f9d0c4ef8b30b43d404e6b"} Jan 21 11:30:01 crc kubenswrapper[4881]: I0121 11:30:01.096670 4881 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bde4706bcd913ba3323d2b1125ba1ee7475a762ce3f9d0c4ef8b30b43d404e6b" Jan 21 11:30:01 crc kubenswrapper[4881]: I0121 11:30:01.096741 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xscl5" Jan 21 11:30:01 crc kubenswrapper[4881]: I0121 11:30:01.195655 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-2gnxt"] Jan 21 11:30:01 crc kubenswrapper[4881]: E0121 11:30:01.196294 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5930ee4f-c104-4ac5-9440-2a24d110fae5" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 21 11:30:01 crc kubenswrapper[4881]: I0121 11:30:01.196312 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="5930ee4f-c104-4ac5-9440-2a24d110fae5" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 21 11:30:01 crc kubenswrapper[4881]: I0121 11:30:01.196546 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="5930ee4f-c104-4ac5-9440-2a24d110fae5" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 21 11:30:01 crc kubenswrapper[4881]: I0121 11:30:01.197321 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-2gnxt" Jan 21 11:30:01 crc kubenswrapper[4881]: I0121 11:30:01.200004 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 21 11:30:01 crc kubenswrapper[4881]: I0121 11:30:01.200142 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 21 11:30:01 crc kubenswrapper[4881]: I0121 11:30:01.200273 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 21 11:30:01 crc kubenswrapper[4881]: I0121 11:30:01.200328 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-fd7zg" Jan 21 11:30:01 crc kubenswrapper[4881]: I0121 11:30:01.211519 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/01f76bc7-59dc-4fd0-8ca8-90ce72cb6f45-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-2gnxt\" (UID: \"01f76bc7-59dc-4fd0-8ca8-90ce72cb6f45\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-2gnxt" Jan 21 11:30:01 crc kubenswrapper[4881]: I0121 11:30:01.211813 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/01f76bc7-59dc-4fd0-8ca8-90ce72cb6f45-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-2gnxt\" (UID: \"01f76bc7-59dc-4fd0-8ca8-90ce72cb6f45\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-2gnxt" Jan 21 11:30:01 crc kubenswrapper[4881]: I0121 11:30:01.212017 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-79mkd\" (UniqueName: \"kubernetes.io/projected/01f76bc7-59dc-4fd0-8ca8-90ce72cb6f45-kube-api-access-79mkd\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-2gnxt\" (UID: \"01f76bc7-59dc-4fd0-8ca8-90ce72cb6f45\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-2gnxt" Jan 21 11:30:01 crc kubenswrapper[4881]: I0121 11:30:01.223899 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-2gnxt"] Jan 21 11:30:01 crc kubenswrapper[4881]: 
I0121 11:30:01.247051 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483250-hpz5k"] Jan 21 11:30:01 crc kubenswrapper[4881]: I0121 11:30:01.313666 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/01f76bc7-59dc-4fd0-8ca8-90ce72cb6f45-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-2gnxt\" (UID: \"01f76bc7-59dc-4fd0-8ca8-90ce72cb6f45\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-2gnxt" Jan 21 11:30:01 crc kubenswrapper[4881]: I0121 11:30:01.313751 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/01f76bc7-59dc-4fd0-8ca8-90ce72cb6f45-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-2gnxt\" (UID: \"01f76bc7-59dc-4fd0-8ca8-90ce72cb6f45\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-2gnxt" Jan 21 11:30:01 crc kubenswrapper[4881]: I0121 11:30:01.313831 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-79mkd\" (UniqueName: \"kubernetes.io/projected/01f76bc7-59dc-4fd0-8ca8-90ce72cb6f45-kube-api-access-79mkd\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-2gnxt\" (UID: \"01f76bc7-59dc-4fd0-8ca8-90ce72cb6f45\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-2gnxt" Jan 21 11:30:01 crc kubenswrapper[4881]: I0121 11:30:01.319143 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/01f76bc7-59dc-4fd0-8ca8-90ce72cb6f45-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-2gnxt\" (UID: \"01f76bc7-59dc-4fd0-8ca8-90ce72cb6f45\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-2gnxt" Jan 21 11:30:01 crc kubenswrapper[4881]: I0121 11:30:01.319172 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/01f76bc7-59dc-4fd0-8ca8-90ce72cb6f45-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-2gnxt\" (UID: \"01f76bc7-59dc-4fd0-8ca8-90ce72cb6f45\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-2gnxt" Jan 21 11:30:01 crc kubenswrapper[4881]: I0121 11:30:01.346101 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-79mkd\" (UniqueName: \"kubernetes.io/projected/01f76bc7-59dc-4fd0-8ca8-90ce72cb6f45-kube-api-access-79mkd\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-2gnxt\" (UID: \"01f76bc7-59dc-4fd0-8ca8-90ce72cb6f45\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-2gnxt" Jan 21 11:30:01 crc kubenswrapper[4881]: I0121 11:30:01.520144 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-2gnxt" Jan 21 11:30:02 crc kubenswrapper[4881]: I0121 11:30:02.234495 4881 generic.go:334] "Generic (PLEG): container finished" podID="0563880c-563e-4cc5-93a0-c2af095788cb" containerID="c97b0fba984ac7ac90aa9867ceabf4a4b1015c378fef6bf95655dcf59a8cdfd7" exitCode=0 Jan 21 11:30:02 crc kubenswrapper[4881]: I0121 11:30:02.234689 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483250-hpz5k" event={"ID":"0563880c-563e-4cc5-93a0-c2af095788cb","Type":"ContainerDied","Data":"c97b0fba984ac7ac90aa9867ceabf4a4b1015c378fef6bf95655dcf59a8cdfd7"} Jan 21 11:30:02 crc kubenswrapper[4881]: I0121 11:30:02.234715 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483250-hpz5k" event={"ID":"0563880c-563e-4cc5-93a0-c2af095788cb","Type":"ContainerStarted","Data":"34da932f062cb57a55dc0f56949e474ce9f5cdd3084f9df91d17f54517eed521"} Jan 21 11:30:02 crc kubenswrapper[4881]: I0121 11:30:02.384111 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-2gnxt"] Jan 21 11:30:02 crc kubenswrapper[4881]: W0121 11:30:02.385527 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod01f76bc7_59dc_4fd0_8ca8_90ce72cb6f45.slice/crio-9f31968a0bdbdf01d41bad45f1b1b5ed4fb58b40ac6fee51815e11ca82a16e46 WatchSource:0}: Error finding container 9f31968a0bdbdf01d41bad45f1b1b5ed4fb58b40ac6fee51815e11ca82a16e46: Status 404 returned error can't find the container with id 9f31968a0bdbdf01d41bad45f1b1b5ed4fb58b40ac6fee51815e11ca82a16e46 Jan 21 11:30:03 crc kubenswrapper[4881]: I0121 11:30:03.251145 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-2gnxt" event={"ID":"01f76bc7-59dc-4fd0-8ca8-90ce72cb6f45","Type":"ContainerStarted","Data":"d7065389e2ebfdcbfd63692c15d886f13375179640678ddba4e24b11c5c250dd"} Jan 21 11:30:03 crc kubenswrapper[4881]: I0121 11:30:03.251873 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-2gnxt" event={"ID":"01f76bc7-59dc-4fd0-8ca8-90ce72cb6f45","Type":"ContainerStarted","Data":"9f31968a0bdbdf01d41bad45f1b1b5ed4fb58b40ac6fee51815e11ca82a16e46"} Jan 21 11:30:03 crc kubenswrapper[4881]: I0121 11:30:03.815110 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483250-hpz5k" Jan 21 11:30:03 crc kubenswrapper[4881]: I0121 11:30:03.838653 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-2gnxt" podStartSLOduration=2.255792113 podStartE2EDuration="2.83863026s" podCreationTimestamp="2026-01-21 11:30:01 +0000 UTC" firstStartedPulling="2026-01-21 11:30:02.389837165 +0000 UTC m=+1989.649793634" lastFinishedPulling="2026-01-21 11:30:02.972675322 +0000 UTC m=+1990.232631781" observedRunningTime="2026-01-21 11:30:03.273349786 +0000 UTC m=+1990.533306265" watchObservedRunningTime="2026-01-21 11:30:03.83863026 +0000 UTC m=+1991.098586729" Jan 21 11:30:03 crc kubenswrapper[4881]: I0121 11:30:03.876294 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6pftj\" (UniqueName: \"kubernetes.io/projected/0563880c-563e-4cc5-93a0-c2af095788cb-kube-api-access-6pftj\") pod \"0563880c-563e-4cc5-93a0-c2af095788cb\" (UID: \"0563880c-563e-4cc5-93a0-c2af095788cb\") " Jan 21 11:30:03 crc kubenswrapper[4881]: I0121 11:30:03.876437 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0563880c-563e-4cc5-93a0-c2af095788cb-config-volume\") pod \"0563880c-563e-4cc5-93a0-c2af095788cb\" (UID: \"0563880c-563e-4cc5-93a0-c2af095788cb\") " Jan 21 11:30:03 crc kubenswrapper[4881]: I0121 11:30:03.876511 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0563880c-563e-4cc5-93a0-c2af095788cb-secret-volume\") pod \"0563880c-563e-4cc5-93a0-c2af095788cb\" (UID: \"0563880c-563e-4cc5-93a0-c2af095788cb\") " Jan 21 11:30:03 crc kubenswrapper[4881]: I0121 11:30:03.877443 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0563880c-563e-4cc5-93a0-c2af095788cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "0563880c-563e-4cc5-93a0-c2af095788cb" (UID: "0563880c-563e-4cc5-93a0-c2af095788cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:30:03 crc kubenswrapper[4881]: I0121 11:30:03.884054 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0563880c-563e-4cc5-93a0-c2af095788cb-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "0563880c-563e-4cc5-93a0-c2af095788cb" (UID: "0563880c-563e-4cc5-93a0-c2af095788cb"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:30:03 crc kubenswrapper[4881]: I0121 11:30:03.884082 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0563880c-563e-4cc5-93a0-c2af095788cb-kube-api-access-6pftj" (OuterVolumeSpecName: "kube-api-access-6pftj") pod "0563880c-563e-4cc5-93a0-c2af095788cb" (UID: "0563880c-563e-4cc5-93a0-c2af095788cb"). InnerVolumeSpecName "kube-api-access-6pftj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:30:03 crc kubenswrapper[4881]: I0121 11:30:03.978377 4881 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0563880c-563e-4cc5-93a0-c2af095788cb-config-volume\") on node \"crc\" DevicePath \"\"" Jan 21 11:30:03 crc kubenswrapper[4881]: I0121 11:30:03.978425 4881 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0563880c-563e-4cc5-93a0-c2af095788cb-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 21 11:30:03 crc kubenswrapper[4881]: I0121 11:30:03.978442 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6pftj\" (UniqueName: \"kubernetes.io/projected/0563880c-563e-4cc5-93a0-c2af095788cb-kube-api-access-6pftj\") on node \"crc\" DevicePath \"\"" Jan 21 11:30:04 crc kubenswrapper[4881]: I0121 11:30:04.262380 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483250-hpz5k" event={"ID":"0563880c-563e-4cc5-93a0-c2af095788cb","Type":"ContainerDied","Data":"34da932f062cb57a55dc0f56949e474ce9f5cdd3084f9df91d17f54517eed521"} Jan 21 11:30:04 crc kubenswrapper[4881]: I0121 11:30:04.262436 4881 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="34da932f062cb57a55dc0f56949e474ce9f5cdd3084f9df91d17f54517eed521" Jan 21 11:30:04 crc kubenswrapper[4881]: I0121 11:30:04.262408 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483250-hpz5k" Jan 21 11:30:04 crc kubenswrapper[4881]: I0121 11:30:04.893115 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483205-527gk"] Jan 21 11:30:04 crc kubenswrapper[4881]: I0121 11:30:04.902422 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483205-527gk"] Jan 21 11:30:05 crc kubenswrapper[4881]: I0121 11:30:05.325821 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="303bdbe6-3bb4-4ace-86b1-f489c795580f" path="/var/lib/kubelet/pods/303bdbe6-3bb4-4ace-86b1-f489c795580f/volumes" Jan 21 11:30:14 crc kubenswrapper[4881]: I0121 11:30:14.287385 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-w5mmz"] Jan 21 11:30:14 crc kubenswrapper[4881]: E0121 11:30:14.288536 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0563880c-563e-4cc5-93a0-c2af095788cb" containerName="collect-profiles" Jan 21 11:30:14 crc kubenswrapper[4881]: I0121 11:30:14.288556 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="0563880c-563e-4cc5-93a0-c2af095788cb" containerName="collect-profiles" Jan 21 11:30:14 crc kubenswrapper[4881]: I0121 11:30:14.288838 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="0563880c-563e-4cc5-93a0-c2af095788cb" containerName="collect-profiles" Jan 21 11:30:14 crc kubenswrapper[4881]: I0121 11:30:14.290457 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-w5mmz" Jan 21 11:30:14 crc kubenswrapper[4881]: I0121 11:30:14.335834 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-w5mmz"] Jan 21 11:30:14 crc kubenswrapper[4881]: I0121 11:30:14.422857 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b968s\" (UniqueName: \"kubernetes.io/projected/2f7bf98e-335f-406f-8ef8-069f86093c55-kube-api-access-b968s\") pod \"redhat-marketplace-w5mmz\" (UID: \"2f7bf98e-335f-406f-8ef8-069f86093c55\") " pod="openshift-marketplace/redhat-marketplace-w5mmz" Jan 21 11:30:14 crc kubenswrapper[4881]: I0121 11:30:14.423345 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2f7bf98e-335f-406f-8ef8-069f86093c55-catalog-content\") pod \"redhat-marketplace-w5mmz\" (UID: \"2f7bf98e-335f-406f-8ef8-069f86093c55\") " pod="openshift-marketplace/redhat-marketplace-w5mmz" Jan 21 11:30:14 crc kubenswrapper[4881]: I0121 11:30:14.423388 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2f7bf98e-335f-406f-8ef8-069f86093c55-utilities\") pod \"redhat-marketplace-w5mmz\" (UID: \"2f7bf98e-335f-406f-8ef8-069f86093c55\") " pod="openshift-marketplace/redhat-marketplace-w5mmz" Jan 21 11:30:14 crc kubenswrapper[4881]: I0121 11:30:14.527262 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2f7bf98e-335f-406f-8ef8-069f86093c55-catalog-content\") pod \"redhat-marketplace-w5mmz\" (UID: \"2f7bf98e-335f-406f-8ef8-069f86093c55\") " pod="openshift-marketplace/redhat-marketplace-w5mmz" Jan 21 11:30:14 crc kubenswrapper[4881]: I0121 11:30:14.527646 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2f7bf98e-335f-406f-8ef8-069f86093c55-utilities\") pod \"redhat-marketplace-w5mmz\" (UID: \"2f7bf98e-335f-406f-8ef8-069f86093c55\") " pod="openshift-marketplace/redhat-marketplace-w5mmz" Jan 21 11:30:14 crc kubenswrapper[4881]: I0121 11:30:14.527897 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b968s\" (UniqueName: \"kubernetes.io/projected/2f7bf98e-335f-406f-8ef8-069f86093c55-kube-api-access-b968s\") pod \"redhat-marketplace-w5mmz\" (UID: \"2f7bf98e-335f-406f-8ef8-069f86093c55\") " pod="openshift-marketplace/redhat-marketplace-w5mmz" Jan 21 11:30:14 crc kubenswrapper[4881]: I0121 11:30:14.527932 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2f7bf98e-335f-406f-8ef8-069f86093c55-catalog-content\") pod \"redhat-marketplace-w5mmz\" (UID: \"2f7bf98e-335f-406f-8ef8-069f86093c55\") " pod="openshift-marketplace/redhat-marketplace-w5mmz" Jan 21 11:30:14 crc kubenswrapper[4881]: I0121 11:30:14.528156 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2f7bf98e-335f-406f-8ef8-069f86093c55-utilities\") pod \"redhat-marketplace-w5mmz\" (UID: \"2f7bf98e-335f-406f-8ef8-069f86093c55\") " pod="openshift-marketplace/redhat-marketplace-w5mmz" Jan 21 11:30:14 crc kubenswrapper[4881]: I0121 11:30:14.551315 4881 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-b968s\" (UniqueName: \"kubernetes.io/projected/2f7bf98e-335f-406f-8ef8-069f86093c55-kube-api-access-b968s\") pod \"redhat-marketplace-w5mmz\" (UID: \"2f7bf98e-335f-406f-8ef8-069f86093c55\") " pod="openshift-marketplace/redhat-marketplace-w5mmz" Jan 21 11:30:14 crc kubenswrapper[4881]: I0121 11:30:14.659293 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-w5mmz" Jan 21 11:30:15 crc kubenswrapper[4881]: I0121 11:30:15.142235 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-w5mmz"] Jan 21 11:30:15 crc kubenswrapper[4881]: W0121 11:30:15.148631 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2f7bf98e_335f_406f_8ef8_069f86093c55.slice/crio-f9664760a6abe2fd92cc6c7d5038daf2f3334a151e64a19140c80a7ac40d0bdc WatchSource:0}: Error finding container f9664760a6abe2fd92cc6c7d5038daf2f3334a151e64a19140c80a7ac40d0bdc: Status 404 returned error can't find the container with id f9664760a6abe2fd92cc6c7d5038daf2f3334a151e64a19140c80a7ac40d0bdc Jan 21 11:30:15 crc kubenswrapper[4881]: I0121 11:30:15.380454 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w5mmz" event={"ID":"2f7bf98e-335f-406f-8ef8-069f86093c55","Type":"ContainerStarted","Data":"48d5d26b6c9086a6b947d5294b328f1c7e8f26fa1ce1593b0120714fc18e44b1"} Jan 21 11:30:15 crc kubenswrapper[4881]: I0121 11:30:15.380508 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w5mmz" event={"ID":"2f7bf98e-335f-406f-8ef8-069f86093c55","Type":"ContainerStarted","Data":"f9664760a6abe2fd92cc6c7d5038daf2f3334a151e64a19140c80a7ac40d0bdc"} Jan 21 11:30:16 crc kubenswrapper[4881]: I0121 11:30:16.098453 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-cgr87"] Jan 21 11:30:16 crc kubenswrapper[4881]: I0121 11:30:16.106115 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-cgr87" Jan 21 11:30:16 crc kubenswrapper[4881]: I0121 11:30:16.112123 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-cgr87"] Jan 21 11:30:16 crc kubenswrapper[4881]: I0121 11:30:16.268979 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-65rfw\" (UniqueName: \"kubernetes.io/projected/e28b5533-edc8-47ef-8ba6-23368631d10d-kube-api-access-65rfw\") pod \"redhat-operators-cgr87\" (UID: \"e28b5533-edc8-47ef-8ba6-23368631d10d\") " pod="openshift-marketplace/redhat-operators-cgr87" Jan 21 11:30:16 crc kubenswrapper[4881]: I0121 11:30:16.269077 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e28b5533-edc8-47ef-8ba6-23368631d10d-utilities\") pod \"redhat-operators-cgr87\" (UID: \"e28b5533-edc8-47ef-8ba6-23368631d10d\") " pod="openshift-marketplace/redhat-operators-cgr87" Jan 21 11:30:16 crc kubenswrapper[4881]: I0121 11:30:16.269135 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e28b5533-edc8-47ef-8ba6-23368631d10d-catalog-content\") pod \"redhat-operators-cgr87\" (UID: \"e28b5533-edc8-47ef-8ba6-23368631d10d\") " pod="openshift-marketplace/redhat-operators-cgr87" Jan 21 11:30:16 crc kubenswrapper[4881]: I0121 11:30:16.371335 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e28b5533-edc8-47ef-8ba6-23368631d10d-utilities\") pod \"redhat-operators-cgr87\" (UID: \"e28b5533-edc8-47ef-8ba6-23368631d10d\") " pod="openshift-marketplace/redhat-operators-cgr87" Jan 21 11:30:16 crc kubenswrapper[4881]: I0121 11:30:16.371403 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e28b5533-edc8-47ef-8ba6-23368631d10d-catalog-content\") pod \"redhat-operators-cgr87\" (UID: \"e28b5533-edc8-47ef-8ba6-23368631d10d\") " pod="openshift-marketplace/redhat-operators-cgr87" Jan 21 11:30:16 crc kubenswrapper[4881]: I0121 11:30:16.371643 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-65rfw\" (UniqueName: \"kubernetes.io/projected/e28b5533-edc8-47ef-8ba6-23368631d10d-kube-api-access-65rfw\") pod \"redhat-operators-cgr87\" (UID: \"e28b5533-edc8-47ef-8ba6-23368631d10d\") " pod="openshift-marketplace/redhat-operators-cgr87" Jan 21 11:30:16 crc kubenswrapper[4881]: I0121 11:30:16.372164 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e28b5533-edc8-47ef-8ba6-23368631d10d-catalog-content\") pod \"redhat-operators-cgr87\" (UID: \"e28b5533-edc8-47ef-8ba6-23368631d10d\") " pod="openshift-marketplace/redhat-operators-cgr87" Jan 21 11:30:16 crc kubenswrapper[4881]: I0121 11:30:16.373242 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e28b5533-edc8-47ef-8ba6-23368631d10d-utilities\") pod \"redhat-operators-cgr87\" (UID: \"e28b5533-edc8-47ef-8ba6-23368631d10d\") " pod="openshift-marketplace/redhat-operators-cgr87" Jan 21 11:30:16 crc kubenswrapper[4881]: I0121 11:30:16.395769 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-65rfw\" (UniqueName: \"kubernetes.io/projected/e28b5533-edc8-47ef-8ba6-23368631d10d-kube-api-access-65rfw\") pod \"redhat-operators-cgr87\" (UID: \"e28b5533-edc8-47ef-8ba6-23368631d10d\") " pod="openshift-marketplace/redhat-operators-cgr87" Jan 21 11:30:16 crc kubenswrapper[4881]: I0121 11:30:16.397334 4881 generic.go:334] "Generic (PLEG): container finished" podID="2f7bf98e-335f-406f-8ef8-069f86093c55" containerID="48d5d26b6c9086a6b947d5294b328f1c7e8f26fa1ce1593b0120714fc18e44b1" exitCode=0 Jan 21 11:30:16 crc kubenswrapper[4881]: I0121 11:30:16.397377 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w5mmz" event={"ID":"2f7bf98e-335f-406f-8ef8-069f86093c55","Type":"ContainerDied","Data":"48d5d26b6c9086a6b947d5294b328f1c7e8f26fa1ce1593b0120714fc18e44b1"} Jan 21 11:30:16 crc kubenswrapper[4881]: I0121 11:30:16.453676 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-cgr87" Jan 21 11:30:16 crc kubenswrapper[4881]: I0121 11:30:16.957216 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-cgr87"] Jan 21 11:30:17 crc kubenswrapper[4881]: I0121 11:30:17.417320 4881 generic.go:334] "Generic (PLEG): container finished" podID="e28b5533-edc8-47ef-8ba6-23368631d10d" containerID="d6ee22258af69df6704251a1ea48a067b0aad9b9017145fdec7581e1437ace89" exitCode=0 Jan 21 11:30:17 crc kubenswrapper[4881]: I0121 11:30:17.417416 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cgr87" event={"ID":"e28b5533-edc8-47ef-8ba6-23368631d10d","Type":"ContainerDied","Data":"d6ee22258af69df6704251a1ea48a067b0aad9b9017145fdec7581e1437ace89"} Jan 21 11:30:17 crc kubenswrapper[4881]: I0121 11:30:17.417492 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cgr87" event={"ID":"e28b5533-edc8-47ef-8ba6-23368631d10d","Type":"ContainerStarted","Data":"1f7f3ae2471976e97c8ea641c9792ee7bc57f8b6be98d0f78836de61e158f4a0"} Jan 21 11:30:17 crc kubenswrapper[4881]: I0121 11:30:17.423959 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w5mmz" event={"ID":"2f7bf98e-335f-406f-8ef8-069f86093c55","Type":"ContainerStarted","Data":"c1eba3ae03b1d6805b90d42d0ec2f798fa4704781a61dbdfa8159f414d7bb80e"} Jan 21 11:30:18 crc kubenswrapper[4881]: I0121 11:30:18.438419 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cgr87" event={"ID":"e28b5533-edc8-47ef-8ba6-23368631d10d","Type":"ContainerStarted","Data":"c222168e828ddf8dc31adf5d20e6251d1aebd2db36a121297ee44763be9bc74e"} Jan 21 11:30:18 crc kubenswrapper[4881]: I0121 11:30:18.441389 4881 generic.go:334] "Generic (PLEG): container finished" podID="2f7bf98e-335f-406f-8ef8-069f86093c55" containerID="c1eba3ae03b1d6805b90d42d0ec2f798fa4704781a61dbdfa8159f414d7bb80e" exitCode=0 Jan 21 11:30:18 crc kubenswrapper[4881]: I0121 11:30:18.441433 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w5mmz" event={"ID":"2f7bf98e-335f-406f-8ef8-069f86093c55","Type":"ContainerDied","Data":"c1eba3ae03b1d6805b90d42d0ec2f798fa4704781a61dbdfa8159f414d7bb80e"} Jan 21 11:30:22 crc kubenswrapper[4881]: I0121 11:30:22.477326 4881 generic.go:334] "Generic (PLEG): container finished" podID="e28b5533-edc8-47ef-8ba6-23368631d10d" containerID="c222168e828ddf8dc31adf5d20e6251d1aebd2db36a121297ee44763be9bc74e" 
exitCode=0 Jan 21 11:30:22 crc kubenswrapper[4881]: I0121 11:30:22.477403 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cgr87" event={"ID":"e28b5533-edc8-47ef-8ba6-23368631d10d","Type":"ContainerDied","Data":"c222168e828ddf8dc31adf5d20e6251d1aebd2db36a121297ee44763be9bc74e"} Jan 21 11:30:23 crc kubenswrapper[4881]: I0121 11:30:23.490132 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cgr87" event={"ID":"e28b5533-edc8-47ef-8ba6-23368631d10d","Type":"ContainerStarted","Data":"5e0abf8ffd3df2b4543f3b78f4df1de894199c4c001e6db2e5a3872e46d7a54b"} Jan 21 11:30:23 crc kubenswrapper[4881]: I0121 11:30:23.493270 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w5mmz" event={"ID":"2f7bf98e-335f-406f-8ef8-069f86093c55","Type":"ContainerStarted","Data":"0ab0a82d406b0a4031e5637f72af69a714ded06513932b035aeb5ac564f21b6b"} Jan 21 11:30:23 crc kubenswrapper[4881]: I0121 11:30:23.513915 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-cgr87" podStartSLOduration=2.065297799 podStartE2EDuration="7.513899556s" podCreationTimestamp="2026-01-21 11:30:16 +0000 UTC" firstStartedPulling="2026-01-21 11:30:17.419847487 +0000 UTC m=+2004.679803956" lastFinishedPulling="2026-01-21 11:30:22.868449244 +0000 UTC m=+2010.128405713" observedRunningTime="2026-01-21 11:30:23.510364 +0000 UTC m=+2010.770320469" watchObservedRunningTime="2026-01-21 11:30:23.513899556 +0000 UTC m=+2010.773856025" Jan 21 11:30:23 crc kubenswrapper[4881]: I0121 11:30:23.541570 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-w5mmz" podStartSLOduration=6.379220021 podStartE2EDuration="9.54154931s" podCreationTimestamp="2026-01-21 11:30:14 +0000 UTC" firstStartedPulling="2026-01-21 11:30:16.399591625 +0000 UTC m=+2003.659548094" lastFinishedPulling="2026-01-21 11:30:19.561920914 +0000 UTC m=+2006.821877383" observedRunningTime="2026-01-21 11:30:23.53458797 +0000 UTC m=+2010.794544439" watchObservedRunningTime="2026-01-21 11:30:23.54154931 +0000 UTC m=+2010.801505779" Jan 21 11:30:24 crc kubenswrapper[4881]: I0121 11:30:24.660428 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-w5mmz" Jan 21 11:30:24 crc kubenswrapper[4881]: I0121 11:30:24.660767 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-w5mmz" Jan 21 11:30:25 crc kubenswrapper[4881]: I0121 11:30:25.702530 4881 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-w5mmz" podUID="2f7bf98e-335f-406f-8ef8-069f86093c55" containerName="registry-server" probeResult="failure" output=< Jan 21 11:30:25 crc kubenswrapper[4881]: timeout: failed to connect service ":50051" within 1s Jan 21 11:30:25 crc kubenswrapper[4881]: > Jan 21 11:30:26 crc kubenswrapper[4881]: I0121 11:30:26.719073 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-cgr87" Jan 21 11:30:26 crc kubenswrapper[4881]: I0121 11:30:26.719121 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-cgr87" Jan 21 11:30:27 crc kubenswrapper[4881]: I0121 11:30:27.778178 4881 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-marketplace/redhat-operators-cgr87" podUID="e28b5533-edc8-47ef-8ba6-23368631d10d" containerName="registry-server" probeResult="failure" output=< Jan 21 11:30:27 crc kubenswrapper[4881]: timeout: failed to connect service ":50051" within 1s Jan 21 11:30:27 crc kubenswrapper[4881]: > Jan 21 11:30:29 crc kubenswrapper[4881]: I0121 11:30:29.550590 4881 scope.go:117] "RemoveContainer" containerID="498906e9fbb3b564603759f2238f54ad3d7c8a3ccff8535f1f6031fd2e192fd4" Jan 21 11:30:29 crc kubenswrapper[4881]: I0121 11:30:29.599753 4881 scope.go:117] "RemoveContainer" containerID="7b3d565271b021e09dee5880082bea3cf44364df7d0a06382823cae7b26b1046" Jan 21 11:30:29 crc kubenswrapper[4881]: I0121 11:30:29.635082 4881 scope.go:117] "RemoveContainer" containerID="2f6a1a1e4268540ee682b58127eb41126b116ba4e30186b584ee325d0961ebec" Jan 21 11:30:29 crc kubenswrapper[4881]: I0121 11:30:29.697513 4881 scope.go:117] "RemoveContainer" containerID="b4ed75bebc3e4f7b35b331a2f216bede613a9086f548aa45e96cbef5724a690a" Jan 21 11:30:29 crc kubenswrapper[4881]: I0121 11:30:29.750424 4881 scope.go:117] "RemoveContainer" containerID="a807273d95c9864f3ecabade018dc0a91eb28a83bcfcbef9786d9473502a12a5" Jan 21 11:30:34 crc kubenswrapper[4881]: I0121 11:30:34.720439 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-w5mmz" Jan 21 11:30:34 crc kubenswrapper[4881]: I0121 11:30:34.775642 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-w5mmz" Jan 21 11:30:34 crc kubenswrapper[4881]: I0121 11:30:34.961836 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-w5mmz"] Jan 21 11:30:35 crc kubenswrapper[4881]: I0121 11:30:35.993015 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-w5mmz" podUID="2f7bf98e-335f-406f-8ef8-069f86093c55" containerName="registry-server" containerID="cri-o://0ab0a82d406b0a4031e5637f72af69a714ded06513932b035aeb5ac564f21b6b" gracePeriod=2 Jan 21 11:30:36 crc kubenswrapper[4881]: I0121 11:30:36.512704 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-cgr87" Jan 21 11:30:36 crc kubenswrapper[4881]: I0121 11:30:36.558696 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-cgr87" Jan 21 11:30:37 crc kubenswrapper[4881]: I0121 11:30:37.362965 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-cgr87"] Jan 21 11:30:38 crc kubenswrapper[4881]: I0121 11:30:38.016527 4881 generic.go:334] "Generic (PLEG): container finished" podID="2f7bf98e-335f-406f-8ef8-069f86093c55" containerID="0ab0a82d406b0a4031e5637f72af69a714ded06513932b035aeb5ac564f21b6b" exitCode=0 Jan 21 11:30:38 crc kubenswrapper[4881]: I0121 11:30:38.016598 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w5mmz" event={"ID":"2f7bf98e-335f-406f-8ef8-069f86093c55","Type":"ContainerDied","Data":"0ab0a82d406b0a4031e5637f72af69a714ded06513932b035aeb5ac564f21b6b"} Jan 21 11:30:38 crc kubenswrapper[4881]: I0121 11:30:38.017104 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-cgr87" podUID="e28b5533-edc8-47ef-8ba6-23368631d10d" containerName="registry-server" 
containerID="cri-o://5e0abf8ffd3df2b4543f3b78f4df1de894199c4c001e6db2e5a3872e46d7a54b" gracePeriod=2 Jan 21 11:30:39 crc kubenswrapper[4881]: I0121 11:30:39.033192 4881 generic.go:334] "Generic (PLEG): container finished" podID="e28b5533-edc8-47ef-8ba6-23368631d10d" containerID="5e0abf8ffd3df2b4543f3b78f4df1de894199c4c001e6db2e5a3872e46d7a54b" exitCode=0 Jan 21 11:30:39 crc kubenswrapper[4881]: I0121 11:30:39.033273 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cgr87" event={"ID":"e28b5533-edc8-47ef-8ba6-23368631d10d","Type":"ContainerDied","Data":"5e0abf8ffd3df2b4543f3b78f4df1de894199c4c001e6db2e5a3872e46d7a54b"} Jan 21 11:30:39 crc kubenswrapper[4881]: I0121 11:30:39.033343 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cgr87" event={"ID":"e28b5533-edc8-47ef-8ba6-23368631d10d","Type":"ContainerDied","Data":"1f7f3ae2471976e97c8ea641c9792ee7bc57f8b6be98d0f78836de61e158f4a0"} Jan 21 11:30:39 crc kubenswrapper[4881]: I0121 11:30:39.033360 4881 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1f7f3ae2471976e97c8ea641c9792ee7bc57f8b6be98d0f78836de61e158f4a0" Jan 21 11:30:39 crc kubenswrapper[4881]: I0121 11:30:39.035812 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w5mmz" event={"ID":"2f7bf98e-335f-406f-8ef8-069f86093c55","Type":"ContainerDied","Data":"f9664760a6abe2fd92cc6c7d5038daf2f3334a151e64a19140c80a7ac40d0bdc"} Jan 21 11:30:39 crc kubenswrapper[4881]: I0121 11:30:39.035868 4881 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f9664760a6abe2fd92cc6c7d5038daf2f3334a151e64a19140c80a7ac40d0bdc" Jan 21 11:30:39 crc kubenswrapper[4881]: I0121 11:30:39.072239 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-w5mmz" Jan 21 11:30:39 crc kubenswrapper[4881]: I0121 11:30:39.085535 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-cgr87" Jan 21 11:30:39 crc kubenswrapper[4881]: I0121 11:30:39.273849 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e28b5533-edc8-47ef-8ba6-23368631d10d-utilities\") pod \"e28b5533-edc8-47ef-8ba6-23368631d10d\" (UID: \"e28b5533-edc8-47ef-8ba6-23368631d10d\") " Jan 21 11:30:39 crc kubenswrapper[4881]: I0121 11:30:39.273991 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b968s\" (UniqueName: \"kubernetes.io/projected/2f7bf98e-335f-406f-8ef8-069f86093c55-kube-api-access-b968s\") pod \"2f7bf98e-335f-406f-8ef8-069f86093c55\" (UID: \"2f7bf98e-335f-406f-8ef8-069f86093c55\") " Jan 21 11:30:39 crc kubenswrapper[4881]: I0121 11:30:39.274728 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e28b5533-edc8-47ef-8ba6-23368631d10d-utilities" (OuterVolumeSpecName: "utilities") pod "e28b5533-edc8-47ef-8ba6-23368631d10d" (UID: "e28b5533-edc8-47ef-8ba6-23368631d10d"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 11:30:39 crc kubenswrapper[4881]: I0121 11:30:39.274812 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-65rfw\" (UniqueName: \"kubernetes.io/projected/e28b5533-edc8-47ef-8ba6-23368631d10d-kube-api-access-65rfw\") pod \"e28b5533-edc8-47ef-8ba6-23368631d10d\" (UID: \"e28b5533-edc8-47ef-8ba6-23368631d10d\") " Jan 21 11:30:39 crc kubenswrapper[4881]: I0121 11:30:39.274866 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e28b5533-edc8-47ef-8ba6-23368631d10d-catalog-content\") pod \"e28b5533-edc8-47ef-8ba6-23368631d10d\" (UID: \"e28b5533-edc8-47ef-8ba6-23368631d10d\") " Jan 21 11:30:39 crc kubenswrapper[4881]: I0121 11:30:39.274958 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2f7bf98e-335f-406f-8ef8-069f86093c55-utilities\") pod \"2f7bf98e-335f-406f-8ef8-069f86093c55\" (UID: \"2f7bf98e-335f-406f-8ef8-069f86093c55\") " Jan 21 11:30:39 crc kubenswrapper[4881]: I0121 11:30:39.275062 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2f7bf98e-335f-406f-8ef8-069f86093c55-catalog-content\") pod \"2f7bf98e-335f-406f-8ef8-069f86093c55\" (UID: \"2f7bf98e-335f-406f-8ef8-069f86093c55\") " Jan 21 11:30:39 crc kubenswrapper[4881]: I0121 11:30:39.275521 4881 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e28b5533-edc8-47ef-8ba6-23368631d10d-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 11:30:39 crc kubenswrapper[4881]: I0121 11:30:39.275705 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2f7bf98e-335f-406f-8ef8-069f86093c55-utilities" (OuterVolumeSpecName: "utilities") pod "2f7bf98e-335f-406f-8ef8-069f86093c55" (UID: "2f7bf98e-335f-406f-8ef8-069f86093c55"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 11:30:39 crc kubenswrapper[4881]: I0121 11:30:39.280339 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2f7bf98e-335f-406f-8ef8-069f86093c55-kube-api-access-b968s" (OuterVolumeSpecName: "kube-api-access-b968s") pod "2f7bf98e-335f-406f-8ef8-069f86093c55" (UID: "2f7bf98e-335f-406f-8ef8-069f86093c55"). InnerVolumeSpecName "kube-api-access-b968s". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:30:39 crc kubenswrapper[4881]: I0121 11:30:39.288100 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e28b5533-edc8-47ef-8ba6-23368631d10d-kube-api-access-65rfw" (OuterVolumeSpecName: "kube-api-access-65rfw") pod "e28b5533-edc8-47ef-8ba6-23368631d10d" (UID: "e28b5533-edc8-47ef-8ba6-23368631d10d"). InnerVolumeSpecName "kube-api-access-65rfw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:30:39 crc kubenswrapper[4881]: I0121 11:30:39.304468 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2f7bf98e-335f-406f-8ef8-069f86093c55-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2f7bf98e-335f-406f-8ef8-069f86093c55" (UID: "2f7bf98e-335f-406f-8ef8-069f86093c55"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 11:30:39 crc kubenswrapper[4881]: I0121 11:30:39.377103 4881 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2f7bf98e-335f-406f-8ef8-069f86093c55-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 11:30:39 crc kubenswrapper[4881]: I0121 11:30:39.377141 4881 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2f7bf98e-335f-406f-8ef8-069f86093c55-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 11:30:39 crc kubenswrapper[4881]: I0121 11:30:39.377151 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b968s\" (UniqueName: \"kubernetes.io/projected/2f7bf98e-335f-406f-8ef8-069f86093c55-kube-api-access-b968s\") on node \"crc\" DevicePath \"\"" Jan 21 11:30:39 crc kubenswrapper[4881]: I0121 11:30:39.377160 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-65rfw\" (UniqueName: \"kubernetes.io/projected/e28b5533-edc8-47ef-8ba6-23368631d10d-kube-api-access-65rfw\") on node \"crc\" DevicePath \"\"" Jan 21 11:30:39 crc kubenswrapper[4881]: I0121 11:30:39.394549 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e28b5533-edc8-47ef-8ba6-23368631d10d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e28b5533-edc8-47ef-8ba6-23368631d10d" (UID: "e28b5533-edc8-47ef-8ba6-23368631d10d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 11:30:39 crc kubenswrapper[4881]: I0121 11:30:39.479696 4881 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e28b5533-edc8-47ef-8ba6-23368631d10d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 11:30:40 crc kubenswrapper[4881]: I0121 11:30:40.045985 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-w5mmz" Jan 21 11:30:40 crc kubenswrapper[4881]: I0121 11:30:40.046022 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-cgr87" Jan 21 11:30:40 crc kubenswrapper[4881]: I0121 11:30:40.085477 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-w5mmz"] Jan 21 11:30:40 crc kubenswrapper[4881]: I0121 11:30:40.115810 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-w5mmz"] Jan 21 11:30:40 crc kubenswrapper[4881]: I0121 11:30:40.124627 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-cgr87"] Jan 21 11:30:40 crc kubenswrapper[4881]: I0121 11:30:40.133304 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-cgr87"] Jan 21 11:30:41 crc kubenswrapper[4881]: I0121 11:30:41.328266 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2f7bf98e-335f-406f-8ef8-069f86093c55" path="/var/lib/kubelet/pods/2f7bf98e-335f-406f-8ef8-069f86093c55/volumes" Jan 21 11:30:41 crc kubenswrapper[4881]: I0121 11:30:41.329245 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e28b5533-edc8-47ef-8ba6-23368631d10d" path="/var/lib/kubelet/pods/e28b5533-edc8-47ef-8ba6-23368631d10d/volumes" Jan 21 11:30:46 crc kubenswrapper[4881]: I0121 11:30:46.047176 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-kc9jz"] Jan 21 11:30:46 crc kubenswrapper[4881]: I0121 11:30:46.058972 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-mzhtm"] Jan 21 11:30:46 crc kubenswrapper[4881]: I0121 11:30:46.069275 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-mzhtm"] Jan 21 11:30:46 crc kubenswrapper[4881]: I0121 11:30:46.079484 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-kc9jz"] Jan 21 11:30:47 crc kubenswrapper[4881]: I0121 11:30:47.687291 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="33f9442b-24ee-47d4-b914-19d32a5cad74" path="/var/lib/kubelet/pods/33f9442b-24ee-47d4-b914-19d32a5cad74/volumes" Jan 21 11:30:47 crc kubenswrapper[4881]: I0121 11:30:47.689887 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f568ffda-82a9-4f47-89d3-13b89a35c9b4" path="/var/lib/kubelet/pods/f568ffda-82a9-4f47-89d3-13b89a35c9b4/volumes" Jan 21 11:30:50 crc kubenswrapper[4881]: I0121 11:30:50.029211 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-t6mz2"] Jan 21 11:30:50 crc kubenswrapper[4881]: I0121 11:30:50.037505 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-t6mz2"] Jan 21 11:30:51 crc kubenswrapper[4881]: I0121 11:30:51.325908 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="869a596b-159c-4185-a4ab-0e36c5d130fc" path="/var/lib/kubelet/pods/869a596b-159c-4185-a4ab-0e36c5d130fc/volumes" Jan 21 11:31:00 crc kubenswrapper[4881]: I0121 11:31:00.043944 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-slhtz"] Jan 21 11:31:00 crc kubenswrapper[4881]: I0121 11:31:00.054007 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-slhtz"] Jan 21 11:31:01 crc kubenswrapper[4881]: I0121 11:31:01.321209 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bf52889-d5f3-44f8-b657-8ff3790962d1" path="/var/lib/kubelet/pods/4bf52889-d5f3-44f8-b657-8ff3790962d1/volumes" Jan 21 11:31:07 crc 
kubenswrapper[4881]: I0121 11:31:07.054040 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-4wxvl"] Jan 21 11:31:07 crc kubenswrapper[4881]: I0121 11:31:07.090190 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-4wxvl"] Jan 21 11:31:07 crc kubenswrapper[4881]: I0121 11:31:07.322194 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="65250dcf-0f0f-4fa6-8d57-e07d3d29f290" path="/var/lib/kubelet/pods/65250dcf-0f0f-4fa6-8d57-e07d3d29f290/volumes" Jan 21 11:31:24 crc kubenswrapper[4881]: I0121 11:31:24.042396 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-mxb97"] Jan 21 11:31:24 crc kubenswrapper[4881]: I0121 11:31:24.053664 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-mxb97"] Jan 21 11:31:25 crc kubenswrapper[4881]: I0121 11:31:25.325515 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="349e8898-8b7c-414a-8357-d431c8b81bf4" path="/var/lib/kubelet/pods/349e8898-8b7c-414a-8357-d431c8b81bf4/volumes" Jan 21 11:31:29 crc kubenswrapper[4881]: I0121 11:31:29.841419 4881 scope.go:117] "RemoveContainer" containerID="6641f95a17dea3fe9aff6d4faf3bd17425257c19253868f2b83b7d7d759a48fd" Jan 21 11:31:29 crc kubenswrapper[4881]: I0121 11:31:29.910680 4881 scope.go:117] "RemoveContainer" containerID="c648692c811ad6f54f474e55240cf83d10bccce020989330faa953f52c62836c" Jan 21 11:31:30 crc kubenswrapper[4881]: I0121 11:31:30.002983 4881 scope.go:117] "RemoveContainer" containerID="60c7ee63bf67b35a7137c545eb5e36b0ba7f24fe96f583c9314a3bcf2ea933c6" Jan 21 11:31:30 crc kubenswrapper[4881]: I0121 11:31:30.047668 4881 scope.go:117] "RemoveContainer" containerID="3a796b1b54b7432132400a5a214afb4cf61aaada5f5054cc747d5e74194d9dae" Jan 21 11:31:30 crc kubenswrapper[4881]: I0121 11:31:30.106041 4881 scope.go:117] "RemoveContainer" containerID="b750c2c4c79eaa65d01394c5ce39a3b9970863a1b04d7248173d08889a7ae0be" Jan 21 11:31:30 crc kubenswrapper[4881]: I0121 11:31:30.152221 4881 scope.go:117] "RemoveContainer" containerID="e31e701604fd33a6bb82c0b6900e3f3bdeaa0b71abb7488fd4edd2c71ed37a56" Jan 21 11:31:59 crc kubenswrapper[4881]: I0121 11:31:59.062111 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-jdk2x"] Jan 21 11:31:59 crc kubenswrapper[4881]: I0121 11:31:59.071655 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-b85xv"] Jan 21 11:31:59 crc kubenswrapper[4881]: I0121 11:31:59.080233 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-b4dc-account-create-update-46bk2"] Jan 21 11:31:59 crc kubenswrapper[4881]: I0121 11:31:59.090179 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-jdk2x"] Jan 21 11:31:59 crc kubenswrapper[4881]: I0121 11:31:59.100586 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-b4dc-account-create-update-46bk2"] Jan 21 11:31:59 crc kubenswrapper[4881]: I0121 11:31:59.109024 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-b85xv"] Jan 21 11:31:59 crc kubenswrapper[4881]: I0121 11:31:59.327618 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2a601b0e-b326-4e55-901e-08a32fe24005" path="/var/lib/kubelet/pods/2a601b0e-b326-4e55-901e-08a32fe24005/volumes" Jan 21 11:31:59 crc kubenswrapper[4881]: I0121 11:31:59.328808 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="4d8a04fd-1a86-454f-bd69-64ad270b8357" path="/var/lib/kubelet/pods/4d8a04fd-1a86-454f-bd69-64ad270b8357/volumes" Jan 21 11:31:59 crc kubenswrapper[4881]: I0121 11:31:59.330097 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="502efce3-0d16-491d-b6fa-1b1d98f76d4b" path="/var/lib/kubelet/pods/502efce3-0d16-491d-b6fa-1b1d98f76d4b/volumes" Jan 21 11:31:59 crc kubenswrapper[4881]: I0121 11:31:59.851306 4881 patch_prober.go:28] interesting pod/machine-config-daemon-fb4fr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 11:31:59 crc kubenswrapper[4881]: I0121 11:31:59.851684 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 11:32:00 crc kubenswrapper[4881]: I0121 11:32:00.032529 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-fb46-account-create-update-xxwmq"] Jan 21 11:32:00 crc kubenswrapper[4881]: I0121 11:32:00.042126 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-f99bl"] Jan 21 11:32:00 crc kubenswrapper[4881]: I0121 11:32:00.052226 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-5627-account-create-update-mbnwf"] Jan 21 11:32:00 crc kubenswrapper[4881]: I0121 11:32:00.066218 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-5627-account-create-update-mbnwf"] Jan 21 11:32:00 crc kubenswrapper[4881]: I0121 11:32:00.082237 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-fb46-account-create-update-xxwmq"] Jan 21 11:32:00 crc kubenswrapper[4881]: I0121 11:32:00.104018 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-f99bl"] Jan 21 11:32:01 crc kubenswrapper[4881]: I0121 11:32:01.325395 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="29487dae-24e9-4d5b-9819-99516df78630" path="/var/lib/kubelet/pods/29487dae-24e9-4d5b-9819-99516df78630/volumes" Jan 21 11:32:01 crc kubenswrapper[4881]: I0121 11:32:01.327398 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="de50b4a3-643f-4e4a-9853-b794eae5c08c" path="/var/lib/kubelet/pods/de50b4a3-643f-4e4a-9853-b794eae5c08c/volumes" Jan 21 11:32:01 crc kubenswrapper[4881]: I0121 11:32:01.329113 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f2c35a47-0e6e-4760-9026-617ca187b066" path="/var/lib/kubelet/pods/f2c35a47-0e6e-4760-9026-617ca187b066/volumes" Jan 21 11:32:29 crc kubenswrapper[4881]: I0121 11:32:29.850912 4881 patch_prober.go:28] interesting pod/machine-config-daemon-fb4fr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 11:32:29 crc kubenswrapper[4881]: I0121 11:32:29.851958 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" containerName="machine-config-daemon" probeResult="failure" 
output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 11:32:30 crc kubenswrapper[4881]: I0121 11:32:30.316398 4881 scope.go:117] "RemoveContainer" containerID="22038197b765a72901f7e4d04d0bebb17e8d3bca09464adc6dc75e99375c24ab" Jan 21 11:32:30 crc kubenswrapper[4881]: I0121 11:32:30.354545 4881 scope.go:117] "RemoveContainer" containerID="e072378bb8b79adf91d2701f6ed4a0743a1956ccf92868309d50c74d1a40ff46" Jan 21 11:32:30 crc kubenswrapper[4881]: I0121 11:32:30.449429 4881 scope.go:117] "RemoveContainer" containerID="5d3f34869256c4d21e6b17d94ceaa6baf87aefe4c608982c7e1561bfc3b81de2" Jan 21 11:32:30 crc kubenswrapper[4881]: I0121 11:32:30.494561 4881 scope.go:117] "RemoveContainer" containerID="dccd9ebbabd2787629df88e189e045b4233f9efdaa17a33f088ad8c951d3530a" Jan 21 11:32:30 crc kubenswrapper[4881]: I0121 11:32:30.540836 4881 scope.go:117] "RemoveContainer" containerID="27659f5aab69bf4af66ab9aeb1d61a07fd49c77e8daa35d08cb33096b28e9074" Jan 21 11:32:30 crc kubenswrapper[4881]: I0121 11:32:30.618175 4881 scope.go:117] "RemoveContainer" containerID="3e8735972d4959fbfdcc07dada19674d2a9110125d71fdfe160979bcc5be0481" Jan 21 11:32:32 crc kubenswrapper[4881]: I0121 11:32:32.272842 4881 generic.go:334] "Generic (PLEG): container finished" podID="01f76bc7-59dc-4fd0-8ca8-90ce72cb6f45" containerID="d7065389e2ebfdcbfd63692c15d886f13375179640678ddba4e24b11c5c250dd" exitCode=0 Jan 21 11:32:32 crc kubenswrapper[4881]: I0121 11:32:32.272928 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-2gnxt" event={"ID":"01f76bc7-59dc-4fd0-8ca8-90ce72cb6f45","Type":"ContainerDied","Data":"d7065389e2ebfdcbfd63692c15d886f13375179640678ddba4e24b11c5c250dd"} Jan 21 11:32:33 crc kubenswrapper[4881]: I0121 11:32:33.805341 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-2gnxt" Jan 21 11:32:33 crc kubenswrapper[4881]: I0121 11:32:33.863260 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-79mkd\" (UniqueName: \"kubernetes.io/projected/01f76bc7-59dc-4fd0-8ca8-90ce72cb6f45-kube-api-access-79mkd\") pod \"01f76bc7-59dc-4fd0-8ca8-90ce72cb6f45\" (UID: \"01f76bc7-59dc-4fd0-8ca8-90ce72cb6f45\") " Jan 21 11:32:33 crc kubenswrapper[4881]: I0121 11:32:33.863658 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/01f76bc7-59dc-4fd0-8ca8-90ce72cb6f45-ssh-key-openstack-edpm-ipam\") pod \"01f76bc7-59dc-4fd0-8ca8-90ce72cb6f45\" (UID: \"01f76bc7-59dc-4fd0-8ca8-90ce72cb6f45\") " Jan 21 11:32:33 crc kubenswrapper[4881]: I0121 11:32:33.863829 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/01f76bc7-59dc-4fd0-8ca8-90ce72cb6f45-inventory\") pod \"01f76bc7-59dc-4fd0-8ca8-90ce72cb6f45\" (UID: \"01f76bc7-59dc-4fd0-8ca8-90ce72cb6f45\") " Jan 21 11:32:33 crc kubenswrapper[4881]: I0121 11:32:33.870873 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01f76bc7-59dc-4fd0-8ca8-90ce72cb6f45-kube-api-access-79mkd" (OuterVolumeSpecName: "kube-api-access-79mkd") pod "01f76bc7-59dc-4fd0-8ca8-90ce72cb6f45" (UID: "01f76bc7-59dc-4fd0-8ca8-90ce72cb6f45"). InnerVolumeSpecName "kube-api-access-79mkd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:32:33 crc kubenswrapper[4881]: I0121 11:32:33.896050 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01f76bc7-59dc-4fd0-8ca8-90ce72cb6f45-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "01f76bc7-59dc-4fd0-8ca8-90ce72cb6f45" (UID: "01f76bc7-59dc-4fd0-8ca8-90ce72cb6f45"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:32:33 crc kubenswrapper[4881]: I0121 11:32:33.912250 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01f76bc7-59dc-4fd0-8ca8-90ce72cb6f45-inventory" (OuterVolumeSpecName: "inventory") pod "01f76bc7-59dc-4fd0-8ca8-90ce72cb6f45" (UID: "01f76bc7-59dc-4fd0-8ca8-90ce72cb6f45"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:32:33 crc kubenswrapper[4881]: I0121 11:32:33.966539 4881 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/01f76bc7-59dc-4fd0-8ca8-90ce72cb6f45-inventory\") on node \"crc\" DevicePath \"\"" Jan 21 11:32:33 crc kubenswrapper[4881]: I0121 11:32:33.966591 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-79mkd\" (UniqueName: \"kubernetes.io/projected/01f76bc7-59dc-4fd0-8ca8-90ce72cb6f45-kube-api-access-79mkd\") on node \"crc\" DevicePath \"\"" Jan 21 11:32:33 crc kubenswrapper[4881]: I0121 11:32:33.966607 4881 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/01f76bc7-59dc-4fd0-8ca8-90ce72cb6f45-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 21 11:32:34 crc kubenswrapper[4881]: I0121 11:32:34.291867 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-2gnxt" event={"ID":"01f76bc7-59dc-4fd0-8ca8-90ce72cb6f45","Type":"ContainerDied","Data":"9f31968a0bdbdf01d41bad45f1b1b5ed4fb58b40ac6fee51815e11ca82a16e46"} Jan 21 11:32:34 crc kubenswrapper[4881]: I0121 11:32:34.291913 4881 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9f31968a0bdbdf01d41bad45f1b1b5ed4fb58b40ac6fee51815e11ca82a16e46" Jan 21 11:32:34 crc kubenswrapper[4881]: I0121 11:32:34.291915 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-2gnxt" Jan 21 11:32:34 crc kubenswrapper[4881]: I0121 11:32:34.403570 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-4wdq6"] Jan 21 11:32:34 crc kubenswrapper[4881]: E0121 11:32:34.404216 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e28b5533-edc8-47ef-8ba6-23368631d10d" containerName="registry-server" Jan 21 11:32:34 crc kubenswrapper[4881]: I0121 11:32:34.404237 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="e28b5533-edc8-47ef-8ba6-23368631d10d" containerName="registry-server" Jan 21 11:32:34 crc kubenswrapper[4881]: E0121 11:32:34.404254 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01f76bc7-59dc-4fd0-8ca8-90ce72cb6f45" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Jan 21 11:32:34 crc kubenswrapper[4881]: I0121 11:32:34.404281 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="01f76bc7-59dc-4fd0-8ca8-90ce72cb6f45" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Jan 21 11:32:34 crc kubenswrapper[4881]: E0121 11:32:34.404295 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f7bf98e-335f-406f-8ef8-069f86093c55" containerName="extract-utilities" Jan 21 11:32:34 crc kubenswrapper[4881]: I0121 11:32:34.404301 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f7bf98e-335f-406f-8ef8-069f86093c55" containerName="extract-utilities" Jan 21 11:32:34 crc kubenswrapper[4881]: E0121 11:32:34.404320 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f7bf98e-335f-406f-8ef8-069f86093c55" containerName="registry-server" Jan 21 11:32:34 crc kubenswrapper[4881]: I0121 11:32:34.404326 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f7bf98e-335f-406f-8ef8-069f86093c55" containerName="registry-server" Jan 21 11:32:34 crc kubenswrapper[4881]: E0121 11:32:34.404365 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e28b5533-edc8-47ef-8ba6-23368631d10d" containerName="extract-utilities" Jan 21 11:32:34 crc kubenswrapper[4881]: I0121 11:32:34.404372 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="e28b5533-edc8-47ef-8ba6-23368631d10d" containerName="extract-utilities" Jan 21 11:32:34 crc kubenswrapper[4881]: E0121 11:32:34.404383 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e28b5533-edc8-47ef-8ba6-23368631d10d" containerName="extract-content" Jan 21 11:32:34 crc kubenswrapper[4881]: I0121 11:32:34.404390 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="e28b5533-edc8-47ef-8ba6-23368631d10d" containerName="extract-content" Jan 21 11:32:34 crc kubenswrapper[4881]: E0121 11:32:34.404411 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f7bf98e-335f-406f-8ef8-069f86093c55" containerName="extract-content" Jan 21 11:32:34 crc kubenswrapper[4881]: I0121 11:32:34.404434 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f7bf98e-335f-406f-8ef8-069f86093c55" containerName="extract-content" Jan 21 11:32:34 crc kubenswrapper[4881]: I0121 11:32:34.404675 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f7bf98e-335f-406f-8ef8-069f86093c55" containerName="registry-server" Jan 21 11:32:34 crc kubenswrapper[4881]: I0121 11:32:34.404700 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="e28b5533-edc8-47ef-8ba6-23368631d10d" containerName="registry-server" 
Jan 21 11:32:34 crc kubenswrapper[4881]: I0121 11:32:34.404717 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="01f76bc7-59dc-4fd0-8ca8-90ce72cb6f45" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Jan 21 11:32:34 crc kubenswrapper[4881]: I0121 11:32:34.406226 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-4wdq6" Jan 21 11:32:34 crc kubenswrapper[4881]: I0121 11:32:34.411400 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-fd7zg" Jan 21 11:32:34 crc kubenswrapper[4881]: I0121 11:32:34.411686 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 21 11:32:34 crc kubenswrapper[4881]: I0121 11:32:34.412087 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 21 11:32:34 crc kubenswrapper[4881]: I0121 11:32:34.412270 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 21 11:32:34 crc kubenswrapper[4881]: I0121 11:32:34.413762 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-4wdq6"] Jan 21 11:32:34 crc kubenswrapper[4881]: I0121 11:32:34.482415 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/24a093f9-cd67-48f9-a18b-48d1a79a8aa0-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-4wdq6\" (UID: \"24a093f9-cd67-48f9-a18b-48d1a79a8aa0\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-4wdq6" Jan 21 11:32:34 crc kubenswrapper[4881]: I0121 11:32:34.482815 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wwwm9\" (UniqueName: \"kubernetes.io/projected/24a093f9-cd67-48f9-a18b-48d1a79a8aa0-kube-api-access-wwwm9\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-4wdq6\" (UID: \"24a093f9-cd67-48f9-a18b-48d1a79a8aa0\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-4wdq6" Jan 21 11:32:34 crc kubenswrapper[4881]: I0121 11:32:34.482944 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/24a093f9-cd67-48f9-a18b-48d1a79a8aa0-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-4wdq6\" (UID: \"24a093f9-cd67-48f9-a18b-48d1a79a8aa0\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-4wdq6" Jan 21 11:32:34 crc kubenswrapper[4881]: I0121 11:32:34.586411 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/24a093f9-cd67-48f9-a18b-48d1a79a8aa0-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-4wdq6\" (UID: \"24a093f9-cd67-48f9-a18b-48d1a79a8aa0\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-4wdq6" Jan 21 11:32:34 crc kubenswrapper[4881]: I0121 11:32:34.586461 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wwwm9\" (UniqueName: \"kubernetes.io/projected/24a093f9-cd67-48f9-a18b-48d1a79a8aa0-kube-api-access-wwwm9\") pod 
\"configure-network-edpm-deployment-openstack-edpm-ipam-4wdq6\" (UID: \"24a093f9-cd67-48f9-a18b-48d1a79a8aa0\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-4wdq6" Jan 21 11:32:34 crc kubenswrapper[4881]: I0121 11:32:34.586494 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/24a093f9-cd67-48f9-a18b-48d1a79a8aa0-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-4wdq6\" (UID: \"24a093f9-cd67-48f9-a18b-48d1a79a8aa0\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-4wdq6" Jan 21 11:32:34 crc kubenswrapper[4881]: I0121 11:32:34.591177 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/24a093f9-cd67-48f9-a18b-48d1a79a8aa0-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-4wdq6\" (UID: \"24a093f9-cd67-48f9-a18b-48d1a79a8aa0\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-4wdq6" Jan 21 11:32:34 crc kubenswrapper[4881]: I0121 11:32:34.596892 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/24a093f9-cd67-48f9-a18b-48d1a79a8aa0-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-4wdq6\" (UID: \"24a093f9-cd67-48f9-a18b-48d1a79a8aa0\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-4wdq6" Jan 21 11:32:34 crc kubenswrapper[4881]: I0121 11:32:34.603744 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wwwm9\" (UniqueName: \"kubernetes.io/projected/24a093f9-cd67-48f9-a18b-48d1a79a8aa0-kube-api-access-wwwm9\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-4wdq6\" (UID: \"24a093f9-cd67-48f9-a18b-48d1a79a8aa0\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-4wdq6" Jan 21 11:32:34 crc kubenswrapper[4881]: I0121 11:32:34.727516 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-4wdq6" Jan 21 11:32:35 crc kubenswrapper[4881]: I0121 11:32:35.298958 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-4wdq6"] Jan 21 11:32:35 crc kubenswrapper[4881]: I0121 11:32:35.305811 4881 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 11:32:36 crc kubenswrapper[4881]: I0121 11:32:36.316355 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-4wdq6" event={"ID":"24a093f9-cd67-48f9-a18b-48d1a79a8aa0","Type":"ContainerStarted","Data":"ed91e50a3880cb037a332efeeea663c905f6d34b8520e7608505f8f61898c93d"} Jan 21 11:32:36 crc kubenswrapper[4881]: I0121 11:32:36.316698 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-4wdq6" event={"ID":"24a093f9-cd67-48f9-a18b-48d1a79a8aa0","Type":"ContainerStarted","Data":"7a8fa3b39b588ac3bed4bee992d7ff3c312e5258aac1318986c1e1881a279a1c"} Jan 21 11:32:36 crc kubenswrapper[4881]: I0121 11:32:36.347980 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-4wdq6" podStartSLOduration=1.850976952 podStartE2EDuration="2.3479595s" podCreationTimestamp="2026-01-21 11:32:34 +0000 UTC" firstStartedPulling="2026-01-21 11:32:35.304576503 +0000 UTC m=+2142.564532982" lastFinishedPulling="2026-01-21 11:32:35.801559061 +0000 UTC m=+2143.061515530" observedRunningTime="2026-01-21 11:32:36.342026894 +0000 UTC m=+2143.601983383" watchObservedRunningTime="2026-01-21 11:32:36.3479595 +0000 UTC m=+2143.607915969" Jan 21 11:32:42 crc kubenswrapper[4881]: I0121 11:32:42.046766 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-f7mmp"] Jan 21 11:32:42 crc kubenswrapper[4881]: I0121 11:32:42.055956 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-f7mmp"] Jan 21 11:32:43 crc kubenswrapper[4881]: I0121 11:32:43.339689 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="16c22e38-1b3d-44b8-9519-0769200d708b" path="/var/lib/kubelet/pods/16c22e38-1b3d-44b8-9519-0769200d708b/volumes" Jan 21 11:32:59 crc kubenswrapper[4881]: I0121 11:32:59.851770 4881 patch_prober.go:28] interesting pod/machine-config-daemon-fb4fr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 11:32:59 crc kubenswrapper[4881]: I0121 11:32:59.852566 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 11:32:59 crc kubenswrapper[4881]: I0121 11:32:59.852639 4881 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" Jan 21 11:32:59 crc kubenswrapper[4881]: I0121 11:32:59.854092 4881 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"ef39ee7cfe761ce9a9728441eb10e70a161b503ea812b7dfbf273e44506d3274"} pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 11:32:59 crc kubenswrapper[4881]: I0121 11:32:59.854206 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" containerName="machine-config-daemon" containerID="cri-o://ef39ee7cfe761ce9a9728441eb10e70a161b503ea812b7dfbf273e44506d3274" gracePeriod=600 Jan 21 11:33:00 crc kubenswrapper[4881]: I0121 11:33:00.626930 4881 generic.go:334] "Generic (PLEG): container finished" podID="3687b313-1df2-4274-80db-8c758b51bf2d" containerID="ef39ee7cfe761ce9a9728441eb10e70a161b503ea812b7dfbf273e44506d3274" exitCode=0 Jan 21 11:33:00 crc kubenswrapper[4881]: I0121 11:33:00.627452 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" event={"ID":"3687b313-1df2-4274-80db-8c758b51bf2d","Type":"ContainerDied","Data":"ef39ee7cfe761ce9a9728441eb10e70a161b503ea812b7dfbf273e44506d3274"} Jan 21 11:33:00 crc kubenswrapper[4881]: I0121 11:33:00.627526 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" event={"ID":"3687b313-1df2-4274-80db-8c758b51bf2d","Type":"ContainerStarted","Data":"ec38e0182d2afb5352817a93e9019bceb63eb1c3df53485164249f8ee9b9d46f"} Jan 21 11:33:00 crc kubenswrapper[4881]: I0121 11:33:00.627550 4881 scope.go:117] "RemoveContainer" containerID="8d01a71cc3cebcfb692e7385a1f123f20c56f33df75b2fdeed7ba4c65dcb43ca" Jan 21 11:33:11 crc kubenswrapper[4881]: I0121 11:33:11.051924 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-qgqh7"] Jan 21 11:33:11 crc kubenswrapper[4881]: I0121 11:33:11.067722 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-qgqh7"] Jan 21 11:33:11 crc kubenswrapper[4881]: I0121 11:33:11.329637 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9ba4ee35-5e5b-4f3c-ab64-e1dbd6b494ad" path="/var/lib/kubelet/pods/9ba4ee35-5e5b-4f3c-ab64-e1dbd6b494ad/volumes" Jan 21 11:33:13 crc kubenswrapper[4881]: I0121 11:33:13.031229 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-sf7xj"] Jan 21 11:33:13 crc kubenswrapper[4881]: I0121 11:33:13.047871 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-sf7xj"] Jan 21 11:33:13 crc kubenswrapper[4881]: I0121 11:33:13.324429 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="813d73da-18da-40fa-b949-bbeec6604ac9" path="/var/lib/kubelet/pods/813d73da-18da-40fa-b949-bbeec6604ac9/volumes" Jan 21 11:33:30 crc kubenswrapper[4881]: I0121 11:33:30.791676 4881 scope.go:117] "RemoveContainer" containerID="45d2c9cf95b1e6ab35e425681a61a8e4775263f35ab1c8463912de139e00b535" Jan 21 11:33:30 crc kubenswrapper[4881]: I0121 11:33:30.877508 4881 scope.go:117] "RemoveContainer" containerID="0055b21217090cd15d9d0b17356b22b40f32a70cf1a35f1e9043b6cc9a7f1186" Jan 21 11:33:30 crc kubenswrapper[4881]: I0121 11:33:30.945247 4881 scope.go:117] "RemoveContainer" containerID="02004fbf2f26b53236286799b468ab78450f8557fc37a01d6e78bf2e7876befc" Jan 21 11:33:39 crc kubenswrapper[4881]: I0121 11:33:39.105333 4881 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-7jd4s"] Jan 21 11:33:39 crc kubenswrapper[4881]: I0121 11:33:39.109221 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7jd4s" Jan 21 11:33:39 crc kubenswrapper[4881]: I0121 11:33:39.116963 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7jd4s"] Jan 21 11:33:39 crc kubenswrapper[4881]: I0121 11:33:39.269582 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/067c1d92-f45d-4b2d-978c-7db14c5db12c-utilities\") pod \"community-operators-7jd4s\" (UID: \"067c1d92-f45d-4b2d-978c-7db14c5db12c\") " pod="openshift-marketplace/community-operators-7jd4s" Jan 21 11:33:39 crc kubenswrapper[4881]: I0121 11:33:39.269679 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/067c1d92-f45d-4b2d-978c-7db14c5db12c-catalog-content\") pod \"community-operators-7jd4s\" (UID: \"067c1d92-f45d-4b2d-978c-7db14c5db12c\") " pod="openshift-marketplace/community-operators-7jd4s" Jan 21 11:33:39 crc kubenswrapper[4881]: I0121 11:33:39.270399 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xhnkc\" (UniqueName: \"kubernetes.io/projected/067c1d92-f45d-4b2d-978c-7db14c5db12c-kube-api-access-xhnkc\") pod \"community-operators-7jd4s\" (UID: \"067c1d92-f45d-4b2d-978c-7db14c5db12c\") " pod="openshift-marketplace/community-operators-7jd4s" Jan 21 11:33:39 crc kubenswrapper[4881]: I0121 11:33:39.583069 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/067c1d92-f45d-4b2d-978c-7db14c5db12c-utilities\") pod \"community-operators-7jd4s\" (UID: \"067c1d92-f45d-4b2d-978c-7db14c5db12c\") " pod="openshift-marketplace/community-operators-7jd4s" Jan 21 11:33:39 crc kubenswrapper[4881]: I0121 11:33:39.583197 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/067c1d92-f45d-4b2d-978c-7db14c5db12c-catalog-content\") pod \"community-operators-7jd4s\" (UID: \"067c1d92-f45d-4b2d-978c-7db14c5db12c\") " pod="openshift-marketplace/community-operators-7jd4s" Jan 21 11:33:39 crc kubenswrapper[4881]: I0121 11:33:39.583324 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xhnkc\" (UniqueName: \"kubernetes.io/projected/067c1d92-f45d-4b2d-978c-7db14c5db12c-kube-api-access-xhnkc\") pod \"community-operators-7jd4s\" (UID: \"067c1d92-f45d-4b2d-978c-7db14c5db12c\") " pod="openshift-marketplace/community-operators-7jd4s" Jan 21 11:33:39 crc kubenswrapper[4881]: I0121 11:33:39.587735 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/067c1d92-f45d-4b2d-978c-7db14c5db12c-utilities\") pod \"community-operators-7jd4s\" (UID: \"067c1d92-f45d-4b2d-978c-7db14c5db12c\") " pod="openshift-marketplace/community-operators-7jd4s" Jan 21 11:33:39 crc kubenswrapper[4881]: I0121 11:33:39.591188 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/067c1d92-f45d-4b2d-978c-7db14c5db12c-catalog-content\") pod 
\"community-operators-7jd4s\" (UID: \"067c1d92-f45d-4b2d-978c-7db14c5db12c\") " pod="openshift-marketplace/community-operators-7jd4s" Jan 21 11:33:39 crc kubenswrapper[4881]: I0121 11:33:39.609675 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xhnkc\" (UniqueName: \"kubernetes.io/projected/067c1d92-f45d-4b2d-978c-7db14c5db12c-kube-api-access-xhnkc\") pod \"community-operators-7jd4s\" (UID: \"067c1d92-f45d-4b2d-978c-7db14c5db12c\") " pod="openshift-marketplace/community-operators-7jd4s" Jan 21 11:33:39 crc kubenswrapper[4881]: I0121 11:33:39.743140 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7jd4s" Jan 21 11:33:40 crc kubenswrapper[4881]: I0121 11:33:40.283820 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7jd4s"] Jan 21 11:33:41 crc kubenswrapper[4881]: I0121 11:33:41.164921 4881 generic.go:334] "Generic (PLEG): container finished" podID="067c1d92-f45d-4b2d-978c-7db14c5db12c" containerID="da1884db75984a22d15c0d5244bbfd183ce4833da864081225239071f7cec101" exitCode=0 Jan 21 11:33:41 crc kubenswrapper[4881]: I0121 11:33:41.165034 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7jd4s" event={"ID":"067c1d92-f45d-4b2d-978c-7db14c5db12c","Type":"ContainerDied","Data":"da1884db75984a22d15c0d5244bbfd183ce4833da864081225239071f7cec101"} Jan 21 11:33:41 crc kubenswrapper[4881]: I0121 11:33:41.165131 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7jd4s" event={"ID":"067c1d92-f45d-4b2d-978c-7db14c5db12c","Type":"ContainerStarted","Data":"25bb6209d91174507b7f8c32f8e2ad4514560130ba6ed8ac62902a3fc7a9a941"} Jan 21 11:33:43 crc kubenswrapper[4881]: I0121 11:33:43.184769 4881 generic.go:334] "Generic (PLEG): container finished" podID="067c1d92-f45d-4b2d-978c-7db14c5db12c" containerID="c10512d9c18e4cb3f71ce9e97dc85557eb1d6bd93eecea4367efb88fd50b12d7" exitCode=0 Jan 21 11:33:43 crc kubenswrapper[4881]: I0121 11:33:43.184874 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7jd4s" event={"ID":"067c1d92-f45d-4b2d-978c-7db14c5db12c","Type":"ContainerDied","Data":"c10512d9c18e4cb3f71ce9e97dc85557eb1d6bd93eecea4367efb88fd50b12d7"} Jan 21 11:33:44 crc kubenswrapper[4881]: I0121 11:33:44.196743 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7jd4s" event={"ID":"067c1d92-f45d-4b2d-978c-7db14c5db12c","Type":"ContainerStarted","Data":"564f6b823a6f1aa09a39d5db433824f256cbc10b60795637c9c287ec0ebbc3a2"} Jan 21 11:33:44 crc kubenswrapper[4881]: I0121 11:33:44.227751 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-7jd4s" podStartSLOduration=2.780929822 podStartE2EDuration="5.227723931s" podCreationTimestamp="2026-01-21 11:33:39 +0000 UTC" firstStartedPulling="2026-01-21 11:33:41.167928747 +0000 UTC m=+2208.427885216" lastFinishedPulling="2026-01-21 11:33:43.614722856 +0000 UTC m=+2210.874679325" observedRunningTime="2026-01-21 11:33:44.217438847 +0000 UTC m=+2211.477395326" watchObservedRunningTime="2026-01-21 11:33:44.227723931 +0000 UTC m=+2211.487680410" Jan 21 11:33:49 crc kubenswrapper[4881]: I0121 11:33:49.743611 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-7jd4s" Jan 21 11:33:49 crc 
kubenswrapper[4881]: I0121 11:33:49.746423 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-7jd4s" Jan 21 11:33:49 crc kubenswrapper[4881]: I0121 11:33:49.792728 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-7jd4s" Jan 21 11:33:50 crc kubenswrapper[4881]: I0121 11:33:50.311370 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-7jd4s" Jan 21 11:33:50 crc kubenswrapper[4881]: I0121 11:33:50.373304 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-7jd4s"] Jan 21 11:33:52 crc kubenswrapper[4881]: I0121 11:33:52.272412 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-7jd4s" podUID="067c1d92-f45d-4b2d-978c-7db14c5db12c" containerName="registry-server" containerID="cri-o://564f6b823a6f1aa09a39d5db433824f256cbc10b60795637c9c287ec0ebbc3a2" gracePeriod=2 Jan 21 11:33:53 crc kubenswrapper[4881]: I0121 11:33:53.101753 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7jd4s" Jan 21 11:33:53 crc kubenswrapper[4881]: I0121 11:33:53.158818 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xhnkc\" (UniqueName: \"kubernetes.io/projected/067c1d92-f45d-4b2d-978c-7db14c5db12c-kube-api-access-xhnkc\") pod \"067c1d92-f45d-4b2d-978c-7db14c5db12c\" (UID: \"067c1d92-f45d-4b2d-978c-7db14c5db12c\") " Jan 21 11:33:53 crc kubenswrapper[4881]: I0121 11:33:53.158945 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/067c1d92-f45d-4b2d-978c-7db14c5db12c-utilities\") pod \"067c1d92-f45d-4b2d-978c-7db14c5db12c\" (UID: \"067c1d92-f45d-4b2d-978c-7db14c5db12c\") " Jan 21 11:33:53 crc kubenswrapper[4881]: I0121 11:33:53.159091 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/067c1d92-f45d-4b2d-978c-7db14c5db12c-catalog-content\") pod \"067c1d92-f45d-4b2d-978c-7db14c5db12c\" (UID: \"067c1d92-f45d-4b2d-978c-7db14c5db12c\") " Jan 21 11:33:53 crc kubenswrapper[4881]: I0121 11:33:53.162882 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/067c1d92-f45d-4b2d-978c-7db14c5db12c-utilities" (OuterVolumeSpecName: "utilities") pod "067c1d92-f45d-4b2d-978c-7db14c5db12c" (UID: "067c1d92-f45d-4b2d-978c-7db14c5db12c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 11:33:53 crc kubenswrapper[4881]: I0121 11:33:53.190209 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/067c1d92-f45d-4b2d-978c-7db14c5db12c-kube-api-access-xhnkc" (OuterVolumeSpecName: "kube-api-access-xhnkc") pod "067c1d92-f45d-4b2d-978c-7db14c5db12c" (UID: "067c1d92-f45d-4b2d-978c-7db14c5db12c"). InnerVolumeSpecName "kube-api-access-xhnkc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:33:53 crc kubenswrapper[4881]: I0121 11:33:53.273944 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xhnkc\" (UniqueName: \"kubernetes.io/projected/067c1d92-f45d-4b2d-978c-7db14c5db12c-kube-api-access-xhnkc\") on node \"crc\" DevicePath \"\"" Jan 21 11:33:53 crc kubenswrapper[4881]: I0121 11:33:53.273999 4881 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/067c1d92-f45d-4b2d-978c-7db14c5db12c-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 11:33:53 crc kubenswrapper[4881]: I0121 11:33:53.282046 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/067c1d92-f45d-4b2d-978c-7db14c5db12c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "067c1d92-f45d-4b2d-978c-7db14c5db12c" (UID: "067c1d92-f45d-4b2d-978c-7db14c5db12c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 11:33:53 crc kubenswrapper[4881]: I0121 11:33:53.324307 4881 generic.go:334] "Generic (PLEG): container finished" podID="067c1d92-f45d-4b2d-978c-7db14c5db12c" containerID="564f6b823a6f1aa09a39d5db433824f256cbc10b60795637c9c287ec0ebbc3a2" exitCode=0 Jan 21 11:33:53 crc kubenswrapper[4881]: I0121 11:33:53.324430 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7jd4s" Jan 21 11:33:53 crc kubenswrapper[4881]: I0121 11:33:53.339241 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7jd4s" event={"ID":"067c1d92-f45d-4b2d-978c-7db14c5db12c","Type":"ContainerDied","Data":"564f6b823a6f1aa09a39d5db433824f256cbc10b60795637c9c287ec0ebbc3a2"} Jan 21 11:33:53 crc kubenswrapper[4881]: I0121 11:33:53.349894 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7jd4s" event={"ID":"067c1d92-f45d-4b2d-978c-7db14c5db12c","Type":"ContainerDied","Data":"25bb6209d91174507b7f8c32f8e2ad4514560130ba6ed8ac62902a3fc7a9a941"} Jan 21 11:33:53 crc kubenswrapper[4881]: I0121 11:33:53.349924 4881 scope.go:117] "RemoveContainer" containerID="564f6b823a6f1aa09a39d5db433824f256cbc10b60795637c9c287ec0ebbc3a2" Jan 21 11:33:53 crc kubenswrapper[4881]: I0121 11:33:53.375878 4881 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/067c1d92-f45d-4b2d-978c-7db14c5db12c-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 11:33:53 crc kubenswrapper[4881]: I0121 11:33:53.407363 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-7jd4s"] Jan 21 11:33:53 crc kubenswrapper[4881]: I0121 11:33:53.432505 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-7jd4s"] Jan 21 11:33:53 crc kubenswrapper[4881]: I0121 11:33:53.444958 4881 scope.go:117] "RemoveContainer" containerID="c10512d9c18e4cb3f71ce9e97dc85557eb1d6bd93eecea4367efb88fd50b12d7" Jan 21 11:33:53 crc kubenswrapper[4881]: I0121 11:33:53.546522 4881 scope.go:117] "RemoveContainer" containerID="da1884db75984a22d15c0d5244bbfd183ce4833da864081225239071f7cec101" Jan 21 11:33:53 crc kubenswrapper[4881]: I0121 11:33:53.567004 4881 scope.go:117] "RemoveContainer" containerID="564f6b823a6f1aa09a39d5db433824f256cbc10b60795637c9c287ec0ebbc3a2" Jan 21 11:33:53 crc kubenswrapper[4881]: E0121 11:33:53.568718 4881 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"564f6b823a6f1aa09a39d5db433824f256cbc10b60795637c9c287ec0ebbc3a2\": container with ID starting with 564f6b823a6f1aa09a39d5db433824f256cbc10b60795637c9c287ec0ebbc3a2 not found: ID does not exist" containerID="564f6b823a6f1aa09a39d5db433824f256cbc10b60795637c9c287ec0ebbc3a2" Jan 21 11:33:53 crc kubenswrapper[4881]: I0121 11:33:53.568773 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"564f6b823a6f1aa09a39d5db433824f256cbc10b60795637c9c287ec0ebbc3a2"} err="failed to get container status \"564f6b823a6f1aa09a39d5db433824f256cbc10b60795637c9c287ec0ebbc3a2\": rpc error: code = NotFound desc = could not find container \"564f6b823a6f1aa09a39d5db433824f256cbc10b60795637c9c287ec0ebbc3a2\": container with ID starting with 564f6b823a6f1aa09a39d5db433824f256cbc10b60795637c9c287ec0ebbc3a2 not found: ID does not exist" Jan 21 11:33:53 crc kubenswrapper[4881]: I0121 11:33:53.568828 4881 scope.go:117] "RemoveContainer" containerID="c10512d9c18e4cb3f71ce9e97dc85557eb1d6bd93eecea4367efb88fd50b12d7" Jan 21 11:33:53 crc kubenswrapper[4881]: E0121 11:33:53.569142 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c10512d9c18e4cb3f71ce9e97dc85557eb1d6bd93eecea4367efb88fd50b12d7\": container with ID starting with c10512d9c18e4cb3f71ce9e97dc85557eb1d6bd93eecea4367efb88fd50b12d7 not found: ID does not exist" containerID="c10512d9c18e4cb3f71ce9e97dc85557eb1d6bd93eecea4367efb88fd50b12d7" Jan 21 11:33:53 crc kubenswrapper[4881]: I0121 11:33:53.569190 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c10512d9c18e4cb3f71ce9e97dc85557eb1d6bd93eecea4367efb88fd50b12d7"} err="failed to get container status \"c10512d9c18e4cb3f71ce9e97dc85557eb1d6bd93eecea4367efb88fd50b12d7\": rpc error: code = NotFound desc = could not find container \"c10512d9c18e4cb3f71ce9e97dc85557eb1d6bd93eecea4367efb88fd50b12d7\": container with ID starting with c10512d9c18e4cb3f71ce9e97dc85557eb1d6bd93eecea4367efb88fd50b12d7 not found: ID does not exist" Jan 21 11:33:53 crc kubenswrapper[4881]: I0121 11:33:53.569211 4881 scope.go:117] "RemoveContainer" containerID="da1884db75984a22d15c0d5244bbfd183ce4833da864081225239071f7cec101" Jan 21 11:33:53 crc kubenswrapper[4881]: E0121 11:33:53.569450 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"da1884db75984a22d15c0d5244bbfd183ce4833da864081225239071f7cec101\": container with ID starting with da1884db75984a22d15c0d5244bbfd183ce4833da864081225239071f7cec101 not found: ID does not exist" containerID="da1884db75984a22d15c0d5244bbfd183ce4833da864081225239071f7cec101" Jan 21 11:33:53 crc kubenswrapper[4881]: I0121 11:33:53.569487 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"da1884db75984a22d15c0d5244bbfd183ce4833da864081225239071f7cec101"} err="failed to get container status \"da1884db75984a22d15c0d5244bbfd183ce4833da864081225239071f7cec101\": rpc error: code = NotFound desc = could not find container \"da1884db75984a22d15c0d5244bbfd183ce4833da864081225239071f7cec101\": container with ID starting with da1884db75984a22d15c0d5244bbfd183ce4833da864081225239071f7cec101 not found: ID does not exist" Jan 21 11:33:55 crc kubenswrapper[4881]: I0121 11:33:55.325535 4881 kubelet_volumes.go:163] "Cleaned 
up orphaned pod volumes dir" podUID="067c1d92-f45d-4b2d-978c-7db14c5db12c" path="/var/lib/kubelet/pods/067c1d92-f45d-4b2d-978c-7db14c5db12c/volumes" Jan 21 11:33:58 crc kubenswrapper[4881]: I0121 11:33:58.040844 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-bdc49"] Jan 21 11:33:58 crc kubenswrapper[4881]: I0121 11:33:58.056654 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-bdc49"] Jan 21 11:33:59 crc kubenswrapper[4881]: I0121 11:33:59.522314 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3d8ffc48-6b0f-48d1-b13d-8a766f5b604a" path="/var/lib/kubelet/pods/3d8ffc48-6b0f-48d1-b13d-8a766f5b604a/volumes" Jan 21 11:34:09 crc kubenswrapper[4881]: I0121 11:34:09.306619 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-4vlh9"] Jan 21 11:34:09 crc kubenswrapper[4881]: E0121 11:34:09.307532 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="067c1d92-f45d-4b2d-978c-7db14c5db12c" containerName="extract-content" Jan 21 11:34:09 crc kubenswrapper[4881]: I0121 11:34:09.307544 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="067c1d92-f45d-4b2d-978c-7db14c5db12c" containerName="extract-content" Jan 21 11:34:09 crc kubenswrapper[4881]: E0121 11:34:09.307557 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="067c1d92-f45d-4b2d-978c-7db14c5db12c" containerName="extract-utilities" Jan 21 11:34:09 crc kubenswrapper[4881]: I0121 11:34:09.307566 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="067c1d92-f45d-4b2d-978c-7db14c5db12c" containerName="extract-utilities" Jan 21 11:34:09 crc kubenswrapper[4881]: E0121 11:34:09.307598 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="067c1d92-f45d-4b2d-978c-7db14c5db12c" containerName="registry-server" Jan 21 11:34:09 crc kubenswrapper[4881]: I0121 11:34:09.307605 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="067c1d92-f45d-4b2d-978c-7db14c5db12c" containerName="registry-server" Jan 21 11:34:09 crc kubenswrapper[4881]: I0121 11:34:09.307810 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="067c1d92-f45d-4b2d-978c-7db14c5db12c" containerName="registry-server" Jan 21 11:34:09 crc kubenswrapper[4881]: I0121 11:34:09.309241 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-4vlh9" Jan 21 11:34:09 crc kubenswrapper[4881]: I0121 11:34:09.341704 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4vlh9"] Jan 21 11:34:09 crc kubenswrapper[4881]: I0121 11:34:09.371077 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zz9xk\" (UniqueName: \"kubernetes.io/projected/4b51ea6d-7925-4ba0-af48-901f9ef8f774-kube-api-access-zz9xk\") pod \"certified-operators-4vlh9\" (UID: \"4b51ea6d-7925-4ba0-af48-901f9ef8f774\") " pod="openshift-marketplace/certified-operators-4vlh9" Jan 21 11:34:09 crc kubenswrapper[4881]: I0121 11:34:09.371249 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4b51ea6d-7925-4ba0-af48-901f9ef8f774-utilities\") pod \"certified-operators-4vlh9\" (UID: \"4b51ea6d-7925-4ba0-af48-901f9ef8f774\") " pod="openshift-marketplace/certified-operators-4vlh9" Jan 21 11:34:09 crc kubenswrapper[4881]: I0121 11:34:09.371286 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4b51ea6d-7925-4ba0-af48-901f9ef8f774-catalog-content\") pod \"certified-operators-4vlh9\" (UID: \"4b51ea6d-7925-4ba0-af48-901f9ef8f774\") " pod="openshift-marketplace/certified-operators-4vlh9" Jan 21 11:34:09 crc kubenswrapper[4881]: I0121 11:34:09.507817 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4b51ea6d-7925-4ba0-af48-901f9ef8f774-utilities\") pod \"certified-operators-4vlh9\" (UID: \"4b51ea6d-7925-4ba0-af48-901f9ef8f774\") " pod="openshift-marketplace/certified-operators-4vlh9" Jan 21 11:34:09 crc kubenswrapper[4881]: I0121 11:34:09.507879 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4b51ea6d-7925-4ba0-af48-901f9ef8f774-catalog-content\") pod \"certified-operators-4vlh9\" (UID: \"4b51ea6d-7925-4ba0-af48-901f9ef8f774\") " pod="openshift-marketplace/certified-operators-4vlh9" Jan 21 11:34:09 crc kubenswrapper[4881]: I0121 11:34:09.508038 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zz9xk\" (UniqueName: \"kubernetes.io/projected/4b51ea6d-7925-4ba0-af48-901f9ef8f774-kube-api-access-zz9xk\") pod \"certified-operators-4vlh9\" (UID: \"4b51ea6d-7925-4ba0-af48-901f9ef8f774\") " pod="openshift-marketplace/certified-operators-4vlh9" Jan 21 11:34:09 crc kubenswrapper[4881]: I0121 11:34:09.508513 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4b51ea6d-7925-4ba0-af48-901f9ef8f774-utilities\") pod \"certified-operators-4vlh9\" (UID: \"4b51ea6d-7925-4ba0-af48-901f9ef8f774\") " pod="openshift-marketplace/certified-operators-4vlh9" Jan 21 11:34:09 crc kubenswrapper[4881]: I0121 11:34:09.508850 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4b51ea6d-7925-4ba0-af48-901f9ef8f774-catalog-content\") pod \"certified-operators-4vlh9\" (UID: \"4b51ea6d-7925-4ba0-af48-901f9ef8f774\") " pod="openshift-marketplace/certified-operators-4vlh9" Jan 21 11:34:09 crc kubenswrapper[4881]: I0121 11:34:09.541108 4881 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-zz9xk\" (UniqueName: \"kubernetes.io/projected/4b51ea6d-7925-4ba0-af48-901f9ef8f774-kube-api-access-zz9xk\") pod \"certified-operators-4vlh9\" (UID: \"4b51ea6d-7925-4ba0-af48-901f9ef8f774\") " pod="openshift-marketplace/certified-operators-4vlh9" Jan 21 11:34:09 crc kubenswrapper[4881]: I0121 11:34:09.650478 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4vlh9" Jan 21 11:34:10 crc kubenswrapper[4881]: I0121 11:34:10.199355 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4vlh9"] Jan 21 11:34:11 crc kubenswrapper[4881]: I0121 11:34:11.045005 4881 generic.go:334] "Generic (PLEG): container finished" podID="4b51ea6d-7925-4ba0-af48-901f9ef8f774" containerID="d67f30b1065ba9c2e5e661b4d33f75f8b5adbff3b28180745f9c5f99280ec4d4" exitCode=0 Jan 21 11:34:11 crc kubenswrapper[4881]: I0121 11:34:11.045075 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4vlh9" event={"ID":"4b51ea6d-7925-4ba0-af48-901f9ef8f774","Type":"ContainerDied","Data":"d67f30b1065ba9c2e5e661b4d33f75f8b5adbff3b28180745f9c5f99280ec4d4"} Jan 21 11:34:11 crc kubenswrapper[4881]: I0121 11:34:11.045359 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4vlh9" event={"ID":"4b51ea6d-7925-4ba0-af48-901f9ef8f774","Type":"ContainerStarted","Data":"db8e6c89e98fa09300373151d8b1fe224bb54f6db3db3ee5e913299b110c67d8"} Jan 21 11:34:11 crc kubenswrapper[4881]: I0121 11:34:11.047718 4881 generic.go:334] "Generic (PLEG): container finished" podID="24a093f9-cd67-48f9-a18b-48d1a79a8aa0" containerID="ed91e50a3880cb037a332efeeea663c905f6d34b8520e7608505f8f61898c93d" exitCode=0 Jan 21 11:34:11 crc kubenswrapper[4881]: I0121 11:34:11.047767 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-4wdq6" event={"ID":"24a093f9-cd67-48f9-a18b-48d1a79a8aa0","Type":"ContainerDied","Data":"ed91e50a3880cb037a332efeeea663c905f6d34b8520e7608505f8f61898c93d"} Jan 21 11:34:12 crc kubenswrapper[4881]: I0121 11:34:12.056971 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4vlh9" event={"ID":"4b51ea6d-7925-4ba0-af48-901f9ef8f774","Type":"ContainerStarted","Data":"69ff14d981728245b89b2edfeb15ee103c7fc0d9ef94fb16eaa34e81e72f1f8d"} Jan 21 11:34:12 crc kubenswrapper[4881]: I0121 11:34:12.530872 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-4wdq6" Jan 21 11:34:12 crc kubenswrapper[4881]: I0121 11:34:12.602616 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wwwm9\" (UniqueName: \"kubernetes.io/projected/24a093f9-cd67-48f9-a18b-48d1a79a8aa0-kube-api-access-wwwm9\") pod \"24a093f9-cd67-48f9-a18b-48d1a79a8aa0\" (UID: \"24a093f9-cd67-48f9-a18b-48d1a79a8aa0\") " Jan 21 11:34:12 crc kubenswrapper[4881]: I0121 11:34:12.602732 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/24a093f9-cd67-48f9-a18b-48d1a79a8aa0-inventory\") pod \"24a093f9-cd67-48f9-a18b-48d1a79a8aa0\" (UID: \"24a093f9-cd67-48f9-a18b-48d1a79a8aa0\") " Jan 21 11:34:12 crc kubenswrapper[4881]: I0121 11:34:12.602823 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/24a093f9-cd67-48f9-a18b-48d1a79a8aa0-ssh-key-openstack-edpm-ipam\") pod \"24a093f9-cd67-48f9-a18b-48d1a79a8aa0\" (UID: \"24a093f9-cd67-48f9-a18b-48d1a79a8aa0\") " Jan 21 11:34:12 crc kubenswrapper[4881]: I0121 11:34:12.612183 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/24a093f9-cd67-48f9-a18b-48d1a79a8aa0-kube-api-access-wwwm9" (OuterVolumeSpecName: "kube-api-access-wwwm9") pod "24a093f9-cd67-48f9-a18b-48d1a79a8aa0" (UID: "24a093f9-cd67-48f9-a18b-48d1a79a8aa0"). InnerVolumeSpecName "kube-api-access-wwwm9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:34:12 crc kubenswrapper[4881]: I0121 11:34:12.632941 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/24a093f9-cd67-48f9-a18b-48d1a79a8aa0-inventory" (OuterVolumeSpecName: "inventory") pod "24a093f9-cd67-48f9-a18b-48d1a79a8aa0" (UID: "24a093f9-cd67-48f9-a18b-48d1a79a8aa0"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:34:12 crc kubenswrapper[4881]: I0121 11:34:12.637888 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/24a093f9-cd67-48f9-a18b-48d1a79a8aa0-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "24a093f9-cd67-48f9-a18b-48d1a79a8aa0" (UID: "24a093f9-cd67-48f9-a18b-48d1a79a8aa0"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:34:12 crc kubenswrapper[4881]: I0121 11:34:12.711570 4881 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/24a093f9-cd67-48f9-a18b-48d1a79a8aa0-inventory\") on node \"crc\" DevicePath \"\"" Jan 21 11:34:12 crc kubenswrapper[4881]: I0121 11:34:12.711604 4881 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/24a093f9-cd67-48f9-a18b-48d1a79a8aa0-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 21 11:34:12 crc kubenswrapper[4881]: I0121 11:34:12.711615 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wwwm9\" (UniqueName: \"kubernetes.io/projected/24a093f9-cd67-48f9-a18b-48d1a79a8aa0-kube-api-access-wwwm9\") on node \"crc\" DevicePath \"\"" Jan 21 11:34:13 crc kubenswrapper[4881]: I0121 11:34:13.195245 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-4wdq6" event={"ID":"24a093f9-cd67-48f9-a18b-48d1a79a8aa0","Type":"ContainerDied","Data":"7a8fa3b39b588ac3bed4bee992d7ff3c312e5258aac1318986c1e1881a279a1c"} Jan 21 11:34:13 crc kubenswrapper[4881]: I0121 11:34:13.195688 4881 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7a8fa3b39b588ac3bed4bee992d7ff3c312e5258aac1318986c1e1881a279a1c" Jan 21 11:34:13 crc kubenswrapper[4881]: I0121 11:34:13.195801 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-4wdq6" Jan 21 11:34:13 crc kubenswrapper[4881]: I0121 11:34:13.204271 4881 generic.go:334] "Generic (PLEG): container finished" podID="4b51ea6d-7925-4ba0-af48-901f9ef8f774" containerID="69ff14d981728245b89b2edfeb15ee103c7fc0d9ef94fb16eaa34e81e72f1f8d" exitCode=0 Jan 21 11:34:13 crc kubenswrapper[4881]: I0121 11:34:13.204328 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4vlh9" event={"ID":"4b51ea6d-7925-4ba0-af48-901f9ef8f774","Type":"ContainerDied","Data":"69ff14d981728245b89b2edfeb15ee103c7fc0d9ef94fb16eaa34e81e72f1f8d"} Jan 21 11:34:13 crc kubenswrapper[4881]: I0121 11:34:13.272929 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-gl8zp"] Jan 21 11:34:13 crc kubenswrapper[4881]: E0121 11:34:13.273744 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24a093f9-cd67-48f9-a18b-48d1a79a8aa0" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 21 11:34:13 crc kubenswrapper[4881]: I0121 11:34:13.273796 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="24a093f9-cd67-48f9-a18b-48d1a79a8aa0" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 21 11:34:13 crc kubenswrapper[4881]: I0121 11:34:13.274114 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="24a093f9-cd67-48f9-a18b-48d1a79a8aa0" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 21 11:34:13 crc kubenswrapper[4881]: I0121 11:34:13.279427 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-gl8zp" Jan 21 11:34:13 crc kubenswrapper[4881]: I0121 11:34:13.289077 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 21 11:34:13 crc kubenswrapper[4881]: I0121 11:34:13.289316 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 21 11:34:13 crc kubenswrapper[4881]: I0121 11:34:13.289512 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 21 11:34:13 crc kubenswrapper[4881]: I0121 11:34:13.289675 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-fd7zg" Jan 21 11:34:13 crc kubenswrapper[4881]: I0121 11:34:13.331228 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-gl8zp"] Jan 21 11:34:13 crc kubenswrapper[4881]: I0121 11:34:13.363118 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7lm6m\" (UniqueName: \"kubernetes.io/projected/ec204ea7-b207-409b-8fa0-ff2847f7400a-kube-api-access-7lm6m\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-gl8zp\" (UID: \"ec204ea7-b207-409b-8fa0-ff2847f7400a\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-gl8zp" Jan 21 11:34:13 crc kubenswrapper[4881]: I0121 11:34:13.363175 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ec204ea7-b207-409b-8fa0-ff2847f7400a-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-gl8zp\" (UID: \"ec204ea7-b207-409b-8fa0-ff2847f7400a\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-gl8zp" Jan 21 11:34:13 crc kubenswrapper[4881]: I0121 11:34:13.363387 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ec204ea7-b207-409b-8fa0-ff2847f7400a-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-gl8zp\" (UID: \"ec204ea7-b207-409b-8fa0-ff2847f7400a\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-gl8zp" Jan 21 11:34:13 crc kubenswrapper[4881]: I0121 11:34:13.467042 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7lm6m\" (UniqueName: \"kubernetes.io/projected/ec204ea7-b207-409b-8fa0-ff2847f7400a-kube-api-access-7lm6m\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-gl8zp\" (UID: \"ec204ea7-b207-409b-8fa0-ff2847f7400a\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-gl8zp" Jan 21 11:34:13 crc kubenswrapper[4881]: I0121 11:34:13.467131 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ec204ea7-b207-409b-8fa0-ff2847f7400a-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-gl8zp\" (UID: \"ec204ea7-b207-409b-8fa0-ff2847f7400a\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-gl8zp" Jan 21 11:34:13 crc kubenswrapper[4881]: I0121 11:34:13.467227 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/ec204ea7-b207-409b-8fa0-ff2847f7400a-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-gl8zp\" (UID: \"ec204ea7-b207-409b-8fa0-ff2847f7400a\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-gl8zp" Jan 21 11:34:13 crc kubenswrapper[4881]: I0121 11:34:13.472526 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ec204ea7-b207-409b-8fa0-ff2847f7400a-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-gl8zp\" (UID: \"ec204ea7-b207-409b-8fa0-ff2847f7400a\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-gl8zp" Jan 21 11:34:13 crc kubenswrapper[4881]: I0121 11:34:13.473288 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ec204ea7-b207-409b-8fa0-ff2847f7400a-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-gl8zp\" (UID: \"ec204ea7-b207-409b-8fa0-ff2847f7400a\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-gl8zp" Jan 21 11:34:13 crc kubenswrapper[4881]: I0121 11:34:13.491600 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7lm6m\" (UniqueName: \"kubernetes.io/projected/ec204ea7-b207-409b-8fa0-ff2847f7400a-kube-api-access-7lm6m\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-gl8zp\" (UID: \"ec204ea7-b207-409b-8fa0-ff2847f7400a\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-gl8zp" Jan 21 11:34:13 crc kubenswrapper[4881]: I0121 11:34:13.617541 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-gl8zp" Jan 21 11:34:13 crc kubenswrapper[4881]: W0121 11:34:13.991399 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podec204ea7_b207_409b_8fa0_ff2847f7400a.slice/crio-2c32e7c92bc4ff8bd6fe6be45aae1bb184709bf6bda7cb3b5e2d0d4f1c3e94ad WatchSource:0}: Error finding container 2c32e7c92bc4ff8bd6fe6be45aae1bb184709bf6bda7cb3b5e2d0d4f1c3e94ad: Status 404 returned error can't find the container with id 2c32e7c92bc4ff8bd6fe6be45aae1bb184709bf6bda7cb3b5e2d0d4f1c3e94ad Jan 21 11:34:13 crc kubenswrapper[4881]: I0121 11:34:13.992890 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-gl8zp"] Jan 21 11:34:14 crc kubenswrapper[4881]: I0121 11:34:14.217911 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4vlh9" event={"ID":"4b51ea6d-7925-4ba0-af48-901f9ef8f774","Type":"ContainerStarted","Data":"9a204a4071f03355ba563e850fb77851f121d2bd1cc36b8cb17910eb192265d6"} Jan 21 11:34:14 crc kubenswrapper[4881]: I0121 11:34:14.219159 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-gl8zp" event={"ID":"ec204ea7-b207-409b-8fa0-ff2847f7400a","Type":"ContainerStarted","Data":"2c32e7c92bc4ff8bd6fe6be45aae1bb184709bf6bda7cb3b5e2d0d4f1c3e94ad"} Jan 21 11:34:14 crc kubenswrapper[4881]: I0121 11:34:14.261169 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-4vlh9" podStartSLOduration=2.623129703 podStartE2EDuration="5.261150857s" podCreationTimestamp="2026-01-21 11:34:09 +0000 UTC" 
firstStartedPulling="2026-01-21 11:34:11.047660816 +0000 UTC m=+2238.307617295" lastFinishedPulling="2026-01-21 11:34:13.68568198 +0000 UTC m=+2240.945638449" observedRunningTime="2026-01-21 11:34:14.256999475 +0000 UTC m=+2241.516955944" watchObservedRunningTime="2026-01-21 11:34:14.261150857 +0000 UTC m=+2241.521107326" Jan 21 11:34:15 crc kubenswrapper[4881]: I0121 11:34:15.234760 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-gl8zp" event={"ID":"ec204ea7-b207-409b-8fa0-ff2847f7400a","Type":"ContainerStarted","Data":"16130ddaa7d6120624e03973b67c3a94a50f4edd014c457d5948bdfe0654d13c"} Jan 21 11:34:15 crc kubenswrapper[4881]: I0121 11:34:15.264986 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-gl8zp" podStartSLOduration=1.708805807 podStartE2EDuration="2.264956307s" podCreationTimestamp="2026-01-21 11:34:13 +0000 UTC" firstStartedPulling="2026-01-21 11:34:13.994434348 +0000 UTC m=+2241.254390817" lastFinishedPulling="2026-01-21 11:34:14.550584848 +0000 UTC m=+2241.810541317" observedRunningTime="2026-01-21 11:34:15.252416297 +0000 UTC m=+2242.512372766" watchObservedRunningTime="2026-01-21 11:34:15.264956307 +0000 UTC m=+2242.524912776" Jan 21 11:34:19 crc kubenswrapper[4881]: I0121 11:34:19.651335 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-4vlh9" Jan 21 11:34:19 crc kubenswrapper[4881]: I0121 11:34:19.653159 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-4vlh9" Jan 21 11:34:19 crc kubenswrapper[4881]: I0121 11:34:19.736090 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-4vlh9" Jan 21 11:34:20 crc kubenswrapper[4881]: I0121 11:34:20.350071 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-4vlh9" Jan 21 11:34:20 crc kubenswrapper[4881]: I0121 11:34:20.414939 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4vlh9"] Jan 21 11:34:21 crc kubenswrapper[4881]: I0121 11:34:21.299103 4881 generic.go:334] "Generic (PLEG): container finished" podID="ec204ea7-b207-409b-8fa0-ff2847f7400a" containerID="16130ddaa7d6120624e03973b67c3a94a50f4edd014c457d5948bdfe0654d13c" exitCode=0 Jan 21 11:34:21 crc kubenswrapper[4881]: I0121 11:34:21.299190 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-gl8zp" event={"ID":"ec204ea7-b207-409b-8fa0-ff2847f7400a","Type":"ContainerDied","Data":"16130ddaa7d6120624e03973b67c3a94a50f4edd014c457d5948bdfe0654d13c"} Jan 21 11:34:22 crc kubenswrapper[4881]: I0121 11:34:22.308398 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-4vlh9" podUID="4b51ea6d-7925-4ba0-af48-901f9ef8f774" containerName="registry-server" containerID="cri-o://9a204a4071f03355ba563e850fb77851f121d2bd1cc36b8cb17910eb192265d6" gracePeriod=2 Jan 21 11:34:22 crc kubenswrapper[4881]: I0121 11:34:22.822042 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-gl8zp" Jan 21 11:34:22 crc kubenswrapper[4881]: I0121 11:34:22.994871 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ec204ea7-b207-409b-8fa0-ff2847f7400a-inventory\") pod \"ec204ea7-b207-409b-8fa0-ff2847f7400a\" (UID: \"ec204ea7-b207-409b-8fa0-ff2847f7400a\") " Jan 21 11:34:22 crc kubenswrapper[4881]: I0121 11:34:22.995308 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7lm6m\" (UniqueName: \"kubernetes.io/projected/ec204ea7-b207-409b-8fa0-ff2847f7400a-kube-api-access-7lm6m\") pod \"ec204ea7-b207-409b-8fa0-ff2847f7400a\" (UID: \"ec204ea7-b207-409b-8fa0-ff2847f7400a\") " Jan 21 11:34:22 crc kubenswrapper[4881]: I0121 11:34:22.995375 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ec204ea7-b207-409b-8fa0-ff2847f7400a-ssh-key-openstack-edpm-ipam\") pod \"ec204ea7-b207-409b-8fa0-ff2847f7400a\" (UID: \"ec204ea7-b207-409b-8fa0-ff2847f7400a\") " Jan 21 11:34:23 crc kubenswrapper[4881]: I0121 11:34:23.007899 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ec204ea7-b207-409b-8fa0-ff2847f7400a-kube-api-access-7lm6m" (OuterVolumeSpecName: "kube-api-access-7lm6m") pod "ec204ea7-b207-409b-8fa0-ff2847f7400a" (UID: "ec204ea7-b207-409b-8fa0-ff2847f7400a"). InnerVolumeSpecName "kube-api-access-7lm6m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:34:23 crc kubenswrapper[4881]: I0121 11:34:23.026804 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec204ea7-b207-409b-8fa0-ff2847f7400a-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "ec204ea7-b207-409b-8fa0-ff2847f7400a" (UID: "ec204ea7-b207-409b-8fa0-ff2847f7400a"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:34:23 crc kubenswrapper[4881]: I0121 11:34:23.031230 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec204ea7-b207-409b-8fa0-ff2847f7400a-inventory" (OuterVolumeSpecName: "inventory") pod "ec204ea7-b207-409b-8fa0-ff2847f7400a" (UID: "ec204ea7-b207-409b-8fa0-ff2847f7400a"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:34:23 crc kubenswrapper[4881]: I0121 11:34:23.098163 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7lm6m\" (UniqueName: \"kubernetes.io/projected/ec204ea7-b207-409b-8fa0-ff2847f7400a-kube-api-access-7lm6m\") on node \"crc\" DevicePath \"\"" Jan 21 11:34:23 crc kubenswrapper[4881]: I0121 11:34:23.098226 4881 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ec204ea7-b207-409b-8fa0-ff2847f7400a-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 21 11:34:23 crc kubenswrapper[4881]: I0121 11:34:23.098242 4881 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ec204ea7-b207-409b-8fa0-ff2847f7400a-inventory\") on node \"crc\" DevicePath \"\"" Jan 21 11:34:23 crc kubenswrapper[4881]: I0121 11:34:23.591491 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-4vlh9" Jan 21 11:34:23 crc kubenswrapper[4881]: I0121 11:34:23.636323 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-gl8zp" event={"ID":"ec204ea7-b207-409b-8fa0-ff2847f7400a","Type":"ContainerDied","Data":"2c32e7c92bc4ff8bd6fe6be45aae1bb184709bf6bda7cb3b5e2d0d4f1c3e94ad"} Jan 21 11:34:23 crc kubenswrapper[4881]: I0121 11:34:23.636377 4881 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2c32e7c92bc4ff8bd6fe6be45aae1bb184709bf6bda7cb3b5e2d0d4f1c3e94ad" Jan 21 11:34:23 crc kubenswrapper[4881]: I0121 11:34:23.636606 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-gl8zp" Jan 21 11:34:23 crc kubenswrapper[4881]: I0121 11:34:23.648965 4881 generic.go:334] "Generic (PLEG): container finished" podID="4b51ea6d-7925-4ba0-af48-901f9ef8f774" containerID="9a204a4071f03355ba563e850fb77851f121d2bd1cc36b8cb17910eb192265d6" exitCode=0 Jan 21 11:34:23 crc kubenswrapper[4881]: I0121 11:34:23.649040 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4vlh9" event={"ID":"4b51ea6d-7925-4ba0-af48-901f9ef8f774","Type":"ContainerDied","Data":"9a204a4071f03355ba563e850fb77851f121d2bd1cc36b8cb17910eb192265d6"} Jan 21 11:34:23 crc kubenswrapper[4881]: I0121 11:34:23.649083 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4vlh9" event={"ID":"4b51ea6d-7925-4ba0-af48-901f9ef8f774","Type":"ContainerDied","Data":"db8e6c89e98fa09300373151d8b1fe224bb54f6db3db3ee5e913299b110c67d8"} Jan 21 11:34:23 crc kubenswrapper[4881]: I0121 11:34:23.649107 4881 scope.go:117] "RemoveContainer" containerID="9a204a4071f03355ba563e850fb77851f121d2bd1cc36b8cb17910eb192265d6" Jan 21 11:34:23 crc kubenswrapper[4881]: I0121 11:34:23.649399 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-4vlh9" Jan 21 11:34:23 crc kubenswrapper[4881]: I0121 11:34:23.701110 4881 scope.go:117] "RemoveContainer" containerID="69ff14d981728245b89b2edfeb15ee103c7fc0d9ef94fb16eaa34e81e72f1f8d" Jan 21 11:34:23 crc kubenswrapper[4881]: I0121 11:34:23.733226 4881 scope.go:117] "RemoveContainer" containerID="d67f30b1065ba9c2e5e661b4d33f75f8b5adbff3b28180745f9c5f99280ec4d4" Jan 21 11:34:23 crc kubenswrapper[4881]: I0121 11:34:23.775372 4881 scope.go:117] "RemoveContainer" containerID="9a204a4071f03355ba563e850fb77851f121d2bd1cc36b8cb17910eb192265d6" Jan 21 11:34:23 crc kubenswrapper[4881]: E0121 11:34:23.777650 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9a204a4071f03355ba563e850fb77851f121d2bd1cc36b8cb17910eb192265d6\": container with ID starting with 9a204a4071f03355ba563e850fb77851f121d2bd1cc36b8cb17910eb192265d6 not found: ID does not exist" containerID="9a204a4071f03355ba563e850fb77851f121d2bd1cc36b8cb17910eb192265d6" Jan 21 11:34:23 crc kubenswrapper[4881]: I0121 11:34:23.777687 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9a204a4071f03355ba563e850fb77851f121d2bd1cc36b8cb17910eb192265d6"} err="failed to get container status \"9a204a4071f03355ba563e850fb77851f121d2bd1cc36b8cb17910eb192265d6\": rpc error: code = NotFound desc = could not find container \"9a204a4071f03355ba563e850fb77851f121d2bd1cc36b8cb17910eb192265d6\": container with ID starting with 9a204a4071f03355ba563e850fb77851f121d2bd1cc36b8cb17910eb192265d6 not found: ID does not exist" Jan 21 11:34:23 crc kubenswrapper[4881]: I0121 11:34:23.777712 4881 scope.go:117] "RemoveContainer" containerID="69ff14d981728245b89b2edfeb15ee103c7fc0d9ef94fb16eaa34e81e72f1f8d" Jan 21 11:34:23 crc kubenswrapper[4881]: E0121 11:34:23.778179 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"69ff14d981728245b89b2edfeb15ee103c7fc0d9ef94fb16eaa34e81e72f1f8d\": container with ID starting with 69ff14d981728245b89b2edfeb15ee103c7fc0d9ef94fb16eaa34e81e72f1f8d not found: ID does not exist" containerID="69ff14d981728245b89b2edfeb15ee103c7fc0d9ef94fb16eaa34e81e72f1f8d" Jan 21 11:34:23 crc kubenswrapper[4881]: I0121 11:34:23.778207 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"69ff14d981728245b89b2edfeb15ee103c7fc0d9ef94fb16eaa34e81e72f1f8d"} err="failed to get container status \"69ff14d981728245b89b2edfeb15ee103c7fc0d9ef94fb16eaa34e81e72f1f8d\": rpc error: code = NotFound desc = could not find container \"69ff14d981728245b89b2edfeb15ee103c7fc0d9ef94fb16eaa34e81e72f1f8d\": container with ID starting with 69ff14d981728245b89b2edfeb15ee103c7fc0d9ef94fb16eaa34e81e72f1f8d not found: ID does not exist" Jan 21 11:34:23 crc kubenswrapper[4881]: I0121 11:34:23.778224 4881 scope.go:117] "RemoveContainer" containerID="d67f30b1065ba9c2e5e661b4d33f75f8b5adbff3b28180745f9c5f99280ec4d4" Jan 21 11:34:23 crc kubenswrapper[4881]: E0121 11:34:23.778653 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d67f30b1065ba9c2e5e661b4d33f75f8b5adbff3b28180745f9c5f99280ec4d4\": container with ID starting with d67f30b1065ba9c2e5e661b4d33f75f8b5adbff3b28180745f9c5f99280ec4d4 not found: ID does not exist" containerID="d67f30b1065ba9c2e5e661b4d33f75f8b5adbff3b28180745f9c5f99280ec4d4" 
Jan 21 11:34:23 crc kubenswrapper[4881]: I0121 11:34:23.778686 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d67f30b1065ba9c2e5e661b4d33f75f8b5adbff3b28180745f9c5f99280ec4d4"} err="failed to get container status \"d67f30b1065ba9c2e5e661b4d33f75f8b5adbff3b28180745f9c5f99280ec4d4\": rpc error: code = NotFound desc = could not find container \"d67f30b1065ba9c2e5e661b4d33f75f8b5adbff3b28180745f9c5f99280ec4d4\": container with ID starting with d67f30b1065ba9c2e5e661b4d33f75f8b5adbff3b28180745f9c5f99280ec4d4 not found: ID does not exist" Jan 21 11:34:23 crc kubenswrapper[4881]: I0121 11:34:23.795310 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4b51ea6d-7925-4ba0-af48-901f9ef8f774-catalog-content\") pod \"4b51ea6d-7925-4ba0-af48-901f9ef8f774\" (UID: \"4b51ea6d-7925-4ba0-af48-901f9ef8f774\") " Jan 21 11:34:23 crc kubenswrapper[4881]: I0121 11:34:23.795438 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zz9xk\" (UniqueName: \"kubernetes.io/projected/4b51ea6d-7925-4ba0-af48-901f9ef8f774-kube-api-access-zz9xk\") pod \"4b51ea6d-7925-4ba0-af48-901f9ef8f774\" (UID: \"4b51ea6d-7925-4ba0-af48-901f9ef8f774\") " Jan 21 11:34:23 crc kubenswrapper[4881]: I0121 11:34:23.795596 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4b51ea6d-7925-4ba0-af48-901f9ef8f774-utilities\") pod \"4b51ea6d-7925-4ba0-af48-901f9ef8f774\" (UID: \"4b51ea6d-7925-4ba0-af48-901f9ef8f774\") " Jan 21 11:34:23 crc kubenswrapper[4881]: I0121 11:34:23.797837 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4b51ea6d-7925-4ba0-af48-901f9ef8f774-utilities" (OuterVolumeSpecName: "utilities") pod "4b51ea6d-7925-4ba0-af48-901f9ef8f774" (UID: "4b51ea6d-7925-4ba0-af48-901f9ef8f774"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 11:34:23 crc kubenswrapper[4881]: I0121 11:34:23.803174 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4b51ea6d-7925-4ba0-af48-901f9ef8f774-kube-api-access-zz9xk" (OuterVolumeSpecName: "kube-api-access-zz9xk") pod "4b51ea6d-7925-4ba0-af48-901f9ef8f774" (UID: "4b51ea6d-7925-4ba0-af48-901f9ef8f774"). InnerVolumeSpecName "kube-api-access-zz9xk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:34:23 crc kubenswrapper[4881]: I0121 11:34:23.840611 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-6khfl"] Jan 21 11:34:23 crc kubenswrapper[4881]: E0121 11:34:23.841163 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b51ea6d-7925-4ba0-af48-901f9ef8f774" containerName="extract-utilities" Jan 21 11:34:23 crc kubenswrapper[4881]: I0121 11:34:23.841182 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b51ea6d-7925-4ba0-af48-901f9ef8f774" containerName="extract-utilities" Jan 21 11:34:23 crc kubenswrapper[4881]: E0121 11:34:23.841195 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec204ea7-b207-409b-8fa0-ff2847f7400a" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 21 11:34:23 crc kubenswrapper[4881]: I0121 11:34:23.841203 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec204ea7-b207-409b-8fa0-ff2847f7400a" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 21 11:34:23 crc kubenswrapper[4881]: E0121 11:34:23.841221 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b51ea6d-7925-4ba0-af48-901f9ef8f774" containerName="extract-content" Jan 21 11:34:23 crc kubenswrapper[4881]: I0121 11:34:23.841227 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b51ea6d-7925-4ba0-af48-901f9ef8f774" containerName="extract-content" Jan 21 11:34:23 crc kubenswrapper[4881]: E0121 11:34:23.841249 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b51ea6d-7925-4ba0-af48-901f9ef8f774" containerName="registry-server" Jan 21 11:34:23 crc kubenswrapper[4881]: I0121 11:34:23.841255 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b51ea6d-7925-4ba0-af48-901f9ef8f774" containerName="registry-server" Jan 21 11:34:23 crc kubenswrapper[4881]: I0121 11:34:23.841441 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec204ea7-b207-409b-8fa0-ff2847f7400a" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 21 11:34:23 crc kubenswrapper[4881]: I0121 11:34:23.841457 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="4b51ea6d-7925-4ba0-af48-901f9ef8f774" containerName="registry-server" Jan 21 11:34:23 crc kubenswrapper[4881]: I0121 11:34:23.842194 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-6khfl" Jan 21 11:34:23 crc kubenswrapper[4881]: I0121 11:34:23.845237 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 21 11:34:23 crc kubenswrapper[4881]: I0121 11:34:23.845369 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 21 11:34:23 crc kubenswrapper[4881]: I0121 11:34:23.845516 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-fd7zg" Jan 21 11:34:23 crc kubenswrapper[4881]: I0121 11:34:23.846297 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 21 11:34:23 crc kubenswrapper[4881]: I0121 11:34:23.852191 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-6khfl"] Jan 21 11:34:23 crc kubenswrapper[4881]: I0121 11:34:23.867479 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4b51ea6d-7925-4ba0-af48-901f9ef8f774-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4b51ea6d-7925-4ba0-af48-901f9ef8f774" (UID: "4b51ea6d-7925-4ba0-af48-901f9ef8f774"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 11:34:23 crc kubenswrapper[4881]: I0121 11:34:23.899440 4881 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4b51ea6d-7925-4ba0-af48-901f9ef8f774-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 11:34:23 crc kubenswrapper[4881]: I0121 11:34:23.899676 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zz9xk\" (UniqueName: \"kubernetes.io/projected/4b51ea6d-7925-4ba0-af48-901f9ef8f774-kube-api-access-zz9xk\") on node \"crc\" DevicePath \"\"" Jan 21 11:34:23 crc kubenswrapper[4881]: I0121 11:34:23.899743 4881 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4b51ea6d-7925-4ba0-af48-901f9ef8f774-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 11:34:23 crc kubenswrapper[4881]: I0121 11:34:23.987702 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4vlh9"] Jan 21 11:34:23 crc kubenswrapper[4881]: I0121 11:34:23.996677 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-4vlh9"] Jan 21 11:34:24 crc kubenswrapper[4881]: I0121 11:34:24.008637 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mg4tq\" (UniqueName: \"kubernetes.io/projected/3880ebda-d882-4e35-89e7-ef739a423a7d-kube-api-access-mg4tq\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-6khfl\" (UID: \"3880ebda-d882-4e35-89e7-ef739a423a7d\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-6khfl" Jan 21 11:34:24 crc kubenswrapper[4881]: I0121 11:34:24.008760 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3880ebda-d882-4e35-89e7-ef739a423a7d-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-6khfl\" (UID: \"3880ebda-d882-4e35-89e7-ef739a423a7d\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-6khfl" Jan 21 11:34:24 crc 
kubenswrapper[4881]: I0121 11:34:24.008857 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3880ebda-d882-4e35-89e7-ef739a423a7d-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-6khfl\" (UID: \"3880ebda-d882-4e35-89e7-ef739a423a7d\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-6khfl" Jan 21 11:34:24 crc kubenswrapper[4881]: I0121 11:34:24.113011 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3880ebda-d882-4e35-89e7-ef739a423a7d-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-6khfl\" (UID: \"3880ebda-d882-4e35-89e7-ef739a423a7d\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-6khfl" Jan 21 11:34:24 crc kubenswrapper[4881]: I0121 11:34:24.113162 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mg4tq\" (UniqueName: \"kubernetes.io/projected/3880ebda-d882-4e35-89e7-ef739a423a7d-kube-api-access-mg4tq\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-6khfl\" (UID: \"3880ebda-d882-4e35-89e7-ef739a423a7d\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-6khfl" Jan 21 11:34:24 crc kubenswrapper[4881]: I0121 11:34:24.113412 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3880ebda-d882-4e35-89e7-ef739a423a7d-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-6khfl\" (UID: \"3880ebda-d882-4e35-89e7-ef739a423a7d\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-6khfl" Jan 21 11:34:24 crc kubenswrapper[4881]: I0121 11:34:24.116888 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3880ebda-d882-4e35-89e7-ef739a423a7d-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-6khfl\" (UID: \"3880ebda-d882-4e35-89e7-ef739a423a7d\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-6khfl" Jan 21 11:34:24 crc kubenswrapper[4881]: I0121 11:34:24.117082 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3880ebda-d882-4e35-89e7-ef739a423a7d-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-6khfl\" (UID: \"3880ebda-d882-4e35-89e7-ef739a423a7d\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-6khfl" Jan 21 11:34:24 crc kubenswrapper[4881]: I0121 11:34:24.131320 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mg4tq\" (UniqueName: \"kubernetes.io/projected/3880ebda-d882-4e35-89e7-ef739a423a7d-kube-api-access-mg4tq\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-6khfl\" (UID: \"3880ebda-d882-4e35-89e7-ef739a423a7d\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-6khfl" Jan 21 11:34:24 crc kubenswrapper[4881]: I0121 11:34:24.222862 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-6khfl" Jan 21 11:34:24 crc kubenswrapper[4881]: I0121 11:34:24.965685 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-6khfl"] Jan 21 11:34:25 crc kubenswrapper[4881]: I0121 11:34:25.325721 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4b51ea6d-7925-4ba0-af48-901f9ef8f774" path="/var/lib/kubelet/pods/4b51ea6d-7925-4ba0-af48-901f9ef8f774/volumes" Jan 21 11:34:25 crc kubenswrapper[4881]: I0121 11:34:25.684865 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-6khfl" event={"ID":"3880ebda-d882-4e35-89e7-ef739a423a7d","Type":"ContainerStarted","Data":"7714267ec3dc1640c123557117fbc7bea0a5f6ebfaf06413867f22000ae2f1bc"} Jan 21 11:34:25 crc kubenswrapper[4881]: I0121 11:34:25.684924 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-6khfl" event={"ID":"3880ebda-d882-4e35-89e7-ef739a423a7d","Type":"ContainerStarted","Data":"8d72a218bfa949867c619b3098aa191e472babf9948808437235ab0bbda32186"} Jan 21 11:34:25 crc kubenswrapper[4881]: I0121 11:34:25.715922 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-6khfl" podStartSLOduration=2.284404114 podStartE2EDuration="2.715899224s" podCreationTimestamp="2026-01-21 11:34:23 +0000 UTC" firstStartedPulling="2026-01-21 11:34:24.972086318 +0000 UTC m=+2252.232042787" lastFinishedPulling="2026-01-21 11:34:25.403581428 +0000 UTC m=+2252.663537897" observedRunningTime="2026-01-21 11:34:25.699704935 +0000 UTC m=+2252.959661424" watchObservedRunningTime="2026-01-21 11:34:25.715899224 +0000 UTC m=+2252.975855703" Jan 21 11:34:31 crc kubenswrapper[4881]: I0121 11:34:31.053928 4881 scope.go:117] "RemoveContainer" containerID="62b5fd9972946ab2305558cba9c0d54f5b29b725654cb25337e61434a431d9ea" Jan 21 11:35:15 crc kubenswrapper[4881]: I0121 11:35:15.356273 4881 generic.go:334] "Generic (PLEG): container finished" podID="3880ebda-d882-4e35-89e7-ef739a423a7d" containerID="7714267ec3dc1640c123557117fbc7bea0a5f6ebfaf06413867f22000ae2f1bc" exitCode=0 Jan 21 11:35:15 crc kubenswrapper[4881]: I0121 11:35:15.356403 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-6khfl" event={"ID":"3880ebda-d882-4e35-89e7-ef739a423a7d","Type":"ContainerDied","Data":"7714267ec3dc1640c123557117fbc7bea0a5f6ebfaf06413867f22000ae2f1bc"} Jan 21 11:35:16 crc kubenswrapper[4881]: I0121 11:35:16.812368 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-6khfl" Jan 21 11:35:16 crc kubenswrapper[4881]: I0121 11:35:16.945900 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3880ebda-d882-4e35-89e7-ef739a423a7d-inventory\") pod \"3880ebda-d882-4e35-89e7-ef739a423a7d\" (UID: \"3880ebda-d882-4e35-89e7-ef739a423a7d\") " Jan 21 11:35:16 crc kubenswrapper[4881]: I0121 11:35:16.946172 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg4tq\" (UniqueName: \"kubernetes.io/projected/3880ebda-d882-4e35-89e7-ef739a423a7d-kube-api-access-mg4tq\") pod \"3880ebda-d882-4e35-89e7-ef739a423a7d\" (UID: \"3880ebda-d882-4e35-89e7-ef739a423a7d\") " Jan 21 11:35:16 crc kubenswrapper[4881]: I0121 11:35:16.946260 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3880ebda-d882-4e35-89e7-ef739a423a7d-ssh-key-openstack-edpm-ipam\") pod \"3880ebda-d882-4e35-89e7-ef739a423a7d\" (UID: \"3880ebda-d882-4e35-89e7-ef739a423a7d\") " Jan 21 11:35:16 crc kubenswrapper[4881]: I0121 11:35:16.951586 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3880ebda-d882-4e35-89e7-ef739a423a7d-kube-api-access-mg4tq" (OuterVolumeSpecName: "kube-api-access-mg4tq") pod "3880ebda-d882-4e35-89e7-ef739a423a7d" (UID: "3880ebda-d882-4e35-89e7-ef739a423a7d"). InnerVolumeSpecName "kube-api-access-mg4tq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:35:16 crc kubenswrapper[4881]: I0121 11:35:16.975820 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3880ebda-d882-4e35-89e7-ef739a423a7d-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "3880ebda-d882-4e35-89e7-ef739a423a7d" (UID: "3880ebda-d882-4e35-89e7-ef739a423a7d"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:35:16 crc kubenswrapper[4881]: I0121 11:35:16.976421 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3880ebda-d882-4e35-89e7-ef739a423a7d-inventory" (OuterVolumeSpecName: "inventory") pod "3880ebda-d882-4e35-89e7-ef739a423a7d" (UID: "3880ebda-d882-4e35-89e7-ef739a423a7d"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:35:17 crc kubenswrapper[4881]: I0121 11:35:17.049126 4881 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3880ebda-d882-4e35-89e7-ef739a423a7d-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 21 11:35:17 crc kubenswrapper[4881]: I0121 11:35:17.049165 4881 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3880ebda-d882-4e35-89e7-ef739a423a7d-inventory\") on node \"crc\" DevicePath \"\"" Jan 21 11:35:17 crc kubenswrapper[4881]: I0121 11:35:17.049178 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg4tq\" (UniqueName: \"kubernetes.io/projected/3880ebda-d882-4e35-89e7-ef739a423a7d-kube-api-access-mg4tq\") on node \"crc\" DevicePath \"\"" Jan 21 11:35:17 crc kubenswrapper[4881]: I0121 11:35:17.378271 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-6khfl" event={"ID":"3880ebda-d882-4e35-89e7-ef739a423a7d","Type":"ContainerDied","Data":"8d72a218bfa949867c619b3098aa191e472babf9948808437235ab0bbda32186"} Jan 21 11:35:17 crc kubenswrapper[4881]: I0121 11:35:17.378318 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-6khfl" Jan 21 11:35:17 crc kubenswrapper[4881]: I0121 11:35:17.378327 4881 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8d72a218bfa949867c619b3098aa191e472babf9948808437235ab0bbda32186" Jan 21 11:35:17 crc kubenswrapper[4881]: I0121 11:35:17.484940 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-c995r"] Jan 21 11:35:17 crc kubenswrapper[4881]: E0121 11:35:17.486247 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3880ebda-d882-4e35-89e7-ef739a423a7d" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 21 11:35:17 crc kubenswrapper[4881]: I0121 11:35:17.486331 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="3880ebda-d882-4e35-89e7-ef739a423a7d" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 21 11:35:17 crc kubenswrapper[4881]: I0121 11:35:17.486613 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="3880ebda-d882-4e35-89e7-ef739a423a7d" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 21 11:35:17 crc kubenswrapper[4881]: I0121 11:35:17.487440 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-c995r" Jan 21 11:35:17 crc kubenswrapper[4881]: I0121 11:35:17.491428 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 21 11:35:17 crc kubenswrapper[4881]: I0121 11:35:17.491733 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-fd7zg" Jan 21 11:35:17 crc kubenswrapper[4881]: I0121 11:35:17.499307 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-c995r"] Jan 21 11:35:17 crc kubenswrapper[4881]: I0121 11:35:17.533291 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 21 11:35:17 crc kubenswrapper[4881]: I0121 11:35:17.533631 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 21 11:35:17 crc kubenswrapper[4881]: I0121 11:35:17.663014 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f96dcee4-7734-4166-9a01-443c6ee66f86-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-c995r\" (UID: \"f96dcee4-7734-4166-9a01-443c6ee66f86\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-c995r" Jan 21 11:35:17 crc kubenswrapper[4881]: I0121 11:35:17.663150 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rccwx\" (UniqueName: \"kubernetes.io/projected/f96dcee4-7734-4166-9a01-443c6ee66f86-kube-api-access-rccwx\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-c995r\" (UID: \"f96dcee4-7734-4166-9a01-443c6ee66f86\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-c995r" Jan 21 11:35:17 crc kubenswrapper[4881]: I0121 11:35:17.663535 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f96dcee4-7734-4166-9a01-443c6ee66f86-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-c995r\" (UID: \"f96dcee4-7734-4166-9a01-443c6ee66f86\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-c995r" Jan 21 11:35:17 crc kubenswrapper[4881]: I0121 11:35:17.765500 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f96dcee4-7734-4166-9a01-443c6ee66f86-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-c995r\" (UID: \"f96dcee4-7734-4166-9a01-443c6ee66f86\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-c995r" Jan 21 11:35:17 crc kubenswrapper[4881]: I0121 11:35:17.765558 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f96dcee4-7734-4166-9a01-443c6ee66f86-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-c995r\" (UID: \"f96dcee4-7734-4166-9a01-443c6ee66f86\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-c995r" Jan 21 11:35:17 crc kubenswrapper[4881]: I0121 11:35:17.765633 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rccwx\" (UniqueName: 
\"kubernetes.io/projected/f96dcee4-7734-4166-9a01-443c6ee66f86-kube-api-access-rccwx\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-c995r\" (UID: \"f96dcee4-7734-4166-9a01-443c6ee66f86\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-c995r" Jan 21 11:35:17 crc kubenswrapper[4881]: I0121 11:35:17.772832 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f96dcee4-7734-4166-9a01-443c6ee66f86-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-c995r\" (UID: \"f96dcee4-7734-4166-9a01-443c6ee66f86\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-c995r" Jan 21 11:35:17 crc kubenswrapper[4881]: I0121 11:35:17.773040 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f96dcee4-7734-4166-9a01-443c6ee66f86-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-c995r\" (UID: \"f96dcee4-7734-4166-9a01-443c6ee66f86\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-c995r" Jan 21 11:35:17 crc kubenswrapper[4881]: I0121 11:35:17.785472 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rccwx\" (UniqueName: \"kubernetes.io/projected/f96dcee4-7734-4166-9a01-443c6ee66f86-kube-api-access-rccwx\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-c995r\" (UID: \"f96dcee4-7734-4166-9a01-443c6ee66f86\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-c995r" Jan 21 11:35:17 crc kubenswrapper[4881]: I0121 11:35:17.847040 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-c995r" Jan 21 11:35:18 crc kubenswrapper[4881]: I0121 11:35:18.409142 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-c995r"] Jan 21 11:35:19 crc kubenswrapper[4881]: I0121 11:35:19.405920 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-c995r" event={"ID":"f96dcee4-7734-4166-9a01-443c6ee66f86","Type":"ContainerStarted","Data":"0e14bee8e522916ea5670966d0aff696ff982885f93ca6e554dfbd5aec6d5c80"} Jan 21 11:35:19 crc kubenswrapper[4881]: I0121 11:35:19.406468 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-c995r" event={"ID":"f96dcee4-7734-4166-9a01-443c6ee66f86","Type":"ContainerStarted","Data":"c2b4105bf60b3cd2cbbdb22ac6c4b2b563ce2ee61089eacc780ff88d5f4eeae1"} Jan 21 11:35:19 crc kubenswrapper[4881]: I0121 11:35:19.434455 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-c995r" podStartSLOduration=2.009433461 podStartE2EDuration="2.43442422s" podCreationTimestamp="2026-01-21 11:35:17 +0000 UTC" firstStartedPulling="2026-01-21 11:35:18.410874362 +0000 UTC m=+2305.670830831" lastFinishedPulling="2026-01-21 11:35:18.835865121 +0000 UTC m=+2306.095821590" observedRunningTime="2026-01-21 11:35:19.424679139 +0000 UTC m=+2306.684635608" watchObservedRunningTime="2026-01-21 11:35:19.43442422 +0000 UTC m=+2306.694380689" Jan 21 11:35:29 crc kubenswrapper[4881]: I0121 11:35:29.850724 4881 patch_prober.go:28] interesting pod/machine-config-daemon-fb4fr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe 
status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 11:35:29 crc kubenswrapper[4881]: I0121 11:35:29.851273 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 11:35:59 crc kubenswrapper[4881]: I0121 11:35:59.851658 4881 patch_prober.go:28] interesting pod/machine-config-daemon-fb4fr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 11:35:59 crc kubenswrapper[4881]: I0121 11:35:59.852253 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 11:36:19 crc kubenswrapper[4881]: I0121 11:36:19.101368 4881 generic.go:334] "Generic (PLEG): container finished" podID="f96dcee4-7734-4166-9a01-443c6ee66f86" containerID="0e14bee8e522916ea5670966d0aff696ff982885f93ca6e554dfbd5aec6d5c80" exitCode=0 Jan 21 11:36:19 crc kubenswrapper[4881]: I0121 11:36:19.101459 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-c995r" event={"ID":"f96dcee4-7734-4166-9a01-443c6ee66f86","Type":"ContainerDied","Data":"0e14bee8e522916ea5670966d0aff696ff982885f93ca6e554dfbd5aec6d5c80"} Jan 21 11:36:20 crc kubenswrapper[4881]: I0121 11:36:20.567639 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-c995r" Jan 21 11:36:20 crc kubenswrapper[4881]: I0121 11:36:20.698672 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f96dcee4-7734-4166-9a01-443c6ee66f86-ssh-key-openstack-edpm-ipam\") pod \"f96dcee4-7734-4166-9a01-443c6ee66f86\" (UID: \"f96dcee4-7734-4166-9a01-443c6ee66f86\") " Jan 21 11:36:20 crc kubenswrapper[4881]: I0121 11:36:20.698933 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f96dcee4-7734-4166-9a01-443c6ee66f86-inventory\") pod \"f96dcee4-7734-4166-9a01-443c6ee66f86\" (UID: \"f96dcee4-7734-4166-9a01-443c6ee66f86\") " Jan 21 11:36:20 crc kubenswrapper[4881]: I0121 11:36:20.698988 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rccwx\" (UniqueName: \"kubernetes.io/projected/f96dcee4-7734-4166-9a01-443c6ee66f86-kube-api-access-rccwx\") pod \"f96dcee4-7734-4166-9a01-443c6ee66f86\" (UID: \"f96dcee4-7734-4166-9a01-443c6ee66f86\") " Jan 21 11:36:20 crc kubenswrapper[4881]: I0121 11:36:20.705609 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f96dcee4-7734-4166-9a01-443c6ee66f86-kube-api-access-rccwx" (OuterVolumeSpecName: "kube-api-access-rccwx") pod "f96dcee4-7734-4166-9a01-443c6ee66f86" (UID: "f96dcee4-7734-4166-9a01-443c6ee66f86"). 
InnerVolumeSpecName "kube-api-access-rccwx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:36:20 crc kubenswrapper[4881]: I0121 11:36:20.730130 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f96dcee4-7734-4166-9a01-443c6ee66f86-inventory" (OuterVolumeSpecName: "inventory") pod "f96dcee4-7734-4166-9a01-443c6ee66f86" (UID: "f96dcee4-7734-4166-9a01-443c6ee66f86"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:36:20 crc kubenswrapper[4881]: I0121 11:36:20.739601 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f96dcee4-7734-4166-9a01-443c6ee66f86-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "f96dcee4-7734-4166-9a01-443c6ee66f86" (UID: "f96dcee4-7734-4166-9a01-443c6ee66f86"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:36:20 crc kubenswrapper[4881]: I0121 11:36:20.801275 4881 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f96dcee4-7734-4166-9a01-443c6ee66f86-inventory\") on node \"crc\" DevicePath \"\"" Jan 21 11:36:20 crc kubenswrapper[4881]: I0121 11:36:20.801325 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rccwx\" (UniqueName: \"kubernetes.io/projected/f96dcee4-7734-4166-9a01-443c6ee66f86-kube-api-access-rccwx\") on node \"crc\" DevicePath \"\"" Jan 21 11:36:20 crc kubenswrapper[4881]: I0121 11:36:20.801340 4881 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f96dcee4-7734-4166-9a01-443c6ee66f86-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 21 11:36:21 crc kubenswrapper[4881]: I0121 11:36:21.127584 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-c995r" event={"ID":"f96dcee4-7734-4166-9a01-443c6ee66f86","Type":"ContainerDied","Data":"c2b4105bf60b3cd2cbbdb22ac6c4b2b563ce2ee61089eacc780ff88d5f4eeae1"} Jan 21 11:36:21 crc kubenswrapper[4881]: I0121 11:36:21.127649 4881 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c2b4105bf60b3cd2cbbdb22ac6c4b2b563ce2ee61089eacc780ff88d5f4eeae1" Jan 21 11:36:21 crc kubenswrapper[4881]: I0121 11:36:21.127676 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-c995r" Jan 21 11:36:21 crc kubenswrapper[4881]: I0121 11:36:21.230095 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-dd2hk"] Jan 21 11:36:21 crc kubenswrapper[4881]: E0121 11:36:21.230809 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f96dcee4-7734-4166-9a01-443c6ee66f86" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 21 11:36:21 crc kubenswrapper[4881]: I0121 11:36:21.230832 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="f96dcee4-7734-4166-9a01-443c6ee66f86" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 21 11:36:21 crc kubenswrapper[4881]: I0121 11:36:21.231122 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="f96dcee4-7734-4166-9a01-443c6ee66f86" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 21 11:36:21 crc kubenswrapper[4881]: I0121 11:36:21.232165 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-dd2hk" Jan 21 11:36:21 crc kubenswrapper[4881]: I0121 11:36:21.236096 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-fd7zg" Jan 21 11:36:21 crc kubenswrapper[4881]: I0121 11:36:21.236255 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 21 11:36:21 crc kubenswrapper[4881]: I0121 11:36:21.236282 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 21 11:36:21 crc kubenswrapper[4881]: I0121 11:36:21.236946 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 21 11:36:21 crc kubenswrapper[4881]: I0121 11:36:21.248702 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-dd2hk"] Jan 21 11:36:21 crc kubenswrapper[4881]: I0121 11:36:21.312665 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/157a809f-f6fa-43dc-b73d-380976da1312-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-dd2hk\" (UID: \"157a809f-f6fa-43dc-b73d-380976da1312\") " pod="openstack/ssh-known-hosts-edpm-deployment-dd2hk" Jan 21 11:36:21 crc kubenswrapper[4881]: I0121 11:36:21.312765 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/157a809f-f6fa-43dc-b73d-380976da1312-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-dd2hk\" (UID: \"157a809f-f6fa-43dc-b73d-380976da1312\") " pod="openstack/ssh-known-hosts-edpm-deployment-dd2hk" Jan 21 11:36:21 crc kubenswrapper[4881]: I0121 11:36:21.312935 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hcxvz\" (UniqueName: \"kubernetes.io/projected/157a809f-f6fa-43dc-b73d-380976da1312-kube-api-access-hcxvz\") pod \"ssh-known-hosts-edpm-deployment-dd2hk\" (UID: \"157a809f-f6fa-43dc-b73d-380976da1312\") " pod="openstack/ssh-known-hosts-edpm-deployment-dd2hk" Jan 21 11:36:21 crc kubenswrapper[4881]: I0121 11:36:21.415587 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/157a809f-f6fa-43dc-b73d-380976da1312-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-dd2hk\" (UID: \"157a809f-f6fa-43dc-b73d-380976da1312\") " pod="openstack/ssh-known-hosts-edpm-deployment-dd2hk" Jan 21 11:36:21 crc kubenswrapper[4881]: I0121 11:36:21.415742 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hcxvz\" (UniqueName: \"kubernetes.io/projected/157a809f-f6fa-43dc-b73d-380976da1312-kube-api-access-hcxvz\") pod \"ssh-known-hosts-edpm-deployment-dd2hk\" (UID: \"157a809f-f6fa-43dc-b73d-380976da1312\") " pod="openstack/ssh-known-hosts-edpm-deployment-dd2hk" Jan 21 11:36:21 crc kubenswrapper[4881]: I0121 11:36:21.417032 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/157a809f-f6fa-43dc-b73d-380976da1312-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-dd2hk\" (UID: \"157a809f-f6fa-43dc-b73d-380976da1312\") " pod="openstack/ssh-known-hosts-edpm-deployment-dd2hk" Jan 21 11:36:21 crc kubenswrapper[4881]: I0121 11:36:21.422682 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/157a809f-f6fa-43dc-b73d-380976da1312-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-dd2hk\" (UID: \"157a809f-f6fa-43dc-b73d-380976da1312\") " pod="openstack/ssh-known-hosts-edpm-deployment-dd2hk" Jan 21 11:36:21 crc kubenswrapper[4881]: I0121 11:36:21.424109 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/157a809f-f6fa-43dc-b73d-380976da1312-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-dd2hk\" (UID: \"157a809f-f6fa-43dc-b73d-380976da1312\") " pod="openstack/ssh-known-hosts-edpm-deployment-dd2hk" Jan 21 11:36:21 crc kubenswrapper[4881]: I0121 11:36:21.433822 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hcxvz\" (UniqueName: \"kubernetes.io/projected/157a809f-f6fa-43dc-b73d-380976da1312-kube-api-access-hcxvz\") pod \"ssh-known-hosts-edpm-deployment-dd2hk\" (UID: \"157a809f-f6fa-43dc-b73d-380976da1312\") " pod="openstack/ssh-known-hosts-edpm-deployment-dd2hk" Jan 21 11:36:21 crc kubenswrapper[4881]: I0121 11:36:21.587109 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-dd2hk" Jan 21 11:36:22 crc kubenswrapper[4881]: I0121 11:36:22.132482 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-dd2hk"] Jan 21 11:36:23 crc kubenswrapper[4881]: I0121 11:36:23.144953 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-dd2hk" event={"ID":"157a809f-f6fa-43dc-b73d-380976da1312","Type":"ContainerStarted","Data":"f1db5909ded55b74a3536abb2e28180a19052deddcecbb0f0ed78e60d78a0e4f"} Jan 21 11:36:23 crc kubenswrapper[4881]: I0121 11:36:23.145278 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-dd2hk" event={"ID":"157a809f-f6fa-43dc-b73d-380976da1312","Type":"ContainerStarted","Data":"66235c313b4580faaef6c50feeddc7e2004a0ad3aed1911d1a15ba7785f574fc"} Jan 21 11:36:23 crc kubenswrapper[4881]: I0121 11:36:23.167644 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-dd2hk" podStartSLOduration=1.713326981 podStartE2EDuration="2.167620988s" podCreationTimestamp="2026-01-21 11:36:21 +0000 UTC" firstStartedPulling="2026-01-21 11:36:22.151919785 +0000 UTC m=+2369.411876254" lastFinishedPulling="2026-01-21 11:36:22.606213792 +0000 UTC m=+2369.866170261" observedRunningTime="2026-01-21 11:36:23.160053704 +0000 UTC m=+2370.420010173" watchObservedRunningTime="2026-01-21 11:36:23.167620988 +0000 UTC m=+2370.427577457" Jan 21 11:36:29 crc kubenswrapper[4881]: I0121 11:36:29.851401 4881 patch_prober.go:28] interesting pod/machine-config-daemon-fb4fr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 11:36:29 crc kubenswrapper[4881]: I0121 11:36:29.852458 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 11:36:29 crc kubenswrapper[4881]: I0121 11:36:29.852546 4881 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" Jan 21 11:36:29 crc kubenswrapper[4881]: I0121 11:36:29.853423 4881 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ec38e0182d2afb5352817a93e9019bceb63eb1c3df53485164249f8ee9b9d46f"} pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 11:36:29 crc kubenswrapper[4881]: I0121 11:36:29.853485 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" containerName="machine-config-daemon" containerID="cri-o://ec38e0182d2afb5352817a93e9019bceb63eb1c3df53485164249f8ee9b9d46f" gracePeriod=600 Jan 21 11:36:29 crc kubenswrapper[4881]: E0121 11:36:29.978924 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting 
failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 11:36:30 crc kubenswrapper[4881]: I0121 11:36:30.227122 4881 generic.go:334] "Generic (PLEG): container finished" podID="3687b313-1df2-4274-80db-8c758b51bf2d" containerID="ec38e0182d2afb5352817a93e9019bceb63eb1c3df53485164249f8ee9b9d46f" exitCode=0 Jan 21 11:36:30 crc kubenswrapper[4881]: I0121 11:36:30.227159 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" event={"ID":"3687b313-1df2-4274-80db-8c758b51bf2d","Type":"ContainerDied","Data":"ec38e0182d2afb5352817a93e9019bceb63eb1c3df53485164249f8ee9b9d46f"} Jan 21 11:36:30 crc kubenswrapper[4881]: I0121 11:36:30.227223 4881 scope.go:117] "RemoveContainer" containerID="ef39ee7cfe761ce9a9728441eb10e70a161b503ea812b7dfbf273e44506d3274" Jan 21 11:36:30 crc kubenswrapper[4881]: I0121 11:36:30.228046 4881 scope.go:117] "RemoveContainer" containerID="ec38e0182d2afb5352817a93e9019bceb63eb1c3df53485164249f8ee9b9d46f" Jan 21 11:36:30 crc kubenswrapper[4881]: E0121 11:36:30.228379 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 11:36:31 crc kubenswrapper[4881]: I0121 11:36:31.242491 4881 generic.go:334] "Generic (PLEG): container finished" podID="157a809f-f6fa-43dc-b73d-380976da1312" containerID="f1db5909ded55b74a3536abb2e28180a19052deddcecbb0f0ed78e60d78a0e4f" exitCode=0 Jan 21 11:36:31 crc kubenswrapper[4881]: I0121 11:36:31.242582 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-dd2hk" event={"ID":"157a809f-f6fa-43dc-b73d-380976da1312","Type":"ContainerDied","Data":"f1db5909ded55b74a3536abb2e28180a19052deddcecbb0f0ed78e60d78a0e4f"} Jan 21 11:36:31 crc kubenswrapper[4881]: I0121 11:36:31.250280 4881 scope.go:117] "RemoveContainer" containerID="c1eba3ae03b1d6805b90d42d0ec2f798fa4704781a61dbdfa8159f414d7bb80e" Jan 21 11:36:31 crc kubenswrapper[4881]: I0121 11:36:31.281058 4881 scope.go:117] "RemoveContainer" containerID="c222168e828ddf8dc31adf5d20e6251d1aebd2db36a121297ee44763be9bc74e" Jan 21 11:36:31 crc kubenswrapper[4881]: I0121 11:36:31.344272 4881 scope.go:117] "RemoveContainer" containerID="0ab0a82d406b0a4031e5637f72af69a714ded06513932b035aeb5ac564f21b6b" Jan 21 11:36:31 crc kubenswrapper[4881]: I0121 11:36:31.382865 4881 scope.go:117] "RemoveContainer" containerID="d6ee22258af69df6704251a1ea48a067b0aad9b9017145fdec7581e1437ace89" Jan 21 11:36:31 crc kubenswrapper[4881]: I0121 11:36:31.403176 4881 scope.go:117] "RemoveContainer" containerID="48d5d26b6c9086a6b947d5294b328f1c7e8f26fa1ce1593b0120714fc18e44b1" Jan 21 11:36:31 crc kubenswrapper[4881]: I0121 11:36:31.456965 4881 scope.go:117] "RemoveContainer" containerID="5e0abf8ffd3df2b4543f3b78f4df1de894199c4c001e6db2e5a3872e46d7a54b" Jan 21 11:36:32 crc kubenswrapper[4881]: I0121 11:36:32.728492 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-dd2hk" Jan 21 11:36:32 crc kubenswrapper[4881]: I0121 11:36:32.915850 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hcxvz\" (UniqueName: \"kubernetes.io/projected/157a809f-f6fa-43dc-b73d-380976da1312-kube-api-access-hcxvz\") pod \"157a809f-f6fa-43dc-b73d-380976da1312\" (UID: \"157a809f-f6fa-43dc-b73d-380976da1312\") " Jan 21 11:36:32 crc kubenswrapper[4881]: I0121 11:36:32.915977 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/157a809f-f6fa-43dc-b73d-380976da1312-ssh-key-openstack-edpm-ipam\") pod \"157a809f-f6fa-43dc-b73d-380976da1312\" (UID: \"157a809f-f6fa-43dc-b73d-380976da1312\") " Jan 21 11:36:32 crc kubenswrapper[4881]: I0121 11:36:32.916107 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/157a809f-f6fa-43dc-b73d-380976da1312-inventory-0\") pod \"157a809f-f6fa-43dc-b73d-380976da1312\" (UID: \"157a809f-f6fa-43dc-b73d-380976da1312\") " Jan 21 11:36:32 crc kubenswrapper[4881]: I0121 11:36:32.925447 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/157a809f-f6fa-43dc-b73d-380976da1312-kube-api-access-hcxvz" (OuterVolumeSpecName: "kube-api-access-hcxvz") pod "157a809f-f6fa-43dc-b73d-380976da1312" (UID: "157a809f-f6fa-43dc-b73d-380976da1312"). InnerVolumeSpecName "kube-api-access-hcxvz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:36:32 crc kubenswrapper[4881]: I0121 11:36:32.970038 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/157a809f-f6fa-43dc-b73d-380976da1312-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "157a809f-f6fa-43dc-b73d-380976da1312" (UID: "157a809f-f6fa-43dc-b73d-380976da1312"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:36:32 crc kubenswrapper[4881]: I0121 11:36:32.985981 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/157a809f-f6fa-43dc-b73d-380976da1312-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "157a809f-f6fa-43dc-b73d-380976da1312" (UID: "157a809f-f6fa-43dc-b73d-380976da1312"). InnerVolumeSpecName "inventory-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:36:33 crc kubenswrapper[4881]: I0121 11:36:33.019355 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hcxvz\" (UniqueName: \"kubernetes.io/projected/157a809f-f6fa-43dc-b73d-380976da1312-kube-api-access-hcxvz\") on node \"crc\" DevicePath \"\"" Jan 21 11:36:33 crc kubenswrapper[4881]: I0121 11:36:33.019385 4881 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/157a809f-f6fa-43dc-b73d-380976da1312-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 21 11:36:33 crc kubenswrapper[4881]: I0121 11:36:33.019398 4881 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/157a809f-f6fa-43dc-b73d-380976da1312-inventory-0\") on node \"crc\" DevicePath \"\"" Jan 21 11:36:33 crc kubenswrapper[4881]: I0121 11:36:33.264832 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-dd2hk" event={"ID":"157a809f-f6fa-43dc-b73d-380976da1312","Type":"ContainerDied","Data":"66235c313b4580faaef6c50feeddc7e2004a0ad3aed1911d1a15ba7785f574fc"} Jan 21 11:36:33 crc kubenswrapper[4881]: I0121 11:36:33.264878 4881 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="66235c313b4580faaef6c50feeddc7e2004a0ad3aed1911d1a15ba7785f574fc" Jan 21 11:36:33 crc kubenswrapper[4881]: I0121 11:36:33.264916 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-dd2hk" Jan 21 11:36:33 crc kubenswrapper[4881]: I0121 11:36:33.370407 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-7xfqr"] Jan 21 11:36:33 crc kubenswrapper[4881]: E0121 11:36:33.370959 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="157a809f-f6fa-43dc-b73d-380976da1312" containerName="ssh-known-hosts-edpm-deployment" Jan 21 11:36:33 crc kubenswrapper[4881]: I0121 11:36:33.370977 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="157a809f-f6fa-43dc-b73d-380976da1312" containerName="ssh-known-hosts-edpm-deployment" Jan 21 11:36:33 crc kubenswrapper[4881]: I0121 11:36:33.371166 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="157a809f-f6fa-43dc-b73d-380976da1312" containerName="ssh-known-hosts-edpm-deployment" Jan 21 11:36:33 crc kubenswrapper[4881]: I0121 11:36:33.371937 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-7xfqr" Jan 21 11:36:33 crc kubenswrapper[4881]: I0121 11:36:33.384484 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 21 11:36:33 crc kubenswrapper[4881]: I0121 11:36:33.384720 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 21 11:36:33 crc kubenswrapper[4881]: I0121 11:36:33.385281 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-fd7zg" Jan 21 11:36:33 crc kubenswrapper[4881]: I0121 11:36:33.385428 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 21 11:36:33 crc kubenswrapper[4881]: I0121 11:36:33.388504 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-7xfqr"] Jan 21 11:36:33 crc kubenswrapper[4881]: I0121 11:36:33.447056 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/af647318-40b6-4ce3-8f5b-c3af4c8dcb0d-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-7xfqr\" (UID: \"af647318-40b6-4ce3-8f5b-c3af4c8dcb0d\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-7xfqr" Jan 21 11:36:33 crc kubenswrapper[4881]: I0121 11:36:33.447147 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/af647318-40b6-4ce3-8f5b-c3af4c8dcb0d-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-7xfqr\" (UID: \"af647318-40b6-4ce3-8f5b-c3af4c8dcb0d\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-7xfqr" Jan 21 11:36:33 crc kubenswrapper[4881]: I0121 11:36:33.447233 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-47xx6\" (UniqueName: \"kubernetes.io/projected/af647318-40b6-4ce3-8f5b-c3af4c8dcb0d-kube-api-access-47xx6\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-7xfqr\" (UID: \"af647318-40b6-4ce3-8f5b-c3af4c8dcb0d\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-7xfqr" Jan 21 11:36:33 crc kubenswrapper[4881]: I0121 11:36:33.549241 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/af647318-40b6-4ce3-8f5b-c3af4c8dcb0d-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-7xfqr\" (UID: \"af647318-40b6-4ce3-8f5b-c3af4c8dcb0d\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-7xfqr" Jan 21 11:36:33 crc kubenswrapper[4881]: I0121 11:36:33.549315 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/af647318-40b6-4ce3-8f5b-c3af4c8dcb0d-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-7xfqr\" (UID: \"af647318-40b6-4ce3-8f5b-c3af4c8dcb0d\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-7xfqr" Jan 21 11:36:33 crc kubenswrapper[4881]: I0121 11:36:33.549352 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-47xx6\" (UniqueName: \"kubernetes.io/projected/af647318-40b6-4ce3-8f5b-c3af4c8dcb0d-kube-api-access-47xx6\") pod 
\"run-os-edpm-deployment-openstack-edpm-ipam-7xfqr\" (UID: \"af647318-40b6-4ce3-8f5b-c3af4c8dcb0d\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-7xfqr" Jan 21 11:36:33 crc kubenswrapper[4881]: I0121 11:36:33.553816 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/af647318-40b6-4ce3-8f5b-c3af4c8dcb0d-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-7xfqr\" (UID: \"af647318-40b6-4ce3-8f5b-c3af4c8dcb0d\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-7xfqr" Jan 21 11:36:33 crc kubenswrapper[4881]: I0121 11:36:33.560069 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/af647318-40b6-4ce3-8f5b-c3af4c8dcb0d-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-7xfqr\" (UID: \"af647318-40b6-4ce3-8f5b-c3af4c8dcb0d\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-7xfqr" Jan 21 11:36:33 crc kubenswrapper[4881]: I0121 11:36:33.567474 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-47xx6\" (UniqueName: \"kubernetes.io/projected/af647318-40b6-4ce3-8f5b-c3af4c8dcb0d-kube-api-access-47xx6\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-7xfqr\" (UID: \"af647318-40b6-4ce3-8f5b-c3af4c8dcb0d\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-7xfqr" Jan 21 11:36:33 crc kubenswrapper[4881]: I0121 11:36:33.698459 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-7xfqr" Jan 21 11:36:34 crc kubenswrapper[4881]: I0121 11:36:34.327866 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-7xfqr"] Jan 21 11:36:35 crc kubenswrapper[4881]: I0121 11:36:35.290677 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-7xfqr" event={"ID":"af647318-40b6-4ce3-8f5b-c3af4c8dcb0d","Type":"ContainerStarted","Data":"409f626ab96ec0faa85083350b4a7d7f3a62c09e89bee9c03ac1296a6549197d"} Jan 21 11:36:35 crc kubenswrapper[4881]: I0121 11:36:35.291180 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-7xfqr" event={"ID":"af647318-40b6-4ce3-8f5b-c3af4c8dcb0d","Type":"ContainerStarted","Data":"a859b9ac6ed5fc21e2f0d9aea74ba2e88254a24acdbcf86471001e1c0e500490"} Jan 21 11:36:35 crc kubenswrapper[4881]: I0121 11:36:35.309095 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-7xfqr" podStartSLOduration=1.860044757 podStartE2EDuration="2.309070995s" podCreationTimestamp="2026-01-21 11:36:33 +0000 UTC" firstStartedPulling="2026-01-21 11:36:34.353110692 +0000 UTC m=+2381.613067161" lastFinishedPulling="2026-01-21 11:36:34.80213691 +0000 UTC m=+2382.062093399" observedRunningTime="2026-01-21 11:36:35.305945928 +0000 UTC m=+2382.565902397" watchObservedRunningTime="2026-01-21 11:36:35.309070995 +0000 UTC m=+2382.569027464" Jan 21 11:36:44 crc kubenswrapper[4881]: I0121 11:36:44.371858 4881 generic.go:334] "Generic (PLEG): container finished" podID="af647318-40b6-4ce3-8f5b-c3af4c8dcb0d" containerID="409f626ab96ec0faa85083350b4a7d7f3a62c09e89bee9c03ac1296a6549197d" exitCode=0 Jan 21 11:36:44 crc kubenswrapper[4881]: I0121 11:36:44.371937 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-7xfqr" event={"ID":"af647318-40b6-4ce3-8f5b-c3af4c8dcb0d","Type":"ContainerDied","Data":"409f626ab96ec0faa85083350b4a7d7f3a62c09e89bee9c03ac1296a6549197d"} Jan 21 11:36:45 crc kubenswrapper[4881]: I0121 11:36:45.311274 4881 scope.go:117] "RemoveContainer" containerID="ec38e0182d2afb5352817a93e9019bceb63eb1c3df53485164249f8ee9b9d46f" Jan 21 11:36:45 crc kubenswrapper[4881]: E0121 11:36:45.311941 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 11:36:45 crc kubenswrapper[4881]: I0121 11:36:45.876034 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-7xfqr" Jan 21 11:36:45 crc kubenswrapper[4881]: I0121 11:36:45.976584 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/af647318-40b6-4ce3-8f5b-c3af4c8dcb0d-inventory\") pod \"af647318-40b6-4ce3-8f5b-c3af4c8dcb0d\" (UID: \"af647318-40b6-4ce3-8f5b-c3af4c8dcb0d\") " Jan 21 11:36:45 crc kubenswrapper[4881]: I0121 11:36:45.976729 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-47xx6\" (UniqueName: \"kubernetes.io/projected/af647318-40b6-4ce3-8f5b-c3af4c8dcb0d-kube-api-access-47xx6\") pod \"af647318-40b6-4ce3-8f5b-c3af4c8dcb0d\" (UID: \"af647318-40b6-4ce3-8f5b-c3af4c8dcb0d\") " Jan 21 11:36:45 crc kubenswrapper[4881]: I0121 11:36:45.983061 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af647318-40b6-4ce3-8f5b-c3af4c8dcb0d-kube-api-access-47xx6" (OuterVolumeSpecName: "kube-api-access-47xx6") pod "af647318-40b6-4ce3-8f5b-c3af4c8dcb0d" (UID: "af647318-40b6-4ce3-8f5b-c3af4c8dcb0d"). InnerVolumeSpecName "kube-api-access-47xx6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:36:46 crc kubenswrapper[4881]: I0121 11:36:46.010989 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af647318-40b6-4ce3-8f5b-c3af4c8dcb0d-inventory" (OuterVolumeSpecName: "inventory") pod "af647318-40b6-4ce3-8f5b-c3af4c8dcb0d" (UID: "af647318-40b6-4ce3-8f5b-c3af4c8dcb0d"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:36:46 crc kubenswrapper[4881]: I0121 11:36:46.078146 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/af647318-40b6-4ce3-8f5b-c3af4c8dcb0d-ssh-key-openstack-edpm-ipam\") pod \"af647318-40b6-4ce3-8f5b-c3af4c8dcb0d\" (UID: \"af647318-40b6-4ce3-8f5b-c3af4c8dcb0d\") " Jan 21 11:36:46 crc kubenswrapper[4881]: I0121 11:36:46.078532 4881 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/af647318-40b6-4ce3-8f5b-c3af4c8dcb0d-inventory\") on node \"crc\" DevicePath \"\"" Jan 21 11:36:46 crc kubenswrapper[4881]: I0121 11:36:46.078554 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-47xx6\" (UniqueName: \"kubernetes.io/projected/af647318-40b6-4ce3-8f5b-c3af4c8dcb0d-kube-api-access-47xx6\") on node \"crc\" DevicePath \"\"" Jan 21 11:36:46 crc kubenswrapper[4881]: I0121 11:36:46.104647 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af647318-40b6-4ce3-8f5b-c3af4c8dcb0d-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "af647318-40b6-4ce3-8f5b-c3af4c8dcb0d" (UID: "af647318-40b6-4ce3-8f5b-c3af4c8dcb0d"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:36:46 crc kubenswrapper[4881]: I0121 11:36:46.181083 4881 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/af647318-40b6-4ce3-8f5b-c3af4c8dcb0d-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 21 11:36:46 crc kubenswrapper[4881]: I0121 11:36:46.393842 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-7xfqr" event={"ID":"af647318-40b6-4ce3-8f5b-c3af4c8dcb0d","Type":"ContainerDied","Data":"a859b9ac6ed5fc21e2f0d9aea74ba2e88254a24acdbcf86471001e1c0e500490"} Jan 21 11:36:46 crc kubenswrapper[4881]: I0121 11:36:46.393901 4881 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a859b9ac6ed5fc21e2f0d9aea74ba2e88254a24acdbcf86471001e1c0e500490" Jan 21 11:36:46 crc kubenswrapper[4881]: I0121 11:36:46.393902 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-7xfqr" Jan 21 11:36:46 crc kubenswrapper[4881]: I0121 11:36:46.497391 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-rdchn"] Jan 21 11:36:46 crc kubenswrapper[4881]: E0121 11:36:46.498208 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af647318-40b6-4ce3-8f5b-c3af4c8dcb0d" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 21 11:36:46 crc kubenswrapper[4881]: I0121 11:36:46.498236 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="af647318-40b6-4ce3-8f5b-c3af4c8dcb0d" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 21 11:36:46 crc kubenswrapper[4881]: I0121 11:36:46.498475 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="af647318-40b6-4ce3-8f5b-c3af4c8dcb0d" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 21 11:36:46 crc kubenswrapper[4881]: I0121 11:36:46.499671 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-rdchn" Jan 21 11:36:46 crc kubenswrapper[4881]: I0121 11:36:46.505663 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-fd7zg" Jan 21 11:36:46 crc kubenswrapper[4881]: I0121 11:36:46.506103 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-rdchn"] Jan 21 11:36:46 crc kubenswrapper[4881]: I0121 11:36:46.506642 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 21 11:36:46 crc kubenswrapper[4881]: I0121 11:36:46.506713 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 21 11:36:46 crc kubenswrapper[4881]: I0121 11:36:46.509038 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 21 11:36:46 crc kubenswrapper[4881]: I0121 11:36:46.590718 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ndnpw\" (UniqueName: \"kubernetes.io/projected/828bd055-053d-43b7-b76f-746438bb9b41-kube-api-access-ndnpw\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-rdchn\" (UID: \"828bd055-053d-43b7-b76f-746438bb9b41\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-rdchn" Jan 21 11:36:46 crc kubenswrapper[4881]: I0121 11:36:46.591043 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/828bd055-053d-43b7-b76f-746438bb9b41-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-rdchn\" (UID: \"828bd055-053d-43b7-b76f-746438bb9b41\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-rdchn" Jan 21 11:36:46 crc kubenswrapper[4881]: I0121 11:36:46.591109 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/828bd055-053d-43b7-b76f-746438bb9b41-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-rdchn\" (UID: \"828bd055-053d-43b7-b76f-746438bb9b41\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-rdchn" Jan 21 11:36:46 crc kubenswrapper[4881]: I0121 11:36:46.692658 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ndnpw\" (UniqueName: \"kubernetes.io/projected/828bd055-053d-43b7-b76f-746438bb9b41-kube-api-access-ndnpw\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-rdchn\" (UID: \"828bd055-053d-43b7-b76f-746438bb9b41\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-rdchn" Jan 21 11:36:46 crc kubenswrapper[4881]: I0121 11:36:46.692771 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/828bd055-053d-43b7-b76f-746438bb9b41-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-rdchn\" (UID: \"828bd055-053d-43b7-b76f-746438bb9b41\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-rdchn" Jan 21 11:36:46 crc kubenswrapper[4881]: I0121 11:36:46.692827 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/828bd055-053d-43b7-b76f-746438bb9b41-inventory\") pod 
\"reboot-os-edpm-deployment-openstack-edpm-ipam-rdchn\" (UID: \"828bd055-053d-43b7-b76f-746438bb9b41\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-rdchn" Jan 21 11:36:46 crc kubenswrapper[4881]: I0121 11:36:46.696878 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/828bd055-053d-43b7-b76f-746438bb9b41-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-rdchn\" (UID: \"828bd055-053d-43b7-b76f-746438bb9b41\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-rdchn" Jan 21 11:36:46 crc kubenswrapper[4881]: I0121 11:36:46.697376 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/828bd055-053d-43b7-b76f-746438bb9b41-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-rdchn\" (UID: \"828bd055-053d-43b7-b76f-746438bb9b41\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-rdchn" Jan 21 11:36:46 crc kubenswrapper[4881]: I0121 11:36:46.709202 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ndnpw\" (UniqueName: \"kubernetes.io/projected/828bd055-053d-43b7-b76f-746438bb9b41-kube-api-access-ndnpw\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-rdchn\" (UID: \"828bd055-053d-43b7-b76f-746438bb9b41\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-rdchn" Jan 21 11:36:46 crc kubenswrapper[4881]: I0121 11:36:46.828252 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-rdchn" Jan 21 11:36:47 crc kubenswrapper[4881]: I0121 11:36:47.593424 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-rdchn"] Jan 21 11:36:48 crc kubenswrapper[4881]: I0121 11:36:48.410814 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-rdchn" event={"ID":"828bd055-053d-43b7-b76f-746438bb9b41","Type":"ContainerStarted","Data":"29753d3eba82df008a09044e90acb3b1e9b17ea67ac8abcf21cad2cd4786c8d0"} Jan 21 11:36:49 crc kubenswrapper[4881]: I0121 11:36:49.421621 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-rdchn" event={"ID":"828bd055-053d-43b7-b76f-746438bb9b41","Type":"ContainerStarted","Data":"6e3e0b0bdb0a610ffbd23e94a352b3de735fe924fe27e0ef3590b79f42b1d2cb"} Jan 21 11:36:49 crc kubenswrapper[4881]: I0121 11:36:49.450601 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-rdchn" podStartSLOduration=2.754850763 podStartE2EDuration="3.450576623s" podCreationTimestamp="2026-01-21 11:36:46 +0000 UTC" firstStartedPulling="2026-01-21 11:36:47.590512255 +0000 UTC m=+2394.850468724" lastFinishedPulling="2026-01-21 11:36:48.286238115 +0000 UTC m=+2395.546194584" observedRunningTime="2026-01-21 11:36:49.440394404 +0000 UTC m=+2396.700350903" watchObservedRunningTime="2026-01-21 11:36:49.450576623 +0000 UTC m=+2396.710533092" Jan 21 11:36:57 crc kubenswrapper[4881]: I0121 11:36:57.312436 4881 scope.go:117] "RemoveContainer" containerID="ec38e0182d2afb5352817a93e9019bceb63eb1c3df53485164249f8ee9b9d46f" Jan 21 11:36:57 crc kubenswrapper[4881]: E0121 11:36:57.314076 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 11:36:59 crc kubenswrapper[4881]: I0121 11:36:59.533923 4881 generic.go:334] "Generic (PLEG): container finished" podID="828bd055-053d-43b7-b76f-746438bb9b41" containerID="6e3e0b0bdb0a610ffbd23e94a352b3de735fe924fe27e0ef3590b79f42b1d2cb" exitCode=0 Jan 21 11:36:59 crc kubenswrapper[4881]: I0121 11:36:59.534037 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-rdchn" event={"ID":"828bd055-053d-43b7-b76f-746438bb9b41","Type":"ContainerDied","Data":"6e3e0b0bdb0a610ffbd23e94a352b3de735fe924fe27e0ef3590b79f42b1d2cb"} Jan 21 11:37:01 crc kubenswrapper[4881]: I0121 11:37:01.008833 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-rdchn" Jan 21 11:37:01 crc kubenswrapper[4881]: I0121 11:37:01.111522 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/828bd055-053d-43b7-b76f-746438bb9b41-inventory\") pod \"828bd055-053d-43b7-b76f-746438bb9b41\" (UID: \"828bd055-053d-43b7-b76f-746438bb9b41\") " Jan 21 11:37:01 crc kubenswrapper[4881]: I0121 11:37:01.111882 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ndnpw\" (UniqueName: \"kubernetes.io/projected/828bd055-053d-43b7-b76f-746438bb9b41-kube-api-access-ndnpw\") pod \"828bd055-053d-43b7-b76f-746438bb9b41\" (UID: \"828bd055-053d-43b7-b76f-746438bb9b41\") " Jan 21 11:37:01 crc kubenswrapper[4881]: I0121 11:37:01.111937 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/828bd055-053d-43b7-b76f-746438bb9b41-ssh-key-openstack-edpm-ipam\") pod \"828bd055-053d-43b7-b76f-746438bb9b41\" (UID: \"828bd055-053d-43b7-b76f-746438bb9b41\") " Jan 21 11:37:01 crc kubenswrapper[4881]: I0121 11:37:01.120108 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/828bd055-053d-43b7-b76f-746438bb9b41-kube-api-access-ndnpw" (OuterVolumeSpecName: "kube-api-access-ndnpw") pod "828bd055-053d-43b7-b76f-746438bb9b41" (UID: "828bd055-053d-43b7-b76f-746438bb9b41"). InnerVolumeSpecName "kube-api-access-ndnpw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:37:01 crc kubenswrapper[4881]: I0121 11:37:01.143022 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/828bd055-053d-43b7-b76f-746438bb9b41-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "828bd055-053d-43b7-b76f-746438bb9b41" (UID: "828bd055-053d-43b7-b76f-746438bb9b41"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:37:01 crc kubenswrapper[4881]: I0121 11:37:01.144925 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/828bd055-053d-43b7-b76f-746438bb9b41-inventory" (OuterVolumeSpecName: "inventory") pod "828bd055-053d-43b7-b76f-746438bb9b41" (UID: "828bd055-053d-43b7-b76f-746438bb9b41"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:37:01 crc kubenswrapper[4881]: I0121 11:37:01.215069 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ndnpw\" (UniqueName: \"kubernetes.io/projected/828bd055-053d-43b7-b76f-746438bb9b41-kube-api-access-ndnpw\") on node \"crc\" DevicePath \"\"" Jan 21 11:37:01 crc kubenswrapper[4881]: I0121 11:37:01.215108 4881 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/828bd055-053d-43b7-b76f-746438bb9b41-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 21 11:37:01 crc kubenswrapper[4881]: I0121 11:37:01.215123 4881 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/828bd055-053d-43b7-b76f-746438bb9b41-inventory\") on node \"crc\" DevicePath \"\"" Jan 21 11:37:01 crc kubenswrapper[4881]: I0121 11:37:01.559601 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-rdchn" event={"ID":"828bd055-053d-43b7-b76f-746438bb9b41","Type":"ContainerDied","Data":"29753d3eba82df008a09044e90acb3b1e9b17ea67ac8abcf21cad2cd4786c8d0"} Jan 21 11:37:01 crc kubenswrapper[4881]: I0121 11:37:01.559645 4881 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="29753d3eba82df008a09044e90acb3b1e9b17ea67ac8abcf21cad2cd4786c8d0" Jan 21 11:37:01 crc kubenswrapper[4881]: I0121 11:37:01.559663 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-rdchn" Jan 21 11:37:01 crc kubenswrapper[4881]: I0121 11:37:01.650484 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5l99l"] Jan 21 11:37:01 crc kubenswrapper[4881]: E0121 11:37:01.650924 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="828bd055-053d-43b7-b76f-746438bb9b41" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 21 11:37:01 crc kubenswrapper[4881]: I0121 11:37:01.650943 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="828bd055-053d-43b7-b76f-746438bb9b41" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 21 11:37:01 crc kubenswrapper[4881]: I0121 11:37:01.651141 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="828bd055-053d-43b7-b76f-746438bb9b41" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 21 11:37:01 crc kubenswrapper[4881]: I0121 11:37:01.651819 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5l99l" Jan 21 11:37:01 crc kubenswrapper[4881]: I0121 11:37:01.654763 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 21 11:37:01 crc kubenswrapper[4881]: I0121 11:37:01.655164 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 21 11:37:01 crc kubenswrapper[4881]: I0121 11:37:01.655426 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-neutron-metadata-default-certs-0" Jan 21 11:37:01 crc kubenswrapper[4881]: I0121 11:37:01.655749 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 21 11:37:01 crc kubenswrapper[4881]: I0121 11:37:01.655952 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-default-certs-0" Jan 21 11:37:01 crc kubenswrapper[4881]: I0121 11:37:01.656083 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-ovn-default-certs-0" Jan 21 11:37:01 crc kubenswrapper[4881]: I0121 11:37:01.660121 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-libvirt-default-certs-0" Jan 21 11:37:01 crc kubenswrapper[4881]: I0121 11:37:01.668118 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-fd7zg" Jan 21 11:37:01 crc kubenswrapper[4881]: I0121 11:37:01.679253 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5l99l"] Jan 21 11:37:01 crc kubenswrapper[4881]: I0121 11:37:01.825681 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/1ef84c59-8554-4369-9f9f-877505b3b952-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5l99l\" (UID: \"1ef84c59-8554-4369-9f9f-877505b3b952\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5l99l" Jan 21 11:37:01 crc kubenswrapper[4881]: I0121 11:37:01.825812 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1ef84c59-8554-4369-9f9f-877505b3b952-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5l99l\" (UID: \"1ef84c59-8554-4369-9f9f-877505b3b952\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5l99l" Jan 21 11:37:01 crc kubenswrapper[4881]: I0121 11:37:01.825870 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ef84c59-8554-4369-9f9f-877505b3b952-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5l99l\" (UID: \"1ef84c59-8554-4369-9f9f-877505b3b952\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5l99l" Jan 21 11:37:01 crc kubenswrapper[4881]: I0121 11:37:01.825929 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ef84c59-8554-4369-9f9f-877505b3b952-telemetry-combined-ca-bundle\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-5l99l\" (UID: \"1ef84c59-8554-4369-9f9f-877505b3b952\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5l99l" Jan 21 11:37:01 crc kubenswrapper[4881]: I0121 11:37:01.825966 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ef84c59-8554-4369-9f9f-877505b3b952-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5l99l\" (UID: \"1ef84c59-8554-4369-9f9f-877505b3b952\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5l99l" Jan 21 11:37:01 crc kubenswrapper[4881]: I0121 11:37:01.826096 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/1ef84c59-8554-4369-9f9f-877505b3b952-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5l99l\" (UID: \"1ef84c59-8554-4369-9f9f-877505b3b952\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5l99l" Jan 21 11:37:01 crc kubenswrapper[4881]: I0121 11:37:01.826141 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ef84c59-8554-4369-9f9f-877505b3b952-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5l99l\" (UID: \"1ef84c59-8554-4369-9f9f-877505b3b952\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5l99l" Jan 21 11:37:01 crc kubenswrapper[4881]: I0121 11:37:01.826177 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ef84c59-8554-4369-9f9f-877505b3b952-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5l99l\" (UID: \"1ef84c59-8554-4369-9f9f-877505b3b952\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5l99l" Jan 21 11:37:01 crc kubenswrapper[4881]: I0121 11:37:01.826266 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/1ef84c59-8554-4369-9f9f-877505b3b952-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5l99l\" (UID: \"1ef84c59-8554-4369-9f9f-877505b3b952\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5l99l" Jan 21 11:37:01 crc kubenswrapper[4881]: I0121 11:37:01.826287 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ef84c59-8554-4369-9f9f-877505b3b952-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5l99l\" (UID: \"1ef84c59-8554-4369-9f9f-877505b3b952\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5l99l" Jan 21 11:37:01 crc kubenswrapper[4881]: I0121 11:37:01.826333 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1ef84c59-8554-4369-9f9f-877505b3b952-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5l99l\" (UID: 
\"1ef84c59-8554-4369-9f9f-877505b3b952\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5l99l" Jan 21 11:37:01 crc kubenswrapper[4881]: I0121 11:37:01.826427 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ef84c59-8554-4369-9f9f-877505b3b952-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5l99l\" (UID: \"1ef84c59-8554-4369-9f9f-877505b3b952\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5l99l" Jan 21 11:37:01 crc kubenswrapper[4881]: I0121 11:37:01.826459 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/1ef84c59-8554-4369-9f9f-877505b3b952-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5l99l\" (UID: \"1ef84c59-8554-4369-9f9f-877505b3b952\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5l99l" Jan 21 11:37:01 crc kubenswrapper[4881]: I0121 11:37:01.826570 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dtl5j\" (UniqueName: \"kubernetes.io/projected/1ef84c59-8554-4369-9f9f-877505b3b952-kube-api-access-dtl5j\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5l99l\" (UID: \"1ef84c59-8554-4369-9f9f-877505b3b952\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5l99l" Jan 21 11:37:01 crc kubenswrapper[4881]: I0121 11:37:01.929442 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1ef84c59-8554-4369-9f9f-877505b3b952-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5l99l\" (UID: \"1ef84c59-8554-4369-9f9f-877505b3b952\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5l99l" Jan 21 11:37:01 crc kubenswrapper[4881]: I0121 11:37:01.930375 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ef84c59-8554-4369-9f9f-877505b3b952-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5l99l\" (UID: \"1ef84c59-8554-4369-9f9f-877505b3b952\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5l99l" Jan 21 11:37:01 crc kubenswrapper[4881]: I0121 11:37:01.930473 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/1ef84c59-8554-4369-9f9f-877505b3b952-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5l99l\" (UID: \"1ef84c59-8554-4369-9f9f-877505b3b952\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5l99l" Jan 21 11:37:01 crc kubenswrapper[4881]: I0121 11:37:01.930589 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dtl5j\" (UniqueName: \"kubernetes.io/projected/1ef84c59-8554-4369-9f9f-877505b3b952-kube-api-access-dtl5j\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5l99l\" (UID: \"1ef84c59-8554-4369-9f9f-877505b3b952\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5l99l" Jan 21 11:37:01 crc 
kubenswrapper[4881]: I0121 11:37:01.930640 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/1ef84c59-8554-4369-9f9f-877505b3b952-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5l99l\" (UID: \"1ef84c59-8554-4369-9f9f-877505b3b952\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5l99l" Jan 21 11:37:01 crc kubenswrapper[4881]: I0121 11:37:01.930677 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1ef84c59-8554-4369-9f9f-877505b3b952-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5l99l\" (UID: \"1ef84c59-8554-4369-9f9f-877505b3b952\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5l99l" Jan 21 11:37:01 crc kubenswrapper[4881]: I0121 11:37:01.930711 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ef84c59-8554-4369-9f9f-877505b3b952-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5l99l\" (UID: \"1ef84c59-8554-4369-9f9f-877505b3b952\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5l99l" Jan 21 11:37:01 crc kubenswrapper[4881]: I0121 11:37:01.930779 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ef84c59-8554-4369-9f9f-877505b3b952-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5l99l\" (UID: \"1ef84c59-8554-4369-9f9f-877505b3b952\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5l99l" Jan 21 11:37:01 crc kubenswrapper[4881]: I0121 11:37:01.930827 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ef84c59-8554-4369-9f9f-877505b3b952-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5l99l\" (UID: \"1ef84c59-8554-4369-9f9f-877505b3b952\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5l99l" Jan 21 11:37:01 crc kubenswrapper[4881]: I0121 11:37:01.930895 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/1ef84c59-8554-4369-9f9f-877505b3b952-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5l99l\" (UID: \"1ef84c59-8554-4369-9f9f-877505b3b952\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5l99l" Jan 21 11:37:01 crc kubenswrapper[4881]: I0121 11:37:01.930934 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ef84c59-8554-4369-9f9f-877505b3b952-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5l99l\" (UID: \"1ef84c59-8554-4369-9f9f-877505b3b952\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5l99l" Jan 21 11:37:01 crc kubenswrapper[4881]: I0121 11:37:01.930973 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/1ef84c59-8554-4369-9f9f-877505b3b952-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5l99l\" (UID: \"1ef84c59-8554-4369-9f9f-877505b3b952\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5l99l" Jan 21 11:37:01 crc kubenswrapper[4881]: I0121 11:37:01.931041 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/1ef84c59-8554-4369-9f9f-877505b3b952-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5l99l\" (UID: \"1ef84c59-8554-4369-9f9f-877505b3b952\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5l99l" Jan 21 11:37:01 crc kubenswrapper[4881]: I0121 11:37:01.931068 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ef84c59-8554-4369-9f9f-877505b3b952-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5l99l\" (UID: \"1ef84c59-8554-4369-9f9f-877505b3b952\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5l99l" Jan 21 11:37:01 crc kubenswrapper[4881]: I0121 11:37:01.936542 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ef84c59-8554-4369-9f9f-877505b3b952-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5l99l\" (UID: \"1ef84c59-8554-4369-9f9f-877505b3b952\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5l99l" Jan 21 11:37:01 crc kubenswrapper[4881]: I0121 11:37:01.937407 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ef84c59-8554-4369-9f9f-877505b3b952-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5l99l\" (UID: \"1ef84c59-8554-4369-9f9f-877505b3b952\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5l99l" Jan 21 11:37:01 crc kubenswrapper[4881]: I0121 11:37:01.938655 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/1ef84c59-8554-4369-9f9f-877505b3b952-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5l99l\" (UID: \"1ef84c59-8554-4369-9f9f-877505b3b952\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5l99l" Jan 21 11:37:01 crc kubenswrapper[4881]: I0121 11:37:01.941281 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/1ef84c59-8554-4369-9f9f-877505b3b952-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5l99l\" (UID: \"1ef84c59-8554-4369-9f9f-877505b3b952\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5l99l" Jan 21 11:37:01 crc kubenswrapper[4881]: I0121 11:37:01.943878 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ef84c59-8554-4369-9f9f-877505b3b952-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5l99l\" (UID: \"1ef84c59-8554-4369-9f9f-877505b3b952\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5l99l" Jan 21 11:37:01 crc kubenswrapper[4881]: I0121 11:37:01.944041 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1ef84c59-8554-4369-9f9f-877505b3b952-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5l99l\" (UID: \"1ef84c59-8554-4369-9f9f-877505b3b952\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5l99l" Jan 21 11:37:01 crc kubenswrapper[4881]: I0121 11:37:01.944144 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ef84c59-8554-4369-9f9f-877505b3b952-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5l99l\" (UID: \"1ef84c59-8554-4369-9f9f-877505b3b952\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5l99l" Jan 21 11:37:01 crc kubenswrapper[4881]: I0121 11:37:01.944640 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ef84c59-8554-4369-9f9f-877505b3b952-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5l99l\" (UID: \"1ef84c59-8554-4369-9f9f-877505b3b952\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5l99l" Jan 21 11:37:01 crc kubenswrapper[4881]: I0121 11:37:01.945215 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ef84c59-8554-4369-9f9f-877505b3b952-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5l99l\" (UID: \"1ef84c59-8554-4369-9f9f-877505b3b952\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5l99l" Jan 21 11:37:01 crc kubenswrapper[4881]: I0121 11:37:01.945359 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1ef84c59-8554-4369-9f9f-877505b3b952-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5l99l\" (UID: \"1ef84c59-8554-4369-9f9f-877505b3b952\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5l99l" Jan 21 11:37:01 crc kubenswrapper[4881]: I0121 11:37:01.945385 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/1ef84c59-8554-4369-9f9f-877505b3b952-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5l99l\" (UID: \"1ef84c59-8554-4369-9f9f-877505b3b952\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5l99l" Jan 21 11:37:01 crc kubenswrapper[4881]: I0121 11:37:01.946129 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ef84c59-8554-4369-9f9f-877505b3b952-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5l99l\" (UID: \"1ef84c59-8554-4369-9f9f-877505b3b952\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5l99l" Jan 21 11:37:01 crc kubenswrapper[4881]: I0121 11:37:01.949687 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dtl5j\" (UniqueName: 
\"kubernetes.io/projected/1ef84c59-8554-4369-9f9f-877505b3b952-kube-api-access-dtl5j\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5l99l\" (UID: \"1ef84c59-8554-4369-9f9f-877505b3b952\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5l99l" Jan 21 11:37:01 crc kubenswrapper[4881]: I0121 11:37:01.954123 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/1ef84c59-8554-4369-9f9f-877505b3b952-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-5l99l\" (UID: \"1ef84c59-8554-4369-9f9f-877505b3b952\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5l99l" Jan 21 11:37:01 crc kubenswrapper[4881]: I0121 11:37:01.973337 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5l99l" Jan 21 11:37:02 crc kubenswrapper[4881]: I0121 11:37:02.560429 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5l99l"] Jan 21 11:37:02 crc kubenswrapper[4881]: I0121 11:37:02.572578 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5l99l" event={"ID":"1ef84c59-8554-4369-9f9f-877505b3b952","Type":"ContainerStarted","Data":"b627d71bc3743459a8f29f87d494d94cfa00a3d17cac848e85ffa73ca6514114"} Jan 21 11:37:04 crc kubenswrapper[4881]: I0121 11:37:04.593455 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5l99l" event={"ID":"1ef84c59-8554-4369-9f9f-877505b3b952","Type":"ContainerStarted","Data":"1381915837d6170a260b0381bdf5de357458d9bab9d662fd7948a15639c1985e"} Jan 21 11:37:04 crc kubenswrapper[4881]: I0121 11:37:04.627945 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5l99l" podStartSLOduration=2.796711032 podStartE2EDuration="3.627914656s" podCreationTimestamp="2026-01-21 11:37:01 +0000 UTC" firstStartedPulling="2026-01-21 11:37:02.55769026 +0000 UTC m=+2409.817646729" lastFinishedPulling="2026-01-21 11:37:03.388893844 +0000 UTC m=+2410.648850353" observedRunningTime="2026-01-21 11:37:04.621711755 +0000 UTC m=+2411.881668224" watchObservedRunningTime="2026-01-21 11:37:04.627914656 +0000 UTC m=+2411.887871125" Jan 21 11:37:09 crc kubenswrapper[4881]: I0121 11:37:09.312660 4881 scope.go:117] "RemoveContainer" containerID="ec38e0182d2afb5352817a93e9019bceb63eb1c3df53485164249f8ee9b9d46f" Jan 21 11:37:09 crc kubenswrapper[4881]: E0121 11:37:09.313691 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 11:37:20 crc kubenswrapper[4881]: I0121 11:37:20.310685 4881 scope.go:117] "RemoveContainer" containerID="ec38e0182d2afb5352817a93e9019bceb63eb1c3df53485164249f8ee9b9d46f" Jan 21 11:37:20 crc kubenswrapper[4881]: E0121 11:37:20.311766 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 11:37:33 crc kubenswrapper[4881]: I0121 11:37:33.318472 4881 scope.go:117] "RemoveContainer" containerID="ec38e0182d2afb5352817a93e9019bceb63eb1c3df53485164249f8ee9b9d46f" Jan 21 11:37:33 crc kubenswrapper[4881]: E0121 11:37:33.319209 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 11:37:47 crc kubenswrapper[4881]: I0121 11:37:47.310539 4881 scope.go:117] "RemoveContainer" containerID="ec38e0182d2afb5352817a93e9019bceb63eb1c3df53485164249f8ee9b9d46f" Jan 21 11:37:47 crc kubenswrapper[4881]: E0121 11:37:47.311288 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 11:37:48 crc kubenswrapper[4881]: I0121 11:37:48.050457 4881 generic.go:334] "Generic (PLEG): container finished" podID="1ef84c59-8554-4369-9f9f-877505b3b952" containerID="1381915837d6170a260b0381bdf5de357458d9bab9d662fd7948a15639c1985e" exitCode=0 Jan 21 11:37:48 crc kubenswrapper[4881]: I0121 11:37:48.050674 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5l99l" event={"ID":"1ef84c59-8554-4369-9f9f-877505b3b952","Type":"ContainerDied","Data":"1381915837d6170a260b0381bdf5de357458d9bab9d662fd7948a15639c1985e"} Jan 21 11:37:49 crc kubenswrapper[4881]: I0121 11:37:49.514036 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5l99l" Jan 21 11:37:49 crc kubenswrapper[4881]: I0121 11:37:49.587120 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ef84c59-8554-4369-9f9f-877505b3b952-libvirt-combined-ca-bundle\") pod \"1ef84c59-8554-4369-9f9f-877505b3b952\" (UID: \"1ef84c59-8554-4369-9f9f-877505b3b952\") " Jan 21 11:37:49 crc kubenswrapper[4881]: I0121 11:37:49.587261 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ef84c59-8554-4369-9f9f-877505b3b952-ovn-combined-ca-bundle\") pod \"1ef84c59-8554-4369-9f9f-877505b3b952\" (UID: \"1ef84c59-8554-4369-9f9f-877505b3b952\") " Jan 21 11:37:49 crc kubenswrapper[4881]: I0121 11:37:49.587278 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ef84c59-8554-4369-9f9f-877505b3b952-repo-setup-combined-ca-bundle\") pod \"1ef84c59-8554-4369-9f9f-877505b3b952\" (UID: \"1ef84c59-8554-4369-9f9f-877505b3b952\") " Jan 21 11:37:49 crc kubenswrapper[4881]: I0121 11:37:49.587303 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/1ef84c59-8554-4369-9f9f-877505b3b952-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"1ef84c59-8554-4369-9f9f-877505b3b952\" (UID: \"1ef84c59-8554-4369-9f9f-877505b3b952\") " Jan 21 11:37:49 crc kubenswrapper[4881]: I0121 11:37:49.587388 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/1ef84c59-8554-4369-9f9f-877505b3b952-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"1ef84c59-8554-4369-9f9f-877505b3b952\" (UID: \"1ef84c59-8554-4369-9f9f-877505b3b952\") " Jan 21 11:37:49 crc kubenswrapper[4881]: I0121 11:37:49.587418 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1ef84c59-8554-4369-9f9f-877505b3b952-ssh-key-openstack-edpm-ipam\") pod \"1ef84c59-8554-4369-9f9f-877505b3b952\" (UID: \"1ef84c59-8554-4369-9f9f-877505b3b952\") " Jan 21 11:37:49 crc kubenswrapper[4881]: I0121 11:37:49.587452 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ef84c59-8554-4369-9f9f-877505b3b952-bootstrap-combined-ca-bundle\") pod \"1ef84c59-8554-4369-9f9f-877505b3b952\" (UID: \"1ef84c59-8554-4369-9f9f-877505b3b952\") " Jan 21 11:37:49 crc kubenswrapper[4881]: I0121 11:37:49.587497 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/1ef84c59-8554-4369-9f9f-877505b3b952-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"1ef84c59-8554-4369-9f9f-877505b3b952\" (UID: \"1ef84c59-8554-4369-9f9f-877505b3b952\") " Jan 21 11:37:49 crc kubenswrapper[4881]: I0121 11:37:49.587547 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ef84c59-8554-4369-9f9f-877505b3b952-neutron-metadata-combined-ca-bundle\") pod 
\"1ef84c59-8554-4369-9f9f-877505b3b952\" (UID: \"1ef84c59-8554-4369-9f9f-877505b3b952\") " Jan 21 11:37:49 crc kubenswrapper[4881]: I0121 11:37:49.587601 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1ef84c59-8554-4369-9f9f-877505b3b952-inventory\") pod \"1ef84c59-8554-4369-9f9f-877505b3b952\" (UID: \"1ef84c59-8554-4369-9f9f-877505b3b952\") " Jan 21 11:37:49 crc kubenswrapper[4881]: I0121 11:37:49.587675 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/1ef84c59-8554-4369-9f9f-877505b3b952-openstack-edpm-ipam-ovn-default-certs-0\") pod \"1ef84c59-8554-4369-9f9f-877505b3b952\" (UID: \"1ef84c59-8554-4369-9f9f-877505b3b952\") " Jan 21 11:37:49 crc kubenswrapper[4881]: I0121 11:37:49.587712 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ef84c59-8554-4369-9f9f-877505b3b952-nova-combined-ca-bundle\") pod \"1ef84c59-8554-4369-9f9f-877505b3b952\" (UID: \"1ef84c59-8554-4369-9f9f-877505b3b952\") " Jan 21 11:37:49 crc kubenswrapper[4881]: I0121 11:37:49.587746 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ef84c59-8554-4369-9f9f-877505b3b952-telemetry-combined-ca-bundle\") pod \"1ef84c59-8554-4369-9f9f-877505b3b952\" (UID: \"1ef84c59-8554-4369-9f9f-877505b3b952\") " Jan 21 11:37:49 crc kubenswrapper[4881]: I0121 11:37:49.587795 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dtl5j\" (UniqueName: \"kubernetes.io/projected/1ef84c59-8554-4369-9f9f-877505b3b952-kube-api-access-dtl5j\") pod \"1ef84c59-8554-4369-9f9f-877505b3b952\" (UID: \"1ef84c59-8554-4369-9f9f-877505b3b952\") " Jan 21 11:37:49 crc kubenswrapper[4881]: I0121 11:37:49.595698 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1ef84c59-8554-4369-9f9f-877505b3b952-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "1ef84c59-8554-4369-9f9f-877505b3b952" (UID: "1ef84c59-8554-4369-9f9f-877505b3b952"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:37:49 crc kubenswrapper[4881]: I0121 11:37:49.595730 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1ef84c59-8554-4369-9f9f-877505b3b952-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "1ef84c59-8554-4369-9f9f-877505b3b952" (UID: "1ef84c59-8554-4369-9f9f-877505b3b952"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:37:49 crc kubenswrapper[4881]: I0121 11:37:49.595828 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1ef84c59-8554-4369-9f9f-877505b3b952-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "1ef84c59-8554-4369-9f9f-877505b3b952" (UID: "1ef84c59-8554-4369-9f9f-877505b3b952"). InnerVolumeSpecName "libvirt-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:37:49 crc kubenswrapper[4881]: I0121 11:37:49.596323 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1ef84c59-8554-4369-9f9f-877505b3b952-openstack-edpm-ipam-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-ovn-default-certs-0") pod "1ef84c59-8554-4369-9f9f-877505b3b952" (UID: "1ef84c59-8554-4369-9f9f-877505b3b952"). InnerVolumeSpecName "openstack-edpm-ipam-ovn-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:37:49 crc kubenswrapper[4881]: I0121 11:37:49.596475 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1ef84c59-8554-4369-9f9f-877505b3b952-kube-api-access-dtl5j" (OuterVolumeSpecName: "kube-api-access-dtl5j") pod "1ef84c59-8554-4369-9f9f-877505b3b952" (UID: "1ef84c59-8554-4369-9f9f-877505b3b952"). InnerVolumeSpecName "kube-api-access-dtl5j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:37:49 crc kubenswrapper[4881]: I0121 11:37:49.596538 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1ef84c59-8554-4369-9f9f-877505b3b952-openstack-edpm-ipam-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-neutron-metadata-default-certs-0") pod "1ef84c59-8554-4369-9f9f-877505b3b952" (UID: "1ef84c59-8554-4369-9f9f-877505b3b952"). InnerVolumeSpecName "openstack-edpm-ipam-neutron-metadata-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:37:49 crc kubenswrapper[4881]: I0121 11:37:49.596646 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1ef84c59-8554-4369-9f9f-877505b3b952-openstack-edpm-ipam-telemetry-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-default-certs-0") pod "1ef84c59-8554-4369-9f9f-877505b3b952" (UID: "1ef84c59-8554-4369-9f9f-877505b3b952"). InnerVolumeSpecName "openstack-edpm-ipam-telemetry-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:37:49 crc kubenswrapper[4881]: I0121 11:37:49.598500 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1ef84c59-8554-4369-9f9f-877505b3b952-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "1ef84c59-8554-4369-9f9f-877505b3b952" (UID: "1ef84c59-8554-4369-9f9f-877505b3b952"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:37:49 crc kubenswrapper[4881]: I0121 11:37:49.598963 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1ef84c59-8554-4369-9f9f-877505b3b952-openstack-edpm-ipam-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-libvirt-default-certs-0") pod "1ef84c59-8554-4369-9f9f-877505b3b952" (UID: "1ef84c59-8554-4369-9f9f-877505b3b952"). InnerVolumeSpecName "openstack-edpm-ipam-libvirt-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:37:49 crc kubenswrapper[4881]: I0121 11:37:49.600979 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1ef84c59-8554-4369-9f9f-877505b3b952-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "1ef84c59-8554-4369-9f9f-877505b3b952" (UID: "1ef84c59-8554-4369-9f9f-877505b3b952"). 
InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:37:49 crc kubenswrapper[4881]: I0121 11:37:49.600995 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1ef84c59-8554-4369-9f9f-877505b3b952-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "1ef84c59-8554-4369-9f9f-877505b3b952" (UID: "1ef84c59-8554-4369-9f9f-877505b3b952"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:37:49 crc kubenswrapper[4881]: I0121 11:37:49.601238 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1ef84c59-8554-4369-9f9f-877505b3b952-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "1ef84c59-8554-4369-9f9f-877505b3b952" (UID: "1ef84c59-8554-4369-9f9f-877505b3b952"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:37:49 crc kubenswrapper[4881]: I0121 11:37:49.624233 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1ef84c59-8554-4369-9f9f-877505b3b952-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "1ef84c59-8554-4369-9f9f-877505b3b952" (UID: "1ef84c59-8554-4369-9f9f-877505b3b952"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:37:49 crc kubenswrapper[4881]: I0121 11:37:49.629538 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1ef84c59-8554-4369-9f9f-877505b3b952-inventory" (OuterVolumeSpecName: "inventory") pod "1ef84c59-8554-4369-9f9f-877505b3b952" (UID: "1ef84c59-8554-4369-9f9f-877505b3b952"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:37:49 crc kubenswrapper[4881]: I0121 11:37:49.691372 4881 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1ef84c59-8554-4369-9f9f-877505b3b952-inventory\") on node \"crc\" DevicePath \"\"" Jan 21 11:37:49 crc kubenswrapper[4881]: I0121 11:37:49.691442 4881 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/1ef84c59-8554-4369-9f9f-877505b3b952-openstack-edpm-ipam-ovn-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 21 11:37:49 crc kubenswrapper[4881]: I0121 11:37:49.691460 4881 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ef84c59-8554-4369-9f9f-877505b3b952-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 11:37:49 crc kubenswrapper[4881]: I0121 11:37:49.691476 4881 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ef84c59-8554-4369-9f9f-877505b3b952-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 11:37:49 crc kubenswrapper[4881]: I0121 11:37:49.691489 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dtl5j\" (UniqueName: \"kubernetes.io/projected/1ef84c59-8554-4369-9f9f-877505b3b952-kube-api-access-dtl5j\") on node \"crc\" DevicePath \"\"" Jan 21 11:37:49 crc kubenswrapper[4881]: I0121 11:37:49.691500 4881 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ef84c59-8554-4369-9f9f-877505b3b952-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 11:37:49 crc kubenswrapper[4881]: I0121 11:37:49.691512 4881 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ef84c59-8554-4369-9f9f-877505b3b952-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 11:37:49 crc kubenswrapper[4881]: I0121 11:37:49.691523 4881 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ef84c59-8554-4369-9f9f-877505b3b952-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 11:37:49 crc kubenswrapper[4881]: I0121 11:37:49.691535 4881 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/1ef84c59-8554-4369-9f9f-877505b3b952-openstack-edpm-ipam-telemetry-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 21 11:37:49 crc kubenswrapper[4881]: I0121 11:37:49.691548 4881 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/1ef84c59-8554-4369-9f9f-877505b3b952-openstack-edpm-ipam-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 21 11:37:49 crc kubenswrapper[4881]: I0121 11:37:49.691560 4881 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1ef84c59-8554-4369-9f9f-877505b3b952-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 21 11:37:49 crc kubenswrapper[4881]: I0121 11:37:49.691571 4881 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/1ef84c59-8554-4369-9f9f-877505b3b952-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 11:37:49 crc kubenswrapper[4881]: I0121 11:37:49.691582 4881 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/1ef84c59-8554-4369-9f9f-877505b3b952-openstack-edpm-ipam-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 21 11:37:49 crc kubenswrapper[4881]: I0121 11:37:49.691597 4881 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ef84c59-8554-4369-9f9f-877505b3b952-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 11:37:50 crc kubenswrapper[4881]: I0121 11:37:50.074829 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5l99l" event={"ID":"1ef84c59-8554-4369-9f9f-877505b3b952","Type":"ContainerDied","Data":"b627d71bc3743459a8f29f87d494d94cfa00a3d17cac848e85ffa73ca6514114"} Jan 21 11:37:50 crc kubenswrapper[4881]: I0121 11:37:50.075249 4881 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b627d71bc3743459a8f29f87d494d94cfa00a3d17cac848e85ffa73ca6514114" Jan 21 11:37:50 crc kubenswrapper[4881]: I0121 11:37:50.074923 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-5l99l" Jan 21 11:37:50 crc kubenswrapper[4881]: I0121 11:37:50.179234 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-d4sgg"] Jan 21 11:37:50 crc kubenswrapper[4881]: E0121 11:37:50.179674 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ef84c59-8554-4369-9f9f-877505b3b952" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Jan 21 11:37:50 crc kubenswrapper[4881]: I0121 11:37:50.179695 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ef84c59-8554-4369-9f9f-877505b3b952" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Jan 21 11:37:50 crc kubenswrapper[4881]: I0121 11:37:50.179943 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="1ef84c59-8554-4369-9f9f-877505b3b952" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Jan 21 11:37:50 crc kubenswrapper[4881]: I0121 11:37:50.181151 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-d4sgg" Jan 21 11:37:50 crc kubenswrapper[4881]: I0121 11:37:50.185348 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-fd7zg" Jan 21 11:37:50 crc kubenswrapper[4881]: I0121 11:37:50.185434 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 21 11:37:50 crc kubenswrapper[4881]: I0121 11:37:50.185433 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Jan 21 11:37:50 crc kubenswrapper[4881]: I0121 11:37:50.186658 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 21 11:37:50 crc kubenswrapper[4881]: I0121 11:37:50.187193 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 21 11:37:50 crc kubenswrapper[4881]: I0121 11:37:50.193798 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-d4sgg"] Jan 21 11:37:50 crc kubenswrapper[4881]: I0121 11:37:50.306848 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11ba18fa-d69e-4a6b-9796-e92d95d702ec-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-d4sgg\" (UID: \"11ba18fa-d69e-4a6b-9796-e92d95d702ec\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-d4sgg" Jan 21 11:37:50 crc kubenswrapper[4881]: I0121 11:37:50.306935 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/11ba18fa-d69e-4a6b-9796-e92d95d702ec-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-d4sgg\" (UID: \"11ba18fa-d69e-4a6b-9796-e92d95d702ec\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-d4sgg" Jan 21 11:37:50 crc kubenswrapper[4881]: I0121 11:37:50.306979 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/11ba18fa-d69e-4a6b-9796-e92d95d702ec-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-d4sgg\" (UID: \"11ba18fa-d69e-4a6b-9796-e92d95d702ec\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-d4sgg" Jan 21 11:37:50 crc kubenswrapper[4881]: I0121 11:37:50.307020 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jc7t5\" (UniqueName: \"kubernetes.io/projected/11ba18fa-d69e-4a6b-9796-e92d95d702ec-kube-api-access-jc7t5\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-d4sgg\" (UID: \"11ba18fa-d69e-4a6b-9796-e92d95d702ec\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-d4sgg" Jan 21 11:37:50 crc kubenswrapper[4881]: I0121 11:37:50.307038 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/11ba18fa-d69e-4a6b-9796-e92d95d702ec-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-d4sgg\" (UID: \"11ba18fa-d69e-4a6b-9796-e92d95d702ec\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-d4sgg" Jan 21 11:37:50 crc kubenswrapper[4881]: I0121 11:37:50.409223 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11ba18fa-d69e-4a6b-9796-e92d95d702ec-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-d4sgg\" (UID: \"11ba18fa-d69e-4a6b-9796-e92d95d702ec\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-d4sgg" Jan 21 11:37:50 crc kubenswrapper[4881]: I0121 11:37:50.409591 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/11ba18fa-d69e-4a6b-9796-e92d95d702ec-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-d4sgg\" (UID: \"11ba18fa-d69e-4a6b-9796-e92d95d702ec\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-d4sgg" Jan 21 11:37:50 crc kubenswrapper[4881]: I0121 11:37:50.409689 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/11ba18fa-d69e-4a6b-9796-e92d95d702ec-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-d4sgg\" (UID: \"11ba18fa-d69e-4a6b-9796-e92d95d702ec\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-d4sgg" Jan 21 11:37:50 crc kubenswrapper[4881]: I0121 11:37:50.409728 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jc7t5\" (UniqueName: \"kubernetes.io/projected/11ba18fa-d69e-4a6b-9796-e92d95d702ec-kube-api-access-jc7t5\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-d4sgg\" (UID: \"11ba18fa-d69e-4a6b-9796-e92d95d702ec\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-d4sgg" Jan 21 11:37:50 crc kubenswrapper[4881]: I0121 11:37:50.409762 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/11ba18fa-d69e-4a6b-9796-e92d95d702ec-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-d4sgg\" (UID: \"11ba18fa-d69e-4a6b-9796-e92d95d702ec\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-d4sgg" Jan 21 11:37:50 crc kubenswrapper[4881]: I0121 11:37:50.411287 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/11ba18fa-d69e-4a6b-9796-e92d95d702ec-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-d4sgg\" (UID: \"11ba18fa-d69e-4a6b-9796-e92d95d702ec\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-d4sgg" Jan 21 11:37:50 crc kubenswrapper[4881]: I0121 11:37:50.414919 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11ba18fa-d69e-4a6b-9796-e92d95d702ec-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-d4sgg\" (UID: \"11ba18fa-d69e-4a6b-9796-e92d95d702ec\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-d4sgg" Jan 21 11:37:50 crc kubenswrapper[4881]: I0121 11:37:50.414995 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/11ba18fa-d69e-4a6b-9796-e92d95d702ec-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-d4sgg\" (UID: \"11ba18fa-d69e-4a6b-9796-e92d95d702ec\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-d4sgg" Jan 21 11:37:50 crc kubenswrapper[4881]: I0121 11:37:50.415348 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/11ba18fa-d69e-4a6b-9796-e92d95d702ec-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-d4sgg\" (UID: \"11ba18fa-d69e-4a6b-9796-e92d95d702ec\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-d4sgg" Jan 21 11:37:50 crc kubenswrapper[4881]: I0121 11:37:50.430824 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jc7t5\" (UniqueName: \"kubernetes.io/projected/11ba18fa-d69e-4a6b-9796-e92d95d702ec-kube-api-access-jc7t5\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-d4sgg\" (UID: \"11ba18fa-d69e-4a6b-9796-e92d95d702ec\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-d4sgg" Jan 21 11:37:50 crc kubenswrapper[4881]: I0121 11:37:50.499885 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-d4sgg" Jan 21 11:37:51 crc kubenswrapper[4881]: I0121 11:37:51.175189 4881 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 11:37:51 crc kubenswrapper[4881]: I0121 11:37:51.176426 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-d4sgg"] Jan 21 11:37:52 crc kubenswrapper[4881]: I0121 11:37:52.102971 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-d4sgg" event={"ID":"11ba18fa-d69e-4a6b-9796-e92d95d702ec","Type":"ContainerStarted","Data":"aa93fb13f72092ec97b0673ec20604bc730432dff0f5669249ccca4c35302da2"} Jan 21 11:37:53 crc kubenswrapper[4881]: I0121 11:37:53.116898 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-d4sgg" event={"ID":"11ba18fa-d69e-4a6b-9796-e92d95d702ec","Type":"ContainerStarted","Data":"f2af05d022273527afe8fbabe5b1e255d94275ede6153a3e7df06926a5b97e4b"} Jan 21 11:37:53 crc kubenswrapper[4881]: I0121 11:37:53.141272 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-d4sgg" podStartSLOduration=2.444332492 podStartE2EDuration="3.141250442s" podCreationTimestamp="2026-01-21 11:37:50 +0000 UTC" firstStartedPulling="2026-01-21 11:37:51.174821143 +0000 UTC m=+2458.434777632" lastFinishedPulling="2026-01-21 11:37:51.871739113 +0000 UTC m=+2459.131695582" observedRunningTime="2026-01-21 11:37:53.139151211 +0000 UTC m=+2460.399107690" watchObservedRunningTime="2026-01-21 11:37:53.141250442 +0000 UTC m=+2460.401206911" Jan 21 11:38:02 crc kubenswrapper[4881]: I0121 11:38:02.312342 4881 scope.go:117] "RemoveContainer" containerID="ec38e0182d2afb5352817a93e9019bceb63eb1c3df53485164249f8ee9b9d46f" Jan 21 11:38:02 crc kubenswrapper[4881]: E0121 11:38:02.313982 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 11:38:15 crc kubenswrapper[4881]: I0121 11:38:15.311014 4881 scope.go:117] "RemoveContainer" containerID="ec38e0182d2afb5352817a93e9019bceb63eb1c3df53485164249f8ee9b9d46f" Jan 21 11:38:15 crc kubenswrapper[4881]: E0121 11:38:15.311946 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 11:38:29 crc kubenswrapper[4881]: I0121 11:38:29.311051 4881 scope.go:117] "RemoveContainer" containerID="ec38e0182d2afb5352817a93e9019bceb63eb1c3df53485164249f8ee9b9d46f" Jan 21 11:38:29 crc kubenswrapper[4881]: E0121 11:38:29.311824 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 11:38:43 crc kubenswrapper[4881]: I0121 11:38:43.319428 4881 scope.go:117] "RemoveContainer" containerID="ec38e0182d2afb5352817a93e9019bceb63eb1c3df53485164249f8ee9b9d46f" Jan 21 11:38:43 crc kubenswrapper[4881]: E0121 11:38:43.320725 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 11:38:57 crc kubenswrapper[4881]: I0121 11:38:57.311058 4881 scope.go:117] "RemoveContainer" containerID="ec38e0182d2afb5352817a93e9019bceb63eb1c3df53485164249f8ee9b9d46f" Jan 21 11:38:57 crc kubenswrapper[4881]: E0121 11:38:57.311923 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 11:39:11 crc kubenswrapper[4881]: I0121 11:39:11.216765 4881 generic.go:334] "Generic (PLEG): container finished" podID="11ba18fa-d69e-4a6b-9796-e92d95d702ec" containerID="f2af05d022273527afe8fbabe5b1e255d94275ede6153a3e7df06926a5b97e4b" exitCode=0 Jan 21 11:39:11 crc kubenswrapper[4881]: I0121 11:39:11.216862 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-d4sgg" event={"ID":"11ba18fa-d69e-4a6b-9796-e92d95d702ec","Type":"ContainerDied","Data":"f2af05d022273527afe8fbabe5b1e255d94275ede6153a3e7df06926a5b97e4b"} Jan 21 11:39:11 crc kubenswrapper[4881]: I0121 11:39:11.312219 4881 scope.go:117] "RemoveContainer" containerID="ec38e0182d2afb5352817a93e9019bceb63eb1c3df53485164249f8ee9b9d46f" Jan 21 11:39:11 crc kubenswrapper[4881]: E0121 11:39:11.312666 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 11:39:12 crc kubenswrapper[4881]: I0121 11:39:12.670630 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-d4sgg" Jan 21 11:39:12 crc kubenswrapper[4881]: I0121 11:39:12.813304 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/11ba18fa-d69e-4a6b-9796-e92d95d702ec-ssh-key-openstack-edpm-ipam\") pod \"11ba18fa-d69e-4a6b-9796-e92d95d702ec\" (UID: \"11ba18fa-d69e-4a6b-9796-e92d95d702ec\") " Jan 21 11:39:12 crc kubenswrapper[4881]: I0121 11:39:12.813371 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/11ba18fa-d69e-4a6b-9796-e92d95d702ec-ovncontroller-config-0\") pod \"11ba18fa-d69e-4a6b-9796-e92d95d702ec\" (UID: \"11ba18fa-d69e-4a6b-9796-e92d95d702ec\") " Jan 21 11:39:12 crc kubenswrapper[4881]: I0121 11:39:12.813455 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11ba18fa-d69e-4a6b-9796-e92d95d702ec-ovn-combined-ca-bundle\") pod \"11ba18fa-d69e-4a6b-9796-e92d95d702ec\" (UID: \"11ba18fa-d69e-4a6b-9796-e92d95d702ec\") " Jan 21 11:39:12 crc kubenswrapper[4881]: I0121 11:39:12.813507 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/11ba18fa-d69e-4a6b-9796-e92d95d702ec-inventory\") pod \"11ba18fa-d69e-4a6b-9796-e92d95d702ec\" (UID: \"11ba18fa-d69e-4a6b-9796-e92d95d702ec\") " Jan 21 11:39:12 crc kubenswrapper[4881]: I0121 11:39:12.813561 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jc7t5\" (UniqueName: \"kubernetes.io/projected/11ba18fa-d69e-4a6b-9796-e92d95d702ec-kube-api-access-jc7t5\") pod \"11ba18fa-d69e-4a6b-9796-e92d95d702ec\" (UID: \"11ba18fa-d69e-4a6b-9796-e92d95d702ec\") " Jan 21 11:39:12 crc kubenswrapper[4881]: I0121 11:39:12.821057 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/11ba18fa-d69e-4a6b-9796-e92d95d702ec-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "11ba18fa-d69e-4a6b-9796-e92d95d702ec" (UID: "11ba18fa-d69e-4a6b-9796-e92d95d702ec"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:39:12 crc kubenswrapper[4881]: I0121 11:39:12.821631 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/11ba18fa-d69e-4a6b-9796-e92d95d702ec-kube-api-access-jc7t5" (OuterVolumeSpecName: "kube-api-access-jc7t5") pod "11ba18fa-d69e-4a6b-9796-e92d95d702ec" (UID: "11ba18fa-d69e-4a6b-9796-e92d95d702ec"). InnerVolumeSpecName "kube-api-access-jc7t5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:39:12 crc kubenswrapper[4881]: I0121 11:39:12.857776 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/11ba18fa-d69e-4a6b-9796-e92d95d702ec-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "11ba18fa-d69e-4a6b-9796-e92d95d702ec" (UID: "11ba18fa-d69e-4a6b-9796-e92d95d702ec"). InnerVolumeSpecName "ovncontroller-config-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:39:12 crc kubenswrapper[4881]: I0121 11:39:12.866420 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/11ba18fa-d69e-4a6b-9796-e92d95d702ec-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "11ba18fa-d69e-4a6b-9796-e92d95d702ec" (UID: "11ba18fa-d69e-4a6b-9796-e92d95d702ec"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:39:12 crc kubenswrapper[4881]: I0121 11:39:12.869688 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/11ba18fa-d69e-4a6b-9796-e92d95d702ec-inventory" (OuterVolumeSpecName: "inventory") pod "11ba18fa-d69e-4a6b-9796-e92d95d702ec" (UID: "11ba18fa-d69e-4a6b-9796-e92d95d702ec"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:39:12 crc kubenswrapper[4881]: I0121 11:39:12.916656 4881 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/11ba18fa-d69e-4a6b-9796-e92d95d702ec-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 21 11:39:12 crc kubenswrapper[4881]: I0121 11:39:12.916694 4881 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/11ba18fa-d69e-4a6b-9796-e92d95d702ec-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Jan 21 11:39:12 crc kubenswrapper[4881]: I0121 11:39:12.916708 4881 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11ba18fa-d69e-4a6b-9796-e92d95d702ec-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 11:39:12 crc kubenswrapper[4881]: I0121 11:39:12.916726 4881 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/11ba18fa-d69e-4a6b-9796-e92d95d702ec-inventory\") on node \"crc\" DevicePath \"\"" Jan 21 11:39:12 crc kubenswrapper[4881]: I0121 11:39:12.916740 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jc7t5\" (UniqueName: \"kubernetes.io/projected/11ba18fa-d69e-4a6b-9796-e92d95d702ec-kube-api-access-jc7t5\") on node \"crc\" DevicePath \"\"" Jan 21 11:39:13 crc kubenswrapper[4881]: I0121 11:39:13.234369 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-d4sgg" event={"ID":"11ba18fa-d69e-4a6b-9796-e92d95d702ec","Type":"ContainerDied","Data":"aa93fb13f72092ec97b0673ec20604bc730432dff0f5669249ccca4c35302da2"} Jan 21 11:39:13 crc kubenswrapper[4881]: I0121 11:39:13.234419 4881 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aa93fb13f72092ec97b0673ec20604bc730432dff0f5669249ccca4c35302da2" Jan 21 11:39:13 crc kubenswrapper[4881]: I0121 11:39:13.234423 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-d4sgg" Jan 21 11:39:13 crc kubenswrapper[4881]: I0121 11:39:13.438486 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fkxjp"] Jan 21 11:39:13 crc kubenswrapper[4881]: E0121 11:39:13.439384 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11ba18fa-d69e-4a6b-9796-e92d95d702ec" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Jan 21 11:39:13 crc kubenswrapper[4881]: I0121 11:39:13.439408 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="11ba18fa-d69e-4a6b-9796-e92d95d702ec" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Jan 21 11:39:13 crc kubenswrapper[4881]: I0121 11:39:13.439713 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="11ba18fa-d69e-4a6b-9796-e92d95d702ec" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Jan 21 11:39:13 crc kubenswrapper[4881]: I0121 11:39:13.440569 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fkxjp" Jan 21 11:39:13 crc kubenswrapper[4881]: I0121 11:39:13.443156 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 21 11:39:13 crc kubenswrapper[4881]: I0121 11:39:13.443316 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config" Jan 21 11:39:13 crc kubenswrapper[4881]: I0121 11:39:13.443350 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 21 11:39:13 crc kubenswrapper[4881]: I0121 11:39:13.443438 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config" Jan 21 11:39:13 crc kubenswrapper[4881]: I0121 11:39:13.443614 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-fd7zg" Jan 21 11:39:13 crc kubenswrapper[4881]: I0121 11:39:13.445051 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 21 11:39:13 crc kubenswrapper[4881]: I0121 11:39:13.458996 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fkxjp"] Jan 21 11:39:13 crc kubenswrapper[4881]: I0121 11:39:13.530259 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/0e428246-daf9-40a4-9049-74281259f82c-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-fkxjp\" (UID: \"0e428246-daf9-40a4-9049-74281259f82c\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fkxjp" Jan 21 11:39:13 crc kubenswrapper[4881]: I0121 11:39:13.530356 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0e428246-daf9-40a4-9049-74281259f82c-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-fkxjp\" (UID: \"0e428246-daf9-40a4-9049-74281259f82c\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fkxjp" Jan 21 11:39:13 crc kubenswrapper[4881]: I0121 11:39:13.530455 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-k6fz5\" (UniqueName: \"kubernetes.io/projected/0e428246-daf9-40a4-9049-74281259f82c-kube-api-access-k6fz5\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-fkxjp\" (UID: \"0e428246-daf9-40a4-9049-74281259f82c\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fkxjp" Jan 21 11:39:13 crc kubenswrapper[4881]: I0121 11:39:13.530537 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0e428246-daf9-40a4-9049-74281259f82c-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-fkxjp\" (UID: \"0e428246-daf9-40a4-9049-74281259f82c\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fkxjp" Jan 21 11:39:13 crc kubenswrapper[4881]: I0121 11:39:13.530563 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/0e428246-daf9-40a4-9049-74281259f82c-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-fkxjp\" (UID: \"0e428246-daf9-40a4-9049-74281259f82c\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fkxjp" Jan 21 11:39:13 crc kubenswrapper[4881]: I0121 11:39:13.530600 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e428246-daf9-40a4-9049-74281259f82c-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-fkxjp\" (UID: \"0e428246-daf9-40a4-9049-74281259f82c\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fkxjp" Jan 21 11:39:13 crc kubenswrapper[4881]: I0121 11:39:13.632490 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/0e428246-daf9-40a4-9049-74281259f82c-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-fkxjp\" (UID: \"0e428246-daf9-40a4-9049-74281259f82c\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fkxjp" Jan 21 11:39:13 crc kubenswrapper[4881]: I0121 11:39:13.632839 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0e428246-daf9-40a4-9049-74281259f82c-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-fkxjp\" (UID: \"0e428246-daf9-40a4-9049-74281259f82c\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fkxjp" Jan 21 11:39:13 crc kubenswrapper[4881]: I0121 11:39:13.632982 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k6fz5\" (UniqueName: \"kubernetes.io/projected/0e428246-daf9-40a4-9049-74281259f82c-kube-api-access-k6fz5\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-fkxjp\" (UID: \"0e428246-daf9-40a4-9049-74281259f82c\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fkxjp" Jan 21 11:39:13 crc kubenswrapper[4881]: I0121 11:39:13.633118 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/0e428246-daf9-40a4-9049-74281259f82c-neutron-ovn-metadata-agent-neutron-config-0\") pod 
\"neutron-metadata-edpm-deployment-openstack-edpm-ipam-fkxjp\" (UID: \"0e428246-daf9-40a4-9049-74281259f82c\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fkxjp" Jan 21 11:39:13 crc kubenswrapper[4881]: I0121 11:39:13.633223 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0e428246-daf9-40a4-9049-74281259f82c-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-fkxjp\" (UID: \"0e428246-daf9-40a4-9049-74281259f82c\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fkxjp" Jan 21 11:39:13 crc kubenswrapper[4881]: I0121 11:39:13.633361 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e428246-daf9-40a4-9049-74281259f82c-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-fkxjp\" (UID: \"0e428246-daf9-40a4-9049-74281259f82c\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fkxjp" Jan 21 11:39:13 crc kubenswrapper[4881]: I0121 11:39:13.637957 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0e428246-daf9-40a4-9049-74281259f82c-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-fkxjp\" (UID: \"0e428246-daf9-40a4-9049-74281259f82c\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fkxjp" Jan 21 11:39:13 crc kubenswrapper[4881]: I0121 11:39:13.638166 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/0e428246-daf9-40a4-9049-74281259f82c-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-fkxjp\" (UID: \"0e428246-daf9-40a4-9049-74281259f82c\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fkxjp" Jan 21 11:39:13 crc kubenswrapper[4881]: I0121 11:39:13.638249 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/0e428246-daf9-40a4-9049-74281259f82c-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-fkxjp\" (UID: \"0e428246-daf9-40a4-9049-74281259f82c\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fkxjp" Jan 21 11:39:13 crc kubenswrapper[4881]: I0121 11:39:13.638310 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e428246-daf9-40a4-9049-74281259f82c-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-fkxjp\" (UID: \"0e428246-daf9-40a4-9049-74281259f82c\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fkxjp" Jan 21 11:39:13 crc kubenswrapper[4881]: I0121 11:39:13.638532 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0e428246-daf9-40a4-9049-74281259f82c-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-fkxjp\" (UID: \"0e428246-daf9-40a4-9049-74281259f82c\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fkxjp" Jan 21 11:39:13 crc kubenswrapper[4881]: I0121 11:39:13.649695 4881 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k6fz5\" (UniqueName: \"kubernetes.io/projected/0e428246-daf9-40a4-9049-74281259f82c-kube-api-access-k6fz5\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-fkxjp\" (UID: \"0e428246-daf9-40a4-9049-74281259f82c\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fkxjp" Jan 21 11:39:13 crc kubenswrapper[4881]: I0121 11:39:13.757409 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fkxjp" Jan 21 11:39:14 crc kubenswrapper[4881]: I0121 11:39:14.282778 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fkxjp"] Jan 21 11:39:15 crc kubenswrapper[4881]: I0121 11:39:15.255628 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fkxjp" event={"ID":"0e428246-daf9-40a4-9049-74281259f82c","Type":"ContainerStarted","Data":"6aec53e337dc4d6cd6cda8ace05ff6550a2e5c28e5ac964d4579632056bbce09"} Jan 21 11:39:15 crc kubenswrapper[4881]: I0121 11:39:15.256066 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fkxjp" event={"ID":"0e428246-daf9-40a4-9049-74281259f82c","Type":"ContainerStarted","Data":"608cb0dcffc24c7cbd1b5fbe53fd92536c3ad4a45a9899eb73a91b1b55cde671"} Jan 21 11:39:15 crc kubenswrapper[4881]: I0121 11:39:15.280156 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fkxjp" podStartSLOduration=1.771935754 podStartE2EDuration="2.280137929s" podCreationTimestamp="2026-01-21 11:39:13 +0000 UTC" firstStartedPulling="2026-01-21 11:39:14.295462614 +0000 UTC m=+2541.555419083" lastFinishedPulling="2026-01-21 11:39:14.803664779 +0000 UTC m=+2542.063621258" observedRunningTime="2026-01-21 11:39:15.274076721 +0000 UTC m=+2542.534033190" watchObservedRunningTime="2026-01-21 11:39:15.280137929 +0000 UTC m=+2542.540094398" Jan 21 11:39:25 crc kubenswrapper[4881]: I0121 11:39:25.311591 4881 scope.go:117] "RemoveContainer" containerID="ec38e0182d2afb5352817a93e9019bceb63eb1c3df53485164249f8ee9b9d46f" Jan 21 11:39:25 crc kubenswrapper[4881]: E0121 11:39:25.314090 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 11:39:37 crc kubenswrapper[4881]: I0121 11:39:37.311757 4881 scope.go:117] "RemoveContainer" containerID="ec38e0182d2afb5352817a93e9019bceb63eb1c3df53485164249f8ee9b9d46f" Jan 21 11:39:37 crc kubenswrapper[4881]: E0121 11:39:37.312878 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 11:39:48 crc kubenswrapper[4881]: I0121 11:39:48.311460 4881 
scope.go:117] "RemoveContainer" containerID="ec38e0182d2afb5352817a93e9019bceb63eb1c3df53485164249f8ee9b9d46f" Jan 21 11:39:48 crc kubenswrapper[4881]: E0121 11:39:48.312430 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 11:40:02 crc kubenswrapper[4881]: I0121 11:40:02.310843 4881 scope.go:117] "RemoveContainer" containerID="ec38e0182d2afb5352817a93e9019bceb63eb1c3df53485164249f8ee9b9d46f" Jan 21 11:40:02 crc kubenswrapper[4881]: E0121 11:40:02.311514 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 11:40:13 crc kubenswrapper[4881]: I0121 11:40:13.317610 4881 scope.go:117] "RemoveContainer" containerID="ec38e0182d2afb5352817a93e9019bceb63eb1c3df53485164249f8ee9b9d46f" Jan 21 11:40:13 crc kubenswrapper[4881]: E0121 11:40:13.318429 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 11:40:14 crc kubenswrapper[4881]: I0121 11:40:14.155020 4881 generic.go:334] "Generic (PLEG): container finished" podID="0e428246-daf9-40a4-9049-74281259f82c" containerID="6aec53e337dc4d6cd6cda8ace05ff6550a2e5c28e5ac964d4579632056bbce09" exitCode=0 Jan 21 11:40:14 crc kubenswrapper[4881]: I0121 11:40:14.155082 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fkxjp" event={"ID":"0e428246-daf9-40a4-9049-74281259f82c","Type":"ContainerDied","Data":"6aec53e337dc4d6cd6cda8ace05ff6550a2e5c28e5ac964d4579632056bbce09"} Jan 21 11:40:15 crc kubenswrapper[4881]: I0121 11:40:15.614497 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fkxjp" Jan 21 11:40:15 crc kubenswrapper[4881]: I0121 11:40:15.725517 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/0e428246-daf9-40a4-9049-74281259f82c-nova-metadata-neutron-config-0\") pod \"0e428246-daf9-40a4-9049-74281259f82c\" (UID: \"0e428246-daf9-40a4-9049-74281259f82c\") " Jan 21 11:40:15 crc kubenswrapper[4881]: I0121 11:40:15.725628 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0e428246-daf9-40a4-9049-74281259f82c-ssh-key-openstack-edpm-ipam\") pod \"0e428246-daf9-40a4-9049-74281259f82c\" (UID: \"0e428246-daf9-40a4-9049-74281259f82c\") " Jan 21 11:40:15 crc kubenswrapper[4881]: I0121 11:40:15.725728 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0e428246-daf9-40a4-9049-74281259f82c-inventory\") pod \"0e428246-daf9-40a4-9049-74281259f82c\" (UID: \"0e428246-daf9-40a4-9049-74281259f82c\") " Jan 21 11:40:15 crc kubenswrapper[4881]: I0121 11:40:15.725803 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k6fz5\" (UniqueName: \"kubernetes.io/projected/0e428246-daf9-40a4-9049-74281259f82c-kube-api-access-k6fz5\") pod \"0e428246-daf9-40a4-9049-74281259f82c\" (UID: \"0e428246-daf9-40a4-9049-74281259f82c\") " Jan 21 11:40:15 crc kubenswrapper[4881]: I0121 11:40:15.725882 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e428246-daf9-40a4-9049-74281259f82c-neutron-metadata-combined-ca-bundle\") pod \"0e428246-daf9-40a4-9049-74281259f82c\" (UID: \"0e428246-daf9-40a4-9049-74281259f82c\") " Jan 21 11:40:15 crc kubenswrapper[4881]: I0121 11:40:15.725923 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/0e428246-daf9-40a4-9049-74281259f82c-neutron-ovn-metadata-agent-neutron-config-0\") pod \"0e428246-daf9-40a4-9049-74281259f82c\" (UID: \"0e428246-daf9-40a4-9049-74281259f82c\") " Jan 21 11:40:15 crc kubenswrapper[4881]: I0121 11:40:15.732173 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0e428246-daf9-40a4-9049-74281259f82c-kube-api-access-k6fz5" (OuterVolumeSpecName: "kube-api-access-k6fz5") pod "0e428246-daf9-40a4-9049-74281259f82c" (UID: "0e428246-daf9-40a4-9049-74281259f82c"). InnerVolumeSpecName "kube-api-access-k6fz5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:40:15 crc kubenswrapper[4881]: I0121 11:40:15.732354 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e428246-daf9-40a4-9049-74281259f82c-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "0e428246-daf9-40a4-9049-74281259f82c" (UID: "0e428246-daf9-40a4-9049-74281259f82c"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:40:15 crc kubenswrapper[4881]: I0121 11:40:15.754666 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e428246-daf9-40a4-9049-74281259f82c-inventory" (OuterVolumeSpecName: "inventory") pod "0e428246-daf9-40a4-9049-74281259f82c" (UID: "0e428246-daf9-40a4-9049-74281259f82c"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:40:15 crc kubenswrapper[4881]: I0121 11:40:15.764832 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e428246-daf9-40a4-9049-74281259f82c-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "0e428246-daf9-40a4-9049-74281259f82c" (UID: "0e428246-daf9-40a4-9049-74281259f82c"). InnerVolumeSpecName "nova-metadata-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:40:15 crc kubenswrapper[4881]: I0121 11:40:15.766314 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e428246-daf9-40a4-9049-74281259f82c-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "0e428246-daf9-40a4-9049-74281259f82c" (UID: "0e428246-daf9-40a4-9049-74281259f82c"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:40:15 crc kubenswrapper[4881]: I0121 11:40:15.772762 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e428246-daf9-40a4-9049-74281259f82c-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "0e428246-daf9-40a4-9049-74281259f82c" (UID: "0e428246-daf9-40a4-9049-74281259f82c"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:40:15 crc kubenswrapper[4881]: I0121 11:40:15.827980 4881 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e428246-daf9-40a4-9049-74281259f82c-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 11:40:15 crc kubenswrapper[4881]: I0121 11:40:15.828011 4881 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/0e428246-daf9-40a4-9049-74281259f82c-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Jan 21 11:40:15 crc kubenswrapper[4881]: I0121 11:40:15.828025 4881 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/0e428246-daf9-40a4-9049-74281259f82c-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\"" Jan 21 11:40:15 crc kubenswrapper[4881]: I0121 11:40:15.828034 4881 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0e428246-daf9-40a4-9049-74281259f82c-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 21 11:40:15 crc kubenswrapper[4881]: I0121 11:40:15.828045 4881 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0e428246-daf9-40a4-9049-74281259f82c-inventory\") on node \"crc\" DevicePath \"\"" Jan 21 11:40:15 crc kubenswrapper[4881]: I0121 11:40:15.828053 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k6fz5\" (UniqueName: \"kubernetes.io/projected/0e428246-daf9-40a4-9049-74281259f82c-kube-api-access-k6fz5\") on node \"crc\" DevicePath \"\"" Jan 21 11:40:16 crc kubenswrapper[4881]: I0121 11:40:16.182758 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fkxjp" event={"ID":"0e428246-daf9-40a4-9049-74281259f82c","Type":"ContainerDied","Data":"608cb0dcffc24c7cbd1b5fbe53fd92536c3ad4a45a9899eb73a91b1b55cde671"} Jan 21 11:40:16 crc kubenswrapper[4881]: I0121 11:40:16.182831 4881 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="608cb0dcffc24c7cbd1b5fbe53fd92536c3ad4a45a9899eb73a91b1b55cde671" Jan 21 11:40:16 crc kubenswrapper[4881]: I0121 11:40:16.182901 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-fkxjp" Jan 21 11:40:16 crc kubenswrapper[4881]: I0121 11:40:16.475520 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-sxlnq"] Jan 21 11:40:16 crc kubenswrapper[4881]: E0121 11:40:16.476157 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e428246-daf9-40a4-9049-74281259f82c" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Jan 21 11:40:16 crc kubenswrapper[4881]: I0121 11:40:16.476172 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e428246-daf9-40a4-9049-74281259f82c" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Jan 21 11:40:16 crc kubenswrapper[4881]: I0121 11:40:16.476377 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="0e428246-daf9-40a4-9049-74281259f82c" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Jan 21 11:40:16 crc kubenswrapper[4881]: I0121 11:40:16.477367 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-sxlnq" Jan 21 11:40:16 crc kubenswrapper[4881]: I0121 11:40:16.480699 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret" Jan 21 11:40:16 crc kubenswrapper[4881]: I0121 11:40:16.481154 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 21 11:40:16 crc kubenswrapper[4881]: I0121 11:40:16.481480 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 21 11:40:16 crc kubenswrapper[4881]: I0121 11:40:16.482585 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 21 11:40:16 crc kubenswrapper[4881]: I0121 11:40:16.496572 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-fd7zg" Jan 21 11:40:16 crc kubenswrapper[4881]: I0121 11:40:16.507451 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-sxlnq"] Jan 21 11:40:16 crc kubenswrapper[4881]: I0121 11:40:16.599639 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ptwlv\" (UniqueName: \"kubernetes.io/projected/38ac646b-177b-488d-853b-e04b22f267a4-kube-api-access-ptwlv\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-sxlnq\" (UID: \"38ac646b-177b-488d-853b-e04b22f267a4\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-sxlnq" Jan 21 11:40:16 crc kubenswrapper[4881]: I0121 11:40:16.599710 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38ac646b-177b-488d-853b-e04b22f267a4-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-sxlnq\" (UID: \"38ac646b-177b-488d-853b-e04b22f267a4\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-sxlnq" Jan 21 11:40:16 crc kubenswrapper[4881]: I0121 11:40:16.600108 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/38ac646b-177b-488d-853b-e04b22f267a4-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-sxlnq\" 
(UID: \"38ac646b-177b-488d-853b-e04b22f267a4\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-sxlnq" Jan 21 11:40:16 crc kubenswrapper[4881]: I0121 11:40:16.600158 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/38ac646b-177b-488d-853b-e04b22f267a4-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-sxlnq\" (UID: \"38ac646b-177b-488d-853b-e04b22f267a4\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-sxlnq" Jan 21 11:40:16 crc kubenswrapper[4881]: I0121 11:40:16.600198 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/38ac646b-177b-488d-853b-e04b22f267a4-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-sxlnq\" (UID: \"38ac646b-177b-488d-853b-e04b22f267a4\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-sxlnq" Jan 21 11:40:16 crc kubenswrapper[4881]: I0121 11:40:16.702436 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ptwlv\" (UniqueName: \"kubernetes.io/projected/38ac646b-177b-488d-853b-e04b22f267a4-kube-api-access-ptwlv\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-sxlnq\" (UID: \"38ac646b-177b-488d-853b-e04b22f267a4\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-sxlnq" Jan 21 11:40:16 crc kubenswrapper[4881]: I0121 11:40:16.702503 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38ac646b-177b-488d-853b-e04b22f267a4-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-sxlnq\" (UID: \"38ac646b-177b-488d-853b-e04b22f267a4\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-sxlnq" Jan 21 11:40:16 crc kubenswrapper[4881]: I0121 11:40:16.702544 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/38ac646b-177b-488d-853b-e04b22f267a4-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-sxlnq\" (UID: \"38ac646b-177b-488d-853b-e04b22f267a4\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-sxlnq" Jan 21 11:40:16 crc kubenswrapper[4881]: I0121 11:40:16.702586 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/38ac646b-177b-488d-853b-e04b22f267a4-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-sxlnq\" (UID: \"38ac646b-177b-488d-853b-e04b22f267a4\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-sxlnq" Jan 21 11:40:16 crc kubenswrapper[4881]: I0121 11:40:16.702623 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/38ac646b-177b-488d-853b-e04b22f267a4-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-sxlnq\" (UID: \"38ac646b-177b-488d-853b-e04b22f267a4\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-sxlnq" Jan 21 11:40:16 crc kubenswrapper[4881]: I0121 11:40:16.708104 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/38ac646b-177b-488d-853b-e04b22f267a4-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-sxlnq\" (UID: \"38ac646b-177b-488d-853b-e04b22f267a4\") " 
pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-sxlnq" Jan 21 11:40:16 crc kubenswrapper[4881]: I0121 11:40:16.708361 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/38ac646b-177b-488d-853b-e04b22f267a4-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-sxlnq\" (UID: \"38ac646b-177b-488d-853b-e04b22f267a4\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-sxlnq" Jan 21 11:40:16 crc kubenswrapper[4881]: I0121 11:40:16.708655 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38ac646b-177b-488d-853b-e04b22f267a4-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-sxlnq\" (UID: \"38ac646b-177b-488d-853b-e04b22f267a4\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-sxlnq" Jan 21 11:40:16 crc kubenswrapper[4881]: I0121 11:40:16.712848 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/38ac646b-177b-488d-853b-e04b22f267a4-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-sxlnq\" (UID: \"38ac646b-177b-488d-853b-e04b22f267a4\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-sxlnq" Jan 21 11:40:16 crc kubenswrapper[4881]: I0121 11:40:16.727085 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ptwlv\" (UniqueName: \"kubernetes.io/projected/38ac646b-177b-488d-853b-e04b22f267a4-kube-api-access-ptwlv\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-sxlnq\" (UID: \"38ac646b-177b-488d-853b-e04b22f267a4\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-sxlnq" Jan 21 11:40:16 crc kubenswrapper[4881]: I0121 11:40:16.805867 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-sxlnq" Jan 21 11:40:17 crc kubenswrapper[4881]: I0121 11:40:17.487227 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-sxlnq"] Jan 21 11:40:18 crc kubenswrapper[4881]: I0121 11:40:18.207777 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-sxlnq" event={"ID":"38ac646b-177b-488d-853b-e04b22f267a4","Type":"ContainerStarted","Data":"2e34d3926c62f8cffffc796ec975008bf3545972abcc913f207930e4451b062e"} Jan 21 11:40:18 crc kubenswrapper[4881]: I0121 11:40:18.208411 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-sxlnq" event={"ID":"38ac646b-177b-488d-853b-e04b22f267a4","Type":"ContainerStarted","Data":"dc79678ab6ba1932de7e4e05e7465b949910c18ea04deeee070bef7c91f2f1e4"} Jan 21 11:40:18 crc kubenswrapper[4881]: I0121 11:40:18.410670 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-sxlnq" podStartSLOduration=1.9996820450000001 podStartE2EDuration="2.410646552s" podCreationTimestamp="2026-01-21 11:40:16 +0000 UTC" firstStartedPulling="2026-01-21 11:40:17.497604489 +0000 UTC m=+2604.757560958" lastFinishedPulling="2026-01-21 11:40:17.908568996 +0000 UTC m=+2605.168525465" observedRunningTime="2026-01-21 11:40:18.234593867 +0000 UTC m=+2605.494550336" watchObservedRunningTime="2026-01-21 11:40:18.410646552 +0000 UTC m=+2605.670603021" Jan 21 11:40:24 crc kubenswrapper[4881]: I0121 11:40:24.312105 4881 scope.go:117] "RemoveContainer" containerID="ec38e0182d2afb5352817a93e9019bceb63eb1c3df53485164249f8ee9b9d46f" Jan 21 11:40:24 crc kubenswrapper[4881]: E0121 11:40:24.312861 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 11:40:35 crc kubenswrapper[4881]: I0121 11:40:35.312033 4881 scope.go:117] "RemoveContainer" containerID="ec38e0182d2afb5352817a93e9019bceb63eb1c3df53485164249f8ee9b9d46f" Jan 21 11:40:35 crc kubenswrapper[4881]: E0121 11:40:35.313103 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 11:40:50 crc kubenswrapper[4881]: I0121 11:40:50.312528 4881 scope.go:117] "RemoveContainer" containerID="ec38e0182d2afb5352817a93e9019bceb63eb1c3df53485164249f8ee9b9d46f" Jan 21 11:40:50 crc kubenswrapper[4881]: E0121 11:40:50.313744 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 11:41:01 crc kubenswrapper[4881]: I0121 11:41:01.312088 4881 scope.go:117] "RemoveContainer" containerID="ec38e0182d2afb5352817a93e9019bceb63eb1c3df53485164249f8ee9b9d46f" Jan 21 11:41:01 crc kubenswrapper[4881]: E0121 11:41:01.313408 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 11:41:14 crc kubenswrapper[4881]: I0121 11:41:14.312262 4881 scope.go:117] "RemoveContainer" containerID="ec38e0182d2afb5352817a93e9019bceb63eb1c3df53485164249f8ee9b9d46f" Jan 21 11:41:14 crc kubenswrapper[4881]: E0121 11:41:14.313165 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 11:41:29 crc kubenswrapper[4881]: I0121 11:41:29.310769 4881 scope.go:117] "RemoveContainer" containerID="ec38e0182d2afb5352817a93e9019bceb63eb1c3df53485164249f8ee9b9d46f" Jan 21 11:41:29 crc kubenswrapper[4881]: E0121 11:41:29.311678 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 11:41:43 crc kubenswrapper[4881]: I0121 11:41:43.313880 4881 scope.go:117] "RemoveContainer" containerID="ec38e0182d2afb5352817a93e9019bceb63eb1c3df53485164249f8ee9b9d46f" Jan 21 11:41:44 crc kubenswrapper[4881]: I0121 11:41:44.267753 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" event={"ID":"3687b313-1df2-4274-80db-8c758b51bf2d","Type":"ContainerStarted","Data":"40878d2da6716331f0a893f4c9f3938e30cde34eaf4eb8051eda58bfc84a6a6c"} Jan 21 11:43:41 crc kubenswrapper[4881]: I0121 11:43:41.806640 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-n45jf"] Jan 21 11:43:41 crc kubenswrapper[4881]: I0121 11:43:41.817172 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-n45jf" Jan 21 11:43:41 crc kubenswrapper[4881]: I0121 11:43:41.820682 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-n45jf"] Jan 21 11:43:41 crc kubenswrapper[4881]: I0121 11:43:41.823816 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ef5440b-a4c3-4e04-8e02-1055391021c7-catalog-content\") pod \"community-operators-n45jf\" (UID: \"1ef5440b-a4c3-4e04-8e02-1055391021c7\") " pod="openshift-marketplace/community-operators-n45jf" Jan 21 11:43:41 crc kubenswrapper[4881]: I0121 11:43:41.824120 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ef5440b-a4c3-4e04-8e02-1055391021c7-utilities\") pod \"community-operators-n45jf\" (UID: \"1ef5440b-a4c3-4e04-8e02-1055391021c7\") " pod="openshift-marketplace/community-operators-n45jf" Jan 21 11:43:41 crc kubenswrapper[4881]: I0121 11:43:41.824244 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qk4qx\" (UniqueName: \"kubernetes.io/projected/1ef5440b-a4c3-4e04-8e02-1055391021c7-kube-api-access-qk4qx\") pod \"community-operators-n45jf\" (UID: \"1ef5440b-a4c3-4e04-8e02-1055391021c7\") " pod="openshift-marketplace/community-operators-n45jf" Jan 21 11:43:41 crc kubenswrapper[4881]: I0121 11:43:41.926186 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qk4qx\" (UniqueName: \"kubernetes.io/projected/1ef5440b-a4c3-4e04-8e02-1055391021c7-kube-api-access-qk4qx\") pod \"community-operators-n45jf\" (UID: \"1ef5440b-a4c3-4e04-8e02-1055391021c7\") " pod="openshift-marketplace/community-operators-n45jf" Jan 21 11:43:41 crc kubenswrapper[4881]: I0121 11:43:41.926315 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ef5440b-a4c3-4e04-8e02-1055391021c7-catalog-content\") pod \"community-operators-n45jf\" (UID: \"1ef5440b-a4c3-4e04-8e02-1055391021c7\") " pod="openshift-marketplace/community-operators-n45jf" Jan 21 11:43:41 crc kubenswrapper[4881]: I0121 11:43:41.926448 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ef5440b-a4c3-4e04-8e02-1055391021c7-utilities\") pod \"community-operators-n45jf\" (UID: \"1ef5440b-a4c3-4e04-8e02-1055391021c7\") " pod="openshift-marketplace/community-operators-n45jf" Jan 21 11:43:41 crc kubenswrapper[4881]: I0121 11:43:41.927024 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ef5440b-a4c3-4e04-8e02-1055391021c7-utilities\") pod \"community-operators-n45jf\" (UID: \"1ef5440b-a4c3-4e04-8e02-1055391021c7\") " pod="openshift-marketplace/community-operators-n45jf" Jan 21 11:43:41 crc kubenswrapper[4881]: I0121 11:43:41.927316 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ef5440b-a4c3-4e04-8e02-1055391021c7-catalog-content\") pod \"community-operators-n45jf\" (UID: \"1ef5440b-a4c3-4e04-8e02-1055391021c7\") " pod="openshift-marketplace/community-operators-n45jf" Jan 21 11:43:41 crc kubenswrapper[4881]: I0121 11:43:41.948687 4881 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-qk4qx\" (UniqueName: \"kubernetes.io/projected/1ef5440b-a4c3-4e04-8e02-1055391021c7-kube-api-access-qk4qx\") pod \"community-operators-n45jf\" (UID: \"1ef5440b-a4c3-4e04-8e02-1055391021c7\") " pod="openshift-marketplace/community-operators-n45jf" Jan 21 11:43:42 crc kubenswrapper[4881]: I0121 11:43:42.144604 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-n45jf" Jan 21 11:43:42 crc kubenswrapper[4881]: I0121 11:43:42.854468 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-n45jf"] Jan 21 11:43:42 crc kubenswrapper[4881]: W0121 11:43:42.872223 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1ef5440b_a4c3_4e04_8e02_1055391021c7.slice/crio-3a13beca094a99c47c09db4ac9ab1071bf5ac21528dbfec02027e2662cc93ceb WatchSource:0}: Error finding container 3a13beca094a99c47c09db4ac9ab1071bf5ac21528dbfec02027e2662cc93ceb: Status 404 returned error can't find the container with id 3a13beca094a99c47c09db4ac9ab1071bf5ac21528dbfec02027e2662cc93ceb Jan 21 11:43:43 crc kubenswrapper[4881]: I0121 11:43:43.376444 4881 generic.go:334] "Generic (PLEG): container finished" podID="1ef5440b-a4c3-4e04-8e02-1055391021c7" containerID="c26997966243c289ddfac5194cdd80572f08ac7b80867bb4857e250a6d12a187" exitCode=0 Jan 21 11:43:43 crc kubenswrapper[4881]: I0121 11:43:43.376561 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n45jf" event={"ID":"1ef5440b-a4c3-4e04-8e02-1055391021c7","Type":"ContainerDied","Data":"c26997966243c289ddfac5194cdd80572f08ac7b80867bb4857e250a6d12a187"} Jan 21 11:43:43 crc kubenswrapper[4881]: I0121 11:43:43.376748 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n45jf" event={"ID":"1ef5440b-a4c3-4e04-8e02-1055391021c7","Type":"ContainerStarted","Data":"3a13beca094a99c47c09db4ac9ab1071bf5ac21528dbfec02027e2662cc93ceb"} Jan 21 11:43:43 crc kubenswrapper[4881]: I0121 11:43:43.381218 4881 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 11:43:44 crc kubenswrapper[4881]: I0121 11:43:44.390899 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n45jf" event={"ID":"1ef5440b-a4c3-4e04-8e02-1055391021c7","Type":"ContainerStarted","Data":"74a922df0e7b54b9e776f1526c41912451611cbc054076f1e93cb3380e97cb21"} Jan 21 11:43:46 crc kubenswrapper[4881]: I0121 11:43:46.330305 4881 generic.go:334] "Generic (PLEG): container finished" podID="1ef5440b-a4c3-4e04-8e02-1055391021c7" containerID="74a922df0e7b54b9e776f1526c41912451611cbc054076f1e93cb3380e97cb21" exitCode=0 Jan 21 11:43:46 crc kubenswrapper[4881]: I0121 11:43:46.335802 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n45jf" event={"ID":"1ef5440b-a4c3-4e04-8e02-1055391021c7","Type":"ContainerDied","Data":"74a922df0e7b54b9e776f1526c41912451611cbc054076f1e93cb3380e97cb21"} Jan 21 11:43:47 crc kubenswrapper[4881]: I0121 11:43:47.343296 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n45jf" event={"ID":"1ef5440b-a4c3-4e04-8e02-1055391021c7","Type":"ContainerStarted","Data":"a13ad4a41ae4ee64b135dabd15e29e05d4c5703bdb161dfc59694765aa20ae2c"} Jan 21 11:43:47 crc kubenswrapper[4881]: I0121 
11:43:47.411153 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-n45jf" podStartSLOduration=3.035493656 podStartE2EDuration="6.4111189s" podCreationTimestamp="2026-01-21 11:43:41 +0000 UTC" firstStartedPulling="2026-01-21 11:43:43.380963802 +0000 UTC m=+2810.640920271" lastFinishedPulling="2026-01-21 11:43:46.756589046 +0000 UTC m=+2814.016545515" observedRunningTime="2026-01-21 11:43:47.396075912 +0000 UTC m=+2814.656032401" watchObservedRunningTime="2026-01-21 11:43:47.4111189 +0000 UTC m=+2814.671075389" Jan 21 11:43:52 crc kubenswrapper[4881]: I0121 11:43:52.145543 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-n45jf" Jan 21 11:43:52 crc kubenswrapper[4881]: I0121 11:43:52.146637 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-n45jf" Jan 21 11:43:52 crc kubenswrapper[4881]: I0121 11:43:52.227921 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-n45jf" Jan 21 11:43:52 crc kubenswrapper[4881]: I0121 11:43:52.697818 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-n45jf" Jan 21 11:43:52 crc kubenswrapper[4881]: I0121 11:43:52.748567 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-n45jf"] Jan 21 11:43:54 crc kubenswrapper[4881]: I0121 11:43:54.668418 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-n45jf" podUID="1ef5440b-a4c3-4e04-8e02-1055391021c7" containerName="registry-server" containerID="cri-o://a13ad4a41ae4ee64b135dabd15e29e05d4c5703bdb161dfc59694765aa20ae2c" gracePeriod=2 Jan 21 11:43:55 crc kubenswrapper[4881]: I0121 11:43:55.221027 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-n45jf" Jan 21 11:43:55 crc kubenswrapper[4881]: I0121 11:43:55.281242 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qk4qx\" (UniqueName: \"kubernetes.io/projected/1ef5440b-a4c3-4e04-8e02-1055391021c7-kube-api-access-qk4qx\") pod \"1ef5440b-a4c3-4e04-8e02-1055391021c7\" (UID: \"1ef5440b-a4c3-4e04-8e02-1055391021c7\") " Jan 21 11:43:55 crc kubenswrapper[4881]: I0121 11:43:55.281337 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ef5440b-a4c3-4e04-8e02-1055391021c7-catalog-content\") pod \"1ef5440b-a4c3-4e04-8e02-1055391021c7\" (UID: \"1ef5440b-a4c3-4e04-8e02-1055391021c7\") " Jan 21 11:43:55 crc kubenswrapper[4881]: I0121 11:43:55.281523 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ef5440b-a4c3-4e04-8e02-1055391021c7-utilities\") pod \"1ef5440b-a4c3-4e04-8e02-1055391021c7\" (UID: \"1ef5440b-a4c3-4e04-8e02-1055391021c7\") " Jan 21 11:43:55 crc kubenswrapper[4881]: I0121 11:43:55.283080 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1ef5440b-a4c3-4e04-8e02-1055391021c7-utilities" (OuterVolumeSpecName: "utilities") pod "1ef5440b-a4c3-4e04-8e02-1055391021c7" (UID: "1ef5440b-a4c3-4e04-8e02-1055391021c7"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 11:43:55 crc kubenswrapper[4881]: I0121 11:43:55.287492 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1ef5440b-a4c3-4e04-8e02-1055391021c7-kube-api-access-qk4qx" (OuterVolumeSpecName: "kube-api-access-qk4qx") pod "1ef5440b-a4c3-4e04-8e02-1055391021c7" (UID: "1ef5440b-a4c3-4e04-8e02-1055391021c7"). InnerVolumeSpecName "kube-api-access-qk4qx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:43:55 crc kubenswrapper[4881]: I0121 11:43:55.377836 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1ef5440b-a4c3-4e04-8e02-1055391021c7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1ef5440b-a4c3-4e04-8e02-1055391021c7" (UID: "1ef5440b-a4c3-4e04-8e02-1055391021c7"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 11:43:55 crc kubenswrapper[4881]: I0121 11:43:55.385286 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qk4qx\" (UniqueName: \"kubernetes.io/projected/1ef5440b-a4c3-4e04-8e02-1055391021c7-kube-api-access-qk4qx\") on node \"crc\" DevicePath \"\"" Jan 21 11:43:55 crc kubenswrapper[4881]: I0121 11:43:55.385316 4881 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ef5440b-a4c3-4e04-8e02-1055391021c7-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 11:43:55 crc kubenswrapper[4881]: I0121 11:43:55.385328 4881 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ef5440b-a4c3-4e04-8e02-1055391021c7-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 11:43:55 crc kubenswrapper[4881]: I0121 11:43:55.682985 4881 generic.go:334] "Generic (PLEG): container finished" podID="1ef5440b-a4c3-4e04-8e02-1055391021c7" containerID="a13ad4a41ae4ee64b135dabd15e29e05d4c5703bdb161dfc59694765aa20ae2c" exitCode=0 Jan 21 11:43:55 crc kubenswrapper[4881]: I0121 11:43:55.683085 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-n45jf" Jan 21 11:43:55 crc kubenswrapper[4881]: I0121 11:43:55.684405 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n45jf" event={"ID":"1ef5440b-a4c3-4e04-8e02-1055391021c7","Type":"ContainerDied","Data":"a13ad4a41ae4ee64b135dabd15e29e05d4c5703bdb161dfc59694765aa20ae2c"} Jan 21 11:43:55 crc kubenswrapper[4881]: I0121 11:43:55.684609 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n45jf" event={"ID":"1ef5440b-a4c3-4e04-8e02-1055391021c7","Type":"ContainerDied","Data":"3a13beca094a99c47c09db4ac9ab1071bf5ac21528dbfec02027e2662cc93ceb"} Jan 21 11:43:55 crc kubenswrapper[4881]: I0121 11:43:55.684628 4881 scope.go:117] "RemoveContainer" containerID="a13ad4a41ae4ee64b135dabd15e29e05d4c5703bdb161dfc59694765aa20ae2c" Jan 21 11:43:55 crc kubenswrapper[4881]: I0121 11:43:55.709627 4881 scope.go:117] "RemoveContainer" containerID="74a922df0e7b54b9e776f1526c41912451611cbc054076f1e93cb3380e97cb21" Jan 21 11:43:55 crc kubenswrapper[4881]: I0121 11:43:55.727805 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-n45jf"] Jan 21 11:43:55 crc kubenswrapper[4881]: I0121 11:43:55.740309 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-n45jf"] Jan 21 11:43:55 crc kubenswrapper[4881]: I0121 11:43:55.753493 4881 scope.go:117] "RemoveContainer" containerID="c26997966243c289ddfac5194cdd80572f08ac7b80867bb4857e250a6d12a187" Jan 21 11:43:55 crc kubenswrapper[4881]: I0121 11:43:55.797564 4881 scope.go:117] "RemoveContainer" containerID="a13ad4a41ae4ee64b135dabd15e29e05d4c5703bdb161dfc59694765aa20ae2c" Jan 21 11:43:55 crc kubenswrapper[4881]: E0121 11:43:55.797917 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a13ad4a41ae4ee64b135dabd15e29e05d4c5703bdb161dfc59694765aa20ae2c\": container with ID starting with a13ad4a41ae4ee64b135dabd15e29e05d4c5703bdb161dfc59694765aa20ae2c not found: ID does not exist" containerID="a13ad4a41ae4ee64b135dabd15e29e05d4c5703bdb161dfc59694765aa20ae2c" Jan 21 11:43:55 crc kubenswrapper[4881]: I0121 11:43:55.797958 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a13ad4a41ae4ee64b135dabd15e29e05d4c5703bdb161dfc59694765aa20ae2c"} err="failed to get container status \"a13ad4a41ae4ee64b135dabd15e29e05d4c5703bdb161dfc59694765aa20ae2c\": rpc error: code = NotFound desc = could not find container \"a13ad4a41ae4ee64b135dabd15e29e05d4c5703bdb161dfc59694765aa20ae2c\": container with ID starting with a13ad4a41ae4ee64b135dabd15e29e05d4c5703bdb161dfc59694765aa20ae2c not found: ID does not exist" Jan 21 11:43:55 crc kubenswrapper[4881]: I0121 11:43:55.797987 4881 scope.go:117] "RemoveContainer" containerID="74a922df0e7b54b9e776f1526c41912451611cbc054076f1e93cb3380e97cb21" Jan 21 11:43:55 crc kubenswrapper[4881]: E0121 11:43:55.798240 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"74a922df0e7b54b9e776f1526c41912451611cbc054076f1e93cb3380e97cb21\": container with ID starting with 74a922df0e7b54b9e776f1526c41912451611cbc054076f1e93cb3380e97cb21 not found: ID does not exist" containerID="74a922df0e7b54b9e776f1526c41912451611cbc054076f1e93cb3380e97cb21" Jan 21 11:43:55 crc kubenswrapper[4881]: I0121 11:43:55.798264 4881 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"74a922df0e7b54b9e776f1526c41912451611cbc054076f1e93cb3380e97cb21"} err="failed to get container status \"74a922df0e7b54b9e776f1526c41912451611cbc054076f1e93cb3380e97cb21\": rpc error: code = NotFound desc = could not find container \"74a922df0e7b54b9e776f1526c41912451611cbc054076f1e93cb3380e97cb21\": container with ID starting with 74a922df0e7b54b9e776f1526c41912451611cbc054076f1e93cb3380e97cb21 not found: ID does not exist" Jan 21 11:43:55 crc kubenswrapper[4881]: I0121 11:43:55.798277 4881 scope.go:117] "RemoveContainer" containerID="c26997966243c289ddfac5194cdd80572f08ac7b80867bb4857e250a6d12a187" Jan 21 11:43:55 crc kubenswrapper[4881]: E0121 11:43:55.798513 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c26997966243c289ddfac5194cdd80572f08ac7b80867bb4857e250a6d12a187\": container with ID starting with c26997966243c289ddfac5194cdd80572f08ac7b80867bb4857e250a6d12a187 not found: ID does not exist" containerID="c26997966243c289ddfac5194cdd80572f08ac7b80867bb4857e250a6d12a187" Jan 21 11:43:55 crc kubenswrapper[4881]: I0121 11:43:55.798542 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c26997966243c289ddfac5194cdd80572f08ac7b80867bb4857e250a6d12a187"} err="failed to get container status \"c26997966243c289ddfac5194cdd80572f08ac7b80867bb4857e250a6d12a187\": rpc error: code = NotFound desc = could not find container \"c26997966243c289ddfac5194cdd80572f08ac7b80867bb4857e250a6d12a187\": container with ID starting with c26997966243c289ddfac5194cdd80572f08ac7b80867bb4857e250a6d12a187 not found: ID does not exist" Jan 21 11:43:57 crc kubenswrapper[4881]: I0121 11:43:57.326810 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1ef5440b-a4c3-4e04-8e02-1055391021c7" path="/var/lib/kubelet/pods/1ef5440b-a4c3-4e04-8e02-1055391021c7/volumes" Jan 21 11:43:59 crc kubenswrapper[4881]: I0121 11:43:59.852274 4881 patch_prober.go:28] interesting pod/machine-config-daemon-fb4fr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 11:43:59 crc kubenswrapper[4881]: I0121 11:43:59.852650 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 11:44:25 crc kubenswrapper[4881]: I0121 11:44:25.672214 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-fdrvq"] Jan 21 11:44:25 crc kubenswrapper[4881]: E0121 11:44:25.673300 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ef5440b-a4c3-4e04-8e02-1055391021c7" containerName="extract-content" Jan 21 11:44:25 crc kubenswrapper[4881]: I0121 11:44:25.673314 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ef5440b-a4c3-4e04-8e02-1055391021c7" containerName="extract-content" Jan 21 11:44:25 crc kubenswrapper[4881]: E0121 11:44:25.673323 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ef5440b-a4c3-4e04-8e02-1055391021c7" containerName="extract-utilities" Jan 21 11:44:25 crc 
Jan 21 11:43:59 crc kubenswrapper[4881]: I0121 11:43:59.852274 4881 patch_prober.go:28] interesting pod/machine-config-daemon-fb4fr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 21 11:43:59 crc kubenswrapper[4881]: I0121 11:43:59.852650 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 21 11:44:25 crc kubenswrapper[4881]: I0121 11:44:25.672214 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-fdrvq"]
Jan 21 11:44:25 crc kubenswrapper[4881]: E0121 11:44:25.673300 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ef5440b-a4c3-4e04-8e02-1055391021c7" containerName="extract-content"
Jan 21 11:44:25 crc kubenswrapper[4881]: I0121 11:44:25.673314 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ef5440b-a4c3-4e04-8e02-1055391021c7" containerName="extract-content"
Jan 21 11:44:25 crc kubenswrapper[4881]: E0121 11:44:25.673323 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ef5440b-a4c3-4e04-8e02-1055391021c7" containerName="extract-utilities"
Jan 21 11:44:25 crc kubenswrapper[4881]: I0121 11:44:25.673330 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ef5440b-a4c3-4e04-8e02-1055391021c7" containerName="extract-utilities"
Jan 21 11:44:25 crc kubenswrapper[4881]: E0121 11:44:25.673351 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ef5440b-a4c3-4e04-8e02-1055391021c7" containerName="registry-server"
Jan 21 11:44:25 crc kubenswrapper[4881]: I0121 11:44:25.673357 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ef5440b-a4c3-4e04-8e02-1055391021c7" containerName="registry-server"
Jan 21 11:44:25 crc kubenswrapper[4881]: I0121 11:44:25.673563 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="1ef5440b-a4c3-4e04-8e02-1055391021c7" containerName="registry-server"
Jan 21 11:44:25 crc kubenswrapper[4881]: I0121 11:44:25.675194 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fdrvq"
Jan 21 11:44:25 crc kubenswrapper[4881]: I0121 11:44:25.689943 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-fdrvq"]
Jan 21 11:44:25 crc kubenswrapper[4881]: I0121 11:44:25.755313 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d42aa8e-f444-4984-a8d7-7a207bf7c53f-catalog-content\") pod \"certified-operators-fdrvq\" (UID: \"2d42aa8e-f444-4984-a8d7-7a207bf7c53f\") " pod="openshift-marketplace/certified-operators-fdrvq"
Jan 21 11:44:25 crc kubenswrapper[4881]: I0121 11:44:25.755582 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qdtt4\" (UniqueName: \"kubernetes.io/projected/2d42aa8e-f444-4984-a8d7-7a207bf7c53f-kube-api-access-qdtt4\") pod \"certified-operators-fdrvq\" (UID: \"2d42aa8e-f444-4984-a8d7-7a207bf7c53f\") " pod="openshift-marketplace/certified-operators-fdrvq"
Jan 21 11:44:25 crc kubenswrapper[4881]: I0121 11:44:25.755682 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d42aa8e-f444-4984-a8d7-7a207bf7c53f-utilities\") pod \"certified-operators-fdrvq\" (UID: \"2d42aa8e-f444-4984-a8d7-7a207bf7c53f\") " pod="openshift-marketplace/certified-operators-fdrvq"
Jan 21 11:44:25 crc kubenswrapper[4881]: I0121 11:44:25.857655 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d42aa8e-f444-4984-a8d7-7a207bf7c53f-utilities\") pod \"certified-operators-fdrvq\" (UID: \"2d42aa8e-f444-4984-a8d7-7a207bf7c53f\") " pod="openshift-marketplace/certified-operators-fdrvq"
Jan 21 11:44:25 crc kubenswrapper[4881]: I0121 11:44:25.857741 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d42aa8e-f444-4984-a8d7-7a207bf7c53f-catalog-content\") pod \"certified-operators-fdrvq\" (UID: \"2d42aa8e-f444-4984-a8d7-7a207bf7c53f\") " pod="openshift-marketplace/certified-operators-fdrvq"
Jan 21 11:44:25 crc kubenswrapper[4881]: I0121 11:44:25.857878 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qdtt4\" (UniqueName: \"kubernetes.io/projected/2d42aa8e-f444-4984-a8d7-7a207bf7c53f-kube-api-access-qdtt4\") pod \"certified-operators-fdrvq\" (UID: \"2d42aa8e-f444-4984-a8d7-7a207bf7c53f\") " pod="openshift-marketplace/certified-operators-fdrvq"
Jan 21 11:44:25 crc kubenswrapper[4881]: I0121 11:44:25.858120 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d42aa8e-f444-4984-a8d7-7a207bf7c53f-utilities\") pod \"certified-operators-fdrvq\" (UID: \"2d42aa8e-f444-4984-a8d7-7a207bf7c53f\") " pod="openshift-marketplace/certified-operators-fdrvq"
Jan 21 11:44:25 crc kubenswrapper[4881]: I0121 11:44:25.858237 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d42aa8e-f444-4984-a8d7-7a207bf7c53f-catalog-content\") pod \"certified-operators-fdrvq\" (UID: \"2d42aa8e-f444-4984-a8d7-7a207bf7c53f\") " pod="openshift-marketplace/certified-operators-fdrvq"
Jan 21 11:44:25 crc kubenswrapper[4881]: I0121 11:44:25.876664 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qdtt4\" (UniqueName: \"kubernetes.io/projected/2d42aa8e-f444-4984-a8d7-7a207bf7c53f-kube-api-access-qdtt4\") pod \"certified-operators-fdrvq\" (UID: \"2d42aa8e-f444-4984-a8d7-7a207bf7c53f\") " pod="openshift-marketplace/certified-operators-fdrvq"
pod="openshift-marketplace/redhat-marketplace-wc4t2" Jan 21 11:44:26 crc kubenswrapper[4881]: I0121 11:44:26.477480 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wv9xw\" (UniqueName: \"kubernetes.io/projected/31857b1b-0b5b-40a8-8706-9002ca7c878b-kube-api-access-wv9xw\") pod \"redhat-marketplace-wc4t2\" (UID: \"31857b1b-0b5b-40a8-8706-9002ca7c878b\") " pod="openshift-marketplace/redhat-marketplace-wc4t2" Jan 21 11:44:26 crc kubenswrapper[4881]: I0121 11:44:26.477615 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31857b1b-0b5b-40a8-8706-9002ca7c878b-catalog-content\") pod \"redhat-marketplace-wc4t2\" (UID: \"31857b1b-0b5b-40a8-8706-9002ca7c878b\") " pod="openshift-marketplace/redhat-marketplace-wc4t2" Jan 21 11:44:26 crc kubenswrapper[4881]: I0121 11:44:26.477953 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31857b1b-0b5b-40a8-8706-9002ca7c878b-utilities\") pod \"redhat-marketplace-wc4t2\" (UID: \"31857b1b-0b5b-40a8-8706-9002ca7c878b\") " pod="openshift-marketplace/redhat-marketplace-wc4t2" Jan 21 11:44:26 crc kubenswrapper[4881]: I0121 11:44:26.478245 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31857b1b-0b5b-40a8-8706-9002ca7c878b-catalog-content\") pod \"redhat-marketplace-wc4t2\" (UID: \"31857b1b-0b5b-40a8-8706-9002ca7c878b\") " pod="openshift-marketplace/redhat-marketplace-wc4t2" Jan 21 11:44:26 crc kubenswrapper[4881]: I0121 11:44:26.512410 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wv9xw\" (UniqueName: \"kubernetes.io/projected/31857b1b-0b5b-40a8-8706-9002ca7c878b-kube-api-access-wv9xw\") pod \"redhat-marketplace-wc4t2\" (UID: \"31857b1b-0b5b-40a8-8706-9002ca7c878b\") " pod="openshift-marketplace/redhat-marketplace-wc4t2" Jan 21 11:44:26 crc kubenswrapper[4881]: I0121 11:44:26.637939 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-fdrvq"] Jan 21 11:44:26 crc kubenswrapper[4881]: I0121 11:44:26.657459 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wc4t2" Jan 21 11:44:27 crc kubenswrapper[4881]: I0121 11:44:27.179844 4881 generic.go:334] "Generic (PLEG): container finished" podID="2d42aa8e-f444-4984-a8d7-7a207bf7c53f" containerID="be36f6ad834ca00233eadc7451dfda0c9752d18ed8499ac6ad57c9815db2567a" exitCode=0 Jan 21 11:44:27 crc kubenswrapper[4881]: I0121 11:44:27.180112 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fdrvq" event={"ID":"2d42aa8e-f444-4984-a8d7-7a207bf7c53f","Type":"ContainerDied","Data":"be36f6ad834ca00233eadc7451dfda0c9752d18ed8499ac6ad57c9815db2567a"} Jan 21 11:44:27 crc kubenswrapper[4881]: I0121 11:44:27.180137 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fdrvq" event={"ID":"2d42aa8e-f444-4984-a8d7-7a207bf7c53f","Type":"ContainerStarted","Data":"d69f26819e41f24884704883945e98a254587e278d32d1d4a11013c821014e32"} Jan 21 11:44:27 crc kubenswrapper[4881]: I0121 11:44:27.257601 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-wc4t2"] Jan 21 11:44:28 crc kubenswrapper[4881]: I0121 11:44:28.399304 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-txhzl"] Jan 21 11:44:28 crc kubenswrapper[4881]: I0121 11:44:28.401919 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-txhzl" Jan 21 11:44:28 crc kubenswrapper[4881]: I0121 11:44:28.405845 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fdrvq" event={"ID":"2d42aa8e-f444-4984-a8d7-7a207bf7c53f","Type":"ContainerStarted","Data":"a3b87112cc4e2f5703453d1593b9d75e4be1102fb918a336d940180bb24d7b53"} Jan 21 11:44:28 crc kubenswrapper[4881]: I0121 11:44:28.408136 4881 generic.go:334] "Generic (PLEG): container finished" podID="31857b1b-0b5b-40a8-8706-9002ca7c878b" containerID="336832bcd33835056ce008b8a53f0d9baf232ba9edeed16ee2274ae9fd33d3e1" exitCode=0 Jan 21 11:44:28 crc kubenswrapper[4881]: I0121 11:44:28.408972 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wc4t2" event={"ID":"31857b1b-0b5b-40a8-8706-9002ca7c878b","Type":"ContainerDied","Data":"336832bcd33835056ce008b8a53f0d9baf232ba9edeed16ee2274ae9fd33d3e1"} Jan 21 11:44:28 crc kubenswrapper[4881]: I0121 11:44:28.409015 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wc4t2" event={"ID":"31857b1b-0b5b-40a8-8706-9002ca7c878b","Type":"ContainerStarted","Data":"faddb564d26733e0b7d65cf614390493c38cce9c895f446c1284fb2526e50080"} Jan 21 11:44:28 crc kubenswrapper[4881]: I0121 11:44:28.413613 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-txhzl"] Jan 21 11:44:28 crc kubenswrapper[4881]: I0121 11:44:28.493307 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fknht\" (UniqueName: \"kubernetes.io/projected/a0e7b801-0b42-4a0f-9d8a-6098f067d197-kube-api-access-fknht\") pod \"redhat-operators-txhzl\" (UID: \"a0e7b801-0b42-4a0f-9d8a-6098f067d197\") " pod="openshift-marketplace/redhat-operators-txhzl" Jan 21 11:44:28 crc kubenswrapper[4881]: I0121 11:44:28.493609 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/a0e7b801-0b42-4a0f-9d8a-6098f067d197-catalog-content\") pod \"redhat-operators-txhzl\" (UID: \"a0e7b801-0b42-4a0f-9d8a-6098f067d197\") " pod="openshift-marketplace/redhat-operators-txhzl" Jan 21 11:44:28 crc kubenswrapper[4881]: I0121 11:44:28.493687 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a0e7b801-0b42-4a0f-9d8a-6098f067d197-utilities\") pod \"redhat-operators-txhzl\" (UID: \"a0e7b801-0b42-4a0f-9d8a-6098f067d197\") " pod="openshift-marketplace/redhat-operators-txhzl" Jan 21 11:44:28 crc kubenswrapper[4881]: I0121 11:44:28.598149 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a0e7b801-0b42-4a0f-9d8a-6098f067d197-catalog-content\") pod \"redhat-operators-txhzl\" (UID: \"a0e7b801-0b42-4a0f-9d8a-6098f067d197\") " pod="openshift-marketplace/redhat-operators-txhzl" Jan 21 11:44:28 crc kubenswrapper[4881]: I0121 11:44:28.598339 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a0e7b801-0b42-4a0f-9d8a-6098f067d197-utilities\") pod \"redhat-operators-txhzl\" (UID: \"a0e7b801-0b42-4a0f-9d8a-6098f067d197\") " pod="openshift-marketplace/redhat-operators-txhzl" Jan 21 11:44:28 crc kubenswrapper[4881]: I0121 11:44:28.598611 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fknht\" (UniqueName: \"kubernetes.io/projected/a0e7b801-0b42-4a0f-9d8a-6098f067d197-kube-api-access-fknht\") pod \"redhat-operators-txhzl\" (UID: \"a0e7b801-0b42-4a0f-9d8a-6098f067d197\") " pod="openshift-marketplace/redhat-operators-txhzl" Jan 21 11:44:28 crc kubenswrapper[4881]: I0121 11:44:28.599674 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a0e7b801-0b42-4a0f-9d8a-6098f067d197-utilities\") pod \"redhat-operators-txhzl\" (UID: \"a0e7b801-0b42-4a0f-9d8a-6098f067d197\") " pod="openshift-marketplace/redhat-operators-txhzl" Jan 21 11:44:28 crc kubenswrapper[4881]: I0121 11:44:28.599727 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a0e7b801-0b42-4a0f-9d8a-6098f067d197-catalog-content\") pod \"redhat-operators-txhzl\" (UID: \"a0e7b801-0b42-4a0f-9d8a-6098f067d197\") " pod="openshift-marketplace/redhat-operators-txhzl" Jan 21 11:44:28 crc kubenswrapper[4881]: I0121 11:44:28.618589 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fknht\" (UniqueName: \"kubernetes.io/projected/a0e7b801-0b42-4a0f-9d8a-6098f067d197-kube-api-access-fknht\") pod \"redhat-operators-txhzl\" (UID: \"a0e7b801-0b42-4a0f-9d8a-6098f067d197\") " pod="openshift-marketplace/redhat-operators-txhzl" Jan 21 11:44:28 crc kubenswrapper[4881]: I0121 11:44:28.936468 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-txhzl" Jan 21 11:44:29 crc kubenswrapper[4881]: I0121 11:44:29.425471 4881 generic.go:334] "Generic (PLEG): container finished" podID="2d42aa8e-f444-4984-a8d7-7a207bf7c53f" containerID="a3b87112cc4e2f5703453d1593b9d75e4be1102fb918a336d940180bb24d7b53" exitCode=0 Jan 21 11:44:29 crc kubenswrapper[4881]: I0121 11:44:29.425549 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fdrvq" event={"ID":"2d42aa8e-f444-4984-a8d7-7a207bf7c53f","Type":"ContainerDied","Data":"a3b87112cc4e2f5703453d1593b9d75e4be1102fb918a336d940180bb24d7b53"} Jan 21 11:44:29 crc kubenswrapper[4881]: I0121 11:44:29.488718 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-txhzl"] Jan 21 11:44:29 crc kubenswrapper[4881]: I0121 11:44:29.851428 4881 patch_prober.go:28] interesting pod/machine-config-daemon-fb4fr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 11:44:29 crc kubenswrapper[4881]: I0121 11:44:29.851835 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 11:44:30 crc kubenswrapper[4881]: I0121 11:44:30.439440 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fdrvq" event={"ID":"2d42aa8e-f444-4984-a8d7-7a207bf7c53f","Type":"ContainerStarted","Data":"c46a2a4d819c8a32cc07d84e8693331645ce9fdf0d2715fdb9ac2374aedc71ff"} Jan 21 11:44:30 crc kubenswrapper[4881]: I0121 11:44:30.442069 4881 generic.go:334] "Generic (PLEG): container finished" podID="31857b1b-0b5b-40a8-8706-9002ca7c878b" containerID="c6c4d44fb090872e6b0605107087391221f656d81db179f5a1c6b09418925f51" exitCode=0 Jan 21 11:44:30 crc kubenswrapper[4881]: I0121 11:44:30.442163 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wc4t2" event={"ID":"31857b1b-0b5b-40a8-8706-9002ca7c878b","Type":"ContainerDied","Data":"c6c4d44fb090872e6b0605107087391221f656d81db179f5a1c6b09418925f51"} Jan 21 11:44:30 crc kubenswrapper[4881]: I0121 11:44:30.443598 4881 generic.go:334] "Generic (PLEG): container finished" podID="a0e7b801-0b42-4a0f-9d8a-6098f067d197" containerID="cdbde46c3239eb1d76f2261767f55cedbe3ed1340d6b7f6c84f70007b06ecd03" exitCode=0 Jan 21 11:44:30 crc kubenswrapper[4881]: I0121 11:44:30.443629 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-txhzl" event={"ID":"a0e7b801-0b42-4a0f-9d8a-6098f067d197","Type":"ContainerDied","Data":"cdbde46c3239eb1d76f2261767f55cedbe3ed1340d6b7f6c84f70007b06ecd03"} Jan 21 11:44:30 crc kubenswrapper[4881]: I0121 11:44:30.443655 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-txhzl" event={"ID":"a0e7b801-0b42-4a0f-9d8a-6098f067d197","Type":"ContainerStarted","Data":"d0d563f9da5b6ef6d9aeb469bdb9a55a96af9c6b6f7a766f8209eb73233aaf4a"} Jan 21 11:44:30 crc kubenswrapper[4881]: I0121 11:44:30.485129 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-fdrvq" 
Jan 21 11:44:31 crc kubenswrapper[4881]: I0121 11:44:31.681702 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wc4t2" event={"ID":"31857b1b-0b5b-40a8-8706-9002ca7c878b","Type":"ContainerStarted","Data":"d82428e58c47c9e949608c486e52ec560ea14cbe4c81a28efff4fb76bfd1d58d"}
Jan 21 11:44:31 crc kubenswrapper[4881]: I0121 11:44:31.701087 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-txhzl" event={"ID":"a0e7b801-0b42-4a0f-9d8a-6098f067d197","Type":"ContainerStarted","Data":"da4f8fc6bf0374e81ec0a1951df788719674fcd294eade9800200cfc352dfb8d"}
Jan 21 11:44:31 crc kubenswrapper[4881]: I0121 11:44:31.709482 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-wc4t2" podStartSLOduration=3.277819687 podStartE2EDuration="5.709463844s" podCreationTimestamp="2026-01-21 11:44:26 +0000 UTC" firstStartedPulling="2026-01-21 11:44:28.413036297 +0000 UTC m=+2855.672992766" lastFinishedPulling="2026-01-21 11:44:30.844680454 +0000 UTC m=+2858.104636923" observedRunningTime="2026-01-21 11:44:31.709322131 +0000 UTC m=+2858.969278610" watchObservedRunningTime="2026-01-21 11:44:31.709463844 +0000 UTC m=+2858.969420313"
Jan 21 11:44:36 crc kubenswrapper[4881]: I0121 11:44:36.049632 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-fdrvq"
Jan 21 11:44:36 crc kubenswrapper[4881]: I0121 11:44:36.050259 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-fdrvq"
Jan 21 11:44:36 crc kubenswrapper[4881]: I0121 11:44:36.113057 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-fdrvq"
Jan 21 11:44:36 crc kubenswrapper[4881]: I0121 11:44:36.658523 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-wc4t2"
Jan 21 11:44:36 crc kubenswrapper[4881]: I0121 11:44:36.658622 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-wc4t2"
Jan 21 11:44:36 crc kubenswrapper[4881]: I0121 11:44:36.737760 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-wc4t2"
Jan 21 11:44:36 crc kubenswrapper[4881]: I0121 11:44:36.778629 4881 generic.go:334] "Generic (PLEG): container finished" podID="a0e7b801-0b42-4a0f-9d8a-6098f067d197" containerID="da4f8fc6bf0374e81ec0a1951df788719674fcd294eade9800200cfc352dfb8d" exitCode=0
Jan 21 11:44:36 crc kubenswrapper[4881]: I0121 11:44:36.778726 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-txhzl" event={"ID":"a0e7b801-0b42-4a0f-9d8a-6098f067d197","Type":"ContainerDied","Data":"da4f8fc6bf0374e81ec0a1951df788719674fcd294eade9800200cfc352dfb8d"}
Jan 21 11:44:36 crc kubenswrapper[4881]: I0121 11:44:36.858560 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-wc4t2"
Jan 21 11:44:36 crc kubenswrapper[4881]: I0121 11:44:36.859387 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-fdrvq"
Jan 21 11:44:37 crc kubenswrapper[4881]: I0121 11:44:37.791989 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-txhzl" event={"ID":"a0e7b801-0b42-4a0f-9d8a-6098f067d197","Type":"ContainerStarted","Data":"e8f0bb41b337b4197d37181f051ccd1921807a454bc397a6fe4fe06dfc3f10b7"}
Jan 21 11:44:37 crc kubenswrapper[4881]: I0121 11:44:37.825392 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-txhzl" podStartSLOduration=3.068594896 podStartE2EDuration="9.825366576s" podCreationTimestamp="2026-01-21 11:44:28 +0000 UTC" firstStartedPulling="2026-01-21 11:44:30.445888276 +0000 UTC m=+2857.705844735" lastFinishedPulling="2026-01-21 11:44:37.202659946 +0000 UTC m=+2864.462616415" observedRunningTime="2026-01-21 11:44:37.813250949 +0000 UTC m=+2865.073207438" watchObservedRunningTime="2026-01-21 11:44:37.825366576 +0000 UTC m=+2865.085323055"
Jan 21 11:44:38 crc kubenswrapper[4881]: I0121 11:44:38.061397 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-wc4t2"]
Jan 21 11:44:38 crc kubenswrapper[4881]: I0121 11:44:38.804207 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-wc4t2" podUID="31857b1b-0b5b-40a8-8706-9002ca7c878b" containerName="registry-server" containerID="cri-o://d82428e58c47c9e949608c486e52ec560ea14cbe4c81a28efff4fb76bfd1d58d" gracePeriod=2
Jan 21 11:44:38 crc kubenswrapper[4881]: I0121 11:44:38.938522 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-txhzl"
Jan 21 11:44:38 crc kubenswrapper[4881]: I0121 11:44:38.938891 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-txhzl"
Jan 21 11:44:39 crc kubenswrapper[4881]: I0121 11:44:39.308667 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wc4t2"
Jan 21 11:44:39 crc kubenswrapper[4881]: I0121 11:44:39.486276 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wv9xw\" (UniqueName: \"kubernetes.io/projected/31857b1b-0b5b-40a8-8706-9002ca7c878b-kube-api-access-wv9xw\") pod \"31857b1b-0b5b-40a8-8706-9002ca7c878b\" (UID: \"31857b1b-0b5b-40a8-8706-9002ca7c878b\") "
Jan 21 11:44:39 crc kubenswrapper[4881]: I0121 11:44:39.486683 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31857b1b-0b5b-40a8-8706-9002ca7c878b-utilities\") pod \"31857b1b-0b5b-40a8-8706-9002ca7c878b\" (UID: \"31857b1b-0b5b-40a8-8706-9002ca7c878b\") "
Jan 21 11:44:39 crc kubenswrapper[4881]: I0121 11:44:39.486914 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31857b1b-0b5b-40a8-8706-9002ca7c878b-catalog-content\") pod \"31857b1b-0b5b-40a8-8706-9002ca7c878b\" (UID: \"31857b1b-0b5b-40a8-8706-9002ca7c878b\") "
Jan 21 11:44:39 crc kubenswrapper[4881]: I0121 11:44:39.488309 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/31857b1b-0b5b-40a8-8706-9002ca7c878b-utilities" (OuterVolumeSpecName: "utilities") pod "31857b1b-0b5b-40a8-8706-9002ca7c878b" (UID: "31857b1b-0b5b-40a8-8706-9002ca7c878b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 11:44:39 crc kubenswrapper[4881]: I0121 11:44:39.492480 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31857b1b-0b5b-40a8-8706-9002ca7c878b-kube-api-access-wv9xw" (OuterVolumeSpecName: "kube-api-access-wv9xw") pod "31857b1b-0b5b-40a8-8706-9002ca7c878b" (UID: "31857b1b-0b5b-40a8-8706-9002ca7c878b"). InnerVolumeSpecName "kube-api-access-wv9xw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 11:44:39 crc kubenswrapper[4881]: I0121 11:44:39.508456 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/31857b1b-0b5b-40a8-8706-9002ca7c878b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "31857b1b-0b5b-40a8-8706-9002ca7c878b" (UID: "31857b1b-0b5b-40a8-8706-9002ca7c878b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 11:44:39 crc kubenswrapper[4881]: I0121 11:44:39.590422 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wv9xw\" (UniqueName: \"kubernetes.io/projected/31857b1b-0b5b-40a8-8706-9002ca7c878b-kube-api-access-wv9xw\") on node \"crc\" DevicePath \"\""
Jan 21 11:44:39 crc kubenswrapper[4881]: I0121 11:44:39.590515 4881 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31857b1b-0b5b-40a8-8706-9002ca7c878b-utilities\") on node \"crc\" DevicePath \"\""
Jan 21 11:44:39 crc kubenswrapper[4881]: I0121 11:44:39.590538 4881 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31857b1b-0b5b-40a8-8706-9002ca7c878b-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 21 11:44:39 crc kubenswrapper[4881]: I0121 11:44:39.819351 4881 generic.go:334] "Generic (PLEG): container finished" podID="31857b1b-0b5b-40a8-8706-9002ca7c878b" containerID="d82428e58c47c9e949608c486e52ec560ea14cbe4c81a28efff4fb76bfd1d58d" exitCode=0
Jan 21 11:44:39 crc kubenswrapper[4881]: I0121 11:44:39.819419 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wc4t2" event={"ID":"31857b1b-0b5b-40a8-8706-9002ca7c878b","Type":"ContainerDied","Data":"d82428e58c47c9e949608c486e52ec560ea14cbe4c81a28efff4fb76bfd1d58d"}
Jan 21 11:44:39 crc kubenswrapper[4881]: I0121 11:44:39.819492 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wc4t2" event={"ID":"31857b1b-0b5b-40a8-8706-9002ca7c878b","Type":"ContainerDied","Data":"faddb564d26733e0b7d65cf614390493c38cce9c895f446c1284fb2526e50080"}
Jan 21 11:44:39 crc kubenswrapper[4881]: I0121 11:44:39.819490 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wc4t2"
Jan 21 11:44:39 crc kubenswrapper[4881]: I0121 11:44:39.819520 4881 scope.go:117] "RemoveContainer" containerID="d82428e58c47c9e949608c486e52ec560ea14cbe4c81a28efff4fb76bfd1d58d"
Jan 21 11:44:39 crc kubenswrapper[4881]: I0121 11:44:39.846078 4881 scope.go:117] "RemoveContainer" containerID="c6c4d44fb090872e6b0605107087391221f656d81db179f5a1c6b09418925f51"
Jan 21 11:44:39 crc kubenswrapper[4881]: I0121 11:44:39.869320 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-wc4t2"]
Jan 21 11:44:39 crc kubenswrapper[4881]: I0121 11:44:39.879594 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-wc4t2"]
Jan 21 11:44:39 crc kubenswrapper[4881]: I0121 11:44:39.894237 4881 scope.go:117] "RemoveContainer" containerID="336832bcd33835056ce008b8a53f0d9baf232ba9edeed16ee2274ae9fd33d3e1"
Jan 21 11:44:39 crc kubenswrapper[4881]: I0121 11:44:39.935925 4881 scope.go:117] "RemoveContainer" containerID="d82428e58c47c9e949608c486e52ec560ea14cbe4c81a28efff4fb76bfd1d58d"
Jan 21 11:44:39 crc kubenswrapper[4881]: E0121 11:44:39.936522 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d82428e58c47c9e949608c486e52ec560ea14cbe4c81a28efff4fb76bfd1d58d\": container with ID starting with d82428e58c47c9e949608c486e52ec560ea14cbe4c81a28efff4fb76bfd1d58d not found: ID does not exist" containerID="d82428e58c47c9e949608c486e52ec560ea14cbe4c81a28efff4fb76bfd1d58d"
Jan 21 11:44:39 crc kubenswrapper[4881]: I0121 11:44:39.936593 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d82428e58c47c9e949608c486e52ec560ea14cbe4c81a28efff4fb76bfd1d58d"} err="failed to get container status \"d82428e58c47c9e949608c486e52ec560ea14cbe4c81a28efff4fb76bfd1d58d\": rpc error: code = NotFound desc = could not find container \"d82428e58c47c9e949608c486e52ec560ea14cbe4c81a28efff4fb76bfd1d58d\": container with ID starting with d82428e58c47c9e949608c486e52ec560ea14cbe4c81a28efff4fb76bfd1d58d not found: ID does not exist"
Jan 21 11:44:39 crc kubenswrapper[4881]: I0121 11:44:39.936636 4881 scope.go:117] "RemoveContainer" containerID="c6c4d44fb090872e6b0605107087391221f656d81db179f5a1c6b09418925f51"
Jan 21 11:44:39 crc kubenswrapper[4881]: E0121 11:44:39.937450 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c6c4d44fb090872e6b0605107087391221f656d81db179f5a1c6b09418925f51\": container with ID starting with c6c4d44fb090872e6b0605107087391221f656d81db179f5a1c6b09418925f51 not found: ID does not exist" containerID="c6c4d44fb090872e6b0605107087391221f656d81db179f5a1c6b09418925f51"
Jan 21 11:44:39 crc kubenswrapper[4881]: I0121 11:44:39.937505 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c6c4d44fb090872e6b0605107087391221f656d81db179f5a1c6b09418925f51"} err="failed to get container status \"c6c4d44fb090872e6b0605107087391221f656d81db179f5a1c6b09418925f51\": rpc error: code = NotFound desc = could not find container \"c6c4d44fb090872e6b0605107087391221f656d81db179f5a1c6b09418925f51\": container with ID starting with c6c4d44fb090872e6b0605107087391221f656d81db179f5a1c6b09418925f51 not found: ID does not exist"
Jan 21 11:44:39 crc kubenswrapper[4881]: I0121 11:44:39.937546 4881 scope.go:117] "RemoveContainer" containerID="336832bcd33835056ce008b8a53f0d9baf232ba9edeed16ee2274ae9fd33d3e1"
Jan 21 11:44:39 crc kubenswrapper[4881]: E0121 11:44:39.938006 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"336832bcd33835056ce008b8a53f0d9baf232ba9edeed16ee2274ae9fd33d3e1\": container with ID starting with 336832bcd33835056ce008b8a53f0d9baf232ba9edeed16ee2274ae9fd33d3e1 not found: ID does not exist" containerID="336832bcd33835056ce008b8a53f0d9baf232ba9edeed16ee2274ae9fd33d3e1"
Jan 21 11:44:39 crc kubenswrapper[4881]: I0121 11:44:39.938047 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"336832bcd33835056ce008b8a53f0d9baf232ba9edeed16ee2274ae9fd33d3e1"} err="failed to get container status \"336832bcd33835056ce008b8a53f0d9baf232ba9edeed16ee2274ae9fd33d3e1\": rpc error: code = NotFound desc = could not find container \"336832bcd33835056ce008b8a53f0d9baf232ba9edeed16ee2274ae9fd33d3e1\": container with ID starting with 336832bcd33835056ce008b8a53f0d9baf232ba9edeed16ee2274ae9fd33d3e1 not found: ID does not exist"
Jan 21 11:44:39 crc kubenswrapper[4881]: I0121 11:44:39.996355 4881 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-txhzl" podUID="a0e7b801-0b42-4a0f-9d8a-6098f067d197" containerName="registry-server" probeResult="failure" output=<
Jan 21 11:44:39 crc kubenswrapper[4881]: 	timeout: failed to connect service ":50051" within 1s
Jan 21 11:44:39 crc kubenswrapper[4881]: >
Need to start a new one" pod="openshift-marketplace/certified-operators-fdrvq" Jan 21 11:44:41 crc kubenswrapper[4881]: I0121 11:44:41.019268 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d42aa8e-f444-4984-a8d7-7a207bf7c53f-utilities\") pod \"2d42aa8e-f444-4984-a8d7-7a207bf7c53f\" (UID: \"2d42aa8e-f444-4984-a8d7-7a207bf7c53f\") " Jan 21 11:44:41 crc kubenswrapper[4881]: I0121 11:44:41.019820 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d42aa8e-f444-4984-a8d7-7a207bf7c53f-catalog-content\") pod \"2d42aa8e-f444-4984-a8d7-7a207bf7c53f\" (UID: \"2d42aa8e-f444-4984-a8d7-7a207bf7c53f\") " Jan 21 11:44:41 crc kubenswrapper[4881]: I0121 11:44:41.019965 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2d42aa8e-f444-4984-a8d7-7a207bf7c53f-utilities" (OuterVolumeSpecName: "utilities") pod "2d42aa8e-f444-4984-a8d7-7a207bf7c53f" (UID: "2d42aa8e-f444-4984-a8d7-7a207bf7c53f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 11:44:41 crc kubenswrapper[4881]: I0121 11:44:41.020279 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qdtt4\" (UniqueName: \"kubernetes.io/projected/2d42aa8e-f444-4984-a8d7-7a207bf7c53f-kube-api-access-qdtt4\") pod \"2d42aa8e-f444-4984-a8d7-7a207bf7c53f\" (UID: \"2d42aa8e-f444-4984-a8d7-7a207bf7c53f\") " Jan 21 11:44:41 crc kubenswrapper[4881]: I0121 11:44:41.022494 4881 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d42aa8e-f444-4984-a8d7-7a207bf7c53f-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 11:44:41 crc kubenswrapper[4881]: I0121 11:44:41.027074 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d42aa8e-f444-4984-a8d7-7a207bf7c53f-kube-api-access-qdtt4" (OuterVolumeSpecName: "kube-api-access-qdtt4") pod "2d42aa8e-f444-4984-a8d7-7a207bf7c53f" (UID: "2d42aa8e-f444-4984-a8d7-7a207bf7c53f"). InnerVolumeSpecName "kube-api-access-qdtt4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:44:41 crc kubenswrapper[4881]: I0121 11:44:41.086814 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2d42aa8e-f444-4984-a8d7-7a207bf7c53f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2d42aa8e-f444-4984-a8d7-7a207bf7c53f" (UID: "2d42aa8e-f444-4984-a8d7-7a207bf7c53f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 11:44:41 crc kubenswrapper[4881]: I0121 11:44:41.124377 4881 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d42aa8e-f444-4984-a8d7-7a207bf7c53f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 11:44:41 crc kubenswrapper[4881]: I0121 11:44:41.124434 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qdtt4\" (UniqueName: \"kubernetes.io/projected/2d42aa8e-f444-4984-a8d7-7a207bf7c53f-kube-api-access-qdtt4\") on node \"crc\" DevicePath \"\"" Jan 21 11:44:41 crc kubenswrapper[4881]: I0121 11:44:41.328265 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31857b1b-0b5b-40a8-8706-9002ca7c878b" path="/var/lib/kubelet/pods/31857b1b-0b5b-40a8-8706-9002ca7c878b/volumes" Jan 21 11:44:41 crc kubenswrapper[4881]: I0121 11:44:41.844376 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fdrvq" Jan 21 11:44:41 crc kubenswrapper[4881]: I0121 11:44:41.873698 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-fdrvq"] Jan 21 11:44:41 crc kubenswrapper[4881]: I0121 11:44:41.883679 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-fdrvq"] Jan 21 11:44:43 crc kubenswrapper[4881]: I0121 11:44:43.329644 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2d42aa8e-f444-4984-a8d7-7a207bf7c53f" path="/var/lib/kubelet/pods/2d42aa8e-f444-4984-a8d7-7a207bf7c53f/volumes" Jan 21 11:44:49 crc kubenswrapper[4881]: I0121 11:44:49.003003 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-txhzl" Jan 21 11:44:49 crc kubenswrapper[4881]: I0121 11:44:49.068445 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-txhzl" Jan 21 11:44:49 crc kubenswrapper[4881]: I0121 11:44:49.244948 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-txhzl"] Jan 21 11:44:50 crc kubenswrapper[4881]: I0121 11:44:50.931066 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-txhzl" podUID="a0e7b801-0b42-4a0f-9d8a-6098f067d197" containerName="registry-server" containerID="cri-o://e8f0bb41b337b4197d37181f051ccd1921807a454bc397a6fe4fe06dfc3f10b7" gracePeriod=2 Jan 21 11:44:51 crc kubenswrapper[4881]: I0121 11:44:51.422295 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-txhzl" Jan 21 11:44:51 crc kubenswrapper[4881]: I0121 11:44:51.464991 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a0e7b801-0b42-4a0f-9d8a-6098f067d197-catalog-content\") pod \"a0e7b801-0b42-4a0f-9d8a-6098f067d197\" (UID: \"a0e7b801-0b42-4a0f-9d8a-6098f067d197\") " Jan 21 11:44:51 crc kubenswrapper[4881]: I0121 11:44:51.569569 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fknht\" (UniqueName: \"kubernetes.io/projected/a0e7b801-0b42-4a0f-9d8a-6098f067d197-kube-api-access-fknht\") pod \"a0e7b801-0b42-4a0f-9d8a-6098f067d197\" (UID: \"a0e7b801-0b42-4a0f-9d8a-6098f067d197\") " Jan 21 11:44:51 crc kubenswrapper[4881]: I0121 11:44:51.569893 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a0e7b801-0b42-4a0f-9d8a-6098f067d197-utilities\") pod \"a0e7b801-0b42-4a0f-9d8a-6098f067d197\" (UID: \"a0e7b801-0b42-4a0f-9d8a-6098f067d197\") " Jan 21 11:44:51 crc kubenswrapper[4881]: I0121 11:44:51.570987 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a0e7b801-0b42-4a0f-9d8a-6098f067d197-utilities" (OuterVolumeSpecName: "utilities") pod "a0e7b801-0b42-4a0f-9d8a-6098f067d197" (UID: "a0e7b801-0b42-4a0f-9d8a-6098f067d197"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 11:44:51 crc kubenswrapper[4881]: I0121 11:44:51.571973 4881 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a0e7b801-0b42-4a0f-9d8a-6098f067d197-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 11:44:51 crc kubenswrapper[4881]: I0121 11:44:51.576865 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0e7b801-0b42-4a0f-9d8a-6098f067d197-kube-api-access-fknht" (OuterVolumeSpecName: "kube-api-access-fknht") pod "a0e7b801-0b42-4a0f-9d8a-6098f067d197" (UID: "a0e7b801-0b42-4a0f-9d8a-6098f067d197"). InnerVolumeSpecName "kube-api-access-fknht". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:44:51 crc kubenswrapper[4881]: I0121 11:44:51.610652 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a0e7b801-0b42-4a0f-9d8a-6098f067d197-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a0e7b801-0b42-4a0f-9d8a-6098f067d197" (UID: "a0e7b801-0b42-4a0f-9d8a-6098f067d197"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 11:44:51 crc kubenswrapper[4881]: I0121 11:44:51.673606 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fknht\" (UniqueName: \"kubernetes.io/projected/a0e7b801-0b42-4a0f-9d8a-6098f067d197-kube-api-access-fknht\") on node \"crc\" DevicePath \"\"" Jan 21 11:44:51 crc kubenswrapper[4881]: I0121 11:44:51.673646 4881 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a0e7b801-0b42-4a0f-9d8a-6098f067d197-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 11:44:51 crc kubenswrapper[4881]: I0121 11:44:51.946596 4881 generic.go:334] "Generic (PLEG): container finished" podID="a0e7b801-0b42-4a0f-9d8a-6098f067d197" containerID="e8f0bb41b337b4197d37181f051ccd1921807a454bc397a6fe4fe06dfc3f10b7" exitCode=0 Jan 21 11:44:51 crc kubenswrapper[4881]: I0121 11:44:51.946654 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-txhzl" event={"ID":"a0e7b801-0b42-4a0f-9d8a-6098f067d197","Type":"ContainerDied","Data":"e8f0bb41b337b4197d37181f051ccd1921807a454bc397a6fe4fe06dfc3f10b7"} Jan 21 11:44:51 crc kubenswrapper[4881]: I0121 11:44:51.946688 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-txhzl" event={"ID":"a0e7b801-0b42-4a0f-9d8a-6098f067d197","Type":"ContainerDied","Data":"d0d563f9da5b6ef6d9aeb469bdb9a55a96af9c6b6f7a766f8209eb73233aaf4a"} Jan 21 11:44:51 crc kubenswrapper[4881]: I0121 11:44:51.946709 4881 scope.go:117] "RemoveContainer" containerID="e8f0bb41b337b4197d37181f051ccd1921807a454bc397a6fe4fe06dfc3f10b7" Jan 21 11:44:51 crc kubenswrapper[4881]: I0121 11:44:51.946900 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-txhzl" Jan 21 11:44:51 crc kubenswrapper[4881]: I0121 11:44:51.980738 4881 scope.go:117] "RemoveContainer" containerID="da4f8fc6bf0374e81ec0a1951df788719674fcd294eade9800200cfc352dfb8d" Jan 21 11:44:52 crc kubenswrapper[4881]: I0121 11:44:52.000946 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-txhzl"] Jan 21 11:44:52 crc kubenswrapper[4881]: I0121 11:44:52.014417 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-txhzl"] Jan 21 11:44:52 crc kubenswrapper[4881]: I0121 11:44:52.024376 4881 scope.go:117] "RemoveContainer" containerID="cdbde46c3239eb1d76f2261767f55cedbe3ed1340d6b7f6c84f70007b06ecd03" Jan 21 11:44:52 crc kubenswrapper[4881]: I0121 11:44:52.092386 4881 scope.go:117] "RemoveContainer" containerID="e8f0bb41b337b4197d37181f051ccd1921807a454bc397a6fe4fe06dfc3f10b7" Jan 21 11:44:52 crc kubenswrapper[4881]: E0121 11:44:52.092998 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e8f0bb41b337b4197d37181f051ccd1921807a454bc397a6fe4fe06dfc3f10b7\": container with ID starting with e8f0bb41b337b4197d37181f051ccd1921807a454bc397a6fe4fe06dfc3f10b7 not found: ID does not exist" containerID="e8f0bb41b337b4197d37181f051ccd1921807a454bc397a6fe4fe06dfc3f10b7" Jan 21 11:44:52 crc kubenswrapper[4881]: I0121 11:44:52.093057 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e8f0bb41b337b4197d37181f051ccd1921807a454bc397a6fe4fe06dfc3f10b7"} err="failed to get container status \"e8f0bb41b337b4197d37181f051ccd1921807a454bc397a6fe4fe06dfc3f10b7\": rpc error: code = NotFound desc = could not find container \"e8f0bb41b337b4197d37181f051ccd1921807a454bc397a6fe4fe06dfc3f10b7\": container with ID starting with e8f0bb41b337b4197d37181f051ccd1921807a454bc397a6fe4fe06dfc3f10b7 not found: ID does not exist" Jan 21 11:44:52 crc kubenswrapper[4881]: I0121 11:44:52.093089 4881 scope.go:117] "RemoveContainer" containerID="da4f8fc6bf0374e81ec0a1951df788719674fcd294eade9800200cfc352dfb8d" Jan 21 11:44:52 crc kubenswrapper[4881]: E0121 11:44:52.093412 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"da4f8fc6bf0374e81ec0a1951df788719674fcd294eade9800200cfc352dfb8d\": container with ID starting with da4f8fc6bf0374e81ec0a1951df788719674fcd294eade9800200cfc352dfb8d not found: ID does not exist" containerID="da4f8fc6bf0374e81ec0a1951df788719674fcd294eade9800200cfc352dfb8d" Jan 21 11:44:52 crc kubenswrapper[4881]: I0121 11:44:52.093443 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"da4f8fc6bf0374e81ec0a1951df788719674fcd294eade9800200cfc352dfb8d"} err="failed to get container status \"da4f8fc6bf0374e81ec0a1951df788719674fcd294eade9800200cfc352dfb8d\": rpc error: code = NotFound desc = could not find container \"da4f8fc6bf0374e81ec0a1951df788719674fcd294eade9800200cfc352dfb8d\": container with ID starting with da4f8fc6bf0374e81ec0a1951df788719674fcd294eade9800200cfc352dfb8d not found: ID does not exist" Jan 21 11:44:52 crc kubenswrapper[4881]: I0121 11:44:52.093459 4881 scope.go:117] "RemoveContainer" containerID="cdbde46c3239eb1d76f2261767f55cedbe3ed1340d6b7f6c84f70007b06ecd03" Jan 21 11:44:52 crc kubenswrapper[4881]: E0121 11:44:52.093700 4881 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"cdbde46c3239eb1d76f2261767f55cedbe3ed1340d6b7f6c84f70007b06ecd03\": container with ID starting with cdbde46c3239eb1d76f2261767f55cedbe3ed1340d6b7f6c84f70007b06ecd03 not found: ID does not exist" containerID="cdbde46c3239eb1d76f2261767f55cedbe3ed1340d6b7f6c84f70007b06ecd03" Jan 21 11:44:52 crc kubenswrapper[4881]: I0121 11:44:52.093727 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cdbde46c3239eb1d76f2261767f55cedbe3ed1340d6b7f6c84f70007b06ecd03"} err="failed to get container status \"cdbde46c3239eb1d76f2261767f55cedbe3ed1340d6b7f6c84f70007b06ecd03\": rpc error: code = NotFound desc = could not find container \"cdbde46c3239eb1d76f2261767f55cedbe3ed1340d6b7f6c84f70007b06ecd03\": container with ID starting with cdbde46c3239eb1d76f2261767f55cedbe3ed1340d6b7f6c84f70007b06ecd03 not found: ID does not exist" Jan 21 11:44:53 crc kubenswrapper[4881]: I0121 11:44:53.327117 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0e7b801-0b42-4a0f-9d8a-6098f067d197" path="/var/lib/kubelet/pods/a0e7b801-0b42-4a0f-9d8a-6098f067d197/volumes" Jan 21 11:44:59 crc kubenswrapper[4881]: I0121 11:44:59.850837 4881 patch_prober.go:28] interesting pod/machine-config-daemon-fb4fr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 11:44:59 crc kubenswrapper[4881]: I0121 11:44:59.851467 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 11:44:59 crc kubenswrapper[4881]: I0121 11:44:59.851521 4881 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" Jan 21 11:44:59 crc kubenswrapper[4881]: I0121 11:44:59.852689 4881 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"40878d2da6716331f0a893f4c9f3938e30cde34eaf4eb8051eda58bfc84a6a6c"} pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 11:44:59 crc kubenswrapper[4881]: I0121 11:44:59.852755 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" containerName="machine-config-daemon" containerID="cri-o://40878d2da6716331f0a893f4c9f3938e30cde34eaf4eb8051eda58bfc84a6a6c" gracePeriod=600 Jan 21 11:45:00 crc kubenswrapper[4881]: I0121 11:45:00.067878 4881 generic.go:334] "Generic (PLEG): container finished" podID="3687b313-1df2-4274-80db-8c758b51bf2d" containerID="40878d2da6716331f0a893f4c9f3938e30cde34eaf4eb8051eda58bfc84a6a6c" exitCode=0 Jan 21 11:45:00 crc kubenswrapper[4881]: I0121 11:45:00.067930 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" event={"ID":"3687b313-1df2-4274-80db-8c758b51bf2d","Type":"ContainerDied","Data":"40878d2da6716331f0a893f4c9f3938e30cde34eaf4eb8051eda58bfc84a6a6c"} 
Jan 21 11:45:00 crc kubenswrapper[4881]: I0121 11:45:00.067973 4881 scope.go:117] "RemoveContainer" containerID="ec38e0182d2afb5352817a93e9019bceb63eb1c3df53485164249f8ee9b9d46f"
Jan 21 11:45:00 crc kubenswrapper[4881]: I0121 11:45:00.155691 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483265-wh6tk"]
Jan 21 11:45:00 crc kubenswrapper[4881]: E0121 11:45:00.156233 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31857b1b-0b5b-40a8-8706-9002ca7c878b" containerName="extract-content"
Jan 21 11:45:00 crc kubenswrapper[4881]: I0121 11:45:00.156253 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="31857b1b-0b5b-40a8-8706-9002ca7c878b" containerName="extract-content"
Jan 21 11:45:00 crc kubenswrapper[4881]: E0121 11:45:00.156276 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31857b1b-0b5b-40a8-8706-9002ca7c878b" containerName="extract-utilities"
Jan 21 11:45:00 crc kubenswrapper[4881]: I0121 11:45:00.156284 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="31857b1b-0b5b-40a8-8706-9002ca7c878b" containerName="extract-utilities"
Jan 21 11:45:00 crc kubenswrapper[4881]: E0121 11:45:00.156308 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0e7b801-0b42-4a0f-9d8a-6098f067d197" containerName="registry-server"
Jan 21 11:45:00 crc kubenswrapper[4881]: I0121 11:45:00.156317 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0e7b801-0b42-4a0f-9d8a-6098f067d197" containerName="registry-server"
Jan 21 11:45:00 crc kubenswrapper[4881]: E0121 11:45:00.156334 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d42aa8e-f444-4984-a8d7-7a207bf7c53f" containerName="extract-content"
Jan 21 11:45:00 crc kubenswrapper[4881]: I0121 11:45:00.156343 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d42aa8e-f444-4984-a8d7-7a207bf7c53f" containerName="extract-content"
Jan 21 11:45:00 crc kubenswrapper[4881]: E0121 11:45:00.156358 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0e7b801-0b42-4a0f-9d8a-6098f067d197" containerName="extract-utilities"
Jan 21 11:45:00 crc kubenswrapper[4881]: I0121 11:45:00.156366 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0e7b801-0b42-4a0f-9d8a-6098f067d197" containerName="extract-utilities"
Jan 21 11:45:00 crc kubenswrapper[4881]: E0121 11:45:00.156388 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d42aa8e-f444-4984-a8d7-7a207bf7c53f" containerName="registry-server"
Jan 21 11:45:00 crc kubenswrapper[4881]: I0121 11:45:00.156396 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d42aa8e-f444-4984-a8d7-7a207bf7c53f" containerName="registry-server"
Jan 21 11:45:00 crc kubenswrapper[4881]: E0121 11:45:00.156428 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31857b1b-0b5b-40a8-8706-9002ca7c878b" containerName="registry-server"
Jan 21 11:45:00 crc kubenswrapper[4881]: I0121 11:45:00.156435 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="31857b1b-0b5b-40a8-8706-9002ca7c878b" containerName="registry-server"
Jan 21 11:45:00 crc kubenswrapper[4881]: E0121 11:45:00.156452 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0e7b801-0b42-4a0f-9d8a-6098f067d197" containerName="extract-content"
Jan 21 11:45:00 crc kubenswrapper[4881]: I0121 11:45:00.156460 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0e7b801-0b42-4a0f-9d8a-6098f067d197" containerName="extract-content"
Jan 21 11:45:00 crc kubenswrapper[4881]: E0121 11:45:00.156481 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d42aa8e-f444-4984-a8d7-7a207bf7c53f" containerName="extract-utilities"
Jan 21 11:45:00 crc kubenswrapper[4881]: I0121 11:45:00.156489 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d42aa8e-f444-4984-a8d7-7a207bf7c53f" containerName="extract-utilities"
Jan 21 11:45:00 crc kubenswrapper[4881]: I0121 11:45:00.156709 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="a0e7b801-0b42-4a0f-9d8a-6098f067d197" containerName="registry-server"
Jan 21 11:45:00 crc kubenswrapper[4881]: I0121 11:45:00.156729 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d42aa8e-f444-4984-a8d7-7a207bf7c53f" containerName="registry-server"
Jan 21 11:45:00 crc kubenswrapper[4881]: I0121 11:45:00.156763 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="31857b1b-0b5b-40a8-8706-9002ca7c878b" containerName="registry-server"
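Before admitting the new collect-profiles pod, the cpu and memory managers purge per-container state left over from the three marketplace pods deleted above, keyed by pod UID plus container name. A toy sketch of that map-keyed cleanup with invented types and illustrative values (the real logic is in cpu_manager.go:410 and memory_manager.go:354):

    package main

    import "fmt"

    // key mirrors how the log identifies stale entries: pod UID plus container name.
    type key struct{ podUID, containerName string }

    // removeStaleState drops assignments whose pod no longer exists on the node.
    func removeStaleState(assignments map[key]string, activePods map[string]bool) {
    	for k := range assignments { // deleting during range is safe in Go
    		if !activePods[k.podUID] {
    			fmt.Printf("RemoveStaleState: removing container podUID=%q containerName=%q\n",
    				k.podUID, k.containerName)
    			delete(assignments, k)
    		}
    	}
    }

    func main() {
    	assignments := map[key]string{
    		{"31857b1b-0b5b-40a8-8706-9002ca7c878b", "registry-server"}:  "cpus 0-1", // deleted pod
    		{"49387e54-5709-46bd-9f76-cd79369d9abe", "collect-profiles"}: "cpus 2",   // still active
    	}
    	removeStaleState(assignments, map[string]bool{"49387e54-5709-46bd-9f76-cd79369d9abe": true})
    	fmt.Println(len(assignments)) // 1
    }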
pod="openshift-operator-lifecycle-manager/collect-profiles-29483265-wh6tk" Jan 21 11:45:00 crc kubenswrapper[4881]: I0121 11:45:00.305289 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/49387e54-5709-46bd-9f76-cd79369d9abe-config-volume\") pod \"collect-profiles-29483265-wh6tk\" (UID: \"49387e54-5709-46bd-9f76-cd79369d9abe\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483265-wh6tk" Jan 21 11:45:00 crc kubenswrapper[4881]: I0121 11:45:00.305566 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/49387e54-5709-46bd-9f76-cd79369d9abe-secret-volume\") pod \"collect-profiles-29483265-wh6tk\" (UID: \"49387e54-5709-46bd-9f76-cd79369d9abe\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483265-wh6tk" Jan 21 11:45:00 crc kubenswrapper[4881]: I0121 11:45:00.306512 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sn9xt\" (UniqueName: \"kubernetes.io/projected/49387e54-5709-46bd-9f76-cd79369d9abe-kube-api-access-sn9xt\") pod \"collect-profiles-29483265-wh6tk\" (UID: \"49387e54-5709-46bd-9f76-cd79369d9abe\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483265-wh6tk" Jan 21 11:45:00 crc kubenswrapper[4881]: I0121 11:45:00.312602 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/49387e54-5709-46bd-9f76-cd79369d9abe-secret-volume\") pod \"collect-profiles-29483265-wh6tk\" (UID: \"49387e54-5709-46bd-9f76-cd79369d9abe\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483265-wh6tk" Jan 21 11:45:00 crc kubenswrapper[4881]: I0121 11:45:00.332671 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sn9xt\" (UniqueName: \"kubernetes.io/projected/49387e54-5709-46bd-9f76-cd79369d9abe-kube-api-access-sn9xt\") pod \"collect-profiles-29483265-wh6tk\" (UID: \"49387e54-5709-46bd-9f76-cd79369d9abe\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483265-wh6tk" Jan 21 11:45:00 crc kubenswrapper[4881]: I0121 11:45:00.492038 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483265-wh6tk" Jan 21 11:45:01 crc kubenswrapper[4881]: I0121 11:45:01.000398 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483265-wh6tk"] Jan 21 11:45:01 crc kubenswrapper[4881]: W0121 11:45:01.000966 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod49387e54_5709_46bd_9f76_cd79369d9abe.slice/crio-ac4b1cff99fea5fc8da2ef32e7c40ee41c09df8b42122cd3aa4373de9aed23c2 WatchSource:0}: Error finding container ac4b1cff99fea5fc8da2ef32e7c40ee41c09df8b42122cd3aa4373de9aed23c2: Status 404 returned error can't find the container with id ac4b1cff99fea5fc8da2ef32e7c40ee41c09df8b42122cd3aa4373de9aed23c2 Jan 21 11:45:01 crc kubenswrapper[4881]: I0121 11:45:01.096507 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483265-wh6tk" event={"ID":"49387e54-5709-46bd-9f76-cd79369d9abe","Type":"ContainerStarted","Data":"ac4b1cff99fea5fc8da2ef32e7c40ee41c09df8b42122cd3aa4373de9aed23c2"} Jan 21 11:45:01 crc kubenswrapper[4881]: I0121 11:45:01.107365 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" event={"ID":"3687b313-1df2-4274-80db-8c758b51bf2d","Type":"ContainerStarted","Data":"d8ad2c92af12e692917f97f6e76a6242fb5c00dc2c38dcea5e2ce39cd5dfeb57"} Jan 21 11:45:02 crc kubenswrapper[4881]: I0121 11:45:02.122006 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483265-wh6tk" event={"ID":"49387e54-5709-46bd-9f76-cd79369d9abe","Type":"ContainerDied","Data":"03feba2a29229654c706a38fc1bff6c4df03df1eca6406a125ce3ee72913286b"} Jan 21 11:45:02 crc kubenswrapper[4881]: I0121 11:45:02.123120 4881 generic.go:334] "Generic (PLEG): container finished" podID="49387e54-5709-46bd-9f76-cd79369d9abe" containerID="03feba2a29229654c706a38fc1bff6c4df03df1eca6406a125ce3ee72913286b" exitCode=0 Jan 21 11:45:03 crc kubenswrapper[4881]: I0121 11:45:03.166277 4881 generic.go:334] "Generic (PLEG): container finished" podID="38ac646b-177b-488d-853b-e04b22f267a4" containerID="2e34d3926c62f8cffffc796ec975008bf3545972abcc913f207930e4451b062e" exitCode=0 Jan 21 11:45:03 crc kubenswrapper[4881]: I0121 11:45:03.166369 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-sxlnq" event={"ID":"38ac646b-177b-488d-853b-e04b22f267a4","Type":"ContainerDied","Data":"2e34d3926c62f8cffffc796ec975008bf3545972abcc913f207930e4451b062e"} Jan 21 11:45:03 crc kubenswrapper[4881]: I0121 11:45:03.496584 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483265-wh6tk" Jan 21 11:45:03 crc kubenswrapper[4881]: I0121 11:45:03.698424 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sn9xt\" (UniqueName: \"kubernetes.io/projected/49387e54-5709-46bd-9f76-cd79369d9abe-kube-api-access-sn9xt\") pod \"49387e54-5709-46bd-9f76-cd79369d9abe\" (UID: \"49387e54-5709-46bd-9f76-cd79369d9abe\") " Jan 21 11:45:03 crc kubenswrapper[4881]: I0121 11:45:03.698705 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/49387e54-5709-46bd-9f76-cd79369d9abe-config-volume\") pod \"49387e54-5709-46bd-9f76-cd79369d9abe\" (UID: \"49387e54-5709-46bd-9f76-cd79369d9abe\") " Jan 21 11:45:03 crc kubenswrapper[4881]: I0121 11:45:03.699406 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49387e54-5709-46bd-9f76-cd79369d9abe-config-volume" (OuterVolumeSpecName: "config-volume") pod "49387e54-5709-46bd-9f76-cd79369d9abe" (UID: "49387e54-5709-46bd-9f76-cd79369d9abe"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:45:03 crc kubenswrapper[4881]: I0121 11:45:03.699477 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/49387e54-5709-46bd-9f76-cd79369d9abe-secret-volume\") pod \"49387e54-5709-46bd-9f76-cd79369d9abe\" (UID: \"49387e54-5709-46bd-9f76-cd79369d9abe\") " Jan 21 11:45:03 crc kubenswrapper[4881]: I0121 11:45:03.700209 4881 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/49387e54-5709-46bd-9f76-cd79369d9abe-config-volume\") on node \"crc\" DevicePath \"\"" Jan 21 11:45:03 crc kubenswrapper[4881]: I0121 11:45:03.704345 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49387e54-5709-46bd-9f76-cd79369d9abe-kube-api-access-sn9xt" (OuterVolumeSpecName: "kube-api-access-sn9xt") pod "49387e54-5709-46bd-9f76-cd79369d9abe" (UID: "49387e54-5709-46bd-9f76-cd79369d9abe"). InnerVolumeSpecName "kube-api-access-sn9xt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:45:03 crc kubenswrapper[4881]: I0121 11:45:03.704770 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49387e54-5709-46bd-9f76-cd79369d9abe-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "49387e54-5709-46bd-9f76-cd79369d9abe" (UID: "49387e54-5709-46bd-9f76-cd79369d9abe"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:45:03 crc kubenswrapper[4881]: I0121 11:45:03.801262 4881 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/49387e54-5709-46bd-9f76-cd79369d9abe-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 21 11:45:03 crc kubenswrapper[4881]: I0121 11:45:03.801294 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sn9xt\" (UniqueName: \"kubernetes.io/projected/49387e54-5709-46bd-9f76-cd79369d9abe-kube-api-access-sn9xt\") on node \"crc\" DevicePath \"\"" Jan 21 11:45:04 crc kubenswrapper[4881]: I0121 11:45:04.184163 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483265-wh6tk" Jan 21 11:45:04 crc kubenswrapper[4881]: I0121 11:45:04.184186 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483265-wh6tk" event={"ID":"49387e54-5709-46bd-9f76-cd79369d9abe","Type":"ContainerDied","Data":"ac4b1cff99fea5fc8da2ef32e7c40ee41c09df8b42122cd3aa4373de9aed23c2"} Jan 21 11:45:04 crc kubenswrapper[4881]: I0121 11:45:04.184223 4881 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ac4b1cff99fea5fc8da2ef32e7c40ee41c09df8b42122cd3aa4373de9aed23c2" Jan 21 11:45:04 crc kubenswrapper[4881]: I0121 11:45:04.585042 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483220-2jmrb"] Jan 21 11:45:04 crc kubenswrapper[4881]: I0121 11:45:04.600114 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483220-2jmrb"] Jan 21 11:45:04 crc kubenswrapper[4881]: I0121 11:45:04.718508 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-sxlnq" Jan 21 11:45:04 crc kubenswrapper[4881]: I0121 11:45:04.832418 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/38ac646b-177b-488d-853b-e04b22f267a4-libvirt-secret-0\") pod \"38ac646b-177b-488d-853b-e04b22f267a4\" (UID: \"38ac646b-177b-488d-853b-e04b22f267a4\") " Jan 21 11:45:04 crc kubenswrapper[4881]: I0121 11:45:04.832538 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38ac646b-177b-488d-853b-e04b22f267a4-libvirt-combined-ca-bundle\") pod \"38ac646b-177b-488d-853b-e04b22f267a4\" (UID: \"38ac646b-177b-488d-853b-e04b22f267a4\") " Jan 21 11:45:04 crc kubenswrapper[4881]: I0121 11:45:04.832599 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ptwlv\" (UniqueName: \"kubernetes.io/projected/38ac646b-177b-488d-853b-e04b22f267a4-kube-api-access-ptwlv\") pod \"38ac646b-177b-488d-853b-e04b22f267a4\" (UID: \"38ac646b-177b-488d-853b-e04b22f267a4\") " Jan 21 11:45:04 crc kubenswrapper[4881]: I0121 11:45:04.832618 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/38ac646b-177b-488d-853b-e04b22f267a4-inventory\") pod \"38ac646b-177b-488d-853b-e04b22f267a4\" (UID: \"38ac646b-177b-488d-853b-e04b22f267a4\") " Jan 21 11:45:04 crc kubenswrapper[4881]: I0121 11:45:04.832666 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/38ac646b-177b-488d-853b-e04b22f267a4-ssh-key-openstack-edpm-ipam\") pod \"38ac646b-177b-488d-853b-e04b22f267a4\" (UID: \"38ac646b-177b-488d-853b-e04b22f267a4\") " Jan 21 11:45:04 crc kubenswrapper[4881]: I0121 11:45:04.838451 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38ac646b-177b-488d-853b-e04b22f267a4-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "38ac646b-177b-488d-853b-e04b22f267a4" (UID: "38ac646b-177b-488d-853b-e04b22f267a4"). InnerVolumeSpecName "libvirt-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:45:04 crc kubenswrapper[4881]: I0121 11:45:04.839189 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/38ac646b-177b-488d-853b-e04b22f267a4-kube-api-access-ptwlv" (OuterVolumeSpecName: "kube-api-access-ptwlv") pod "38ac646b-177b-488d-853b-e04b22f267a4" (UID: "38ac646b-177b-488d-853b-e04b22f267a4"). InnerVolumeSpecName "kube-api-access-ptwlv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:45:04 crc kubenswrapper[4881]: I0121 11:45:04.863852 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38ac646b-177b-488d-853b-e04b22f267a4-inventory" (OuterVolumeSpecName: "inventory") pod "38ac646b-177b-488d-853b-e04b22f267a4" (UID: "38ac646b-177b-488d-853b-e04b22f267a4"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:45:04 crc kubenswrapper[4881]: I0121 11:45:04.864311 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38ac646b-177b-488d-853b-e04b22f267a4-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "38ac646b-177b-488d-853b-e04b22f267a4" (UID: "38ac646b-177b-488d-853b-e04b22f267a4"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:45:04 crc kubenswrapper[4881]: I0121 11:45:04.875564 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38ac646b-177b-488d-853b-e04b22f267a4-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "38ac646b-177b-488d-853b-e04b22f267a4" (UID: "38ac646b-177b-488d-853b-e04b22f267a4"). InnerVolumeSpecName "libvirt-secret-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:45:04 crc kubenswrapper[4881]: I0121 11:45:04.935709 4881 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/38ac646b-177b-488d-853b-e04b22f267a4-libvirt-secret-0\") on node \"crc\" DevicePath \"\"" Jan 21 11:45:04 crc kubenswrapper[4881]: I0121 11:45:04.935747 4881 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38ac646b-177b-488d-853b-e04b22f267a4-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 11:45:04 crc kubenswrapper[4881]: I0121 11:45:04.935763 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ptwlv\" (UniqueName: \"kubernetes.io/projected/38ac646b-177b-488d-853b-e04b22f267a4-kube-api-access-ptwlv\") on node \"crc\" DevicePath \"\"" Jan 21 11:45:04 crc kubenswrapper[4881]: I0121 11:45:04.935775 4881 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/38ac646b-177b-488d-853b-e04b22f267a4-inventory\") on node \"crc\" DevicePath \"\"" Jan 21 11:45:04 crc kubenswrapper[4881]: I0121 11:45:04.935808 4881 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/38ac646b-177b-488d-853b-e04b22f267a4-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 21 11:45:05 crc kubenswrapper[4881]: I0121 11:45:05.199751 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-sxlnq" event={"ID":"38ac646b-177b-488d-853b-e04b22f267a4","Type":"ContainerDied","Data":"dc79678ab6ba1932de7e4e05e7465b949910c18ea04deeee070bef7c91f2f1e4"} Jan 21 11:45:05 crc kubenswrapper[4881]: I0121 11:45:05.199839 4881 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dc79678ab6ba1932de7e4e05e7465b949910c18ea04deeee070bef7c91f2f1e4" Jan 21 11:45:05 crc kubenswrapper[4881]: I0121 11:45:05.199886 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-sxlnq" Jan 21 11:45:05 crc kubenswrapper[4881]: I0121 11:45:05.275027 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-t495m"] Jan 21 11:45:05 crc kubenswrapper[4881]: E0121 11:45:05.275504 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38ac646b-177b-488d-853b-e04b22f267a4" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Jan 21 11:45:05 crc kubenswrapper[4881]: I0121 11:45:05.275531 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="38ac646b-177b-488d-853b-e04b22f267a4" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Jan 21 11:45:05 crc kubenswrapper[4881]: E0121 11:45:05.275591 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49387e54-5709-46bd-9f76-cd79369d9abe" containerName="collect-profiles" Jan 21 11:45:05 crc kubenswrapper[4881]: I0121 11:45:05.275600 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="49387e54-5709-46bd-9f76-cd79369d9abe" containerName="collect-profiles" Jan 21 11:45:05 crc kubenswrapper[4881]: I0121 11:45:05.275948 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="38ac646b-177b-488d-853b-e04b22f267a4" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Jan 21 11:45:05 crc kubenswrapper[4881]: I0121 11:45:05.276063 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="49387e54-5709-46bd-9f76-cd79369d9abe" containerName="collect-profiles" Jan 21 11:45:05 crc kubenswrapper[4881]: I0121 11:45:05.277282 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-t495m" Jan 21 11:45:05 crc kubenswrapper[4881]: I0121 11:45:05.281405 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 21 11:45:05 crc kubenswrapper[4881]: I0121 11:45:05.281620 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 21 11:45:05 crc kubenswrapper[4881]: I0121 11:45:05.281669 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key" Jan 21 11:45:05 crc kubenswrapper[4881]: I0121 11:45:05.281507 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-fd7zg" Jan 21 11:45:05 crc kubenswrapper[4881]: I0121 11:45:05.281625 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config" Jan 21 11:45:05 crc kubenswrapper[4881]: I0121 11:45:05.281859 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 21 11:45:05 crc kubenswrapper[4881]: I0121 11:45:05.281532 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-extra-config" Jan 21 11:45:05 crc kubenswrapper[4881]: I0121 11:45:05.303544 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-t495m"] Jan 21 11:45:05 crc kubenswrapper[4881]: I0121 11:45:05.332861 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="65c09a3a-6389-443c-888b-fe83557dd508" path="/var/lib/kubelet/pods/65c09a3a-6389-443c-888b-fe83557dd508/volumes" Jan 21 11:45:05 crc kubenswrapper[4881]: I0121 11:45:05.445269 4881 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/bfc5a115-aedb-4364-8b0d-59b8379346cb-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-t495m\" (UID: \"bfc5a115-aedb-4364-8b0d-59b8379346cb\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-t495m" Jan 21 11:45:05 crc kubenswrapper[4881]: I0121 11:45:05.445368 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/bfc5a115-aedb-4364-8b0d-59b8379346cb-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-t495m\" (UID: \"bfc5a115-aedb-4364-8b0d-59b8379346cb\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-t495m" Jan 21 11:45:05 crc kubenswrapper[4881]: I0121 11:45:05.445435 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bfc5a115-aedb-4364-8b0d-59b8379346cb-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-t495m\" (UID: \"bfc5a115-aedb-4364-8b0d-59b8379346cb\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-t495m" Jan 21 11:45:05 crc kubenswrapper[4881]: I0121 11:45:05.445461 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bfc5a115-aedb-4364-8b0d-59b8379346cb-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-t495m\" (UID: \"bfc5a115-aedb-4364-8b0d-59b8379346cb\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-t495m" Jan 21 11:45:05 crc kubenswrapper[4881]: I0121 11:45:05.445500 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/bfc5a115-aedb-4364-8b0d-59b8379346cb-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-t495m\" (UID: \"bfc5a115-aedb-4364-8b0d-59b8379346cb\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-t495m" Jan 21 11:45:05 crc kubenswrapper[4881]: I0121 11:45:05.445532 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hbfq5\" (UniqueName: \"kubernetes.io/projected/bfc5a115-aedb-4364-8b0d-59b8379346cb-kube-api-access-hbfq5\") pod \"nova-edpm-deployment-openstack-edpm-ipam-t495m\" (UID: \"bfc5a115-aedb-4364-8b0d-59b8379346cb\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-t495m" Jan 21 11:45:05 crc kubenswrapper[4881]: I0121 11:45:05.445565 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bfc5a115-aedb-4364-8b0d-59b8379346cb-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-t495m\" (UID: \"bfc5a115-aedb-4364-8b0d-59b8379346cb\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-t495m" Jan 21 11:45:05 crc kubenswrapper[4881]: I0121 11:45:05.445605 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/bfc5a115-aedb-4364-8b0d-59b8379346cb-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-t495m\" (UID: \"bfc5a115-aedb-4364-8b0d-59b8379346cb\") " 
pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-t495m" Jan 21 11:45:05 crc kubenswrapper[4881]: I0121 11:45:05.445677 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/bfc5a115-aedb-4364-8b0d-59b8379346cb-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-t495m\" (UID: \"bfc5a115-aedb-4364-8b0d-59b8379346cb\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-t495m" Jan 21 11:45:05 crc kubenswrapper[4881]: I0121 11:45:05.548142 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/bfc5a115-aedb-4364-8b0d-59b8379346cb-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-t495m\" (UID: \"bfc5a115-aedb-4364-8b0d-59b8379346cb\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-t495m" Jan 21 11:45:05 crc kubenswrapper[4881]: I0121 11:45:05.548829 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/bfc5a115-aedb-4364-8b0d-59b8379346cb-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-t495m\" (UID: \"bfc5a115-aedb-4364-8b0d-59b8379346cb\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-t495m" Jan 21 11:45:05 crc kubenswrapper[4881]: I0121 11:45:05.548954 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bfc5a115-aedb-4364-8b0d-59b8379346cb-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-t495m\" (UID: \"bfc5a115-aedb-4364-8b0d-59b8379346cb\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-t495m" Jan 21 11:45:05 crc kubenswrapper[4881]: I0121 11:45:05.549071 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bfc5a115-aedb-4364-8b0d-59b8379346cb-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-t495m\" (UID: \"bfc5a115-aedb-4364-8b0d-59b8379346cb\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-t495m" Jan 21 11:45:05 crc kubenswrapper[4881]: I0121 11:45:05.549180 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/bfc5a115-aedb-4364-8b0d-59b8379346cb-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-t495m\" (UID: \"bfc5a115-aedb-4364-8b0d-59b8379346cb\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-t495m" Jan 21 11:45:05 crc kubenswrapper[4881]: I0121 11:45:05.549282 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hbfq5\" (UniqueName: \"kubernetes.io/projected/bfc5a115-aedb-4364-8b0d-59b8379346cb-kube-api-access-hbfq5\") pod \"nova-edpm-deployment-openstack-edpm-ipam-t495m\" (UID: \"bfc5a115-aedb-4364-8b0d-59b8379346cb\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-t495m" Jan 21 11:45:05 crc kubenswrapper[4881]: I0121 11:45:05.549432 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bfc5a115-aedb-4364-8b0d-59b8379346cb-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-t495m\" (UID: \"bfc5a115-aedb-4364-8b0d-59b8379346cb\") " 
pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-t495m" Jan 21 11:45:05 crc kubenswrapper[4881]: I0121 11:45:05.549562 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/bfc5a115-aedb-4364-8b0d-59b8379346cb-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-t495m\" (UID: \"bfc5a115-aedb-4364-8b0d-59b8379346cb\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-t495m" Jan 21 11:45:05 crc kubenswrapper[4881]: I0121 11:45:05.549735 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/bfc5a115-aedb-4364-8b0d-59b8379346cb-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-t495m\" (UID: \"bfc5a115-aedb-4364-8b0d-59b8379346cb\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-t495m" Jan 21 11:45:05 crc kubenswrapper[4881]: I0121 11:45:05.551090 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/bfc5a115-aedb-4364-8b0d-59b8379346cb-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-t495m\" (UID: \"bfc5a115-aedb-4364-8b0d-59b8379346cb\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-t495m" Jan 21 11:45:05 crc kubenswrapper[4881]: I0121 11:45:05.554218 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bfc5a115-aedb-4364-8b0d-59b8379346cb-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-t495m\" (UID: \"bfc5a115-aedb-4364-8b0d-59b8379346cb\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-t495m" Jan 21 11:45:05 crc kubenswrapper[4881]: I0121 11:45:05.554936 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/bfc5a115-aedb-4364-8b0d-59b8379346cb-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-t495m\" (UID: \"bfc5a115-aedb-4364-8b0d-59b8379346cb\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-t495m" Jan 21 11:45:05 crc kubenswrapper[4881]: I0121 11:45:05.555167 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/bfc5a115-aedb-4364-8b0d-59b8379346cb-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-t495m\" (UID: \"bfc5a115-aedb-4364-8b0d-59b8379346cb\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-t495m" Jan 21 11:45:05 crc kubenswrapper[4881]: I0121 11:45:05.555498 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/bfc5a115-aedb-4364-8b0d-59b8379346cb-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-t495m\" (UID: \"bfc5a115-aedb-4364-8b0d-59b8379346cb\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-t495m" Jan 21 11:45:05 crc kubenswrapper[4881]: I0121 11:45:05.555503 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/bfc5a115-aedb-4364-8b0d-59b8379346cb-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-t495m\" (UID: \"bfc5a115-aedb-4364-8b0d-59b8379346cb\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-t495m" Jan 21 11:45:05 crc 
kubenswrapper[4881]: I0121 11:45:05.556767 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bfc5a115-aedb-4364-8b0d-59b8379346cb-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-t495m\" (UID: \"bfc5a115-aedb-4364-8b0d-59b8379346cb\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-t495m" Jan 21 11:45:05 crc kubenswrapper[4881]: I0121 11:45:05.562229 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bfc5a115-aedb-4364-8b0d-59b8379346cb-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-t495m\" (UID: \"bfc5a115-aedb-4364-8b0d-59b8379346cb\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-t495m" Jan 21 11:45:05 crc kubenswrapper[4881]: I0121 11:45:05.573513 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hbfq5\" (UniqueName: \"kubernetes.io/projected/bfc5a115-aedb-4364-8b0d-59b8379346cb-kube-api-access-hbfq5\") pod \"nova-edpm-deployment-openstack-edpm-ipam-t495m\" (UID: \"bfc5a115-aedb-4364-8b0d-59b8379346cb\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-t495m" Jan 21 11:45:05 crc kubenswrapper[4881]: I0121 11:45:05.611084 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-t495m" Jan 21 11:45:06 crc kubenswrapper[4881]: I0121 11:45:06.205070 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-t495m"] Jan 21 11:45:07 crc kubenswrapper[4881]: I0121 11:45:07.245954 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-t495m" event={"ID":"bfc5a115-aedb-4364-8b0d-59b8379346cb","Type":"ContainerStarted","Data":"88686bced315f81283d95e59e4f2403c8b2d8fed5959e3b75d3616a3313db4e6"} Jan 21 11:45:07 crc kubenswrapper[4881]: I0121 11:45:07.247388 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-t495m" event={"ID":"bfc5a115-aedb-4364-8b0d-59b8379346cb","Type":"ContainerStarted","Data":"e961a6307da8e32005ab966a01a4319c67608126400b0a7e33b34ae83eadc3c1"} Jan 21 11:45:07 crc kubenswrapper[4881]: I0121 11:45:07.271805 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-t495m" podStartSLOduration=1.812258646 podStartE2EDuration="2.271761279s" podCreationTimestamp="2026-01-21 11:45:05 +0000 UTC" firstStartedPulling="2026-01-21 11:45:06.21449066 +0000 UTC m=+2893.474447129" lastFinishedPulling="2026-01-21 11:45:06.673993293 +0000 UTC m=+2893.933949762" observedRunningTime="2026-01-21 11:45:07.263383103 +0000 UTC m=+2894.523339572" watchObservedRunningTime="2026-01-21 11:45:07.271761279 +0000 UTC m=+2894.531717758" Jan 21 11:45:31 crc kubenswrapper[4881]: I0121 11:45:31.761204 4881 scope.go:117] "RemoveContainer" containerID="506baee9263f2e28d3f1ef1ef645da28ead83f7c212d5255ebc44d13c43d15f7" Jan 21 11:47:29 crc kubenswrapper[4881]: I0121 11:47:29.851540 4881 patch_prober.go:28] interesting pod/machine-config-daemon-fb4fr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 11:47:29 crc kubenswrapper[4881]: 
I0121 11:47:29.852204 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 11:47:48 crc kubenswrapper[4881]: I0121 11:47:48.204105 4881 generic.go:334] "Generic (PLEG): container finished" podID="bfc5a115-aedb-4364-8b0d-59b8379346cb" containerID="88686bced315f81283d95e59e4f2403c8b2d8fed5959e3b75d3616a3313db4e6" exitCode=0 Jan 21 11:47:48 crc kubenswrapper[4881]: I0121 11:47:48.204192 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-t495m" event={"ID":"bfc5a115-aedb-4364-8b0d-59b8379346cb","Type":"ContainerDied","Data":"88686bced315f81283d95e59e4f2403c8b2d8fed5959e3b75d3616a3313db4e6"} Jan 21 11:47:49 crc kubenswrapper[4881]: I0121 11:47:49.992198 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-t495m" Jan 21 11:47:50 crc kubenswrapper[4881]: I0121 11:47:50.084481 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/bfc5a115-aedb-4364-8b0d-59b8379346cb-nova-extra-config-0\") pod \"bfc5a115-aedb-4364-8b0d-59b8379346cb\" (UID: \"bfc5a115-aedb-4364-8b0d-59b8379346cb\") " Jan 21 11:47:50 crc kubenswrapper[4881]: I0121 11:47:50.084539 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bfc5a115-aedb-4364-8b0d-59b8379346cb-inventory\") pod \"bfc5a115-aedb-4364-8b0d-59b8379346cb\" (UID: \"bfc5a115-aedb-4364-8b0d-59b8379346cb\") " Jan 21 11:47:50 crc kubenswrapper[4881]: I0121 11:47:50.084639 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bfc5a115-aedb-4364-8b0d-59b8379346cb-ssh-key-openstack-edpm-ipam\") pod \"bfc5a115-aedb-4364-8b0d-59b8379346cb\" (UID: \"bfc5a115-aedb-4364-8b0d-59b8379346cb\") " Jan 21 11:47:50 crc kubenswrapper[4881]: I0121 11:47:50.085441 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hbfq5\" (UniqueName: \"kubernetes.io/projected/bfc5a115-aedb-4364-8b0d-59b8379346cb-kube-api-access-hbfq5\") pod \"bfc5a115-aedb-4364-8b0d-59b8379346cb\" (UID: \"bfc5a115-aedb-4364-8b0d-59b8379346cb\") " Jan 21 11:47:50 crc kubenswrapper[4881]: I0121 11:47:50.086213 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bfc5a115-aedb-4364-8b0d-59b8379346cb-nova-combined-ca-bundle\") pod \"bfc5a115-aedb-4364-8b0d-59b8379346cb\" (UID: \"bfc5a115-aedb-4364-8b0d-59b8379346cb\") " Jan 21 11:47:50 crc kubenswrapper[4881]: I0121 11:47:50.086270 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/bfc5a115-aedb-4364-8b0d-59b8379346cb-nova-migration-ssh-key-0\") pod \"bfc5a115-aedb-4364-8b0d-59b8379346cb\" (UID: \"bfc5a115-aedb-4364-8b0d-59b8379346cb\") " Jan 21 11:47:50 crc kubenswrapper[4881]: I0121 11:47:50.086376 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: 
\"kubernetes.io/secret/bfc5a115-aedb-4364-8b0d-59b8379346cb-nova-cell1-compute-config-1\") pod \"bfc5a115-aedb-4364-8b0d-59b8379346cb\" (UID: \"bfc5a115-aedb-4364-8b0d-59b8379346cb\") " Jan 21 11:47:50 crc kubenswrapper[4881]: I0121 11:47:50.086414 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/bfc5a115-aedb-4364-8b0d-59b8379346cb-nova-migration-ssh-key-1\") pod \"bfc5a115-aedb-4364-8b0d-59b8379346cb\" (UID: \"bfc5a115-aedb-4364-8b0d-59b8379346cb\") " Jan 21 11:47:50 crc kubenswrapper[4881]: I0121 11:47:50.086507 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/bfc5a115-aedb-4364-8b0d-59b8379346cb-nova-cell1-compute-config-0\") pod \"bfc5a115-aedb-4364-8b0d-59b8379346cb\" (UID: \"bfc5a115-aedb-4364-8b0d-59b8379346cb\") " Jan 21 11:47:50 crc kubenswrapper[4881]: I0121 11:47:50.091868 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bfc5a115-aedb-4364-8b0d-59b8379346cb-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "bfc5a115-aedb-4364-8b0d-59b8379346cb" (UID: "bfc5a115-aedb-4364-8b0d-59b8379346cb"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:47:50 crc kubenswrapper[4881]: I0121 11:47:50.093833 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bfc5a115-aedb-4364-8b0d-59b8379346cb-kube-api-access-hbfq5" (OuterVolumeSpecName: "kube-api-access-hbfq5") pod "bfc5a115-aedb-4364-8b0d-59b8379346cb" (UID: "bfc5a115-aedb-4364-8b0d-59b8379346cb"). InnerVolumeSpecName "kube-api-access-hbfq5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:47:50 crc kubenswrapper[4881]: I0121 11:47:50.115127 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bfc5a115-aedb-4364-8b0d-59b8379346cb-nova-extra-config-0" (OuterVolumeSpecName: "nova-extra-config-0") pod "bfc5a115-aedb-4364-8b0d-59b8379346cb" (UID: "bfc5a115-aedb-4364-8b0d-59b8379346cb"). InnerVolumeSpecName "nova-extra-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:47:50 crc kubenswrapper[4881]: I0121 11:47:50.120218 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bfc5a115-aedb-4364-8b0d-59b8379346cb-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "bfc5a115-aedb-4364-8b0d-59b8379346cb" (UID: "bfc5a115-aedb-4364-8b0d-59b8379346cb"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:47:50 crc kubenswrapper[4881]: I0121 11:47:50.120326 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bfc5a115-aedb-4364-8b0d-59b8379346cb-inventory" (OuterVolumeSpecName: "inventory") pod "bfc5a115-aedb-4364-8b0d-59b8379346cb" (UID: "bfc5a115-aedb-4364-8b0d-59b8379346cb"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:47:50 crc kubenswrapper[4881]: I0121 11:47:50.121568 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bfc5a115-aedb-4364-8b0d-59b8379346cb-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "bfc5a115-aedb-4364-8b0d-59b8379346cb" (UID: "bfc5a115-aedb-4364-8b0d-59b8379346cb"). InnerVolumeSpecName "nova-cell1-compute-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:47:50 crc kubenswrapper[4881]: I0121 11:47:50.123731 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bfc5a115-aedb-4364-8b0d-59b8379346cb-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "bfc5a115-aedb-4364-8b0d-59b8379346cb" (UID: "bfc5a115-aedb-4364-8b0d-59b8379346cb"). InnerVolumeSpecName "nova-migration-ssh-key-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:47:50 crc kubenswrapper[4881]: I0121 11:47:50.130959 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bfc5a115-aedb-4364-8b0d-59b8379346cb-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "bfc5a115-aedb-4364-8b0d-59b8379346cb" (UID: "bfc5a115-aedb-4364-8b0d-59b8379346cb"). InnerVolumeSpecName "nova-migration-ssh-key-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:47:50 crc kubenswrapper[4881]: I0121 11:47:50.132433 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bfc5a115-aedb-4364-8b0d-59b8379346cb-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "bfc5a115-aedb-4364-8b0d-59b8379346cb" (UID: "bfc5a115-aedb-4364-8b0d-59b8379346cb"). InnerVolumeSpecName "nova-cell1-compute-config-1". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:47:50 crc kubenswrapper[4881]: I0121 11:47:50.189035 4881 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bfc5a115-aedb-4364-8b0d-59b8379346cb-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 21 11:47:50 crc kubenswrapper[4881]: I0121 11:47:50.189077 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hbfq5\" (UniqueName: \"kubernetes.io/projected/bfc5a115-aedb-4364-8b0d-59b8379346cb-kube-api-access-hbfq5\") on node \"crc\" DevicePath \"\"" Jan 21 11:47:50 crc kubenswrapper[4881]: I0121 11:47:50.189086 4881 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bfc5a115-aedb-4364-8b0d-59b8379346cb-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 11:47:50 crc kubenswrapper[4881]: I0121 11:47:50.189095 4881 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/bfc5a115-aedb-4364-8b0d-59b8379346cb-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Jan 21 11:47:50 crc kubenswrapper[4881]: I0121 11:47:50.189134 4881 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/bfc5a115-aedb-4364-8b0d-59b8379346cb-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\"" Jan 21 11:47:50 crc kubenswrapper[4881]: I0121 11:47:50.189145 4881 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/bfc5a115-aedb-4364-8b0d-59b8379346cb-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\"" Jan 21 11:47:50 crc kubenswrapper[4881]: I0121 11:47:50.189154 4881 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/bfc5a115-aedb-4364-8b0d-59b8379346cb-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\"" Jan 21 11:47:50 crc kubenswrapper[4881]: I0121 11:47:50.189164 4881 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bfc5a115-aedb-4364-8b0d-59b8379346cb-inventory\") on node \"crc\" DevicePath \"\"" Jan 21 11:47:50 crc kubenswrapper[4881]: I0121 11:47:50.189175 4881 reconciler_common.go:293] "Volume detached for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/bfc5a115-aedb-4364-8b0d-59b8379346cb-nova-extra-config-0\") on node \"crc\" DevicePath \"\"" Jan 21 11:47:50 crc kubenswrapper[4881]: I0121 11:47:50.527278 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-t495m" event={"ID":"bfc5a115-aedb-4364-8b0d-59b8379346cb","Type":"ContainerDied","Data":"e961a6307da8e32005ab966a01a4319c67608126400b0a7e33b34ae83eadc3c1"} Jan 21 11:47:50 crc kubenswrapper[4881]: I0121 11:47:50.527571 4881 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e961a6307da8e32005ab966a01a4319c67608126400b0a7e33b34ae83eadc3c1" Jan 21 11:47:50 crc kubenswrapper[4881]: I0121 11:47:50.527372 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-t495m" Jan 21 11:47:50 crc kubenswrapper[4881]: I0121 11:47:50.613972 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hwcnr"] Jan 21 11:47:50 crc kubenswrapper[4881]: E0121 11:47:50.614578 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bfc5a115-aedb-4364-8b0d-59b8379346cb" containerName="nova-edpm-deployment-openstack-edpm-ipam" Jan 21 11:47:50 crc kubenswrapper[4881]: I0121 11:47:50.614603 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="bfc5a115-aedb-4364-8b0d-59b8379346cb" containerName="nova-edpm-deployment-openstack-edpm-ipam" Jan 21 11:47:50 crc kubenswrapper[4881]: I0121 11:47:50.614940 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="bfc5a115-aedb-4364-8b0d-59b8379346cb" containerName="nova-edpm-deployment-openstack-edpm-ipam" Jan 21 11:47:50 crc kubenswrapper[4881]: I0121 11:47:50.616236 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hwcnr" Jan 21 11:47:50 crc kubenswrapper[4881]: I0121 11:47:50.620033 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 21 11:47:50 crc kubenswrapper[4881]: I0121 11:47:50.620701 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-compute-config-data" Jan 21 11:47:50 crc kubenswrapper[4881]: I0121 11:47:50.620772 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-fd7zg" Jan 21 11:47:50 crc kubenswrapper[4881]: I0121 11:47:50.622633 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hwcnr"] Jan 21 11:47:50 crc kubenswrapper[4881]: I0121 11:47:50.624722 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 21 11:47:50 crc kubenswrapper[4881]: I0121 11:47:50.624987 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 21 11:47:50 crc kubenswrapper[4881]: I0121 11:47:50.802099 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l2kv7\" (UniqueName: \"kubernetes.io/projected/2f9f4763-a2f6-4558-82fa-be718012fc12-kube-api-access-l2kv7\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-hwcnr\" (UID: \"2f9f4763-a2f6-4558-82fa-be718012fc12\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hwcnr" Jan 21 11:47:50 crc kubenswrapper[4881]: I0121 11:47:50.802602 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2f9f4763-a2f6-4558-82fa-be718012fc12-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-hwcnr\" (UID: \"2f9f4763-a2f6-4558-82fa-be718012fc12\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hwcnr" Jan 21 11:47:50 crc kubenswrapper[4881]: I0121 11:47:50.802969 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/2f9f4763-a2f6-4558-82fa-be718012fc12-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-hwcnr\" (UID: 
\"2f9f4763-a2f6-4558-82fa-be718012fc12\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hwcnr" Jan 21 11:47:50 crc kubenswrapper[4881]: I0121 11:47:50.803315 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f9f4763-a2f6-4558-82fa-be718012fc12-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-hwcnr\" (UID: \"2f9f4763-a2f6-4558-82fa-be718012fc12\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hwcnr" Jan 21 11:47:50 crc kubenswrapper[4881]: I0121 11:47:50.803587 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/2f9f4763-a2f6-4558-82fa-be718012fc12-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-hwcnr\" (UID: \"2f9f4763-a2f6-4558-82fa-be718012fc12\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hwcnr" Jan 21 11:47:50 crc kubenswrapper[4881]: I0121 11:47:50.803945 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2f9f4763-a2f6-4558-82fa-be718012fc12-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-hwcnr\" (UID: \"2f9f4763-a2f6-4558-82fa-be718012fc12\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hwcnr" Jan 21 11:47:50 crc kubenswrapper[4881]: I0121 11:47:50.804130 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/2f9f4763-a2f6-4558-82fa-be718012fc12-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-hwcnr\" (UID: \"2f9f4763-a2f6-4558-82fa-be718012fc12\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hwcnr" Jan 21 11:47:50 crc kubenswrapper[4881]: I0121 11:47:50.906203 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2f9f4763-a2f6-4558-82fa-be718012fc12-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-hwcnr\" (UID: \"2f9f4763-a2f6-4558-82fa-be718012fc12\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hwcnr" Jan 21 11:47:50 crc kubenswrapper[4881]: I0121 11:47:50.906265 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/2f9f4763-a2f6-4558-82fa-be718012fc12-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-hwcnr\" (UID: \"2f9f4763-a2f6-4558-82fa-be718012fc12\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hwcnr" Jan 21 11:47:50 crc kubenswrapper[4881]: I0121 11:47:50.906313 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l2kv7\" (UniqueName: \"kubernetes.io/projected/2f9f4763-a2f6-4558-82fa-be718012fc12-kube-api-access-l2kv7\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-hwcnr\" (UID: \"2f9f4763-a2f6-4558-82fa-be718012fc12\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hwcnr" Jan 21 11:47:50 crc kubenswrapper[4881]: I0121 11:47:50.906355 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2f9f4763-a2f6-4558-82fa-be718012fc12-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-hwcnr\" (UID: \"2f9f4763-a2f6-4558-82fa-be718012fc12\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hwcnr" Jan 21 11:47:50 crc kubenswrapper[4881]: I0121 11:47:50.906458 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/2f9f4763-a2f6-4558-82fa-be718012fc12-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-hwcnr\" (UID: \"2f9f4763-a2f6-4558-82fa-be718012fc12\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hwcnr" Jan 21 11:47:50 crc kubenswrapper[4881]: I0121 11:47:50.906523 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f9f4763-a2f6-4558-82fa-be718012fc12-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-hwcnr\" (UID: \"2f9f4763-a2f6-4558-82fa-be718012fc12\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hwcnr" Jan 21 11:47:50 crc kubenswrapper[4881]: I0121 11:47:50.906545 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/2f9f4763-a2f6-4558-82fa-be718012fc12-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-hwcnr\" (UID: \"2f9f4763-a2f6-4558-82fa-be718012fc12\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hwcnr" Jan 21 11:47:50 crc kubenswrapper[4881]: I0121 11:47:50.911377 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2f9f4763-a2f6-4558-82fa-be718012fc12-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-hwcnr\" (UID: \"2f9f4763-a2f6-4558-82fa-be718012fc12\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hwcnr" Jan 21 11:47:50 crc kubenswrapper[4881]: I0121 11:47:50.912256 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2f9f4763-a2f6-4558-82fa-be718012fc12-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-hwcnr\" (UID: \"2f9f4763-a2f6-4558-82fa-be718012fc12\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hwcnr" Jan 21 11:47:50 crc kubenswrapper[4881]: I0121 11:47:50.914075 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f9f4763-a2f6-4558-82fa-be718012fc12-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-hwcnr\" (UID: \"2f9f4763-a2f6-4558-82fa-be718012fc12\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hwcnr" Jan 21 11:47:50 crc kubenswrapper[4881]: I0121 11:47:50.914737 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/2f9f4763-a2f6-4558-82fa-be718012fc12-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-hwcnr\" (UID: \"2f9f4763-a2f6-4558-82fa-be718012fc12\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hwcnr" Jan 21 11:47:50 crc kubenswrapper[4881]: I0121 11:47:50.915130 4881 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/2f9f4763-a2f6-4558-82fa-be718012fc12-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-hwcnr\" (UID: \"2f9f4763-a2f6-4558-82fa-be718012fc12\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hwcnr" Jan 21 11:47:50 crc kubenswrapper[4881]: I0121 11:47:50.916258 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/2f9f4763-a2f6-4558-82fa-be718012fc12-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-hwcnr\" (UID: \"2f9f4763-a2f6-4558-82fa-be718012fc12\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hwcnr" Jan 21 11:47:50 crc kubenswrapper[4881]: I0121 11:47:50.935799 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l2kv7\" (UniqueName: \"kubernetes.io/projected/2f9f4763-a2f6-4558-82fa-be718012fc12-kube-api-access-l2kv7\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-hwcnr\" (UID: \"2f9f4763-a2f6-4558-82fa-be718012fc12\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hwcnr" Jan 21 11:47:51 crc kubenswrapper[4881]: I0121 11:47:51.233325 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hwcnr" Jan 21 11:47:51 crc kubenswrapper[4881]: I0121 11:47:51.936681 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hwcnr"] Jan 21 11:47:52 crc kubenswrapper[4881]: I0121 11:47:52.586521 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hwcnr" event={"ID":"2f9f4763-a2f6-4558-82fa-be718012fc12","Type":"ContainerStarted","Data":"2bd3402b9e27d9638a3014022bc0917662606afb76306548d94c2dbe1498c53a"} Jan 21 11:47:53 crc kubenswrapper[4881]: I0121 11:47:53.597048 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hwcnr" event={"ID":"2f9f4763-a2f6-4558-82fa-be718012fc12","Type":"ContainerStarted","Data":"d3be7960d0b27110197d7181b46b708d56c6c1ea3312bb674678bb754bbcd27d"} Jan 21 11:47:53 crc kubenswrapper[4881]: I0121 11:47:53.621426 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hwcnr" podStartSLOduration=3.083341475 podStartE2EDuration="3.621400031s" podCreationTimestamp="2026-01-21 11:47:50 +0000 UTC" firstStartedPulling="2026-01-21 11:47:51.94301643 +0000 UTC m=+3059.202972899" lastFinishedPulling="2026-01-21 11:47:52.481074966 +0000 UTC m=+3059.741031455" observedRunningTime="2026-01-21 11:47:53.620234633 +0000 UTC m=+3060.880191112" watchObservedRunningTime="2026-01-21 11:47:53.621400031 +0000 UTC m=+3060.881356510" Jan 21 11:47:59 crc kubenswrapper[4881]: I0121 11:47:59.850866 4881 patch_prober.go:28] interesting pod/machine-config-daemon-fb4fr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 11:47:59 crc kubenswrapper[4881]: I0121 11:47:59.851550 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" 
podUID="3687b313-1df2-4274-80db-8c758b51bf2d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 11:48:29 crc kubenswrapper[4881]: I0121 11:48:29.850963 4881 patch_prober.go:28] interesting pod/machine-config-daemon-fb4fr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 11:48:29 crc kubenswrapper[4881]: I0121 11:48:29.851527 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 11:48:29 crc kubenswrapper[4881]: I0121 11:48:29.851581 4881 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" Jan 21 11:48:29 crc kubenswrapper[4881]: I0121 11:48:29.852494 4881 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d8ad2c92af12e692917f97f6e76a6242fb5c00dc2c38dcea5e2ce39cd5dfeb57"} pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 11:48:29 crc kubenswrapper[4881]: I0121 11:48:29.852564 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" containerName="machine-config-daemon" containerID="cri-o://d8ad2c92af12e692917f97f6e76a6242fb5c00dc2c38dcea5e2ce39cd5dfeb57" gracePeriod=600 Jan 21 11:48:30 crc kubenswrapper[4881]: E0121 11:48:30.519616 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 11:48:30 crc kubenswrapper[4881]: I0121 11:48:30.987164 4881 generic.go:334] "Generic (PLEG): container finished" podID="3687b313-1df2-4274-80db-8c758b51bf2d" containerID="d8ad2c92af12e692917f97f6e76a6242fb5c00dc2c38dcea5e2ce39cd5dfeb57" exitCode=0 Jan 21 11:48:30 crc kubenswrapper[4881]: I0121 11:48:30.987259 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" event={"ID":"3687b313-1df2-4274-80db-8c758b51bf2d","Type":"ContainerDied","Data":"d8ad2c92af12e692917f97f6e76a6242fb5c00dc2c38dcea5e2ce39cd5dfeb57"} Jan 21 11:48:30 crc kubenswrapper[4881]: I0121 11:48:30.987377 4881 scope.go:117] "RemoveContainer" containerID="40878d2da6716331f0a893f4c9f3938e30cde34eaf4eb8051eda58bfc84a6a6c" Jan 21 11:48:30 crc kubenswrapper[4881]: I0121 11:48:30.988487 4881 scope.go:117] "RemoveContainer" containerID="d8ad2c92af12e692917f97f6e76a6242fb5c00dc2c38dcea5e2ce39cd5dfeb57" Jan 21 11:48:30 crc kubenswrapper[4881]: E0121 11:48:30.989287 4881 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 11:48:44 crc kubenswrapper[4881]: I0121 11:48:44.311793 4881 scope.go:117] "RemoveContainer" containerID="d8ad2c92af12e692917f97f6e76a6242fb5c00dc2c38dcea5e2ce39cd5dfeb57" Jan 21 11:48:44 crc kubenswrapper[4881]: E0121 11:48:44.312617 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 11:48:56 crc kubenswrapper[4881]: I0121 11:48:56.311298 4881 scope.go:117] "RemoveContainer" containerID="d8ad2c92af12e692917f97f6e76a6242fb5c00dc2c38dcea5e2ce39cd5dfeb57" Jan 21 11:48:56 crc kubenswrapper[4881]: E0121 11:48:56.312705 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 11:49:09 crc kubenswrapper[4881]: I0121 11:49:09.312887 4881 scope.go:117] "RemoveContainer" containerID="d8ad2c92af12e692917f97f6e76a6242fb5c00dc2c38dcea5e2ce39cd5dfeb57" Jan 21 11:49:09 crc kubenswrapper[4881]: E0121 11:49:09.313911 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 11:49:20 crc kubenswrapper[4881]: I0121 11:49:20.312389 4881 scope.go:117] "RemoveContainer" containerID="d8ad2c92af12e692917f97f6e76a6242fb5c00dc2c38dcea5e2ce39cd5dfeb57" Jan 21 11:49:20 crc kubenswrapper[4881]: E0121 11:49:20.313770 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 11:49:32 crc kubenswrapper[4881]: I0121 11:49:32.311986 4881 scope.go:117] "RemoveContainer" containerID="d8ad2c92af12e692917f97f6e76a6242fb5c00dc2c38dcea5e2ce39cd5dfeb57" Jan 21 11:49:32 crc kubenswrapper[4881]: E0121 11:49:32.312835 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
Jan 21 11:49:45 crc kubenswrapper[4881]: I0121 11:49:45.311240 4881 scope.go:117] "RemoveContainer" containerID="d8ad2c92af12e692917f97f6e76a6242fb5c00dc2c38dcea5e2ce39cd5dfeb57"
Jan 21 11:49:45 crc kubenswrapper[4881]: E0121 11:49:45.312404 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d"
Jan 21 11:49:59 crc kubenswrapper[4881]: I0121 11:49:59.311168 4881 scope.go:117] "RemoveContainer" containerID="d8ad2c92af12e692917f97f6e76a6242fb5c00dc2c38dcea5e2ce39cd5dfeb57"
Jan 21 11:49:59 crc kubenswrapper[4881]: E0121 11:49:59.312525 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d"
Jan 21 11:50:11 crc kubenswrapper[4881]: I0121 11:50:11.311296 4881 scope.go:117] "RemoveContainer" containerID="d8ad2c92af12e692917f97f6e76a6242fb5c00dc2c38dcea5e2ce39cd5dfeb57"
Jan 21 11:50:11 crc kubenswrapper[4881]: E0121 11:50:11.312315 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d"
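The repeating RemoveContainer / "Error syncing pod" pairs above are sync-loop retries, not actual restarts: the restart itself is gated by CrashLoopBackOff, whose delay grows per failed start until it reaches the 5m0s cap quoted in the error string. A minimal sketch of that capped doubling; the starting delay is an assumption, only the cap comes from the log:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        delay := 10 * time.Second   // assumed initial backoff
        maxDelay := 5 * time.Minute // the "back-off 5m0s" cap seen in the log
        for crash := 1; crash <= 7; crash++ {
            fmt.Printf("failed start #%d: next restart attempt in %v\n", crash, delay)
            delay *= 2
            if delay > maxDelay {
                delay = maxDelay
            }
        }
    }

After a handful of failed starts the delay saturates at the cap, which is why every retry in this stretch of the log reports the same 5m0s figure.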
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hwcnr" Jan 21 11:50:19 crc kubenswrapper[4881]: I0121 11:50:19.849764 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2f9f4763-a2f6-4558-82fa-be718012fc12-ssh-key-openstack-edpm-ipam\") pod \"2f9f4763-a2f6-4558-82fa-be718012fc12\" (UID: \"2f9f4763-a2f6-4558-82fa-be718012fc12\") " Jan 21 11:50:19 crc kubenswrapper[4881]: I0121 11:50:19.849882 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/2f9f4763-a2f6-4558-82fa-be718012fc12-ceilometer-compute-config-data-2\") pod \"2f9f4763-a2f6-4558-82fa-be718012fc12\" (UID: \"2f9f4763-a2f6-4558-82fa-be718012fc12\") " Jan 21 11:50:19 crc kubenswrapper[4881]: I0121 11:50:19.849941 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/2f9f4763-a2f6-4558-82fa-be718012fc12-ceilometer-compute-config-data-0\") pod \"2f9f4763-a2f6-4558-82fa-be718012fc12\" (UID: \"2f9f4763-a2f6-4558-82fa-be718012fc12\") " Jan 21 11:50:19 crc kubenswrapper[4881]: I0121 11:50:19.850031 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/2f9f4763-a2f6-4558-82fa-be718012fc12-ceilometer-compute-config-data-1\") pod \"2f9f4763-a2f6-4558-82fa-be718012fc12\" (UID: \"2f9f4763-a2f6-4558-82fa-be718012fc12\") " Jan 21 11:50:19 crc kubenswrapper[4881]: I0121 11:50:19.850117 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2f9f4763-a2f6-4558-82fa-be718012fc12-inventory\") pod \"2f9f4763-a2f6-4558-82fa-be718012fc12\" (UID: \"2f9f4763-a2f6-4558-82fa-be718012fc12\") " Jan 21 11:50:19 crc kubenswrapper[4881]: I0121 11:50:19.850157 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l2kv7\" (UniqueName: \"kubernetes.io/projected/2f9f4763-a2f6-4558-82fa-be718012fc12-kube-api-access-l2kv7\") pod \"2f9f4763-a2f6-4558-82fa-be718012fc12\" (UID: \"2f9f4763-a2f6-4558-82fa-be718012fc12\") " Jan 21 11:50:19 crc kubenswrapper[4881]: I0121 11:50:19.850222 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f9f4763-a2f6-4558-82fa-be718012fc12-telemetry-combined-ca-bundle\") pod \"2f9f4763-a2f6-4558-82fa-be718012fc12\" (UID: \"2f9f4763-a2f6-4558-82fa-be718012fc12\") " Jan 21 11:50:19 crc kubenswrapper[4881]: I0121 11:50:19.855976 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f9f4763-a2f6-4558-82fa-be718012fc12-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "2f9f4763-a2f6-4558-82fa-be718012fc12" (UID: "2f9f4763-a2f6-4558-82fa-be718012fc12"). InnerVolumeSpecName "telemetry-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:50:19 crc kubenswrapper[4881]: I0121 11:50:19.857671 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2f9f4763-a2f6-4558-82fa-be718012fc12-kube-api-access-l2kv7" (OuterVolumeSpecName: "kube-api-access-l2kv7") pod "2f9f4763-a2f6-4558-82fa-be718012fc12" (UID: "2f9f4763-a2f6-4558-82fa-be718012fc12"). InnerVolumeSpecName "kube-api-access-l2kv7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:50:19 crc kubenswrapper[4881]: I0121 11:50:19.884053 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f9f4763-a2f6-4558-82fa-be718012fc12-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "2f9f4763-a2f6-4558-82fa-be718012fc12" (UID: "2f9f4763-a2f6-4558-82fa-be718012fc12"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:50:19 crc kubenswrapper[4881]: I0121 11:50:19.891904 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f9f4763-a2f6-4558-82fa-be718012fc12-inventory" (OuterVolumeSpecName: "inventory") pod "2f9f4763-a2f6-4558-82fa-be718012fc12" (UID: "2f9f4763-a2f6-4558-82fa-be718012fc12"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:50:19 crc kubenswrapper[4881]: I0121 11:50:19.896346 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f9f4763-a2f6-4558-82fa-be718012fc12-ceilometer-compute-config-data-1" (OuterVolumeSpecName: "ceilometer-compute-config-data-1") pod "2f9f4763-a2f6-4558-82fa-be718012fc12" (UID: "2f9f4763-a2f6-4558-82fa-be718012fc12"). InnerVolumeSpecName "ceilometer-compute-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:50:19 crc kubenswrapper[4881]: I0121 11:50:19.898390 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f9f4763-a2f6-4558-82fa-be718012fc12-ceilometer-compute-config-data-0" (OuterVolumeSpecName: "ceilometer-compute-config-data-0") pod "2f9f4763-a2f6-4558-82fa-be718012fc12" (UID: "2f9f4763-a2f6-4558-82fa-be718012fc12"). InnerVolumeSpecName "ceilometer-compute-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:50:19 crc kubenswrapper[4881]: I0121 11:50:19.910354 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f9f4763-a2f6-4558-82fa-be718012fc12-ceilometer-compute-config-data-2" (OuterVolumeSpecName: "ceilometer-compute-config-data-2") pod "2f9f4763-a2f6-4558-82fa-be718012fc12" (UID: "2f9f4763-a2f6-4558-82fa-be718012fc12"). InnerVolumeSpecName "ceilometer-compute-config-data-2". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:50:19 crc kubenswrapper[4881]: I0121 11:50:19.953774 4881 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2f9f4763-a2f6-4558-82fa-be718012fc12-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 21 11:50:19 crc kubenswrapper[4881]: I0121 11:50:19.953866 4881 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/2f9f4763-a2f6-4558-82fa-be718012fc12-ceilometer-compute-config-data-2\") on node \"crc\" DevicePath \"\"" Jan 21 11:50:19 crc kubenswrapper[4881]: I0121 11:50:19.953882 4881 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/2f9f4763-a2f6-4558-82fa-be718012fc12-ceilometer-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Jan 21 11:50:19 crc kubenswrapper[4881]: I0121 11:50:19.953896 4881 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/2f9f4763-a2f6-4558-82fa-be718012fc12-ceilometer-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Jan 21 11:50:19 crc kubenswrapper[4881]: I0121 11:50:19.953909 4881 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2f9f4763-a2f6-4558-82fa-be718012fc12-inventory\") on node \"crc\" DevicePath \"\"" Jan 21 11:50:19 crc kubenswrapper[4881]: I0121 11:50:19.953933 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l2kv7\" (UniqueName: \"kubernetes.io/projected/2f9f4763-a2f6-4558-82fa-be718012fc12-kube-api-access-l2kv7\") on node \"crc\" DevicePath \"\"" Jan 21 11:50:19 crc kubenswrapper[4881]: I0121 11:50:19.953950 4881 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f9f4763-a2f6-4558-82fa-be718012fc12-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 11:50:20 crc kubenswrapper[4881]: I0121 11:50:20.182077 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hwcnr" event={"ID":"2f9f4763-a2f6-4558-82fa-be718012fc12","Type":"ContainerDied","Data":"2bd3402b9e27d9638a3014022bc0917662606afb76306548d94c2dbe1498c53a"} Jan 21 11:50:20 crc kubenswrapper[4881]: I0121 11:50:20.182171 4881 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2bd3402b9e27d9638a3014022bc0917662606afb76306548d94c2dbe1498c53a" Jan 21 11:50:20 crc kubenswrapper[4881]: I0121 11:50:20.182191 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hwcnr" Jan 21 11:50:22 crc kubenswrapper[4881]: I0121 11:50:22.311640 4881 scope.go:117] "RemoveContainer" containerID="d8ad2c92af12e692917f97f6e76a6242fb5c00dc2c38dcea5e2ce39cd5dfeb57" Jan 21 11:50:22 crc kubenswrapper[4881]: E0121 11:50:22.312285 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 11:50:31 crc kubenswrapper[4881]: I0121 11:50:31.964726 4881 scope.go:117] "RemoveContainer" containerID="a3b87112cc4e2f5703453d1593b9d75e4be1102fb918a336d940180bb24d7b53" Jan 21 11:50:32 crc kubenswrapper[4881]: I0121 11:50:32.054825 4881 scope.go:117] "RemoveContainer" containerID="c46a2a4d819c8a32cc07d84e8693331645ce9fdf0d2715fdb9ac2374aedc71ff" Jan 21 11:50:32 crc kubenswrapper[4881]: I0121 11:50:32.107318 4881 scope.go:117] "RemoveContainer" containerID="be36f6ad834ca00233eadc7451dfda0c9752d18ed8499ac6ad57c9815db2567a" Jan 21 11:50:37 crc kubenswrapper[4881]: I0121 11:50:37.312202 4881 scope.go:117] "RemoveContainer" containerID="d8ad2c92af12e692917f97f6e76a6242fb5c00dc2c38dcea5e2ce39cd5dfeb57" Jan 21 11:50:37 crc kubenswrapper[4881]: E0121 11:50:37.313571 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 11:50:52 crc kubenswrapper[4881]: I0121 11:50:52.311339 4881 scope.go:117] "RemoveContainer" containerID="d8ad2c92af12e692917f97f6e76a6242fb5c00dc2c38dcea5e2ce39cd5dfeb57" Jan 21 11:50:52 crc kubenswrapper[4881]: E0121 11:50:52.312020 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 11:51:00 crc kubenswrapper[4881]: I0121 11:51:00.748843 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-backup-0"] Jan 21 11:51:00 crc kubenswrapper[4881]: E0121 11:51:00.749696 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f9f4763-a2f6-4558-82fa-be718012fc12" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Jan 21 11:51:00 crc kubenswrapper[4881]: I0121 11:51:00.749710 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f9f4763-a2f6-4558-82fa-be718012fc12" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Jan 21 11:51:00 crc kubenswrapper[4881]: I0121 11:51:00.749938 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f9f4763-a2f6-4558-82fa-be718012fc12" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Jan 21 11:51:00 crc kubenswrapper[4881]: I0121 11:51:00.751046 4881 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-backup-0" Jan 21 11:51:00 crc kubenswrapper[4881]: I0121 11:51:00.753272 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-backup-config-data" Jan 21 11:51:00 crc kubenswrapper[4881]: I0121 11:51:00.776491 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-backup-0"] Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.144590 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/306aceba-6a20-4b47-a19a-fb193a27e2bd-dev\") pod \"cinder-backup-0\" (UID: \"306aceba-6a20-4b47-a19a-fb193a27e2bd\") " pod="openstack/cinder-backup-0" Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.144827 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/306aceba-6a20-4b47-a19a-fb193a27e2bd-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"306aceba-6a20-4b47-a19a-fb193a27e2bd\") " pod="openstack/cinder-backup-0" Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.144906 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/306aceba-6a20-4b47-a19a-fb193a27e2bd-run\") pod \"cinder-backup-0\" (UID: \"306aceba-6a20-4b47-a19a-fb193a27e2bd\") " pod="openstack/cinder-backup-0" Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.144957 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/306aceba-6a20-4b47-a19a-fb193a27e2bd-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"306aceba-6a20-4b47-a19a-fb193a27e2bd\") " pod="openstack/cinder-backup-0" Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.145904 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vjkhf\" (UniqueName: \"kubernetes.io/projected/306aceba-6a20-4b47-a19a-fb193a27e2bd-kube-api-access-vjkhf\") pod \"cinder-backup-0\" (UID: \"306aceba-6a20-4b47-a19a-fb193a27e2bd\") " pod="openstack/cinder-backup-0" Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.145981 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/306aceba-6a20-4b47-a19a-fb193a27e2bd-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"306aceba-6a20-4b47-a19a-fb193a27e2bd\") " pod="openstack/cinder-backup-0" Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.146049 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/306aceba-6a20-4b47-a19a-fb193a27e2bd-sys\") pod \"cinder-backup-0\" (UID: \"306aceba-6a20-4b47-a19a-fb193a27e2bd\") " pod="openstack/cinder-backup-0" Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.146071 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/306aceba-6a20-4b47-a19a-fb193a27e2bd-lib-modules\") pod \"cinder-backup-0\" (UID: \"306aceba-6a20-4b47-a19a-fb193a27e2bd\") " pod="openstack/cinder-backup-0" Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.146134 4881 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/306aceba-6a20-4b47-a19a-fb193a27e2bd-config-data-custom\") pod \"cinder-backup-0\" (UID: \"306aceba-6a20-4b47-a19a-fb193a27e2bd\") " pod="openstack/cinder-backup-0" Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.146203 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/306aceba-6a20-4b47-a19a-fb193a27e2bd-config-data\") pod \"cinder-backup-0\" (UID: \"306aceba-6a20-4b47-a19a-fb193a27e2bd\") " pod="openstack/cinder-backup-0" Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.146286 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/306aceba-6a20-4b47-a19a-fb193a27e2bd-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"306aceba-6a20-4b47-a19a-fb193a27e2bd\") " pod="openstack/cinder-backup-0" Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.146482 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/306aceba-6a20-4b47-a19a-fb193a27e2bd-etc-nvme\") pod \"cinder-backup-0\" (UID: \"306aceba-6a20-4b47-a19a-fb193a27e2bd\") " pod="openstack/cinder-backup-0" Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.146530 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/306aceba-6a20-4b47-a19a-fb193a27e2bd-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"306aceba-6a20-4b47-a19a-fb193a27e2bd\") " pod="openstack/cinder-backup-0" Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.146670 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/306aceba-6a20-4b47-a19a-fb193a27e2bd-scripts\") pod \"cinder-backup-0\" (UID: \"306aceba-6a20-4b47-a19a-fb193a27e2bd\") " pod="openstack/cinder-backup-0" Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.146768 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/306aceba-6a20-4b47-a19a-fb193a27e2bd-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"306aceba-6a20-4b47-a19a-fb193a27e2bd\") " pod="openstack/cinder-backup-0" Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.186137 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-volume-nfs-0"] Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.189874 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-volume-nfs-0" Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.193952 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-volume-nfs-config-data" Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.199759 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-volume-nfs-0"] Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.210438 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-volume-nfs-2-0"] Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.212762 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-volume-nfs-2-0" Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.214581 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-volume-nfs-2-config-data" Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.227825 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-volume-nfs-2-0"] Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.249547 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/112f53db-2aaa-4a3d-bc89-fd86952639ab-scripts\") pod \"cinder-volume-nfs-2-0\" (UID: \"112f53db-2aaa-4a3d-bc89-fd86952639ab\") " pod="openstack/cinder-volume-nfs-2-0" Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.249603 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/306aceba-6a20-4b47-a19a-fb193a27e2bd-scripts\") pod \"cinder-backup-0\" (UID: \"306aceba-6a20-4b47-a19a-fb193a27e2bd\") " pod="openstack/cinder-backup-0" Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.249625 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/112f53db-2aaa-4a3d-bc89-fd86952639ab-etc-nvme\") pod \"cinder-volume-nfs-2-0\" (UID: \"112f53db-2aaa-4a3d-bc89-fd86952639ab\") " pod="openstack/cinder-volume-nfs-2-0" Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.249642 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/8c912ca5-a82b-4083-8579-f0f6f506eebb-sys\") pod \"cinder-volume-nfs-0\" (UID: \"8c912ca5-a82b-4083-8579-f0f6f506eebb\") " pod="openstack/cinder-volume-nfs-0" Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.249687 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/112f53db-2aaa-4a3d-bc89-fd86952639ab-run\") pod \"cinder-volume-nfs-2-0\" (UID: \"112f53db-2aaa-4a3d-bc89-fd86952639ab\") " pod="openstack/cinder-volume-nfs-2-0" Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.249710 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8c912ca5-a82b-4083-8579-f0f6f506eebb-lib-modules\") pod \"cinder-volume-nfs-0\" (UID: \"8c912ca5-a82b-4083-8579-f0f6f506eebb\") " pod="openstack/cinder-volume-nfs-0" Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.249731 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/306aceba-6a20-4b47-a19a-fb193a27e2bd-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"306aceba-6a20-4b47-a19a-fb193a27e2bd\") " pod="openstack/cinder-backup-0" Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.249750 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/112f53db-2aaa-4a3d-bc89-fd86952639ab-config-data\") pod \"cinder-volume-nfs-2-0\" (UID: \"112f53db-2aaa-4a3d-bc89-fd86952639ab\") " pod="openstack/cinder-volume-nfs-2-0" Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.249775 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" 
(UniqueName: \"kubernetes.io/host-path/112f53db-2aaa-4a3d-bc89-fd86952639ab-var-locks-cinder\") pod \"cinder-volume-nfs-2-0\" (UID: \"112f53db-2aaa-4a3d-bc89-fd86952639ab\") " pod="openstack/cinder-volume-nfs-2-0" Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.249815 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/306aceba-6a20-4b47-a19a-fb193a27e2bd-dev\") pod \"cinder-backup-0\" (UID: \"306aceba-6a20-4b47-a19a-fb193a27e2bd\") " pod="openstack/cinder-backup-0" Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.249838 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/306aceba-6a20-4b47-a19a-fb193a27e2bd-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"306aceba-6a20-4b47-a19a-fb193a27e2bd\") " pod="openstack/cinder-backup-0" Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.249840 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/306aceba-6a20-4b47-a19a-fb193a27e2bd-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"306aceba-6a20-4b47-a19a-fb193a27e2bd\") " pod="openstack/cinder-backup-0" Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.249862 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/8c912ca5-a82b-4083-8579-f0f6f506eebb-var-locks-cinder\") pod \"cinder-volume-nfs-0\" (UID: \"8c912ca5-a82b-4083-8579-f0f6f506eebb\") " pod="openstack/cinder-volume-nfs-0" Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.249892 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/306aceba-6a20-4b47-a19a-fb193a27e2bd-dev\") pod \"cinder-backup-0\" (UID: \"306aceba-6a20-4b47-a19a-fb193a27e2bd\") " pod="openstack/cinder-backup-0" Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.249943 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8c912ca5-a82b-4083-8579-f0f6f506eebb-scripts\") pod \"cinder-volume-nfs-0\" (UID: \"8c912ca5-a82b-4083-8579-f0f6f506eebb\") " pod="openstack/cinder-volume-nfs-0" Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.249978 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/8c912ca5-a82b-4083-8579-f0f6f506eebb-etc-nvme\") pod \"cinder-volume-nfs-0\" (UID: \"8c912ca5-a82b-4083-8579-f0f6f506eebb\") " pod="openstack/cinder-volume-nfs-0" Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.250007 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/112f53db-2aaa-4a3d-bc89-fd86952639ab-lib-modules\") pod \"cinder-volume-nfs-2-0\" (UID: \"112f53db-2aaa-4a3d-bc89-fd86952639ab\") " pod="openstack/cinder-volume-nfs-2-0" Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.250039 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/8c912ca5-a82b-4083-8579-f0f6f506eebb-etc-iscsi\") pod \"cinder-volume-nfs-0\" (UID: \"8c912ca5-a82b-4083-8579-f0f6f506eebb\") " pod="openstack/cinder-volume-nfs-0" Jan 21 11:51:01 crc kubenswrapper[4881]: 
I0121 11:51:01.250096 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8c912ca5-a82b-4083-8579-f0f6f506eebb-etc-machine-id\") pod \"cinder-volume-nfs-0\" (UID: \"8c912ca5-a82b-4083-8579-f0f6f506eebb\") " pod="openstack/cinder-volume-nfs-0" Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.250143 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/306aceba-6a20-4b47-a19a-fb193a27e2bd-run\") pod \"cinder-backup-0\" (UID: \"306aceba-6a20-4b47-a19a-fb193a27e2bd\") " pod="openstack/cinder-backup-0" Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.250166 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/112f53db-2aaa-4a3d-bc89-fd86952639ab-config-data-custom\") pod \"cinder-volume-nfs-2-0\" (UID: \"112f53db-2aaa-4a3d-bc89-fd86952639ab\") " pod="openstack/cinder-volume-nfs-2-0" Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.250191 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/306aceba-6a20-4b47-a19a-fb193a27e2bd-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"306aceba-6a20-4b47-a19a-fb193a27e2bd\") " pod="openstack/cinder-backup-0" Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.250221 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/8c912ca5-a82b-4083-8579-f0f6f506eebb-var-lib-cinder\") pod \"cinder-volume-nfs-0\" (UID: \"8c912ca5-a82b-4083-8579-f0f6f506eebb\") " pod="openstack/cinder-volume-nfs-0" Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.250253 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/8c912ca5-a82b-4083-8579-f0f6f506eebb-run\") pod \"cinder-volume-nfs-0\" (UID: \"8c912ca5-a82b-4083-8579-f0f6f506eebb\") " pod="openstack/cinder-volume-nfs-0" Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.250273 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/112f53db-2aaa-4a3d-bc89-fd86952639ab-var-lib-cinder\") pod \"cinder-volume-nfs-2-0\" (UID: \"112f53db-2aaa-4a3d-bc89-fd86952639ab\") " pod="openstack/cinder-volume-nfs-2-0" Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.250304 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/112f53db-2aaa-4a3d-bc89-fd86952639ab-sys\") pod \"cinder-volume-nfs-2-0\" (UID: \"112f53db-2aaa-4a3d-bc89-fd86952639ab\") " pod="openstack/cinder-volume-nfs-2-0" Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.250334 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/112f53db-2aaa-4a3d-bc89-fd86952639ab-etc-iscsi\") pod \"cinder-volume-nfs-2-0\" (UID: \"112f53db-2aaa-4a3d-bc89-fd86952639ab\") " pod="openstack/cinder-volume-nfs-2-0" Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.250350 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" 
(UniqueName: \"kubernetes.io/host-path/112f53db-2aaa-4a3d-bc89-fd86952639ab-var-locks-brick\") pod \"cinder-volume-nfs-2-0\" (UID: \"112f53db-2aaa-4a3d-bc89-fd86952639ab\") " pod="openstack/cinder-volume-nfs-2-0" Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.250379 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vjkhf\" (UniqueName: \"kubernetes.io/projected/306aceba-6a20-4b47-a19a-fb193a27e2bd-kube-api-access-vjkhf\") pod \"cinder-backup-0\" (UID: \"306aceba-6a20-4b47-a19a-fb193a27e2bd\") " pod="openstack/cinder-backup-0" Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.250398 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c912ca5-a82b-4083-8579-f0f6f506eebb-combined-ca-bundle\") pod \"cinder-volume-nfs-0\" (UID: \"8c912ca5-a82b-4083-8579-f0f6f506eebb\") " pod="openstack/cinder-volume-nfs-0" Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.250431 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/306aceba-6a20-4b47-a19a-fb193a27e2bd-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"306aceba-6a20-4b47-a19a-fb193a27e2bd\") " pod="openstack/cinder-backup-0" Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.250449 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/8c912ca5-a82b-4083-8579-f0f6f506eebb-dev\") pod \"cinder-volume-nfs-0\" (UID: \"8c912ca5-a82b-4083-8579-f0f6f506eebb\") " pod="openstack/cinder-volume-nfs-0" Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.250467 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/306aceba-6a20-4b47-a19a-fb193a27e2bd-lib-modules\") pod \"cinder-backup-0\" (UID: \"306aceba-6a20-4b47-a19a-fb193a27e2bd\") " pod="openstack/cinder-backup-0" Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.250483 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/306aceba-6a20-4b47-a19a-fb193a27e2bd-sys\") pod \"cinder-backup-0\" (UID: \"306aceba-6a20-4b47-a19a-fb193a27e2bd\") " pod="openstack/cinder-backup-0" Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.250501 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/306aceba-6a20-4b47-a19a-fb193a27e2bd-config-data-custom\") pod \"cinder-backup-0\" (UID: \"306aceba-6a20-4b47-a19a-fb193a27e2bd\") " pod="openstack/cinder-backup-0" Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.250516 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/8c912ca5-a82b-4083-8579-f0f6f506eebb-var-locks-brick\") pod \"cinder-volume-nfs-0\" (UID: \"8c912ca5-a82b-4083-8579-f0f6f506eebb\") " pod="openstack/cinder-volume-nfs-0" Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.250534 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7qv6v\" (UniqueName: \"kubernetes.io/projected/8c912ca5-a82b-4083-8579-f0f6f506eebb-kube-api-access-7qv6v\") pod \"cinder-volume-nfs-0\" (UID: \"8c912ca5-a82b-4083-8579-f0f6f506eebb\") " 
pod="openstack/cinder-volume-nfs-0" Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.250553 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8c912ca5-a82b-4083-8579-f0f6f506eebb-config-data\") pod \"cinder-volume-nfs-0\" (UID: \"8c912ca5-a82b-4083-8579-f0f6f506eebb\") " pod="openstack/cinder-volume-nfs-0" Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.250593 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/306aceba-6a20-4b47-a19a-fb193a27e2bd-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"306aceba-6a20-4b47-a19a-fb193a27e2bd\") " pod="openstack/cinder-backup-0" Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.250607 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/306aceba-6a20-4b47-a19a-fb193a27e2bd-config-data\") pod \"cinder-backup-0\" (UID: \"306aceba-6a20-4b47-a19a-fb193a27e2bd\") " pod="openstack/cinder-backup-0" Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.250656 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/112f53db-2aaa-4a3d-bc89-fd86952639ab-etc-machine-id\") pod \"cinder-volume-nfs-2-0\" (UID: \"112f53db-2aaa-4a3d-bc89-fd86952639ab\") " pod="openstack/cinder-volume-nfs-2-0" Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.250702 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4pf44\" (UniqueName: \"kubernetes.io/projected/112f53db-2aaa-4a3d-bc89-fd86952639ab-kube-api-access-4pf44\") pod \"cinder-volume-nfs-2-0\" (UID: \"112f53db-2aaa-4a3d-bc89-fd86952639ab\") " pod="openstack/cinder-volume-nfs-2-0" Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.250733 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/306aceba-6a20-4b47-a19a-fb193a27e2bd-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"306aceba-6a20-4b47-a19a-fb193a27e2bd\") " pod="openstack/cinder-backup-0" Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.250463 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/306aceba-6a20-4b47-a19a-fb193a27e2bd-run\") pod \"cinder-backup-0\" (UID: \"306aceba-6a20-4b47-a19a-fb193a27e2bd\") " pod="openstack/cinder-backup-0" Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.250768 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/306aceba-6a20-4b47-a19a-fb193a27e2bd-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"306aceba-6a20-4b47-a19a-fb193a27e2bd\") " pod="openstack/cinder-backup-0" Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.250813 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/306aceba-6a20-4b47-a19a-fb193a27e2bd-lib-modules\") pod \"cinder-backup-0\" (UID: \"306aceba-6a20-4b47-a19a-fb193a27e2bd\") " pod="openstack/cinder-backup-0" Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.250816 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/112f53db-2aaa-4a3d-bc89-fd86952639ab-combined-ca-bundle\") pod \"cinder-volume-nfs-2-0\" (UID: \"112f53db-2aaa-4a3d-bc89-fd86952639ab\") " pod="openstack/cinder-volume-nfs-2-0" Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.250931 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/306aceba-6a20-4b47-a19a-fb193a27e2bd-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"306aceba-6a20-4b47-a19a-fb193a27e2bd\") " pod="openstack/cinder-backup-0" Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.250943 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/112f53db-2aaa-4a3d-bc89-fd86952639ab-dev\") pod \"cinder-volume-nfs-2-0\" (UID: \"112f53db-2aaa-4a3d-bc89-fd86952639ab\") " pod="openstack/cinder-volume-nfs-2-0" Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.250965 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/306aceba-6a20-4b47-a19a-fb193a27e2bd-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"306aceba-6a20-4b47-a19a-fb193a27e2bd\") " pod="openstack/cinder-backup-0" Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.250986 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/306aceba-6a20-4b47-a19a-fb193a27e2bd-sys\") pod \"cinder-backup-0\" (UID: \"306aceba-6a20-4b47-a19a-fb193a27e2bd\") " pod="openstack/cinder-backup-0" Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.251128 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/306aceba-6a20-4b47-a19a-fb193a27e2bd-etc-nvme\") pod \"cinder-backup-0\" (UID: \"306aceba-6a20-4b47-a19a-fb193a27e2bd\") " pod="openstack/cinder-backup-0" Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.251148 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/306aceba-6a20-4b47-a19a-fb193a27e2bd-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"306aceba-6a20-4b47-a19a-fb193a27e2bd\") " pod="openstack/cinder-backup-0" Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.251193 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8c912ca5-a82b-4083-8579-f0f6f506eebb-config-data-custom\") pod \"cinder-volume-nfs-0\" (UID: \"8c912ca5-a82b-4083-8579-f0f6f506eebb\") " pod="openstack/cinder-volume-nfs-0" Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.252264 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/306aceba-6a20-4b47-a19a-fb193a27e2bd-etc-nvme\") pod \"cinder-backup-0\" (UID: \"306aceba-6a20-4b47-a19a-fb193a27e2bd\") " pod="openstack/cinder-backup-0" Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.257185 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/306aceba-6a20-4b47-a19a-fb193a27e2bd-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"306aceba-6a20-4b47-a19a-fb193a27e2bd\") " pod="openstack/cinder-backup-0" Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.257214 4881 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/306aceba-6a20-4b47-a19a-fb193a27e2bd-config-data\") pod \"cinder-backup-0\" (UID: \"306aceba-6a20-4b47-a19a-fb193a27e2bd\") " pod="openstack/cinder-backup-0" Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.268208 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vjkhf\" (UniqueName: \"kubernetes.io/projected/306aceba-6a20-4b47-a19a-fb193a27e2bd-kube-api-access-vjkhf\") pod \"cinder-backup-0\" (UID: \"306aceba-6a20-4b47-a19a-fb193a27e2bd\") " pod="openstack/cinder-backup-0" Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.269801 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/306aceba-6a20-4b47-a19a-fb193a27e2bd-scripts\") pod \"cinder-backup-0\" (UID: \"306aceba-6a20-4b47-a19a-fb193a27e2bd\") " pod="openstack/cinder-backup-0" Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.289208 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/306aceba-6a20-4b47-a19a-fb193a27e2bd-config-data-custom\") pod \"cinder-backup-0\" (UID: \"306aceba-6a20-4b47-a19a-fb193a27e2bd\") " pod="openstack/cinder-backup-0" Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.352889 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/112f53db-2aaa-4a3d-bc89-fd86952639ab-scripts\") pod \"cinder-volume-nfs-2-0\" (UID: \"112f53db-2aaa-4a3d-bc89-fd86952639ab\") " pod="openstack/cinder-volume-nfs-2-0" Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.352933 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/112f53db-2aaa-4a3d-bc89-fd86952639ab-etc-nvme\") pod \"cinder-volume-nfs-2-0\" (UID: \"112f53db-2aaa-4a3d-bc89-fd86952639ab\") " pod="openstack/cinder-volume-nfs-2-0" Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.353082 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/8c912ca5-a82b-4083-8579-f0f6f506eebb-sys\") pod \"cinder-volume-nfs-0\" (UID: \"8c912ca5-a82b-4083-8579-f0f6f506eebb\") " pod="openstack/cinder-volume-nfs-0" Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.354002 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/112f53db-2aaa-4a3d-bc89-fd86952639ab-run\") pod \"cinder-volume-nfs-2-0\" (UID: \"112f53db-2aaa-4a3d-bc89-fd86952639ab\") " pod="openstack/cinder-volume-nfs-2-0" Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.353448 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/112f53db-2aaa-4a3d-bc89-fd86952639ab-etc-nvme\") pod \"cinder-volume-nfs-2-0\" (UID: \"112f53db-2aaa-4a3d-bc89-fd86952639ab\") " pod="openstack/cinder-volume-nfs-2-0" Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.354073 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8c912ca5-a82b-4083-8579-f0f6f506eebb-lib-modules\") pod \"cinder-volume-nfs-0\" (UID: \"8c912ca5-a82b-4083-8579-f0f6f506eebb\") " pod="openstack/cinder-volume-nfs-0" Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.354076 4881 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/112f53db-2aaa-4a3d-bc89-fd86952639ab-run\") pod \"cinder-volume-nfs-2-0\" (UID: \"112f53db-2aaa-4a3d-bc89-fd86952639ab\") " pod="openstack/cinder-volume-nfs-2-0" Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.353661 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/8c912ca5-a82b-4083-8579-f0f6f506eebb-sys\") pod \"cinder-volume-nfs-0\" (UID: \"8c912ca5-a82b-4083-8579-f0f6f506eebb\") " pod="openstack/cinder-volume-nfs-0" Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.354145 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8c912ca5-a82b-4083-8579-f0f6f506eebb-lib-modules\") pod \"cinder-volume-nfs-0\" (UID: \"8c912ca5-a82b-4083-8579-f0f6f506eebb\") " pod="openstack/cinder-volume-nfs-0" Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.354211 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/112f53db-2aaa-4a3d-bc89-fd86952639ab-config-data\") pod \"cinder-volume-nfs-2-0\" (UID: \"112f53db-2aaa-4a3d-bc89-fd86952639ab\") " pod="openstack/cinder-volume-nfs-2-0" Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.354232 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/112f53db-2aaa-4a3d-bc89-fd86952639ab-var-locks-cinder\") pod \"cinder-volume-nfs-2-0\" (UID: \"112f53db-2aaa-4a3d-bc89-fd86952639ab\") " pod="openstack/cinder-volume-nfs-2-0" Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.354273 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/8c912ca5-a82b-4083-8579-f0f6f506eebb-var-locks-cinder\") pod \"cinder-volume-nfs-0\" (UID: \"8c912ca5-a82b-4083-8579-f0f6f506eebb\") " pod="openstack/cinder-volume-nfs-0" Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.354291 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8c912ca5-a82b-4083-8579-f0f6f506eebb-scripts\") pod \"cinder-volume-nfs-0\" (UID: \"8c912ca5-a82b-4083-8579-f0f6f506eebb\") " pod="openstack/cinder-volume-nfs-0" Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.354306 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/8c912ca5-a82b-4083-8579-f0f6f506eebb-etc-nvme\") pod \"cinder-volume-nfs-0\" (UID: \"8c912ca5-a82b-4083-8579-f0f6f506eebb\") " pod="openstack/cinder-volume-nfs-0" Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.354333 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/112f53db-2aaa-4a3d-bc89-fd86952639ab-lib-modules\") pod \"cinder-volume-nfs-2-0\" (UID: \"112f53db-2aaa-4a3d-bc89-fd86952639ab\") " pod="openstack/cinder-volume-nfs-2-0" Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.354354 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/8c912ca5-a82b-4083-8579-f0f6f506eebb-etc-iscsi\") pod \"cinder-volume-nfs-0\" (UID: \"8c912ca5-a82b-4083-8579-f0f6f506eebb\") " pod="openstack/cinder-volume-nfs-0" Jan 21 11:51:01 crc kubenswrapper[4881]: 
I0121 11:51:01.354375 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8c912ca5-a82b-4083-8579-f0f6f506eebb-etc-machine-id\") pod \"cinder-volume-nfs-0\" (UID: \"8c912ca5-a82b-4083-8579-f0f6f506eebb\") " pod="openstack/cinder-volume-nfs-0" Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.354408 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/112f53db-2aaa-4a3d-bc89-fd86952639ab-config-data-custom\") pod \"cinder-volume-nfs-2-0\" (UID: \"112f53db-2aaa-4a3d-bc89-fd86952639ab\") " pod="openstack/cinder-volume-nfs-2-0" Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.354451 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/8c912ca5-a82b-4083-8579-f0f6f506eebb-var-lib-cinder\") pod \"cinder-volume-nfs-0\" (UID: \"8c912ca5-a82b-4083-8579-f0f6f506eebb\") " pod="openstack/cinder-volume-nfs-0" Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.354469 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/8c912ca5-a82b-4083-8579-f0f6f506eebb-run\") pod \"cinder-volume-nfs-0\" (UID: \"8c912ca5-a82b-4083-8579-f0f6f506eebb\") " pod="openstack/cinder-volume-nfs-0" Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.354491 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/112f53db-2aaa-4a3d-bc89-fd86952639ab-var-lib-cinder\") pod \"cinder-volume-nfs-2-0\" (UID: \"112f53db-2aaa-4a3d-bc89-fd86952639ab\") " pod="openstack/cinder-volume-nfs-2-0" Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.354526 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/112f53db-2aaa-4a3d-bc89-fd86952639ab-sys\") pod \"cinder-volume-nfs-2-0\" (UID: \"112f53db-2aaa-4a3d-bc89-fd86952639ab\") " pod="openstack/cinder-volume-nfs-2-0" Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.354559 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/112f53db-2aaa-4a3d-bc89-fd86952639ab-etc-iscsi\") pod \"cinder-volume-nfs-2-0\" (UID: \"112f53db-2aaa-4a3d-bc89-fd86952639ab\") " pod="openstack/cinder-volume-nfs-2-0" Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.354575 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/112f53db-2aaa-4a3d-bc89-fd86952639ab-var-locks-brick\") pod \"cinder-volume-nfs-2-0\" (UID: \"112f53db-2aaa-4a3d-bc89-fd86952639ab\") " pod="openstack/cinder-volume-nfs-2-0" Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.354616 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c912ca5-a82b-4083-8579-f0f6f506eebb-combined-ca-bundle\") pod \"cinder-volume-nfs-0\" (UID: \"8c912ca5-a82b-4083-8579-f0f6f506eebb\") " pod="openstack/cinder-volume-nfs-0" Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.354644 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/8c912ca5-a82b-4083-8579-f0f6f506eebb-dev\") pod \"cinder-volume-nfs-0\" (UID: 
\"8c912ca5-a82b-4083-8579-f0f6f506eebb\") " pod="openstack/cinder-volume-nfs-0" Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.354689 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/8c912ca5-a82b-4083-8579-f0f6f506eebb-var-locks-brick\") pod \"cinder-volume-nfs-0\" (UID: \"8c912ca5-a82b-4083-8579-f0f6f506eebb\") " pod="openstack/cinder-volume-nfs-0" Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.354707 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8c912ca5-a82b-4083-8579-f0f6f506eebb-config-data\") pod \"cinder-volume-nfs-0\" (UID: \"8c912ca5-a82b-4083-8579-f0f6f506eebb\") " pod="openstack/cinder-volume-nfs-0" Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.354724 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7qv6v\" (UniqueName: \"kubernetes.io/projected/8c912ca5-a82b-4083-8579-f0f6f506eebb-kube-api-access-7qv6v\") pod \"cinder-volume-nfs-0\" (UID: \"8c912ca5-a82b-4083-8579-f0f6f506eebb\") " pod="openstack/cinder-volume-nfs-0" Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.354754 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/112f53db-2aaa-4a3d-bc89-fd86952639ab-etc-machine-id\") pod \"cinder-volume-nfs-2-0\" (UID: \"112f53db-2aaa-4a3d-bc89-fd86952639ab\") " pod="openstack/cinder-volume-nfs-2-0" Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.354778 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4pf44\" (UniqueName: \"kubernetes.io/projected/112f53db-2aaa-4a3d-bc89-fd86952639ab-kube-api-access-4pf44\") pod \"cinder-volume-nfs-2-0\" (UID: \"112f53db-2aaa-4a3d-bc89-fd86952639ab\") " pod="openstack/cinder-volume-nfs-2-0" Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.354881 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/112f53db-2aaa-4a3d-bc89-fd86952639ab-combined-ca-bundle\") pod \"cinder-volume-nfs-2-0\" (UID: \"112f53db-2aaa-4a3d-bc89-fd86952639ab\") " pod="openstack/cinder-volume-nfs-2-0" Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.354911 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/112f53db-2aaa-4a3d-bc89-fd86952639ab-dev\") pod \"cinder-volume-nfs-2-0\" (UID: \"112f53db-2aaa-4a3d-bc89-fd86952639ab\") " pod="openstack/cinder-volume-nfs-2-0" Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.354954 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8c912ca5-a82b-4083-8579-f0f6f506eebb-config-data-custom\") pod \"cinder-volume-nfs-0\" (UID: \"8c912ca5-a82b-4083-8579-f0f6f506eebb\") " pod="openstack/cinder-volume-nfs-0" Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.355022 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/112f53db-2aaa-4a3d-bc89-fd86952639ab-sys\") pod \"cinder-volume-nfs-2-0\" (UID: \"112f53db-2aaa-4a3d-bc89-fd86952639ab\") " pod="openstack/cinder-volume-nfs-2-0" Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.355618 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"lib-modules\" (UniqueName: \"kubernetes.io/host-path/112f53db-2aaa-4a3d-bc89-fd86952639ab-lib-modules\") pod \"cinder-volume-nfs-2-0\" (UID: \"112f53db-2aaa-4a3d-bc89-fd86952639ab\") " pod="openstack/cinder-volume-nfs-2-0" Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.355745 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/8c912ca5-a82b-4083-8579-f0f6f506eebb-etc-iscsi\") pod \"cinder-volume-nfs-0\" (UID: \"8c912ca5-a82b-4083-8579-f0f6f506eebb\") " pod="openstack/cinder-volume-nfs-0" Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.355762 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/112f53db-2aaa-4a3d-bc89-fd86952639ab-etc-machine-id\") pod \"cinder-volume-nfs-2-0\" (UID: \"112f53db-2aaa-4a3d-bc89-fd86952639ab\") " pod="openstack/cinder-volume-nfs-2-0" Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.355798 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8c912ca5-a82b-4083-8579-f0f6f506eebb-etc-machine-id\") pod \"cinder-volume-nfs-0\" (UID: \"8c912ca5-a82b-4083-8579-f0f6f506eebb\") " pod="openstack/cinder-volume-nfs-0" Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.355924 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/8c912ca5-a82b-4083-8579-f0f6f506eebb-var-locks-brick\") pod \"cinder-volume-nfs-0\" (UID: \"8c912ca5-a82b-4083-8579-f0f6f506eebb\") " pod="openstack/cinder-volume-nfs-0" Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.356128 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/112f53db-2aaa-4a3d-bc89-fd86952639ab-etc-iscsi\") pod \"cinder-volume-nfs-2-0\" (UID: \"112f53db-2aaa-4a3d-bc89-fd86952639ab\") " pod="openstack/cinder-volume-nfs-2-0" Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.356158 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/8c912ca5-a82b-4083-8579-f0f6f506eebb-var-lib-cinder\") pod \"cinder-volume-nfs-0\" (UID: \"8c912ca5-a82b-4083-8579-f0f6f506eebb\") " pod="openstack/cinder-volume-nfs-0" Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.356178 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/112f53db-2aaa-4a3d-bc89-fd86952639ab-var-locks-brick\") pod \"cinder-volume-nfs-2-0\" (UID: \"112f53db-2aaa-4a3d-bc89-fd86952639ab\") " pod="openstack/cinder-volume-nfs-2-0" Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.356538 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/8c912ca5-a82b-4083-8579-f0f6f506eebb-dev\") pod \"cinder-volume-nfs-0\" (UID: \"8c912ca5-a82b-4083-8579-f0f6f506eebb\") " pod="openstack/cinder-volume-nfs-0" Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.356702 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/8c912ca5-a82b-4083-8579-f0f6f506eebb-var-locks-cinder\") pod \"cinder-volume-nfs-0\" (UID: \"8c912ca5-a82b-4083-8579-f0f6f506eebb\") " pod="openstack/cinder-volume-nfs-0" Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.356759 4881 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/112f53db-2aaa-4a3d-bc89-fd86952639ab-var-locks-cinder\") pod \"cinder-volume-nfs-2-0\" (UID: \"112f53db-2aaa-4a3d-bc89-fd86952639ab\") " pod="openstack/cinder-volume-nfs-2-0" Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.357563 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/8c912ca5-a82b-4083-8579-f0f6f506eebb-run\") pod \"cinder-volume-nfs-0\" (UID: \"8c912ca5-a82b-4083-8579-f0f6f506eebb\") " pod="openstack/cinder-volume-nfs-0" Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.357838 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/112f53db-2aaa-4a3d-bc89-fd86952639ab-var-lib-cinder\") pod \"cinder-volume-nfs-2-0\" (UID: \"112f53db-2aaa-4a3d-bc89-fd86952639ab\") " pod="openstack/cinder-volume-nfs-2-0" Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.357885 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/8c912ca5-a82b-4083-8579-f0f6f506eebb-etc-nvme\") pod \"cinder-volume-nfs-0\" (UID: \"8c912ca5-a82b-4083-8579-f0f6f506eebb\") " pod="openstack/cinder-volume-nfs-0" Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.358911 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/112f53db-2aaa-4a3d-bc89-fd86952639ab-dev\") pod \"cinder-volume-nfs-2-0\" (UID: \"112f53db-2aaa-4a3d-bc89-fd86952639ab\") " pod="openstack/cinder-volume-nfs-2-0" Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.360304 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/112f53db-2aaa-4a3d-bc89-fd86952639ab-config-data\") pod \"cinder-volume-nfs-2-0\" (UID: \"112f53db-2aaa-4a3d-bc89-fd86952639ab\") " pod="openstack/cinder-volume-nfs-2-0" Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.360449 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/112f53db-2aaa-4a3d-bc89-fd86952639ab-config-data-custom\") pod \"cinder-volume-nfs-2-0\" (UID: \"112f53db-2aaa-4a3d-bc89-fd86952639ab\") " pod="openstack/cinder-volume-nfs-2-0" Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.360597 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/112f53db-2aaa-4a3d-bc89-fd86952639ab-combined-ca-bundle\") pod \"cinder-volume-nfs-2-0\" (UID: \"112f53db-2aaa-4a3d-bc89-fd86952639ab\") " pod="openstack/cinder-volume-nfs-2-0" Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.361317 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8c912ca5-a82b-4083-8579-f0f6f506eebb-config-data-custom\") pod \"cinder-volume-nfs-0\" (UID: \"8c912ca5-a82b-4083-8579-f0f6f506eebb\") " pod="openstack/cinder-volume-nfs-0" Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.361505 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/112f53db-2aaa-4a3d-bc89-fd86952639ab-scripts\") pod \"cinder-volume-nfs-2-0\" (UID: \"112f53db-2aaa-4a3d-bc89-fd86952639ab\") " pod="openstack/cinder-volume-nfs-2-0" Jan 21 11:51:01 crc 
kubenswrapper[4881]: I0121 11:51:01.362164 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8c912ca5-a82b-4083-8579-f0f6f506eebb-config-data\") pod \"cinder-volume-nfs-0\" (UID: \"8c912ca5-a82b-4083-8579-f0f6f506eebb\") " pod="openstack/cinder-volume-nfs-0" Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.362484 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c912ca5-a82b-4083-8579-f0f6f506eebb-combined-ca-bundle\") pod \"cinder-volume-nfs-0\" (UID: \"8c912ca5-a82b-4083-8579-f0f6f506eebb\") " pod="openstack/cinder-volume-nfs-0" Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.362487 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8c912ca5-a82b-4083-8579-f0f6f506eebb-scripts\") pod \"cinder-volume-nfs-0\" (UID: \"8c912ca5-a82b-4083-8579-f0f6f506eebb\") " pod="openstack/cinder-volume-nfs-0" Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.372002 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7qv6v\" (UniqueName: \"kubernetes.io/projected/8c912ca5-a82b-4083-8579-f0f6f506eebb-kube-api-access-7qv6v\") pod \"cinder-volume-nfs-0\" (UID: \"8c912ca5-a82b-4083-8579-f0f6f506eebb\") " pod="openstack/cinder-volume-nfs-0" Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.372480 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-backup-0" Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.381328 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4pf44\" (UniqueName: \"kubernetes.io/projected/112f53db-2aaa-4a3d-bc89-fd86952639ab-kube-api-access-4pf44\") pod \"cinder-volume-nfs-2-0\" (UID: \"112f53db-2aaa-4a3d-bc89-fd86952639ab\") " pod="openstack/cinder-volume-nfs-2-0" Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.525922 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-volume-nfs-0" Jan 21 11:51:01 crc kubenswrapper[4881]: I0121 11:51:01.535815 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-volume-nfs-2-0" Jan 21 11:51:02 crc kubenswrapper[4881]: I0121 11:51:02.036880 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-backup-0"] Jan 21 11:51:02 crc kubenswrapper[4881]: I0121 11:51:02.040613 4881 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 11:51:02 crc kubenswrapper[4881]: I0121 11:51:02.175093 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"306aceba-6a20-4b47-a19a-fb193a27e2bd","Type":"ContainerStarted","Data":"88e62150086ddc64733a5fbe0b1661bba3ff3d3940cf9e954f6c44084e9add0d"} Jan 21 11:51:02 crc kubenswrapper[4881]: I0121 11:51:02.245069 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-volume-nfs-0"] Jan 21 11:51:03 crc kubenswrapper[4881]: I0121 11:51:03.215069 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-nfs-0" event={"ID":"8c912ca5-a82b-4083-8579-f0f6f506eebb","Type":"ContainerStarted","Data":"c64419b0f7588b60b091afdc05906d8f4c63760c6fa6bf5710b9012941fc09e2"} Jan 21 11:51:03 crc kubenswrapper[4881]: I0121 11:51:03.335057 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-volume-nfs-2-0"] Jan 21 11:51:04 crc kubenswrapper[4881]: I0121 11:51:04.226911 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-nfs-0" event={"ID":"8c912ca5-a82b-4083-8579-f0f6f506eebb","Type":"ContainerStarted","Data":"7e23f895a3e3240ba0d64f3af69bd387fa9627ecbd8f77e31aedca0cbe2abfd1"} Jan 21 11:51:04 crc kubenswrapper[4881]: I0121 11:51:04.227450 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-nfs-0" event={"ID":"8c912ca5-a82b-4083-8579-f0f6f506eebb","Type":"ContainerStarted","Data":"82b1ddd2a7192aecee0cb6c979adac6a1822ef5362bcd3ed72cefa8f4fb43255"} Jan 21 11:51:04 crc kubenswrapper[4881]: I0121 11:51:04.229378 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-nfs-2-0" event={"ID":"112f53db-2aaa-4a3d-bc89-fd86952639ab","Type":"ContainerStarted","Data":"c242f7d24b426e3c7f6e8f921fcccde19ffb9e0c2de9853a8b6dab2745aecbe9"} Jan 21 11:51:04 crc kubenswrapper[4881]: I0121 11:51:04.229447 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-nfs-2-0" event={"ID":"112f53db-2aaa-4a3d-bc89-fd86952639ab","Type":"ContainerStarted","Data":"8e4078cf91b68cbdd344bf8cec14191a784934146e78034db355bd0ce3c45085"} Jan 21 11:51:04 crc kubenswrapper[4881]: I0121 11:51:04.229462 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-nfs-2-0" event={"ID":"112f53db-2aaa-4a3d-bc89-fd86952639ab","Type":"ContainerStarted","Data":"0f9c7c8e501d39bc5e4aeb520b3757995e1277aec6df6917f4ddf1ff65a1a031"} Jan 21 11:51:04 crc kubenswrapper[4881]: I0121 11:51:04.231647 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"306aceba-6a20-4b47-a19a-fb193a27e2bd","Type":"ContainerStarted","Data":"376a801a5f723a90aca788b2db2d06aceabf31e9141502d8dcbce2528567a939"} Jan 21 11:51:04 crc kubenswrapper[4881]: I0121 11:51:04.231686 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"306aceba-6a20-4b47-a19a-fb193a27e2bd","Type":"ContainerStarted","Data":"bb24a1186fc46593e1f17b841ada4b3372147ce1e352d15abdbc3cb14e043eb9"} Jan 21 11:51:04 crc kubenswrapper[4881]: I0121 11:51:04.275569 4881 pod_startup_latency_tracker.go:104] "Observed 
pod startup duration" pod="openstack/cinder-volume-nfs-0" podStartSLOduration=2.327771643 podStartE2EDuration="3.275553197s" podCreationTimestamp="2026-01-21 11:51:01 +0000 UTC" firstStartedPulling="2026-01-21 11:51:02.285290028 +0000 UTC m=+3249.545246497" lastFinishedPulling="2026-01-21 11:51:03.233071582 +0000 UTC m=+3250.493028051" observedRunningTime="2026-01-21 11:51:04.270644407 +0000 UTC m=+3251.530600876" watchObservedRunningTime="2026-01-21 11:51:04.275553197 +0000 UTC m=+3251.535509666" Jan 21 11:51:04 crc kubenswrapper[4881]: I0121 11:51:04.307748 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-volume-nfs-2-0" podStartSLOduration=3.30773298 podStartE2EDuration="3.30773298s" podCreationTimestamp="2026-01-21 11:51:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 11:51:04.30406157 +0000 UTC m=+3251.564018029" watchObservedRunningTime="2026-01-21 11:51:04.30773298 +0000 UTC m=+3251.567689449" Jan 21 11:51:04 crc kubenswrapper[4881]: I0121 11:51:04.341571 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-backup-0" podStartSLOduration=4.045509104 podStartE2EDuration="4.341551422s" podCreationTimestamp="2026-01-21 11:51:00 +0000 UTC" firstStartedPulling="2026-01-21 11:51:02.040334937 +0000 UTC m=+3249.300291406" lastFinishedPulling="2026-01-21 11:51:02.336377255 +0000 UTC m=+3249.596333724" observedRunningTime="2026-01-21 11:51:04.330777408 +0000 UTC m=+3251.590733877" watchObservedRunningTime="2026-01-21 11:51:04.341551422 +0000 UTC m=+3251.601507881" Jan 21 11:51:06 crc kubenswrapper[4881]: I0121 11:51:06.373608 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-backup-0" Jan 21 11:51:06 crc kubenswrapper[4881]: I0121 11:51:06.526162 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-volume-nfs-0" Jan 21 11:51:06 crc kubenswrapper[4881]: I0121 11:51:06.537112 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-volume-nfs-2-0" Jan 21 11:51:07 crc kubenswrapper[4881]: I0121 11:51:07.447865 4881 scope.go:117] "RemoveContainer" containerID="d8ad2c92af12e692917f97f6e76a6242fb5c00dc2c38dcea5e2ce39cd5dfeb57" Jan 21 11:51:07 crc kubenswrapper[4881]: E0121 11:51:07.448355 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 11:51:11 crc kubenswrapper[4881]: I0121 11:51:11.605604 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-backup-0" Jan 21 11:51:11 crc kubenswrapper[4881]: I0121 11:51:11.709742 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-volume-nfs-2-0" Jan 21 11:51:11 crc kubenswrapper[4881]: I0121 11:51:11.778242 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-volume-nfs-0" Jan 21 11:51:20 crc kubenswrapper[4881]: I0121 11:51:20.311577 4881 scope.go:117] "RemoveContainer" 
containerID="d8ad2c92af12e692917f97f6e76a6242fb5c00dc2c38dcea5e2ce39cd5dfeb57" Jan 21 11:51:20 crc kubenswrapper[4881]: E0121 11:51:20.312950 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 11:51:35 crc kubenswrapper[4881]: I0121 11:51:35.310872 4881 scope.go:117] "RemoveContainer" containerID="d8ad2c92af12e692917f97f6e76a6242fb5c00dc2c38dcea5e2ce39cd5dfeb57" Jan 21 11:51:35 crc kubenswrapper[4881]: E0121 11:51:35.311729 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 11:51:50 crc kubenswrapper[4881]: I0121 11:51:50.311295 4881 scope.go:117] "RemoveContainer" containerID="d8ad2c92af12e692917f97f6e76a6242fb5c00dc2c38dcea5e2ce39cd5dfeb57" Jan 21 11:51:50 crc kubenswrapper[4881]: E0121 11:51:50.313141 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 11:52:05 crc kubenswrapper[4881]: I0121 11:52:05.311367 4881 scope.go:117] "RemoveContainer" containerID="d8ad2c92af12e692917f97f6e76a6242fb5c00dc2c38dcea5e2ce39cd5dfeb57" Jan 21 11:52:05 crc kubenswrapper[4881]: E0121 11:52:05.312394 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 11:52:07 crc kubenswrapper[4881]: I0121 11:52:07.447083 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 21 11:52:07 crc kubenswrapper[4881]: I0121 11:52:07.447757 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="c5ae3126-d6d3-4268-8e35-e216eabcc6f4" containerName="prometheus" containerID="cri-o://8325ef681bcdbc9f213b1b50d5070cda09f322843e0e7d334a000739ac240fa4" gracePeriod=600 Jan 21 11:52:07 crc kubenswrapper[4881]: I0121 11:52:07.447937 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="c5ae3126-d6d3-4268-8e35-e216eabcc6f4" containerName="thanos-sidecar" containerID="cri-o://c140acf6f14058c82c2022005acd28d679f35f983dc5582ed33c0dd219896e01" gracePeriod=600 Jan 21 11:52:07 crc kubenswrapper[4881]: I0121 11:52:07.448004 4881 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="c5ae3126-d6d3-4268-8e35-e216eabcc6f4" containerName="config-reloader" containerID="cri-o://ef9d78c9c5e22c01f5e8274cad9637d465377b5339dc20fcbf444a1190841bcb" gracePeriod=600 Jan 21 11:52:07 crc kubenswrapper[4881]: E0121 11:52:07.602934 4881 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc5ae3126_d6d3_4268_8e35_e216eabcc6f4.slice/crio-conmon-c140acf6f14058c82c2022005acd28d679f35f983dc5582ed33c0dd219896e01.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc5ae3126_d6d3_4268_8e35_e216eabcc6f4.slice/crio-c140acf6f14058c82c2022005acd28d679f35f983dc5582ed33c0dd219896e01.scope\": RecentStats: unable to find data in memory cache]" Jan 21 11:52:08 crc kubenswrapper[4881]: I0121 11:52:08.443267 4881 generic.go:334] "Generic (PLEG): container finished" podID="c5ae3126-d6d3-4268-8e35-e216eabcc6f4" containerID="c140acf6f14058c82c2022005acd28d679f35f983dc5582ed33c0dd219896e01" exitCode=0 Jan 21 11:52:08 crc kubenswrapper[4881]: I0121 11:52:08.443753 4881 generic.go:334] "Generic (PLEG): container finished" podID="c5ae3126-d6d3-4268-8e35-e216eabcc6f4" containerID="ef9d78c9c5e22c01f5e8274cad9637d465377b5339dc20fcbf444a1190841bcb" exitCode=0 Jan 21 11:52:08 crc kubenswrapper[4881]: I0121 11:52:08.443763 4881 generic.go:334] "Generic (PLEG): container finished" podID="c5ae3126-d6d3-4268-8e35-e216eabcc6f4" containerID="8325ef681bcdbc9f213b1b50d5070cda09f322843e0e7d334a000739ac240fa4" exitCode=0 Jan 21 11:52:08 crc kubenswrapper[4881]: I0121 11:52:08.444019 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"c5ae3126-d6d3-4268-8e35-e216eabcc6f4","Type":"ContainerDied","Data":"c140acf6f14058c82c2022005acd28d679f35f983dc5582ed33c0dd219896e01"} Jan 21 11:52:08 crc kubenswrapper[4881]: I0121 11:52:08.444053 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"c5ae3126-d6d3-4268-8e35-e216eabcc6f4","Type":"ContainerDied","Data":"ef9d78c9c5e22c01f5e8274cad9637d465377b5339dc20fcbf444a1190841bcb"} Jan 21 11:52:08 crc kubenswrapper[4881]: I0121 11:52:08.444064 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"c5ae3126-d6d3-4268-8e35-e216eabcc6f4","Type":"ContainerDied","Data":"8325ef681bcdbc9f213b1b50d5070cda09f322843e0e7d334a000739ac240fa4"} Jan 21 11:52:08 crc kubenswrapper[4881]: I0121 11:52:08.444075 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"c5ae3126-d6d3-4268-8e35-e216eabcc6f4","Type":"ContainerDied","Data":"044ed91f90f2699cb0b2df7171e316d9c18fb8084140392d8cb4307802d39a3c"} Jan 21 11:52:08 crc kubenswrapper[4881]: I0121 11:52:08.444084 4881 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="044ed91f90f2699cb0b2df7171e316d9c18fb8084140392d8cb4307802d39a3c" Jan 21 11:52:08 crc kubenswrapper[4881]: I0121 11:52:08.518559 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 21 11:52:08 crc kubenswrapper[4881]: I0121 11:52:08.693760 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/c5ae3126-d6d3-4268-8e35-e216eabcc6f4-prometheus-metric-storage-rulefiles-2\") pod \"c5ae3126-d6d3-4268-8e35-e216eabcc6f4\" (UID: \"c5ae3126-d6d3-4268-8e35-e216eabcc6f4\") " Jan 21 11:52:08 crc kubenswrapper[4881]: I0121 11:52:08.693973 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/c5ae3126-d6d3-4268-8e35-e216eabcc6f4-thanos-prometheus-http-client-file\") pod \"c5ae3126-d6d3-4268-8e35-e216eabcc6f4\" (UID: \"c5ae3126-d6d3-4268-8e35-e216eabcc6f4\") " Jan 21 11:52:08 crc kubenswrapper[4881]: I0121 11:52:08.694004 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d9ng7\" (UniqueName: \"kubernetes.io/projected/c5ae3126-d6d3-4268-8e35-e216eabcc6f4-kube-api-access-d9ng7\") pod \"c5ae3126-d6d3-4268-8e35-e216eabcc6f4\" (UID: \"c5ae3126-d6d3-4268-8e35-e216eabcc6f4\") " Jan 21 11:52:08 crc kubenswrapper[4881]: I0121 11:52:08.694030 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/c5ae3126-d6d3-4268-8e35-e216eabcc6f4-prometheus-metric-storage-rulefiles-0\") pod \"c5ae3126-d6d3-4268-8e35-e216eabcc6f4\" (UID: \"c5ae3126-d6d3-4268-8e35-e216eabcc6f4\") " Jan 21 11:52:08 crc kubenswrapper[4881]: I0121 11:52:08.694066 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/c5ae3126-d6d3-4268-8e35-e216eabcc6f4-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"c5ae3126-d6d3-4268-8e35-e216eabcc6f4\" (UID: \"c5ae3126-d6d3-4268-8e35-e216eabcc6f4\") " Jan 21 11:52:08 crc kubenswrapper[4881]: I0121 11:52:08.694315 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c5ae3126-d6d3-4268-8e35-e216eabcc6f4-prometheus-metric-storage-rulefiles-2" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-2") pod "c5ae3126-d6d3-4268-8e35-e216eabcc6f4" (UID: "c5ae3126-d6d3-4268-8e35-e216eabcc6f4"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-2". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:52:08 crc kubenswrapper[4881]: I0121 11:52:08.694653 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c5ae3126-d6d3-4268-8e35-e216eabcc6f4-prometheus-metric-storage-rulefiles-0" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-0") pod "c5ae3126-d6d3-4268-8e35-e216eabcc6f4" (UID: "c5ae3126-d6d3-4268-8e35-e216eabcc6f4"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:52:08 crc kubenswrapper[4881]: I0121 11:52:08.694904 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-db\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c8add5c8-5d24-439f-b2da-5ddabecb671a\") pod \"c5ae3126-d6d3-4268-8e35-e216eabcc6f4\" (UID: \"c5ae3126-d6d3-4268-8e35-e216eabcc6f4\") " Jan 21 11:52:08 crc kubenswrapper[4881]: I0121 11:52:08.695021 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/c5ae3126-d6d3-4268-8e35-e216eabcc6f4-config\") pod \"c5ae3126-d6d3-4268-8e35-e216eabcc6f4\" (UID: \"c5ae3126-d6d3-4268-8e35-e216eabcc6f4\") " Jan 21 11:52:08 crc kubenswrapper[4881]: I0121 11:52:08.695086 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/c5ae3126-d6d3-4268-8e35-e216eabcc6f4-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"c5ae3126-d6d3-4268-8e35-e216eabcc6f4\" (UID: \"c5ae3126-d6d3-4268-8e35-e216eabcc6f4\") " Jan 21 11:52:08 crc kubenswrapper[4881]: I0121 11:52:08.695129 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/c5ae3126-d6d3-4268-8e35-e216eabcc6f4-config-out\") pod \"c5ae3126-d6d3-4268-8e35-e216eabcc6f4\" (UID: \"c5ae3126-d6d3-4268-8e35-e216eabcc6f4\") " Jan 21 11:52:08 crc kubenswrapper[4881]: I0121 11:52:08.695163 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/c5ae3126-d6d3-4268-8e35-e216eabcc6f4-tls-assets\") pod \"c5ae3126-d6d3-4268-8e35-e216eabcc6f4\" (UID: \"c5ae3126-d6d3-4268-8e35-e216eabcc6f4\") " Jan 21 11:52:08 crc kubenswrapper[4881]: I0121 11:52:08.695200 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/c5ae3126-d6d3-4268-8e35-e216eabcc6f4-prometheus-metric-storage-rulefiles-1\") pod \"c5ae3126-d6d3-4268-8e35-e216eabcc6f4\" (UID: \"c5ae3126-d6d3-4268-8e35-e216eabcc6f4\") " Jan 21 11:52:08 crc kubenswrapper[4881]: I0121 11:52:08.695222 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/c5ae3126-d6d3-4268-8e35-e216eabcc6f4-web-config\") pod \"c5ae3126-d6d3-4268-8e35-e216eabcc6f4\" (UID: \"c5ae3126-d6d3-4268-8e35-e216eabcc6f4\") " Jan 21 11:52:08 crc kubenswrapper[4881]: I0121 11:52:08.695252 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5ae3126-d6d3-4268-8e35-e216eabcc6f4-secret-combined-ca-bundle\") pod \"c5ae3126-d6d3-4268-8e35-e216eabcc6f4\" (UID: \"c5ae3126-d6d3-4268-8e35-e216eabcc6f4\") " Jan 21 11:52:08 crc kubenswrapper[4881]: I0121 11:52:08.695676 4881 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/c5ae3126-d6d3-4268-8e35-e216eabcc6f4-prometheus-metric-storage-rulefiles-2\") on node \"crc\" DevicePath \"\"" Jan 21 11:52:08 crc kubenswrapper[4881]: I0121 11:52:08.695704 4881 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: 
\"kubernetes.io/configmap/c5ae3126-d6d3-4268-8e35-e216eabcc6f4-prometheus-metric-storage-rulefiles-0\") on node \"crc\" DevicePath \"\"" Jan 21 11:52:08 crc kubenswrapper[4881]: I0121 11:52:08.698569 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c5ae3126-d6d3-4268-8e35-e216eabcc6f4-prometheus-metric-storage-rulefiles-1" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-1") pod "c5ae3126-d6d3-4268-8e35-e216eabcc6f4" (UID: "c5ae3126-d6d3-4268-8e35-e216eabcc6f4"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-1". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:52:08 crc kubenswrapper[4881]: I0121 11:52:08.701541 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5ae3126-d6d3-4268-8e35-e216eabcc6f4-kube-api-access-d9ng7" (OuterVolumeSpecName: "kube-api-access-d9ng7") pod "c5ae3126-d6d3-4268-8e35-e216eabcc6f4" (UID: "c5ae3126-d6d3-4268-8e35-e216eabcc6f4"). InnerVolumeSpecName "kube-api-access-d9ng7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:52:08 crc kubenswrapper[4881]: I0121 11:52:08.701578 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5ae3126-d6d3-4268-8e35-e216eabcc6f4-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d" (OuterVolumeSpecName: "web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d") pod "c5ae3126-d6d3-4268-8e35-e216eabcc6f4" (UID: "c5ae3126-d6d3-4268-8e35-e216eabcc6f4"). InnerVolumeSpecName "web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:52:08 crc kubenswrapper[4881]: I0121 11:52:08.702088 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5ae3126-d6d3-4268-8e35-e216eabcc6f4-secret-combined-ca-bundle" (OuterVolumeSpecName: "secret-combined-ca-bundle") pod "c5ae3126-d6d3-4268-8e35-e216eabcc6f4" (UID: "c5ae3126-d6d3-4268-8e35-e216eabcc6f4"). InnerVolumeSpecName "secret-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:52:08 crc kubenswrapper[4881]: I0121 11:52:08.703317 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5ae3126-d6d3-4268-8e35-e216eabcc6f4-thanos-prometheus-http-client-file" (OuterVolumeSpecName: "thanos-prometheus-http-client-file") pod "c5ae3126-d6d3-4268-8e35-e216eabcc6f4" (UID: "c5ae3126-d6d3-4268-8e35-e216eabcc6f4"). InnerVolumeSpecName "thanos-prometheus-http-client-file". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:52:08 crc kubenswrapper[4881]: I0121 11:52:08.704327 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5ae3126-d6d3-4268-8e35-e216eabcc6f4-config" (OuterVolumeSpecName: "config") pod "c5ae3126-d6d3-4268-8e35-e216eabcc6f4" (UID: "c5ae3126-d6d3-4268-8e35-e216eabcc6f4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:52:08 crc kubenswrapper[4881]: I0121 11:52:08.705049 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5ae3126-d6d3-4268-8e35-e216eabcc6f4-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d" (OuterVolumeSpecName: "web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d") pod "c5ae3126-d6d3-4268-8e35-e216eabcc6f4" (UID: "c5ae3126-d6d3-4268-8e35-e216eabcc6f4"). 
InnerVolumeSpecName "web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:52:08 crc kubenswrapper[4881]: I0121 11:52:08.705704 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5ae3126-d6d3-4268-8e35-e216eabcc6f4-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "c5ae3126-d6d3-4268-8e35-e216eabcc6f4" (UID: "c5ae3126-d6d3-4268-8e35-e216eabcc6f4"). InnerVolumeSpecName "tls-assets". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:52:08 crc kubenswrapper[4881]: I0121 11:52:08.723931 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c5ae3126-d6d3-4268-8e35-e216eabcc6f4-config-out" (OuterVolumeSpecName: "config-out") pod "c5ae3126-d6d3-4268-8e35-e216eabcc6f4" (UID: "c5ae3126-d6d3-4268-8e35-e216eabcc6f4"). InnerVolumeSpecName "config-out". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 11:52:08 crc kubenswrapper[4881]: I0121 11:52:08.782760 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c8add5c8-5d24-439f-b2da-5ddabecb671a" (OuterVolumeSpecName: "prometheus-metric-storage-db") pod "c5ae3126-d6d3-4268-8e35-e216eabcc6f4" (UID: "c5ae3126-d6d3-4268-8e35-e216eabcc6f4"). InnerVolumeSpecName "pvc-c8add5c8-5d24-439f-b2da-5ddabecb671a". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 21 11:52:08 crc kubenswrapper[4881]: I0121 11:52:08.798306 4881 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/c5ae3126-d6d3-4268-8e35-e216eabcc6f4-prometheus-metric-storage-rulefiles-1\") on node \"crc\" DevicePath \"\"" Jan 21 11:52:08 crc kubenswrapper[4881]: I0121 11:52:08.798362 4881 reconciler_common.go:293] "Volume detached for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5ae3126-d6d3-4268-8e35-e216eabcc6f4-secret-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 11:52:08 crc kubenswrapper[4881]: I0121 11:52:08.798381 4881 reconciler_common.go:293] "Volume detached for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/c5ae3126-d6d3-4268-8e35-e216eabcc6f4-thanos-prometheus-http-client-file\") on node \"crc\" DevicePath \"\"" Jan 21 11:52:08 crc kubenswrapper[4881]: I0121 11:52:08.798403 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d9ng7\" (UniqueName: \"kubernetes.io/projected/c5ae3126-d6d3-4268-8e35-e216eabcc6f4-kube-api-access-d9ng7\") on node \"crc\" DevicePath \"\"" Jan 21 11:52:08 crc kubenswrapper[4881]: I0121 11:52:08.798423 4881 reconciler_common.go:293] "Volume detached for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/c5ae3126-d6d3-4268-8e35-e216eabcc6f4-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") on node \"crc\" DevicePath \"\"" Jan 21 11:52:08 crc kubenswrapper[4881]: I0121 11:52:08.798476 4881 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-c8add5c8-5d24-439f-b2da-5ddabecb671a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c8add5c8-5d24-439f-b2da-5ddabecb671a\") on node \"crc\" " Jan 21 11:52:08 crc kubenswrapper[4881]: I0121 11:52:08.798495 4881 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/secret/c5ae3126-d6d3-4268-8e35-e216eabcc6f4-config\") on node \"crc\" DevicePath \"\"" Jan 21 11:52:08 crc kubenswrapper[4881]: I0121 11:52:08.798514 4881 reconciler_common.go:293] "Volume detached for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/c5ae3126-d6d3-4268-8e35-e216eabcc6f4-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") on node \"crc\" DevicePath \"\"" Jan 21 11:52:08 crc kubenswrapper[4881]: I0121 11:52:08.798533 4881 reconciler_common.go:293] "Volume detached for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/c5ae3126-d6d3-4268-8e35-e216eabcc6f4-config-out\") on node \"crc\" DevicePath \"\"" Jan 21 11:52:08 crc kubenswrapper[4881]: I0121 11:52:08.798550 4881 reconciler_common.go:293] "Volume detached for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/c5ae3126-d6d3-4268-8e35-e216eabcc6f4-tls-assets\") on node \"crc\" DevicePath \"\"" Jan 21 11:52:08 crc kubenswrapper[4881]: I0121 11:52:08.822419 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5ae3126-d6d3-4268-8e35-e216eabcc6f4-web-config" (OuterVolumeSpecName: "web-config") pod "c5ae3126-d6d3-4268-8e35-e216eabcc6f4" (UID: "c5ae3126-d6d3-4268-8e35-e216eabcc6f4"). InnerVolumeSpecName "web-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:52:08 crc kubenswrapper[4881]: I0121 11:52:08.850859 4881 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Jan 21 11:52:08 crc kubenswrapper[4881]: I0121 11:52:08.851283 4881 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-c8add5c8-5d24-439f-b2da-5ddabecb671a" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c8add5c8-5d24-439f-b2da-5ddabecb671a") on node "crc" Jan 21 11:52:08 crc kubenswrapper[4881]: I0121 11:52:08.900933 4881 reconciler_common.go:293] "Volume detached for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/c5ae3126-d6d3-4268-8e35-e216eabcc6f4-web-config\") on node \"crc\" DevicePath \"\"" Jan 21 11:52:08 crc kubenswrapper[4881]: I0121 11:52:08.901200 4881 reconciler_common.go:293] "Volume detached for volume \"pvc-c8add5c8-5d24-439f-b2da-5ddabecb671a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c8add5c8-5d24-439f-b2da-5ddabecb671a\") on node \"crc\" DevicePath \"\"" Jan 21 11:52:09 crc kubenswrapper[4881]: I0121 11:52:09.461814 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 21 11:52:09 crc kubenswrapper[4881]: I0121 11:52:09.496833 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 21 11:52:09 crc kubenswrapper[4881]: I0121 11:52:09.505457 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 21 11:52:09 crc kubenswrapper[4881]: I0121 11:52:09.539489 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 21 11:52:09 crc kubenswrapper[4881]: E0121 11:52:09.541117 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5ae3126-d6d3-4268-8e35-e216eabcc6f4" containerName="init-config-reloader" Jan 21 11:52:09 crc kubenswrapper[4881]: I0121 11:52:09.541239 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5ae3126-d6d3-4268-8e35-e216eabcc6f4" containerName="init-config-reloader" Jan 21 11:52:09 crc kubenswrapper[4881]: E0121 11:52:09.541342 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5ae3126-d6d3-4268-8e35-e216eabcc6f4" containerName="prometheus" Jan 21 11:52:09 crc kubenswrapper[4881]: I0121 11:52:09.541432 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5ae3126-d6d3-4268-8e35-e216eabcc6f4" containerName="prometheus" Jan 21 11:52:09 crc kubenswrapper[4881]: E0121 11:52:09.541502 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5ae3126-d6d3-4268-8e35-e216eabcc6f4" containerName="config-reloader" Jan 21 11:52:09 crc kubenswrapper[4881]: I0121 11:52:09.541558 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5ae3126-d6d3-4268-8e35-e216eabcc6f4" containerName="config-reloader" Jan 21 11:52:09 crc kubenswrapper[4881]: E0121 11:52:09.541648 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5ae3126-d6d3-4268-8e35-e216eabcc6f4" containerName="thanos-sidecar" Jan 21 11:52:09 crc kubenswrapper[4881]: I0121 11:52:09.541837 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5ae3126-d6d3-4268-8e35-e216eabcc6f4" containerName="thanos-sidecar" Jan 21 11:52:09 crc kubenswrapper[4881]: I0121 11:52:09.542186 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="c5ae3126-d6d3-4268-8e35-e216eabcc6f4" containerName="prometheus" Jan 21 11:52:09 crc kubenswrapper[4881]: I0121 11:52:09.542304 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="c5ae3126-d6d3-4268-8e35-e216eabcc6f4" containerName="thanos-sidecar" Jan 21 11:52:09 crc kubenswrapper[4881]: I0121 11:52:09.542374 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="c5ae3126-d6d3-4268-8e35-e216eabcc6f4" containerName="config-reloader" Jan 21 11:52:09 crc kubenswrapper[4881]: I0121 11:52:09.544649 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 21 11:52:09 crc kubenswrapper[4881]: I0121 11:52:09.547390 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2" Jan 21 11:52:09 crc kubenswrapper[4881]: I0121 11:52:09.547434 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Jan 21 11:52:09 crc kubenswrapper[4881]: I0121 11:52:09.547774 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1" Jan 21 11:52:09 crc kubenswrapper[4881]: I0121 11:52:09.547989 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Jan 21 11:52:09 crc kubenswrapper[4881]: I0121 11:52:09.548360 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Jan 21 11:52:09 crc kubenswrapper[4881]: I0121 11:52:09.548583 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-jwvdx" Jan 21 11:52:09 crc kubenswrapper[4881]: I0121 11:52:09.550940 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 21 11:52:09 crc kubenswrapper[4881]: I0121 11:52:09.555031 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Jan 21 11:52:09 crc kubenswrapper[4881]: I0121 11:52:09.569860 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Jan 21 11:52:09 crc kubenswrapper[4881]: I0121 11:52:09.904228 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/4a412b1e-29ac-4420-920d-6054e2c03d53-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"4a412b1e-29ac-4420-920d-6054e2c03d53\") " pod="openstack/prometheus-metric-storage-0" Jan 21 11:52:09 crc kubenswrapper[4881]: I0121 11:52:09.904308 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nbc92\" (UniqueName: \"kubernetes.io/projected/4a412b1e-29ac-4420-920d-6054e2c03d53-kube-api-access-nbc92\") pod \"prometheus-metric-storage-0\" (UID: \"4a412b1e-29ac-4420-920d-6054e2c03d53\") " pod="openstack/prometheus-metric-storage-0" Jan 21 11:52:09 crc kubenswrapper[4881]: I0121 11:52:09.904343 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/4a412b1e-29ac-4420-920d-6054e2c03d53-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"4a412b1e-29ac-4420-920d-6054e2c03d53\") " pod="openstack/prometheus-metric-storage-0" Jan 21 11:52:09 crc kubenswrapper[4881]: I0121 11:52:09.904388 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/4a412b1e-29ac-4420-920d-6054e2c03d53-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"4a412b1e-29ac-4420-920d-6054e2c03d53\") " 
pod="openstack/prometheus-metric-storage-0" Jan 21 11:52:09 crc kubenswrapper[4881]: I0121 11:52:09.904435 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/4a412b1e-29ac-4420-920d-6054e2c03d53-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"4a412b1e-29ac-4420-920d-6054e2c03d53\") " pod="openstack/prometheus-metric-storage-0" Jan 21 11:52:09 crc kubenswrapper[4881]: I0121 11:52:09.904454 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/4a412b1e-29ac-4420-920d-6054e2c03d53-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"4a412b1e-29ac-4420-920d-6054e2c03d53\") " pod="openstack/prometheus-metric-storage-0" Jan 21 11:52:09 crc kubenswrapper[4881]: I0121 11:52:09.904478 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/4a412b1e-29ac-4420-920d-6054e2c03d53-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"4a412b1e-29ac-4420-920d-6054e2c03d53\") " pod="openstack/prometheus-metric-storage-0" Jan 21 11:52:09 crc kubenswrapper[4881]: I0121 11:52:09.904495 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/4a412b1e-29ac-4420-920d-6054e2c03d53-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"4a412b1e-29ac-4420-920d-6054e2c03d53\") " pod="openstack/prometheus-metric-storage-0" Jan 21 11:52:09 crc kubenswrapper[4881]: I0121 11:52:09.904514 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/4a412b1e-29ac-4420-920d-6054e2c03d53-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"4a412b1e-29ac-4420-920d-6054e2c03d53\") " pod="openstack/prometheus-metric-storage-0" Jan 21 11:52:09 crc kubenswrapper[4881]: I0121 11:52:09.904555 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/4a412b1e-29ac-4420-920d-6054e2c03d53-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"4a412b1e-29ac-4420-920d-6054e2c03d53\") " pod="openstack/prometheus-metric-storage-0" Jan 21 11:52:09 crc kubenswrapper[4881]: I0121 11:52:09.904581 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a412b1e-29ac-4420-920d-6054e2c03d53-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"4a412b1e-29ac-4420-920d-6054e2c03d53\") " pod="openstack/prometheus-metric-storage-0" Jan 21 11:52:09 crc kubenswrapper[4881]: I0121 11:52:09.904626 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/4a412b1e-29ac-4420-920d-6054e2c03d53-config\") pod \"prometheus-metric-storage-0\" (UID: \"4a412b1e-29ac-4420-920d-6054e2c03d53\") " pod="openstack/prometheus-metric-storage-0" Jan 21 11:52:09 crc kubenswrapper[4881]: I0121 11:52:09.904651 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"pvc-c8add5c8-5d24-439f-b2da-5ddabecb671a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c8add5c8-5d24-439f-b2da-5ddabecb671a\") pod \"prometheus-metric-storage-0\" (UID: \"4a412b1e-29ac-4420-920d-6054e2c03d53\") " pod="openstack/prometheus-metric-storage-0" Jan 21 11:52:10 crc kubenswrapper[4881]: I0121 11:52:10.006828 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/4a412b1e-29ac-4420-920d-6054e2c03d53-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"4a412b1e-29ac-4420-920d-6054e2c03d53\") " pod="openstack/prometheus-metric-storage-0" Jan 21 11:52:10 crc kubenswrapper[4881]: I0121 11:52:10.006898 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a412b1e-29ac-4420-920d-6054e2c03d53-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"4a412b1e-29ac-4420-920d-6054e2c03d53\") " pod="openstack/prometheus-metric-storage-0" Jan 21 11:52:10 crc kubenswrapper[4881]: I0121 11:52:10.006956 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/4a412b1e-29ac-4420-920d-6054e2c03d53-config\") pod \"prometheus-metric-storage-0\" (UID: \"4a412b1e-29ac-4420-920d-6054e2c03d53\") " pod="openstack/prometheus-metric-storage-0" Jan 21 11:52:10 crc kubenswrapper[4881]: I0121 11:52:10.006993 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-c8add5c8-5d24-439f-b2da-5ddabecb671a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c8add5c8-5d24-439f-b2da-5ddabecb671a\") pod \"prometheus-metric-storage-0\" (UID: \"4a412b1e-29ac-4420-920d-6054e2c03d53\") " pod="openstack/prometheus-metric-storage-0" Jan 21 11:52:10 crc kubenswrapper[4881]: I0121 11:52:10.007053 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/4a412b1e-29ac-4420-920d-6054e2c03d53-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"4a412b1e-29ac-4420-920d-6054e2c03d53\") " pod="openstack/prometheus-metric-storage-0" Jan 21 11:52:10 crc kubenswrapper[4881]: I0121 11:52:10.007219 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nbc92\" (UniqueName: \"kubernetes.io/projected/4a412b1e-29ac-4420-920d-6054e2c03d53-kube-api-access-nbc92\") pod \"prometheus-metric-storage-0\" (UID: \"4a412b1e-29ac-4420-920d-6054e2c03d53\") " pod="openstack/prometheus-metric-storage-0" Jan 21 11:52:10 crc kubenswrapper[4881]: I0121 11:52:10.007257 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/4a412b1e-29ac-4420-920d-6054e2c03d53-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"4a412b1e-29ac-4420-920d-6054e2c03d53\") " pod="openstack/prometheus-metric-storage-0" Jan 21 11:52:10 crc kubenswrapper[4881]: I0121 11:52:10.007315 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: 
\"kubernetes.io/configmap/4a412b1e-29ac-4420-920d-6054e2c03d53-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"4a412b1e-29ac-4420-920d-6054e2c03d53\") " pod="openstack/prometheus-metric-storage-0" Jan 21 11:52:10 crc kubenswrapper[4881]: I0121 11:52:10.007372 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/4a412b1e-29ac-4420-920d-6054e2c03d53-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"4a412b1e-29ac-4420-920d-6054e2c03d53\") " pod="openstack/prometheus-metric-storage-0" Jan 21 11:52:10 crc kubenswrapper[4881]: I0121 11:52:10.007399 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/4a412b1e-29ac-4420-920d-6054e2c03d53-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"4a412b1e-29ac-4420-920d-6054e2c03d53\") " pod="openstack/prometheus-metric-storage-0" Jan 21 11:52:10 crc kubenswrapper[4881]: I0121 11:52:10.007430 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/4a412b1e-29ac-4420-920d-6054e2c03d53-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"4a412b1e-29ac-4420-920d-6054e2c03d53\") " pod="openstack/prometheus-metric-storage-0" Jan 21 11:52:10 crc kubenswrapper[4881]: I0121 11:52:10.007451 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/4a412b1e-29ac-4420-920d-6054e2c03d53-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"4a412b1e-29ac-4420-920d-6054e2c03d53\") " pod="openstack/prometheus-metric-storage-0" Jan 21 11:52:10 crc kubenswrapper[4881]: I0121 11:52:10.007475 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/4a412b1e-29ac-4420-920d-6054e2c03d53-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"4a412b1e-29ac-4420-920d-6054e2c03d53\") " pod="openstack/prometheus-metric-storage-0" Jan 21 11:52:10 crc kubenswrapper[4881]: I0121 11:52:10.008758 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/4a412b1e-29ac-4420-920d-6054e2c03d53-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"4a412b1e-29ac-4420-920d-6054e2c03d53\") " pod="openstack/prometheus-metric-storage-0" Jan 21 11:52:10 crc kubenswrapper[4881]: I0121 11:52:10.012009 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/4a412b1e-29ac-4420-920d-6054e2c03d53-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"4a412b1e-29ac-4420-920d-6054e2c03d53\") " pod="openstack/prometheus-metric-storage-0" Jan 21 11:52:10 crc kubenswrapper[4881]: I0121 11:52:10.012186 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/4a412b1e-29ac-4420-920d-6054e2c03d53-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"4a412b1e-29ac-4420-920d-6054e2c03d53\") " pod="openstack/prometheus-metric-storage-0" Jan 21 11:52:10 crc kubenswrapper[4881]: 
I0121 11:52:10.015805 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/4a412b1e-29ac-4420-920d-6054e2c03d53-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"4a412b1e-29ac-4420-920d-6054e2c03d53\") " pod="openstack/prometheus-metric-storage-0" Jan 21 11:52:10 crc kubenswrapper[4881]: I0121 11:52:10.016853 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/4a412b1e-29ac-4420-920d-6054e2c03d53-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"4a412b1e-29ac-4420-920d-6054e2c03d53\") " pod="openstack/prometheus-metric-storage-0" Jan 21 11:52:10 crc kubenswrapper[4881]: I0121 11:52:10.017002 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a412b1e-29ac-4420-920d-6054e2c03d53-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"4a412b1e-29ac-4420-920d-6054e2c03d53\") " pod="openstack/prometheus-metric-storage-0" Jan 21 11:52:10 crc kubenswrapper[4881]: I0121 11:52:10.019495 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/4a412b1e-29ac-4420-920d-6054e2c03d53-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"4a412b1e-29ac-4420-920d-6054e2c03d53\") " pod="openstack/prometheus-metric-storage-0" Jan 21 11:52:10 crc kubenswrapper[4881]: I0121 11:52:10.019509 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/4a412b1e-29ac-4420-920d-6054e2c03d53-config\") pod \"prometheus-metric-storage-0\" (UID: \"4a412b1e-29ac-4420-920d-6054e2c03d53\") " pod="openstack/prometheus-metric-storage-0" Jan 21 11:52:10 crc kubenswrapper[4881]: I0121 11:52:10.020114 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/4a412b1e-29ac-4420-920d-6054e2c03d53-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"4a412b1e-29ac-4420-920d-6054e2c03d53\") " pod="openstack/prometheus-metric-storage-0" Jan 21 11:52:10 crc kubenswrapper[4881]: I0121 11:52:10.020236 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/4a412b1e-29ac-4420-920d-6054e2c03d53-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"4a412b1e-29ac-4420-920d-6054e2c03d53\") " pod="openstack/prometheus-metric-storage-0" Jan 21 11:52:10 crc kubenswrapper[4881]: I0121 11:52:10.028367 4881 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
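
The csi_attacher message just above explains the quick MountDevice success that follows: the kubevirt.io.hostpath-provisioner node plugin does not advertise the CSI STAGE_UNSTAGE_VOLUME capability, so kubelet records MountVolume.MountDevice as trivially succeeded and goes straight to the per-pod NodePublishVolume (the SetUp entries in this log). A minimal sketch of a node server taking that path, using only the public CSI spec bindings; this is illustrative, not the actual hostpath-provisioner source:

// Sketch of why kubelet logs "STAGE_UNSTAGE_VOLUME capability not set.
// Skipping MountDevice...": the CSI node plugin simply omits that
// capability from NodeGetCapabilities. Illustrative only; not the
// actual kubevirt.io.hostpath-provisioner source.
package main

import (
	"context"
	"fmt"

	"github.com/container-storage-interface/spec/lib/go/csi"
)

type nodeServer struct{} // partial sketch; a real driver implements all of csi.NodeServer

// Kubelet's CSI attacher asks for node capabilities before MountDevice.
// An empty list means there is no NodeStageVolume step, so MountDevice is
// recorded as trivially succeeded (the "device mount path" entry below)
// and kubelet proceeds directly to NodePublishVolume, the per-pod SetUp.
func (ns *nodeServer) NodeGetCapabilities(ctx context.Context, req *csi.NodeGetCapabilitiesRequest) (*csi.NodeGetCapabilitiesResponse, error) {
	return &csi.NodeGetCapabilitiesResponse{
		Capabilities: []*csi.NodeServiceCapability{}, // no STAGE_UNSTAGE_VOLUME
	}, nil
}

func main() {
	resp, _ := (&nodeServer{}).NodeGetCapabilities(context.Background(), &csi.NodeGetCapabilitiesRequest{})
	fmt.Println("advertised node capabilities:", len(resp.Capabilities)) // 0
}

With an empty capability list there is no staging step on the global mount path, which is exactly the shortcut the attacher logs here before the pvc-c8add5c8 volume is published into the pod.
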
Jan 21 11:52:10 crc kubenswrapper[4881]: I0121 11:52:10.028400 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/4a412b1e-29ac-4420-920d-6054e2c03d53-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"4a412b1e-29ac-4420-920d-6054e2c03d53\") " pod="openstack/prometheus-metric-storage-0" Jan 21 11:52:10 crc kubenswrapper[4881]: I0121 11:52:10.028425 4881 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-c8add5c8-5d24-439f-b2da-5ddabecb671a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c8add5c8-5d24-439f-b2da-5ddabecb671a\") pod \"prometheus-metric-storage-0\" (UID: \"4a412b1e-29ac-4420-920d-6054e2c03d53\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/3c91253029fdcc57c7bcc13c4ee1dc503079fe71761fa62e5d04837e0b8b075e/globalmount\"" pod="openstack/prometheus-metric-storage-0" Jan 21 11:52:10 crc kubenswrapper[4881]: I0121 11:52:10.031123 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nbc92\" (UniqueName: \"kubernetes.io/projected/4a412b1e-29ac-4420-920d-6054e2c03d53-kube-api-access-nbc92\") pod \"prometheus-metric-storage-0\" (UID: \"4a412b1e-29ac-4420-920d-6054e2c03d53\") " pod="openstack/prometheus-metric-storage-0" Jan 21 11:52:10 crc kubenswrapper[4881]: I0121 11:52:10.072249 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-c8add5c8-5d24-439f-b2da-5ddabecb671a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c8add5c8-5d24-439f-b2da-5ddabecb671a\") pod \"prometheus-metric-storage-0\" (UID: \"4a412b1e-29ac-4420-920d-6054e2c03d53\") " pod="openstack/prometheus-metric-storage-0" Jan 21 11:52:10 crc kubenswrapper[4881]: I0121 11:52:10.163205 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 21 11:52:10 crc kubenswrapper[4881]: I0121 11:52:10.652120 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 21 11:52:11 crc kubenswrapper[4881]: I0121 11:52:11.332155 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c5ae3126-d6d3-4268-8e35-e216eabcc6f4" path="/var/lib/kubelet/pods/c5ae3126-d6d3-4268-8e35-e216eabcc6f4/volumes" Jan 21 11:52:11 crc kubenswrapper[4881]: I0121 11:52:11.487904 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"4a412b1e-29ac-4420-920d-6054e2c03d53","Type":"ContainerStarted","Data":"3a365c1f9c9183115a8cf53d204723967ceea5d6d7c2491eaa0e86e7626daa3d"} Jan 21 11:52:11 crc kubenswrapper[4881]: I0121 11:52:11.511561 4881 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="c5ae3126-d6d3-4268-8e35-e216eabcc6f4" containerName="prometheus" probeResult="failure" output="Get \"https://10.217.0.136:9090/-/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 21 11:52:15 crc kubenswrapper[4881]: I0121 11:52:15.539128 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"4a412b1e-29ac-4420-920d-6054e2c03d53","Type":"ContainerStarted","Data":"6f125e01fd517390d85ac08a2c5ea9d2899034078c9238efd78e6ffb03996ce4"} Jan 21 11:52:20 crc kubenswrapper[4881]: I0121 11:52:20.311777 4881 scope.go:117] "RemoveContainer" containerID="d8ad2c92af12e692917f97f6e76a6242fb5c00dc2c38dcea5e2ce39cd5dfeb57" Jan 21 11:52:20 crc kubenswrapper[4881]: E0121 11:52:20.312668 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 11:52:25 crc kubenswrapper[4881]: I0121 11:52:25.656373 4881 generic.go:334] "Generic (PLEG): container finished" podID="4a412b1e-29ac-4420-920d-6054e2c03d53" containerID="6f125e01fd517390d85ac08a2c5ea9d2899034078c9238efd78e6ffb03996ce4" exitCode=0 Jan 21 11:52:25 crc kubenswrapper[4881]: I0121 11:52:25.656523 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"4a412b1e-29ac-4420-920d-6054e2c03d53","Type":"ContainerDied","Data":"6f125e01fd517390d85ac08a2c5ea9d2899034078c9238efd78e6ffb03996ce4"} Jan 21 11:52:26 crc kubenswrapper[4881]: I0121 11:52:26.668411 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"4a412b1e-29ac-4420-920d-6054e2c03d53","Type":"ContainerStarted","Data":"d69a8d1f17d30ed5c57b5c6613211ee457c89edce7c7ab4c21c2299ff634238c"} Jan 21 11:52:30 crc kubenswrapper[4881]: I0121 11:52:30.716545 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"4a412b1e-29ac-4420-920d-6054e2c03d53","Type":"ContainerStarted","Data":"b322f6587d4d6e6ca2aab444b426a0b2cf8db4e66e633a9150fb6848f18052d2"} Jan 21 11:52:30 crc kubenswrapper[4881]: I0121 11:52:30.717250 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" 
event={"ID":"4a412b1e-29ac-4420-920d-6054e2c03d53","Type":"ContainerStarted","Data":"29ab15283cc0a73140495752a9403292f011cad93a7eba66fb212107581801d4"} Jan 21 11:52:30 crc kubenswrapper[4881]: I0121 11:52:30.758163 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=21.758143825 podStartE2EDuration="21.758143825s" podCreationTimestamp="2026-01-21 11:52:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 11:52:30.75385232 +0000 UTC m=+3338.013808819" watchObservedRunningTime="2026-01-21 11:52:30.758143825 +0000 UTC m=+3338.018100294" Jan 21 11:52:32 crc kubenswrapper[4881]: I0121 11:52:32.216405 4881 scope.go:117] "RemoveContainer" containerID="8325ef681bcdbc9f213b1b50d5070cda09f322843e0e7d334a000739ac240fa4" Jan 21 11:52:32 crc kubenswrapper[4881]: I0121 11:52:32.242893 4881 scope.go:117] "RemoveContainer" containerID="a35359d5b5faf07c0a8496b05737dc67dd3207c714c5cd8b7b98eda3d6b21eb4" Jan 21 11:52:32 crc kubenswrapper[4881]: I0121 11:52:32.272622 4881 scope.go:117] "RemoveContainer" containerID="ef9d78c9c5e22c01f5e8274cad9637d465377b5339dc20fcbf444a1190841bcb" Jan 21 11:52:32 crc kubenswrapper[4881]: I0121 11:52:32.313684 4881 scope.go:117] "RemoveContainer" containerID="c140acf6f14058c82c2022005acd28d679f35f983dc5582ed33c0dd219896e01" Jan 21 11:52:34 crc kubenswrapper[4881]: I0121 11:52:34.311061 4881 scope.go:117] "RemoveContainer" containerID="d8ad2c92af12e692917f97f6e76a6242fb5c00dc2c38dcea5e2ce39cd5dfeb57" Jan 21 11:52:34 crc kubenswrapper[4881]: E0121 11:52:34.311748 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 11:52:35 crc kubenswrapper[4881]: I0121 11:52:35.163614 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Jan 21 11:52:40 crc kubenswrapper[4881]: I0121 11:52:40.164529 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Jan 21 11:52:40 crc kubenswrapper[4881]: I0121 11:52:40.173070 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Jan 21 11:52:40 crc kubenswrapper[4881]: I0121 11:52:40.237444 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Jan 21 11:52:46 crc kubenswrapper[4881]: I0121 11:52:46.310644 4881 scope.go:117] "RemoveContainer" containerID="d8ad2c92af12e692917f97f6e76a6242fb5c00dc2c38dcea5e2ce39cd5dfeb57" Jan 21 11:52:46 crc kubenswrapper[4881]: E0121 11:52:46.311419 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 11:52:52 crc kubenswrapper[4881]: I0121 
11:52:52.668914 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest"] Jan 21 11:52:52 crc kubenswrapper[4881]: I0121 11:52:52.671138 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Jan 21 11:52:52 crc kubenswrapper[4881]: I0121 11:52:52.673342 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s0" Jan 21 11:52:52 crc kubenswrapper[4881]: I0121 11:52:52.673342 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key" Jan 21 11:52:52 crc kubenswrapper[4881]: I0121 11:52:52.673604 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Jan 21 11:52:52 crc kubenswrapper[4881]: I0121 11:52:52.676007 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-sp5k2" Jan 21 11:52:52 crc kubenswrapper[4881]: I0121 11:52:52.685952 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Jan 21 11:52:52 crc kubenswrapper[4881]: I0121 11:52:52.740488 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"tempest-tests-tempest\" (UID: \"b482979e-7a9e-4b89-846c-f50400adcf1b\") " pod="openstack/tempest-tests-tempest" Jan 21 11:52:52 crc kubenswrapper[4881]: I0121 11:52:52.740586 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/b482979e-7a9e-4b89-846c-f50400adcf1b-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"b482979e-7a9e-4b89-846c-f50400adcf1b\") " pod="openstack/tempest-tests-tempest" Jan 21 11:52:52 crc kubenswrapper[4881]: I0121 11:52:52.740632 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b482979e-7a9e-4b89-846c-f50400adcf1b-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"b482979e-7a9e-4b89-846c-f50400adcf1b\") " pod="openstack/tempest-tests-tempest" Jan 21 11:52:52 crc kubenswrapper[4881]: I0121 11:52:52.740725 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/b482979e-7a9e-4b89-846c-f50400adcf1b-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"b482979e-7a9e-4b89-846c-f50400adcf1b\") " pod="openstack/tempest-tests-tempest" Jan 21 11:52:52 crc kubenswrapper[4881]: I0121 11:52:52.740756 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/b482979e-7a9e-4b89-846c-f50400adcf1b-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"b482979e-7a9e-4b89-846c-f50400adcf1b\") " pod="openstack/tempest-tests-tempest" Jan 21 11:52:52 crc kubenswrapper[4881]: I0121 11:52:52.740855 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/b482979e-7a9e-4b89-846c-f50400adcf1b-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"b482979e-7a9e-4b89-846c-f50400adcf1b\") " pod="openstack/tempest-tests-tempest" Jan 21 11:52:52 crc kubenswrapper[4881]: 
I0121 11:52:52.740894 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b482979e-7a9e-4b89-846c-f50400adcf1b-config-data\") pod \"tempest-tests-tempest\" (UID: \"b482979e-7a9e-4b89-846c-f50400adcf1b\") " pod="openstack/tempest-tests-tempest" Jan 21 11:52:52 crc kubenswrapper[4881]: I0121 11:52:52.740992 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nx4hn\" (UniqueName: \"kubernetes.io/projected/b482979e-7a9e-4b89-846c-f50400adcf1b-kube-api-access-nx4hn\") pod \"tempest-tests-tempest\" (UID: \"b482979e-7a9e-4b89-846c-f50400adcf1b\") " pod="openstack/tempest-tests-tempest" Jan 21 11:52:52 crc kubenswrapper[4881]: I0121 11:52:52.741063 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/b482979e-7a9e-4b89-846c-f50400adcf1b-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"b482979e-7a9e-4b89-846c-f50400adcf1b\") " pod="openstack/tempest-tests-tempest" Jan 21 11:52:52 crc kubenswrapper[4881]: I0121 11:52:52.843133 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"tempest-tests-tempest\" (UID: \"b482979e-7a9e-4b89-846c-f50400adcf1b\") " pod="openstack/tempest-tests-tempest" Jan 21 11:52:52 crc kubenswrapper[4881]: I0121 11:52:52.843198 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/b482979e-7a9e-4b89-846c-f50400adcf1b-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"b482979e-7a9e-4b89-846c-f50400adcf1b\") " pod="openstack/tempest-tests-tempest" Jan 21 11:52:52 crc kubenswrapper[4881]: I0121 11:52:52.843225 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b482979e-7a9e-4b89-846c-f50400adcf1b-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"b482979e-7a9e-4b89-846c-f50400adcf1b\") " pod="openstack/tempest-tests-tempest" Jan 21 11:52:52 crc kubenswrapper[4881]: I0121 11:52:52.843261 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/b482979e-7a9e-4b89-846c-f50400adcf1b-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"b482979e-7a9e-4b89-846c-f50400adcf1b\") " pod="openstack/tempest-tests-tempest" Jan 21 11:52:52 crc kubenswrapper[4881]: I0121 11:52:52.843276 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/b482979e-7a9e-4b89-846c-f50400adcf1b-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"b482979e-7a9e-4b89-846c-f50400adcf1b\") " pod="openstack/tempest-tests-tempest" Jan 21 11:52:52 crc kubenswrapper[4881]: I0121 11:52:52.843319 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/b482979e-7a9e-4b89-846c-f50400adcf1b-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"b482979e-7a9e-4b89-846c-f50400adcf1b\") " pod="openstack/tempest-tests-tempest" Jan 21 11:52:52 crc kubenswrapper[4881]: I0121 11:52:52.843345 4881 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b482979e-7a9e-4b89-846c-f50400adcf1b-config-data\") pod \"tempest-tests-tempest\" (UID: \"b482979e-7a9e-4b89-846c-f50400adcf1b\") " pod="openstack/tempest-tests-tempest" Jan 21 11:52:52 crc kubenswrapper[4881]: I0121 11:52:52.843376 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nx4hn\" (UniqueName: \"kubernetes.io/projected/b482979e-7a9e-4b89-846c-f50400adcf1b-kube-api-access-nx4hn\") pod \"tempest-tests-tempest\" (UID: \"b482979e-7a9e-4b89-846c-f50400adcf1b\") " pod="openstack/tempest-tests-tempest" Jan 21 11:52:52 crc kubenswrapper[4881]: I0121 11:52:52.843411 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/b482979e-7a9e-4b89-846c-f50400adcf1b-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"b482979e-7a9e-4b89-846c-f50400adcf1b\") " pod="openstack/tempest-tests-tempest" Jan 21 11:52:52 crc kubenswrapper[4881]: I0121 11:52:52.843549 4881 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"tempest-tests-tempest\" (UID: \"b482979e-7a9e-4b89-846c-f50400adcf1b\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/tempest-tests-tempest" Jan 21 11:52:52 crc kubenswrapper[4881]: I0121 11:52:52.843978 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/b482979e-7a9e-4b89-846c-f50400adcf1b-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"b482979e-7a9e-4b89-846c-f50400adcf1b\") " pod="openstack/tempest-tests-tempest" Jan 21 11:52:52 crc kubenswrapper[4881]: I0121 11:52:52.844407 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/b482979e-7a9e-4b89-846c-f50400adcf1b-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"b482979e-7a9e-4b89-846c-f50400adcf1b\") " pod="openstack/tempest-tests-tempest" Jan 21 11:52:52 crc kubenswrapper[4881]: I0121 11:52:52.845311 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/b482979e-7a9e-4b89-846c-f50400adcf1b-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"b482979e-7a9e-4b89-846c-f50400adcf1b\") " pod="openstack/tempest-tests-tempest" Jan 21 11:52:52 crc kubenswrapper[4881]: I0121 11:52:52.845900 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b482979e-7a9e-4b89-846c-f50400adcf1b-config-data\") pod \"tempest-tests-tempest\" (UID: \"b482979e-7a9e-4b89-846c-f50400adcf1b\") " pod="openstack/tempest-tests-tempest" Jan 21 11:52:52 crc kubenswrapper[4881]: I0121 11:52:52.851226 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/b482979e-7a9e-4b89-846c-f50400adcf1b-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"b482979e-7a9e-4b89-846c-f50400adcf1b\") " pod="openstack/tempest-tests-tempest" Jan 21 11:52:52 crc kubenswrapper[4881]: I0121 11:52:52.851469 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: 
\"kubernetes.io/secret/b482979e-7a9e-4b89-846c-f50400adcf1b-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"b482979e-7a9e-4b89-846c-f50400adcf1b\") " pod="openstack/tempest-tests-tempest" Jan 21 11:52:52 crc kubenswrapper[4881]: I0121 11:52:52.855308 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/b482979e-7a9e-4b89-846c-f50400adcf1b-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"b482979e-7a9e-4b89-846c-f50400adcf1b\") " pod="openstack/tempest-tests-tempest" Jan 21 11:52:52 crc kubenswrapper[4881]: I0121 11:52:52.866490 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nx4hn\" (UniqueName: \"kubernetes.io/projected/b482979e-7a9e-4b89-846c-f50400adcf1b-kube-api-access-nx4hn\") pod \"tempest-tests-tempest\" (UID: \"b482979e-7a9e-4b89-846c-f50400adcf1b\") " pod="openstack/tempest-tests-tempest" Jan 21 11:52:52 crc kubenswrapper[4881]: I0121 11:52:52.892481 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"tempest-tests-tempest\" (UID: \"b482979e-7a9e-4b89-846c-f50400adcf1b\") " pod="openstack/tempest-tests-tempest" Jan 21 11:52:53 crc kubenswrapper[4881]: I0121 11:52:53.001631 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Jan 21 11:52:53 crc kubenswrapper[4881]: W0121 11:52:53.557648 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb482979e_7a9e_4b89_846c_f50400adcf1b.slice/crio-e7f94caf9fb5ebfb061dd9ba5ac5d3214a56c129294a84a3c16da495e4592e03 WatchSource:0}: Error finding container e7f94caf9fb5ebfb061dd9ba5ac5d3214a56c129294a84a3c16da495e4592e03: Status 404 returned error can't find the container with id e7f94caf9fb5ebfb061dd9ba5ac5d3214a56c129294a84a3c16da495e4592e03 Jan 21 11:52:53 crc kubenswrapper[4881]: I0121 11:52:53.560445 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Jan 21 11:52:53 crc kubenswrapper[4881]: I0121 11:52:53.747888 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"b482979e-7a9e-4b89-846c-f50400adcf1b","Type":"ContainerStarted","Data":"e7f94caf9fb5ebfb061dd9ba5ac5d3214a56c129294a84a3c16da495e4592e03"} Jan 21 11:52:57 crc kubenswrapper[4881]: I0121 11:52:57.311452 4881 scope.go:117] "RemoveContainer" containerID="d8ad2c92af12e692917f97f6e76a6242fb5c00dc2c38dcea5e2ce39cd5dfeb57" Jan 21 11:52:57 crc kubenswrapper[4881]: E0121 11:52:57.312293 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 11:53:03 crc kubenswrapper[4881]: I0121 11:53:03.647741 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Jan 21 11:53:04 crc kubenswrapper[4881]: I0121 11:53:04.861306 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" 
event={"ID":"b482979e-7a9e-4b89-846c-f50400adcf1b","Type":"ContainerStarted","Data":"58f7186a17a8d936929153955c8b6cd57846e64bd7ae7d91ae066bf6fd80cea0"} Jan 21 11:53:08 crc kubenswrapper[4881]: I0121 11:53:08.310744 4881 scope.go:117] "RemoveContainer" containerID="d8ad2c92af12e692917f97f6e76a6242fb5c00dc2c38dcea5e2ce39cd5dfeb57" Jan 21 11:53:08 crc kubenswrapper[4881]: E0121 11:53:08.313071 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 11:53:23 crc kubenswrapper[4881]: I0121 11:53:23.321713 4881 scope.go:117] "RemoveContainer" containerID="d8ad2c92af12e692917f97f6e76a6242fb5c00dc2c38dcea5e2ce39cd5dfeb57" Jan 21 11:53:23 crc kubenswrapper[4881]: E0121 11:53:23.322764 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 11:53:37 crc kubenswrapper[4881]: I0121 11:53:37.311382 4881 scope.go:117] "RemoveContainer" containerID="d8ad2c92af12e692917f97f6e76a6242fb5c00dc2c38dcea5e2ce39cd5dfeb57" Jan 21 11:53:38 crc kubenswrapper[4881]: I0121 11:53:38.286716 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" event={"ID":"3687b313-1df2-4274-80db-8c758b51bf2d","Type":"ContainerStarted","Data":"0eb49608bbe8f2a16a73771ce3fd5ae654c9692ec1f4885af786d4be3393b51c"} Jan 21 11:53:38 crc kubenswrapper[4881]: I0121 11:53:38.353093 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest" podStartSLOduration=37.269352385 podStartE2EDuration="47.353062306s" podCreationTimestamp="2026-01-21 11:52:51 +0000 UTC" firstStartedPulling="2026-01-21 11:52:53.560954844 +0000 UTC m=+3360.820911343" lastFinishedPulling="2026-01-21 11:53:03.644664795 +0000 UTC m=+3370.904621264" observedRunningTime="2026-01-21 11:53:04.893416217 +0000 UTC m=+3372.153372686" watchObservedRunningTime="2026-01-21 11:53:38.353062306 +0000 UTC m=+3405.613018815" Jan 21 11:55:15 crc kubenswrapper[4881]: I0121 11:55:15.161232 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-z67qr"] Jan 21 11:55:15 crc kubenswrapper[4881]: I0121 11:55:15.164952 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-z67qr" Jan 21 11:55:15 crc kubenswrapper[4881]: I0121 11:55:15.176715 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-z67qr"] Jan 21 11:55:15 crc kubenswrapper[4881]: I0121 11:55:15.357757 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jpvdn\" (UniqueName: \"kubernetes.io/projected/aa68d770-00ce-479d-8638-c321d359f566-kube-api-access-jpvdn\") pod \"certified-operators-z67qr\" (UID: \"aa68d770-00ce-479d-8638-c321d359f566\") " pod="openshift-marketplace/certified-operators-z67qr" Jan 21 11:55:15 crc kubenswrapper[4881]: I0121 11:55:15.358233 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aa68d770-00ce-479d-8638-c321d359f566-utilities\") pod \"certified-operators-z67qr\" (UID: \"aa68d770-00ce-479d-8638-c321d359f566\") " pod="openshift-marketplace/certified-operators-z67qr" Jan 21 11:55:15 crc kubenswrapper[4881]: I0121 11:55:15.358316 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aa68d770-00ce-479d-8638-c321d359f566-catalog-content\") pod \"certified-operators-z67qr\" (UID: \"aa68d770-00ce-479d-8638-c321d359f566\") " pod="openshift-marketplace/certified-operators-z67qr" Jan 21 11:55:15 crc kubenswrapper[4881]: I0121 11:55:15.461706 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aa68d770-00ce-479d-8638-c321d359f566-utilities\") pod \"certified-operators-z67qr\" (UID: \"aa68d770-00ce-479d-8638-c321d359f566\") " pod="openshift-marketplace/certified-operators-z67qr" Jan 21 11:55:15 crc kubenswrapper[4881]: I0121 11:55:15.461817 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aa68d770-00ce-479d-8638-c321d359f566-catalog-content\") pod \"certified-operators-z67qr\" (UID: \"aa68d770-00ce-479d-8638-c321d359f566\") " pod="openshift-marketplace/certified-operators-z67qr" Jan 21 11:55:15 crc kubenswrapper[4881]: I0121 11:55:15.462017 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jpvdn\" (UniqueName: \"kubernetes.io/projected/aa68d770-00ce-479d-8638-c321d359f566-kube-api-access-jpvdn\") pod \"certified-operators-z67qr\" (UID: \"aa68d770-00ce-479d-8638-c321d359f566\") " pod="openshift-marketplace/certified-operators-z67qr" Jan 21 11:55:15 crc kubenswrapper[4881]: I0121 11:55:15.462186 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aa68d770-00ce-479d-8638-c321d359f566-utilities\") pod \"certified-operators-z67qr\" (UID: \"aa68d770-00ce-479d-8638-c321d359f566\") " pod="openshift-marketplace/certified-operators-z67qr" Jan 21 11:55:15 crc kubenswrapper[4881]: I0121 11:55:15.462422 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aa68d770-00ce-479d-8638-c321d359f566-catalog-content\") pod \"certified-operators-z67qr\" (UID: \"aa68d770-00ce-479d-8638-c321d359f566\") " pod="openshift-marketplace/certified-operators-z67qr" Jan 21 11:55:15 crc kubenswrapper[4881]: I0121 11:55:15.490456 4881 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-jpvdn\" (UniqueName: \"kubernetes.io/projected/aa68d770-00ce-479d-8638-c321d359f566-kube-api-access-jpvdn\") pod \"certified-operators-z67qr\" (UID: \"aa68d770-00ce-479d-8638-c321d359f566\") " pod="openshift-marketplace/certified-operators-z67qr" Jan 21 11:55:15 crc kubenswrapper[4881]: I0121 11:55:15.520427 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-z67qr" Jan 21 11:55:16 crc kubenswrapper[4881]: I0121 11:55:16.085012 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-z67qr"] Jan 21 11:55:16 crc kubenswrapper[4881]: I0121 11:55:16.319541 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z67qr" event={"ID":"aa68d770-00ce-479d-8638-c321d359f566","Type":"ContainerStarted","Data":"f1d7702dd9a2ff57d9d3290eecce7abf05b9481260cb8fe006929f80a18e6b6c"} Jan 21 11:55:16 crc kubenswrapper[4881]: I0121 11:55:16.319914 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z67qr" event={"ID":"aa68d770-00ce-479d-8638-c321d359f566","Type":"ContainerStarted","Data":"fd3efa1bda6f47f00e75d651283a9df00f1ada4385af64dc6875164eac5891bf"} Jan 21 11:55:17 crc kubenswrapper[4881]: I0121 11:55:17.329842 4881 generic.go:334] "Generic (PLEG): container finished" podID="aa68d770-00ce-479d-8638-c321d359f566" containerID="f1d7702dd9a2ff57d9d3290eecce7abf05b9481260cb8fe006929f80a18e6b6c" exitCode=0 Jan 21 11:55:17 crc kubenswrapper[4881]: I0121 11:55:17.329959 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z67qr" event={"ID":"aa68d770-00ce-479d-8638-c321d359f566","Type":"ContainerDied","Data":"f1d7702dd9a2ff57d9d3290eecce7abf05b9481260cb8fe006929f80a18e6b6c"} Jan 21 11:55:18 crc kubenswrapper[4881]: I0121 11:55:18.342660 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z67qr" event={"ID":"aa68d770-00ce-479d-8638-c321d359f566","Type":"ContainerStarted","Data":"8c729f58ff3919fa28022a93847e818baade72be2f7e183f0afec92c291a4b2f"} Jan 21 11:55:20 crc kubenswrapper[4881]: I0121 11:55:20.364176 4881 generic.go:334] "Generic (PLEG): container finished" podID="aa68d770-00ce-479d-8638-c321d359f566" containerID="8c729f58ff3919fa28022a93847e818baade72be2f7e183f0afec92c291a4b2f" exitCode=0 Jan 21 11:55:20 crc kubenswrapper[4881]: I0121 11:55:20.364229 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z67qr" event={"ID":"aa68d770-00ce-479d-8638-c321d359f566","Type":"ContainerDied","Data":"8c729f58ff3919fa28022a93847e818baade72be2f7e183f0afec92c291a4b2f"} Jan 21 11:55:21 crc kubenswrapper[4881]: I0121 11:55:21.376995 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z67qr" event={"ID":"aa68d770-00ce-479d-8638-c321d359f566","Type":"ContainerStarted","Data":"b97394ac524b67ebef404bbfb323c8e0d5a49b931e3bd3ad35e70c82c565dea0"} Jan 21 11:55:21 crc kubenswrapper[4881]: I0121 11:55:21.414018 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-z67qr" podStartSLOduration=2.937953623 podStartE2EDuration="6.41399635s" podCreationTimestamp="2026-01-21 11:55:15 +0000 UTC" firstStartedPulling="2026-01-21 11:55:17.333066642 +0000 UTC m=+3504.593023111" lastFinishedPulling="2026-01-21 
11:55:20.809109359 +0000 UTC m=+3508.069065838" observedRunningTime="2026-01-21 11:55:21.403044942 +0000 UTC m=+3508.663001421" watchObservedRunningTime="2026-01-21 11:55:21.41399635 +0000 UTC m=+3508.673952819" Jan 21 11:55:25 crc kubenswrapper[4881]: I0121 11:55:25.521339 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-z67qr" Jan 21 11:55:25 crc kubenswrapper[4881]: I0121 11:55:25.523247 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-z67qr" Jan 21 11:55:25 crc kubenswrapper[4881]: I0121 11:55:25.608583 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-z67qr" Jan 21 11:55:26 crc kubenswrapper[4881]: I0121 11:55:26.619903 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-z67qr" Jan 21 11:55:26 crc kubenswrapper[4881]: I0121 11:55:26.670574 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-z67qr"] Jan 21 11:55:28 crc kubenswrapper[4881]: I0121 11:55:28.687963 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-z67qr" podUID="aa68d770-00ce-479d-8638-c321d359f566" containerName="registry-server" containerID="cri-o://b97394ac524b67ebef404bbfb323c8e0d5a49b931e3bd3ad35e70c82c565dea0" gracePeriod=2 Jan 21 11:55:29 crc kubenswrapper[4881]: I0121 11:55:29.195506 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-z67qr" Jan 21 11:55:29 crc kubenswrapper[4881]: I0121 11:55:29.291440 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aa68d770-00ce-479d-8638-c321d359f566-utilities\") pod \"aa68d770-00ce-479d-8638-c321d359f566\" (UID: \"aa68d770-00ce-479d-8638-c321d359f566\") " Jan 21 11:55:29 crc kubenswrapper[4881]: I0121 11:55:29.291915 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aa68d770-00ce-479d-8638-c321d359f566-catalog-content\") pod \"aa68d770-00ce-479d-8638-c321d359f566\" (UID: \"aa68d770-00ce-479d-8638-c321d359f566\") " Jan 21 11:55:29 crc kubenswrapper[4881]: I0121 11:55:29.292149 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jpvdn\" (UniqueName: \"kubernetes.io/projected/aa68d770-00ce-479d-8638-c321d359f566-kube-api-access-jpvdn\") pod \"aa68d770-00ce-479d-8638-c321d359f566\" (UID: \"aa68d770-00ce-479d-8638-c321d359f566\") " Jan 21 11:55:29 crc kubenswrapper[4881]: I0121 11:55:29.292610 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aa68d770-00ce-479d-8638-c321d359f566-utilities" (OuterVolumeSpecName: "utilities") pod "aa68d770-00ce-479d-8638-c321d359f566" (UID: "aa68d770-00ce-479d-8638-c321d359f566"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 11:55:29 crc kubenswrapper[4881]: I0121 11:55:29.293161 4881 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aa68d770-00ce-479d-8638-c321d359f566-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 11:55:29 crc kubenswrapper[4881]: I0121 11:55:29.300714 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aa68d770-00ce-479d-8638-c321d359f566-kube-api-access-jpvdn" (OuterVolumeSpecName: "kube-api-access-jpvdn") pod "aa68d770-00ce-479d-8638-c321d359f566" (UID: "aa68d770-00ce-479d-8638-c321d359f566"). InnerVolumeSpecName "kube-api-access-jpvdn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:55:29 crc kubenswrapper[4881]: I0121 11:55:29.358388 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aa68d770-00ce-479d-8638-c321d359f566-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "aa68d770-00ce-479d-8638-c321d359f566" (UID: "aa68d770-00ce-479d-8638-c321d359f566"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 11:55:29 crc kubenswrapper[4881]: I0121 11:55:29.395395 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jpvdn\" (UniqueName: \"kubernetes.io/projected/aa68d770-00ce-479d-8638-c321d359f566-kube-api-access-jpvdn\") on node \"crc\" DevicePath \"\"" Jan 21 11:55:29 crc kubenswrapper[4881]: I0121 11:55:29.395690 4881 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aa68d770-00ce-479d-8638-c321d359f566-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 11:55:29 crc kubenswrapper[4881]: I0121 11:55:29.707190 4881 generic.go:334] "Generic (PLEG): container finished" podID="aa68d770-00ce-479d-8638-c321d359f566" containerID="b97394ac524b67ebef404bbfb323c8e0d5a49b931e3bd3ad35e70c82c565dea0" exitCode=0 Jan 21 11:55:29 crc kubenswrapper[4881]: I0121 11:55:29.707232 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z67qr" event={"ID":"aa68d770-00ce-479d-8638-c321d359f566","Type":"ContainerDied","Data":"b97394ac524b67ebef404bbfb323c8e0d5a49b931e3bd3ad35e70c82c565dea0"} Jan 21 11:55:29 crc kubenswrapper[4881]: I0121 11:55:29.707267 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z67qr" event={"ID":"aa68d770-00ce-479d-8638-c321d359f566","Type":"ContainerDied","Data":"fd3efa1bda6f47f00e75d651283a9df00f1ada4385af64dc6875164eac5891bf"} Jan 21 11:55:29 crc kubenswrapper[4881]: I0121 11:55:29.707286 4881 scope.go:117] "RemoveContainer" containerID="b97394ac524b67ebef404bbfb323c8e0d5a49b931e3bd3ad35e70c82c565dea0" Jan 21 11:55:29 crc kubenswrapper[4881]: I0121 11:55:29.709516 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-z67qr" Jan 21 11:55:29 crc kubenswrapper[4881]: I0121 11:55:29.765156 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-z67qr"] Jan 21 11:55:29 crc kubenswrapper[4881]: I0121 11:55:29.768147 4881 scope.go:117] "RemoveContainer" containerID="8c729f58ff3919fa28022a93847e818baade72be2f7e183f0afec92c291a4b2f" Jan 21 11:55:29 crc kubenswrapper[4881]: I0121 11:55:29.780136 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-z67qr"] Jan 21 11:55:29 crc kubenswrapper[4881]: I0121 11:55:29.805605 4881 scope.go:117] "RemoveContainer" containerID="f1d7702dd9a2ff57d9d3290eecce7abf05b9481260cb8fe006929f80a18e6b6c" Jan 21 11:55:29 crc kubenswrapper[4881]: I0121 11:55:29.848997 4881 scope.go:117] "RemoveContainer" containerID="b97394ac524b67ebef404bbfb323c8e0d5a49b931e3bd3ad35e70c82c565dea0" Jan 21 11:55:29 crc kubenswrapper[4881]: E0121 11:55:29.849666 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b97394ac524b67ebef404bbfb323c8e0d5a49b931e3bd3ad35e70c82c565dea0\": container with ID starting with b97394ac524b67ebef404bbfb323c8e0d5a49b931e3bd3ad35e70c82c565dea0 not found: ID does not exist" containerID="b97394ac524b67ebef404bbfb323c8e0d5a49b931e3bd3ad35e70c82c565dea0" Jan 21 11:55:29 crc kubenswrapper[4881]: I0121 11:55:29.849702 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b97394ac524b67ebef404bbfb323c8e0d5a49b931e3bd3ad35e70c82c565dea0"} err="failed to get container status \"b97394ac524b67ebef404bbfb323c8e0d5a49b931e3bd3ad35e70c82c565dea0\": rpc error: code = NotFound desc = could not find container \"b97394ac524b67ebef404bbfb323c8e0d5a49b931e3bd3ad35e70c82c565dea0\": container with ID starting with b97394ac524b67ebef404bbfb323c8e0d5a49b931e3bd3ad35e70c82c565dea0 not found: ID does not exist" Jan 21 11:55:29 crc kubenswrapper[4881]: I0121 11:55:29.849728 4881 scope.go:117] "RemoveContainer" containerID="8c729f58ff3919fa28022a93847e818baade72be2f7e183f0afec92c291a4b2f" Jan 21 11:55:29 crc kubenswrapper[4881]: E0121 11:55:29.850079 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8c729f58ff3919fa28022a93847e818baade72be2f7e183f0afec92c291a4b2f\": container with ID starting with 8c729f58ff3919fa28022a93847e818baade72be2f7e183f0afec92c291a4b2f not found: ID does not exist" containerID="8c729f58ff3919fa28022a93847e818baade72be2f7e183f0afec92c291a4b2f" Jan 21 11:55:29 crc kubenswrapper[4881]: I0121 11:55:29.850105 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8c729f58ff3919fa28022a93847e818baade72be2f7e183f0afec92c291a4b2f"} err="failed to get container status \"8c729f58ff3919fa28022a93847e818baade72be2f7e183f0afec92c291a4b2f\": rpc error: code = NotFound desc = could not find container \"8c729f58ff3919fa28022a93847e818baade72be2f7e183f0afec92c291a4b2f\": container with ID starting with 8c729f58ff3919fa28022a93847e818baade72be2f7e183f0afec92c291a4b2f not found: ID does not exist" Jan 21 11:55:29 crc kubenswrapper[4881]: I0121 11:55:29.850118 4881 scope.go:117] "RemoveContainer" containerID="f1d7702dd9a2ff57d9d3290eecce7abf05b9481260cb8fe006929f80a18e6b6c" Jan 21 11:55:29 crc kubenswrapper[4881]: E0121 11:55:29.850323 4881 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"f1d7702dd9a2ff57d9d3290eecce7abf05b9481260cb8fe006929f80a18e6b6c\": container with ID starting with f1d7702dd9a2ff57d9d3290eecce7abf05b9481260cb8fe006929f80a18e6b6c not found: ID does not exist" containerID="f1d7702dd9a2ff57d9d3290eecce7abf05b9481260cb8fe006929f80a18e6b6c" Jan 21 11:55:29 crc kubenswrapper[4881]: I0121 11:55:29.850343 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f1d7702dd9a2ff57d9d3290eecce7abf05b9481260cb8fe006929f80a18e6b6c"} err="failed to get container status \"f1d7702dd9a2ff57d9d3290eecce7abf05b9481260cb8fe006929f80a18e6b6c\": rpc error: code = NotFound desc = could not find container \"f1d7702dd9a2ff57d9d3290eecce7abf05b9481260cb8fe006929f80a18e6b6c\": container with ID starting with f1d7702dd9a2ff57d9d3290eecce7abf05b9481260cb8fe006929f80a18e6b6c not found: ID does not exist" Jan 21 11:55:31 crc kubenswrapper[4881]: I0121 11:55:31.323702 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aa68d770-00ce-479d-8638-c321d359f566" path="/var/lib/kubelet/pods/aa68d770-00ce-479d-8638-c321d359f566/volumes" Jan 21 11:55:31 crc kubenswrapper[4881]: I0121 11:55:31.451676 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-djpvn"] Jan 21 11:55:31 crc kubenswrapper[4881]: E0121 11:55:31.452632 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa68d770-00ce-479d-8638-c321d359f566" containerName="extract-content" Jan 21 11:55:31 crc kubenswrapper[4881]: I0121 11:55:31.455016 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa68d770-00ce-479d-8638-c321d359f566" containerName="extract-content" Jan 21 11:55:31 crc kubenswrapper[4881]: E0121 11:55:31.455082 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa68d770-00ce-479d-8638-c321d359f566" containerName="registry-server" Jan 21 11:55:31 crc kubenswrapper[4881]: I0121 11:55:31.455092 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa68d770-00ce-479d-8638-c321d359f566" containerName="registry-server" Jan 21 11:55:31 crc kubenswrapper[4881]: E0121 11:55:31.455205 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa68d770-00ce-479d-8638-c321d359f566" containerName="extract-utilities" Jan 21 11:55:31 crc kubenswrapper[4881]: I0121 11:55:31.455216 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa68d770-00ce-479d-8638-c321d359f566" containerName="extract-utilities" Jan 21 11:55:31 crc kubenswrapper[4881]: I0121 11:55:31.455739 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa68d770-00ce-479d-8638-c321d359f566" containerName="registry-server" Jan 21 11:55:31 crc kubenswrapper[4881]: I0121 11:55:31.457817 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-djpvn" Jan 21 11:55:31 crc kubenswrapper[4881]: I0121 11:55:31.463592 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-djpvn"] Jan 21 11:55:31 crc kubenswrapper[4881]: I0121 11:55:31.518135 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5e8058c9-2ffc-461a-98b1-5470103994c8-catalog-content\") pod \"redhat-operators-djpvn\" (UID: \"5e8058c9-2ffc-461a-98b1-5470103994c8\") " pod="openshift-marketplace/redhat-operators-djpvn" Jan 21 11:55:31 crc kubenswrapper[4881]: I0121 11:55:31.518727 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qvrlm\" (UniqueName: \"kubernetes.io/projected/5e8058c9-2ffc-461a-98b1-5470103994c8-kube-api-access-qvrlm\") pod \"redhat-operators-djpvn\" (UID: \"5e8058c9-2ffc-461a-98b1-5470103994c8\") " pod="openshift-marketplace/redhat-operators-djpvn" Jan 21 11:55:31 crc kubenswrapper[4881]: I0121 11:55:31.518949 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5e8058c9-2ffc-461a-98b1-5470103994c8-utilities\") pod \"redhat-operators-djpvn\" (UID: \"5e8058c9-2ffc-461a-98b1-5470103994c8\") " pod="openshift-marketplace/redhat-operators-djpvn" Jan 21 11:55:31 crc kubenswrapper[4881]: I0121 11:55:31.620712 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qvrlm\" (UniqueName: \"kubernetes.io/projected/5e8058c9-2ffc-461a-98b1-5470103994c8-kube-api-access-qvrlm\") pod \"redhat-operators-djpvn\" (UID: \"5e8058c9-2ffc-461a-98b1-5470103994c8\") " pod="openshift-marketplace/redhat-operators-djpvn" Jan 21 11:55:31 crc kubenswrapper[4881]: I0121 11:55:31.620780 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5e8058c9-2ffc-461a-98b1-5470103994c8-utilities\") pod \"redhat-operators-djpvn\" (UID: \"5e8058c9-2ffc-461a-98b1-5470103994c8\") " pod="openshift-marketplace/redhat-operators-djpvn" Jan 21 11:55:31 crc kubenswrapper[4881]: I0121 11:55:31.620878 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5e8058c9-2ffc-461a-98b1-5470103994c8-catalog-content\") pod \"redhat-operators-djpvn\" (UID: \"5e8058c9-2ffc-461a-98b1-5470103994c8\") " pod="openshift-marketplace/redhat-operators-djpvn" Jan 21 11:55:31 crc kubenswrapper[4881]: I0121 11:55:31.621340 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5e8058c9-2ffc-461a-98b1-5470103994c8-utilities\") pod \"redhat-operators-djpvn\" (UID: \"5e8058c9-2ffc-461a-98b1-5470103994c8\") " pod="openshift-marketplace/redhat-operators-djpvn" Jan 21 11:55:31 crc kubenswrapper[4881]: I0121 11:55:31.621559 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5e8058c9-2ffc-461a-98b1-5470103994c8-catalog-content\") pod \"redhat-operators-djpvn\" (UID: \"5e8058c9-2ffc-461a-98b1-5470103994c8\") " pod="openshift-marketplace/redhat-operators-djpvn" Jan 21 11:55:31 crc kubenswrapper[4881]: I0121 11:55:31.644636 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-qvrlm\" (UniqueName: \"kubernetes.io/projected/5e8058c9-2ffc-461a-98b1-5470103994c8-kube-api-access-qvrlm\") pod \"redhat-operators-djpvn\" (UID: \"5e8058c9-2ffc-461a-98b1-5470103994c8\") " pod="openshift-marketplace/redhat-operators-djpvn" Jan 21 11:55:31 crc kubenswrapper[4881]: I0121 11:55:31.779888 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-djpvn" Jan 21 11:55:32 crc kubenswrapper[4881]: I0121 11:55:32.951432 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-djpvn"] Jan 21 11:55:33 crc kubenswrapper[4881]: I0121 11:55:33.861439 4881 generic.go:334] "Generic (PLEG): container finished" podID="5e8058c9-2ffc-461a-98b1-5470103994c8" containerID="9d5add8e11ad8cf3da511324f8e418d3c25cdf583504d3fb39bc330543acc405" exitCode=0 Jan 21 11:55:33 crc kubenswrapper[4881]: I0121 11:55:33.861506 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-djpvn" event={"ID":"5e8058c9-2ffc-461a-98b1-5470103994c8","Type":"ContainerDied","Data":"9d5add8e11ad8cf3da511324f8e418d3c25cdf583504d3fb39bc330543acc405"} Jan 21 11:55:33 crc kubenswrapper[4881]: I0121 11:55:33.861958 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-djpvn" event={"ID":"5e8058c9-2ffc-461a-98b1-5470103994c8","Type":"ContainerStarted","Data":"b9cab90a2e43a6bc804312c58baa5fbb4516f350e1ebe2508b8e3bbfc2b6d7ef"} Jan 21 11:55:36 crc kubenswrapper[4881]: I0121 11:55:36.581450 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-djpvn" event={"ID":"5e8058c9-2ffc-461a-98b1-5470103994c8","Type":"ContainerStarted","Data":"cbc2183672b4480581de4c466c173749f95cdd4de19823891648de2dbe542235"} Jan 21 11:55:40 crc kubenswrapper[4881]: I0121 11:55:40.627209 4881 generic.go:334] "Generic (PLEG): container finished" podID="5e8058c9-2ffc-461a-98b1-5470103994c8" containerID="cbc2183672b4480581de4c466c173749f95cdd4de19823891648de2dbe542235" exitCode=0 Jan 21 11:55:40 crc kubenswrapper[4881]: I0121 11:55:40.627328 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-djpvn" event={"ID":"5e8058c9-2ffc-461a-98b1-5470103994c8","Type":"ContainerDied","Data":"cbc2183672b4480581de4c466c173749f95cdd4de19823891648de2dbe542235"} Jan 21 11:55:41 crc kubenswrapper[4881]: I0121 11:55:41.638031 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-djpvn" event={"ID":"5e8058c9-2ffc-461a-98b1-5470103994c8","Type":"ContainerStarted","Data":"2e8dccd1701b660e82c89a939087590cf223d1e9a3674853f77e49eb443f2442"} Jan 21 11:55:41 crc kubenswrapper[4881]: I0121 11:55:41.666832 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-djpvn" podStartSLOduration=3.518684745 podStartE2EDuration="10.66680887s" podCreationTimestamp="2026-01-21 11:55:31 +0000 UTC" firstStartedPulling="2026-01-21 11:55:33.863565419 +0000 UTC m=+3521.123521888" lastFinishedPulling="2026-01-21 11:55:41.011689524 +0000 UTC m=+3528.271646013" observedRunningTime="2026-01-21 11:55:41.660523747 +0000 UTC m=+3528.920480236" watchObservedRunningTime="2026-01-21 11:55:41.66680887 +0000 UTC m=+3528.926765349" Jan 21 11:55:41 crc kubenswrapper[4881]: I0121 11:55:41.780659 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-djpvn" Jan 21 
Jan 21 11:55:41 crc kubenswrapper[4881]: I0121 11:55:41.780704 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-djpvn"
Jan 21 11:55:42 crc kubenswrapper[4881]: I0121 11:55:42.827505 4881 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-djpvn" podUID="5e8058c9-2ffc-461a-98b1-5470103994c8" containerName="registry-server" probeResult="failure" output=<
Jan 21 11:55:42 crc kubenswrapper[4881]: timeout: failed to connect service ":50051" within 1s
Jan 21 11:55:42 crc kubenswrapper[4881]: >
Jan 21 11:55:51 crc kubenswrapper[4881]: I0121 11:55:51.835870 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-djpvn"
Jan 21 11:55:51 crc kubenswrapper[4881]: I0121 11:55:51.901565 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-djpvn"
Jan 21 11:55:52 crc kubenswrapper[4881]: I0121 11:55:52.084753 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-djpvn"]
Jan 21 11:55:52 crc kubenswrapper[4881]: I0121 11:55:52.896142 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-djpvn" podUID="5e8058c9-2ffc-461a-98b1-5470103994c8" containerName="registry-server" containerID="cri-o://2e8dccd1701b660e82c89a939087590cf223d1e9a3674853f77e49eb443f2442" gracePeriod=2
Jan 21 11:55:53 crc kubenswrapper[4881]: I0121 11:55:53.381255 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-djpvn"
Jan 21 11:55:53 crc kubenswrapper[4881]: I0121 11:55:53.437627 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qvrlm\" (UniqueName: \"kubernetes.io/projected/5e8058c9-2ffc-461a-98b1-5470103994c8-kube-api-access-qvrlm\") pod \"5e8058c9-2ffc-461a-98b1-5470103994c8\" (UID: \"5e8058c9-2ffc-461a-98b1-5470103994c8\") "
Jan 21 11:55:53 crc kubenswrapper[4881]: I0121 11:55:53.439774 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5e8058c9-2ffc-461a-98b1-5470103994c8-utilities\") pod \"5e8058c9-2ffc-461a-98b1-5470103994c8\" (UID: \"5e8058c9-2ffc-461a-98b1-5470103994c8\") "
Jan 21 11:55:53 crc kubenswrapper[4881]: I0121 11:55:53.440059 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5e8058c9-2ffc-461a-98b1-5470103994c8-catalog-content\") pod \"5e8058c9-2ffc-461a-98b1-5470103994c8\" (UID: \"5e8058c9-2ffc-461a-98b1-5470103994c8\") "
Jan 21 11:55:53 crc kubenswrapper[4881]: I0121 11:55:53.440445 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5e8058c9-2ffc-461a-98b1-5470103994c8-utilities" (OuterVolumeSpecName: "utilities") pod "5e8058c9-2ffc-461a-98b1-5470103994c8" (UID: "5e8058c9-2ffc-461a-98b1-5470103994c8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
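A note on the "Observed pod startup duration" entry above: podStartE2EDuration is observedRunningTime minus podCreationTimestamp, and podStartSLOduration appears to be the same interval with the image-pull window (lastFinishedPulling minus firstStartedPulling) subtracted, computed on the monotonic clock (the m=+ offsets). A quick check in Go against the logged values (a sketch; the names come from the log entry, the subtraction is my reading of the tracker's output):

package main

import "fmt"

func main() {
	// Monotonic m=+ offsets from the "Observed pod startup duration" entry above.
	firstStartedPulling := 3521.123521888
	lastFinishedPulling := 3528.271646013
	e2e := 10.66680887 // podStartE2EDuration: observedRunningTime - podCreationTimestamp

	pull := lastFinishedPulling - firstStartedPulling     // 7.148124125s spent pulling images
	fmt.Printf("podStartSLOduration = %.9f\n", e2e-pull) // prints 3.518684745, matching the log
}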
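The Startup probe failure above, whose output is "timeout: failed to connect service \":50051\" within 1s", reads like grpc_health_probe run as an exec probe against the registry-server's gRPC port; the catalog takes several seconds to load, so the first startup probe fails and the 11:55:51 "started" transition follows once the health service answers. An equivalent client-side check in Go, assuming the registry serves the standard grpc.health.v1.Health service (a sketch, not the pod's actual probe definition, which is not in this log):

package main

import (
	"context"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	healthpb "google.golang.org/grpc/health/grpc_health_v1"
)

func main() {
	// Mirror the probe's 1s budget from the log output above.
	ctx, cancel := context.WithTimeout(context.Background(), time.Second)
	defer cancel()

	conn, err := grpc.NewClient("localhost:50051", grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatalf("dial: %v", err)
	}
	defer conn.Close()

	resp, err := healthpb.NewHealthClient(conn).Check(ctx, &healthpb.HealthCheckRequest{})
	if err != nil {
		log.Fatalf("health check failed: %v", err) // the probe's failure branch
	}
	log.Printf("status: %s", resp.GetStatus()) // SERVING once the catalog has loaded
}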
Jan 21 11:55:53 crc kubenswrapper[4881]: I0121 11:55:53.440946 4881 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5e8058c9-2ffc-461a-98b1-5470103994c8-utilities\") on node \"crc\" DevicePath \"\""
Jan 21 11:55:53 crc kubenswrapper[4881]: I0121 11:55:53.452861 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5e8058c9-2ffc-461a-98b1-5470103994c8-kube-api-access-qvrlm" (OuterVolumeSpecName: "kube-api-access-qvrlm") pod "5e8058c9-2ffc-461a-98b1-5470103994c8" (UID: "5e8058c9-2ffc-461a-98b1-5470103994c8"). InnerVolumeSpecName "kube-api-access-qvrlm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 11:55:53 crc kubenswrapper[4881]: I0121 11:55:53.542914 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qvrlm\" (UniqueName: \"kubernetes.io/projected/5e8058c9-2ffc-461a-98b1-5470103994c8-kube-api-access-qvrlm\") on node \"crc\" DevicePath \"\""
Jan 21 11:55:53 crc kubenswrapper[4881]: I0121 11:55:53.566880 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5e8058c9-2ffc-461a-98b1-5470103994c8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5e8058c9-2ffc-461a-98b1-5470103994c8" (UID: "5e8058c9-2ffc-461a-98b1-5470103994c8"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 11:55:53 crc kubenswrapper[4881]: I0121 11:55:53.644562 4881 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5e8058c9-2ffc-461a-98b1-5470103994c8-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 21 11:55:53 crc kubenswrapper[4881]: I0121 11:55:53.906705 4881 generic.go:334] "Generic (PLEG): container finished" podID="5e8058c9-2ffc-461a-98b1-5470103994c8" containerID="2e8dccd1701b660e82c89a939087590cf223d1e9a3674853f77e49eb443f2442" exitCode=0
Jan 21 11:55:53 crc kubenswrapper[4881]: I0121 11:55:53.906978 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-djpvn" event={"ID":"5e8058c9-2ffc-461a-98b1-5470103994c8","Type":"ContainerDied","Data":"2e8dccd1701b660e82c89a939087590cf223d1e9a3674853f77e49eb443f2442"}
Jan 21 11:55:53 crc kubenswrapper[4881]: I0121 11:55:53.907007 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-djpvn" event={"ID":"5e8058c9-2ffc-461a-98b1-5470103994c8","Type":"ContainerDied","Data":"b9cab90a2e43a6bc804312c58baa5fbb4516f350e1ebe2508b8e3bbfc2b6d7ef"}
Jan 21 11:55:53 crc kubenswrapper[4881]: I0121 11:55:53.907025 4881 scope.go:117] "RemoveContainer" containerID="2e8dccd1701b660e82c89a939087590cf223d1e9a3674853f77e49eb443f2442"
Jan 21 11:55:53 crc kubenswrapper[4881]: I0121 11:55:53.907163 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-djpvn"
Jan 21 11:55:53 crc kubenswrapper[4881]: I0121 11:55:53.940042 4881 scope.go:117] "RemoveContainer" containerID="cbc2183672b4480581de4c466c173749f95cdd4de19823891648de2dbe542235"
Jan 21 11:55:53 crc kubenswrapper[4881]: I0121 11:55:53.962935 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-djpvn"]
Jan 21 11:55:53 crc kubenswrapper[4881]: I0121 11:55:53.969226 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-djpvn"]
Jan 21 11:55:53 crc kubenswrapper[4881]: I0121 11:55:53.980613 4881 scope.go:117] "RemoveContainer" containerID="9d5add8e11ad8cf3da511324f8e418d3c25cdf583504d3fb39bc330543acc405"
Jan 21 11:55:54 crc kubenswrapper[4881]: I0121 11:55:54.015302 4881 scope.go:117] "RemoveContainer" containerID="2e8dccd1701b660e82c89a939087590cf223d1e9a3674853f77e49eb443f2442"
Jan 21 11:55:54 crc kubenswrapper[4881]: E0121 11:55:54.016295 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2e8dccd1701b660e82c89a939087590cf223d1e9a3674853f77e49eb443f2442\": container with ID starting with 2e8dccd1701b660e82c89a939087590cf223d1e9a3674853f77e49eb443f2442 not found: ID does not exist" containerID="2e8dccd1701b660e82c89a939087590cf223d1e9a3674853f77e49eb443f2442"
Jan 21 11:55:54 crc kubenswrapper[4881]: I0121 11:55:54.016390 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2e8dccd1701b660e82c89a939087590cf223d1e9a3674853f77e49eb443f2442"} err="failed to get container status \"2e8dccd1701b660e82c89a939087590cf223d1e9a3674853f77e49eb443f2442\": rpc error: code = NotFound desc = could not find container \"2e8dccd1701b660e82c89a939087590cf223d1e9a3674853f77e49eb443f2442\": container with ID starting with 2e8dccd1701b660e82c89a939087590cf223d1e9a3674853f77e49eb443f2442 not found: ID does not exist"
Jan 21 11:55:54 crc kubenswrapper[4881]: I0121 11:55:54.016456 4881 scope.go:117] "RemoveContainer" containerID="cbc2183672b4480581de4c466c173749f95cdd4de19823891648de2dbe542235"
Jan 21 11:55:54 crc kubenswrapper[4881]: E0121 11:55:54.017115 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cbc2183672b4480581de4c466c173749f95cdd4de19823891648de2dbe542235\": container with ID starting with cbc2183672b4480581de4c466c173749f95cdd4de19823891648de2dbe542235 not found: ID does not exist" containerID="cbc2183672b4480581de4c466c173749f95cdd4de19823891648de2dbe542235"
Jan 21 11:55:54 crc kubenswrapper[4881]: I0121 11:55:54.017168 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cbc2183672b4480581de4c466c173749f95cdd4de19823891648de2dbe542235"} err="failed to get container status \"cbc2183672b4480581de4c466c173749f95cdd4de19823891648de2dbe542235\": rpc error: code = NotFound desc = could not find container \"cbc2183672b4480581de4c466c173749f95cdd4de19823891648de2dbe542235\": container with ID starting with cbc2183672b4480581de4c466c173749f95cdd4de19823891648de2dbe542235 not found: ID does not exist"
Jan 21 11:55:54 crc kubenswrapper[4881]: I0121 11:55:54.017193 4881 scope.go:117] "RemoveContainer" containerID="9d5add8e11ad8cf3da511324f8e418d3c25cdf583504d3fb39bc330543acc405"
Jan 21 11:55:54 crc kubenswrapper[4881]: E0121 11:55:54.017741 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9d5add8e11ad8cf3da511324f8e418d3c25cdf583504d3fb39bc330543acc405\": container with ID starting with 9d5add8e11ad8cf3da511324f8e418d3c25cdf583504d3fb39bc330543acc405 not found: ID does not exist" containerID="9d5add8e11ad8cf3da511324f8e418d3c25cdf583504d3fb39bc330543acc405"
Jan 21 11:55:54 crc kubenswrapper[4881]: I0121 11:55:54.017851 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9d5add8e11ad8cf3da511324f8e418d3c25cdf583504d3fb39bc330543acc405"} err="failed to get container status \"9d5add8e11ad8cf3da511324f8e418d3c25cdf583504d3fb39bc330543acc405\": rpc error: code = NotFound desc = could not find container \"9d5add8e11ad8cf3da511324f8e418d3c25cdf583504d3fb39bc330543acc405\": container with ID starting with 9d5add8e11ad8cf3da511324f8e418d3c25cdf583504d3fb39bc330543acc405 not found: ID does not exist"
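The E/I pairs above are a benign race during pod teardown: the containers had already been removed along with their sandbox, so when RemoveContainer asks CRI-O for their status the runtime answers NotFound, and the kubelet logs the error and moves on because deletion is idempotent. A sketch of that tolerance pattern with gRPC status codes (illustrative, not kubelet's exact code):

package cleanup

import (
	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// removeContainer treats NotFound as success: if the runtime has already
// deleted the container (as in the log above), cleanup is done, not failed.
func removeContainer(remove func(id string) error, id string) error {
	if err := remove(id); err != nil && status.Code(err) != codes.NotFound {
		return err // a real failure worth surfacing
	}
	return nil // removed now, or already gone; either way the goal state holds
}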
Jan 21 11:55:55 crc kubenswrapper[4881]: I0121 11:55:55.448054 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5e8058c9-2ffc-461a-98b1-5470103994c8" path="/var/lib/kubelet/pods/5e8058c9-2ffc-461a-98b1-5470103994c8/volumes"
Jan 21 11:55:59 crc kubenswrapper[4881]: I0121 11:55:59.851048 4881 patch_prober.go:28] interesting pod/machine-config-daemon-fb4fr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 21 11:55:59 crc kubenswrapper[4881]: I0121 11:55:59.851750 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 21 11:56:29 crc kubenswrapper[4881]: I0121 11:56:29.851173 4881 patch_prober.go:28] interesting pod/machine-config-daemon-fb4fr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 21 11:56:29 crc kubenswrapper[4881]: I0121 11:56:29.851939 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 21 11:56:59 crc kubenswrapper[4881]: I0121 11:56:59.851235 4881 patch_prober.go:28] interesting pod/machine-config-daemon-fb4fr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 21 11:56:59 crc kubenswrapper[4881]: I0121 11:56:59.851905 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 21 11:56:59 crc kubenswrapper[4881]: I0121 11:56:59.851964 4881 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr"
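The machine-config-daemon liveness probe fails at 11:55:59, 11:56:29 and 11:56:59, i.e. every 30 seconds, and only after the third consecutive failure does the SyncLoop mark the probe unhealthy. That cadence is consistent with a 30s periodSeconds and the Kubernetes default failureThreshold of 3, though the pod spec itself is not in this log. A minimal sketch of the thresholding (illustrative, not kubelet's prober worker):

package probes

// worker models kubelet-style probe thresholding: a container is reported
// unhealthy only after failureThreshold consecutive probe failures.
type worker struct {
	failureThreshold int // Kubernetes default: 3
	failures         int // consecutive failures so far
}

func (w *worker) observe(failed bool) (unhealthy bool) {
	if !failed {
		w.failures = 0 // any success resets the run
		return false
	}
	w.failures++
	return w.failures >= w.failureThreshold // the third failure trips the restart
}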
Jan 21 11:56:59 crc kubenswrapper[4881]: I0121 11:56:59.853123 4881 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0eb49608bbe8f2a16a73771ce3fd5ae654c9692ec1f4885af786d4be3393b51c"} pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 21 11:56:59 crc kubenswrapper[4881]: I0121 11:56:59.853206 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" containerName="machine-config-daemon" containerID="cri-o://0eb49608bbe8f2a16a73771ce3fd5ae654c9692ec1f4885af786d4be3393b51c" gracePeriod=600
Jan 21 11:57:00 crc kubenswrapper[4881]: I0121 11:57:00.865471 4881 generic.go:334] "Generic (PLEG): container finished" podID="3687b313-1df2-4274-80db-8c758b51bf2d" containerID="0eb49608bbe8f2a16a73771ce3fd5ae654c9692ec1f4885af786d4be3393b51c" exitCode=0
Jan 21 11:57:00 crc kubenswrapper[4881]: I0121 11:57:00.865553 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" event={"ID":"3687b313-1df2-4274-80db-8c758b51bf2d","Type":"ContainerDied","Data":"0eb49608bbe8f2a16a73771ce3fd5ae654c9692ec1f4885af786d4be3393b51c"}
Jan 21 11:57:00 crc kubenswrapper[4881]: I0121 11:57:00.866145 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" event={"ID":"3687b313-1df2-4274-80db-8c758b51bf2d","Type":"ContainerStarted","Data":"cc4c2fdba8ce6542705b256b23e1d94b8d7eb0f0ca9e8607445022552d1c7bb9"}
Jan 21 11:57:00 crc kubenswrapper[4881]: I0121 11:57:00.866185 4881 scope.go:117] "RemoveContainer" containerID="d8ad2c92af12e692917f97f6e76a6242fb5c00dc2c38dcea5e2ce39cd5dfeb57"
Jan 21 11:58:37 crc kubenswrapper[4881]: I0121 11:58:37.679007 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-wglbm"]
Jan 21 11:58:37 crc kubenswrapper[4881]: E0121 11:58:37.680070 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e8058c9-2ffc-461a-98b1-5470103994c8" containerName="extract-content"
Jan 21 11:58:37 crc kubenswrapper[4881]: I0121 11:58:37.680088 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e8058c9-2ffc-461a-98b1-5470103994c8" containerName="extract-content"
Jan 21 11:58:37 crc kubenswrapper[4881]: E0121 11:58:37.680114 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e8058c9-2ffc-461a-98b1-5470103994c8" containerName="extract-utilities"
Jan 21 11:58:37 crc kubenswrapper[4881]: I0121 11:58:37.680122 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e8058c9-2ffc-461a-98b1-5470103994c8" containerName="extract-utilities"
Jan 21 11:58:37 crc kubenswrapper[4881]: E0121 11:58:37.680145 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e8058c9-2ffc-461a-98b1-5470103994c8" containerName="registry-server"
Jan 21 11:58:37 crc kubenswrapper[4881]: I0121 11:58:37.680153 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e8058c9-2ffc-461a-98b1-5470103994c8" containerName="registry-server"
Jan 21 11:58:37 crc kubenswrapper[4881]: I0121 11:58:37.680385 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="5e8058c9-2ffc-461a-98b1-5470103994c8" containerName="registry-server"
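The cpu_manager and memory_manager entries above fire while admitting the new redhat-marketplace-wglbm pod: both managers sweep their checkpointed per-container assignments and drop entries belonging to containers that no longer exist (here, the redhat-operators-djpvn pod deleted at 11:55:53). The error-level lines are simply how RemoveStaleState reports each eviction; nothing is failing. A sketch of that sweep (names are illustrative, not kubelet's API):

package managers

// removeStaleState drops checkpointed assignments for containers that are
// no longer in the active set, mirroring the log lines above.
func removeStaleState(assignments map[string]map[string]struct{}, active func(podUID, container string) bool) {
	for podUID, containers := range assignments {
		for name := range containers {
			if !active(podUID, name) {
				delete(containers, name) // "Deleted CPUSet assignment"
			}
		}
		if len(containers) == 0 {
			delete(assignments, podUID) // forget the pod entirely
		}
	}
}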
Jan 21 11:58:37 crc kubenswrapper[4881]: I0121 11:58:37.682390 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wglbm"
Jan 21 11:58:37 crc kubenswrapper[4881]: I0121 11:58:37.696165 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-wglbm"]
Jan 21 11:58:37 crc kubenswrapper[4881]: I0121 11:58:37.821803 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0fc62569-566f-4a73-b58a-93ea02e351d5-catalog-content\") pod \"redhat-marketplace-wglbm\" (UID: \"0fc62569-566f-4a73-b58a-93ea02e351d5\") " pod="openshift-marketplace/redhat-marketplace-wglbm"
Jan 21 11:58:37 crc kubenswrapper[4881]: I0121 11:58:37.821861 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0fc62569-566f-4a73-b58a-93ea02e351d5-utilities\") pod \"redhat-marketplace-wglbm\" (UID: \"0fc62569-566f-4a73-b58a-93ea02e351d5\") " pod="openshift-marketplace/redhat-marketplace-wglbm"
Jan 21 11:58:37 crc kubenswrapper[4881]: I0121 11:58:37.822122 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-grng2\" (UniqueName: \"kubernetes.io/projected/0fc62569-566f-4a73-b58a-93ea02e351d5-kube-api-access-grng2\") pod \"redhat-marketplace-wglbm\" (UID: \"0fc62569-566f-4a73-b58a-93ea02e351d5\") " pod="openshift-marketplace/redhat-marketplace-wglbm"
Jan 21 11:58:37 crc kubenswrapper[4881]: I0121 11:58:37.924553 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0fc62569-566f-4a73-b58a-93ea02e351d5-catalog-content\") pod \"redhat-marketplace-wglbm\" (UID: \"0fc62569-566f-4a73-b58a-93ea02e351d5\") " pod="openshift-marketplace/redhat-marketplace-wglbm"
Jan 21 11:58:37 crc kubenswrapper[4881]: I0121 11:58:37.924600 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0fc62569-566f-4a73-b58a-93ea02e351d5-utilities\") pod \"redhat-marketplace-wglbm\" (UID: \"0fc62569-566f-4a73-b58a-93ea02e351d5\") " pod="openshift-marketplace/redhat-marketplace-wglbm"
Jan 21 11:58:37 crc kubenswrapper[4881]: I0121 11:58:37.924697 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-grng2\" (UniqueName: \"kubernetes.io/projected/0fc62569-566f-4a73-b58a-93ea02e351d5-kube-api-access-grng2\") pod \"redhat-marketplace-wglbm\" (UID: \"0fc62569-566f-4a73-b58a-93ea02e351d5\") " pod="openshift-marketplace/redhat-marketplace-wglbm"
Jan 21 11:58:37 crc kubenswrapper[4881]: I0121 11:58:37.925232 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0fc62569-566f-4a73-b58a-93ea02e351d5-catalog-content\") pod \"redhat-marketplace-wglbm\" (UID: \"0fc62569-566f-4a73-b58a-93ea02e351d5\") " pod="openshift-marketplace/redhat-marketplace-wglbm"
Jan 21 11:58:37 crc kubenswrapper[4881]: I0121 11:58:37.925288 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0fc62569-566f-4a73-b58a-93ea02e351d5-utilities\") pod \"redhat-marketplace-wglbm\" (UID: \"0fc62569-566f-4a73-b58a-93ea02e351d5\") " pod="openshift-marketplace/redhat-marketplace-wglbm"
Jan 21 11:58:37 crc kubenswrapper[4881]: I0121 11:58:37.951876 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-grng2\" (UniqueName: \"kubernetes.io/projected/0fc62569-566f-4a73-b58a-93ea02e351d5-kube-api-access-grng2\") pod \"redhat-marketplace-wglbm\" (UID: \"0fc62569-566f-4a73-b58a-93ea02e351d5\") " pod="openshift-marketplace/redhat-marketplace-wglbm"
Jan 21 11:58:38 crc kubenswrapper[4881]: I0121 11:58:38.007617 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wglbm"
Jan 21 11:58:38 crc kubenswrapper[4881]: I0121 11:58:38.532937 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-wglbm"]
Jan 21 11:58:38 crc kubenswrapper[4881]: I0121 11:58:38.940221 4881 generic.go:334] "Generic (PLEG): container finished" podID="0fc62569-566f-4a73-b58a-93ea02e351d5" containerID="f4477e3fe85c82b0f8c49b858c6a66049488d37fa120cec5dbcd7a7205111dcb" exitCode=0
Jan 21 11:58:38 crc kubenswrapper[4881]: I0121 11:58:38.940297 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wglbm" event={"ID":"0fc62569-566f-4a73-b58a-93ea02e351d5","Type":"ContainerDied","Data":"f4477e3fe85c82b0f8c49b858c6a66049488d37fa120cec5dbcd7a7205111dcb"}
Jan 21 11:58:38 crc kubenswrapper[4881]: I0121 11:58:38.941005 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wglbm" event={"ID":"0fc62569-566f-4a73-b58a-93ea02e351d5","Type":"ContainerStarted","Data":"a1fc02f86769f942a6122e618b7b58b486e44dad90ff27a3391a9c93979aff84"}
Jan 21 11:58:38 crc kubenswrapper[4881]: I0121 11:58:38.942757 4881 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 21 11:58:39 crc kubenswrapper[4881]: I0121 11:58:39.065440 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-wnld6"]
Jan 21 11:58:39 crc kubenswrapper[4881]: I0121 11:58:39.068577 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wnld6"
Jan 21 11:58:39 crc kubenswrapper[4881]: I0121 11:58:39.102851 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wnld6"]
Jan 21 11:58:39 crc kubenswrapper[4881]: I0121 11:58:39.167717 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ftfl7\" (UniqueName: \"kubernetes.io/projected/13dde7f6-f493-4ebb-ba1c-2ba924f29e23-kube-api-access-ftfl7\") pod \"community-operators-wnld6\" (UID: \"13dde7f6-f493-4ebb-ba1c-2ba924f29e23\") " pod="openshift-marketplace/community-operators-wnld6"
Jan 21 11:58:39 crc kubenswrapper[4881]: I0121 11:58:39.167814 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/13dde7f6-f493-4ebb-ba1c-2ba924f29e23-utilities\") pod \"community-operators-wnld6\" (UID: \"13dde7f6-f493-4ebb-ba1c-2ba924f29e23\") " pod="openshift-marketplace/community-operators-wnld6"
Jan 21 11:58:39 crc kubenswrapper[4881]: I0121 11:58:39.168150 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/13dde7f6-f493-4ebb-ba1c-2ba924f29e23-catalog-content\") pod \"community-operators-wnld6\" (UID: \"13dde7f6-f493-4ebb-ba1c-2ba924f29e23\") " pod="openshift-marketplace/community-operators-wnld6"
Jan 21 11:58:39 crc kubenswrapper[4881]: I0121 11:58:39.270270 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/13dde7f6-f493-4ebb-ba1c-2ba924f29e23-catalog-content\") pod \"community-operators-wnld6\" (UID: \"13dde7f6-f493-4ebb-ba1c-2ba924f29e23\") " pod="openshift-marketplace/community-operators-wnld6"
Jan 21 11:58:39 crc kubenswrapper[4881]: I0121 11:58:39.270359 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ftfl7\" (UniqueName: \"kubernetes.io/projected/13dde7f6-f493-4ebb-ba1c-2ba924f29e23-kube-api-access-ftfl7\") pod \"community-operators-wnld6\" (UID: \"13dde7f6-f493-4ebb-ba1c-2ba924f29e23\") " pod="openshift-marketplace/community-operators-wnld6"
Jan 21 11:58:39 crc kubenswrapper[4881]: I0121 11:58:39.270409 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/13dde7f6-f493-4ebb-ba1c-2ba924f29e23-utilities\") pod \"community-operators-wnld6\" (UID: \"13dde7f6-f493-4ebb-ba1c-2ba924f29e23\") " pod="openshift-marketplace/community-operators-wnld6"
Jan 21 11:58:39 crc kubenswrapper[4881]: I0121 11:58:39.270951 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/13dde7f6-f493-4ebb-ba1c-2ba924f29e23-catalog-content\") pod \"community-operators-wnld6\" (UID: \"13dde7f6-f493-4ebb-ba1c-2ba924f29e23\") " pod="openshift-marketplace/community-operators-wnld6"
Jan 21 11:58:39 crc kubenswrapper[4881]: I0121 11:58:39.271004 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/13dde7f6-f493-4ebb-ba1c-2ba924f29e23-utilities\") pod \"community-operators-wnld6\" (UID: \"13dde7f6-f493-4ebb-ba1c-2ba924f29e23\") " pod="openshift-marketplace/community-operators-wnld6"
Jan 21 11:58:39 crc kubenswrapper[4881]: I0121 11:58:39.291982 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ftfl7\" (UniqueName: \"kubernetes.io/projected/13dde7f6-f493-4ebb-ba1c-2ba924f29e23-kube-api-access-ftfl7\") pod \"community-operators-wnld6\" (UID: \"13dde7f6-f493-4ebb-ba1c-2ba924f29e23\") " pod="openshift-marketplace/community-operators-wnld6"
Jan 21 11:58:39 crc kubenswrapper[4881]: I0121 11:58:39.404138 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wnld6"
Jan 21 11:58:39 crc kubenswrapper[4881]: I0121 11:58:39.952565 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wglbm" event={"ID":"0fc62569-566f-4a73-b58a-93ea02e351d5","Type":"ContainerStarted","Data":"798f33404a294de27316f7ff8b2766e348ebb156993ec085f9ccb178e417a91a"}
Jan 21 11:58:39 crc kubenswrapper[4881]: I0121 11:58:39.958674 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wnld6"]
Jan 21 11:58:39 crc kubenswrapper[4881]: W0121 11:58:39.966426 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod13dde7f6_f493_4ebb_ba1c_2ba924f29e23.slice/crio-9dfcfa193e7da807aee026d705aa3db51d60e43a718829318060d2e20313e7c6 WatchSource:0}: Error finding container 9dfcfa193e7da807aee026d705aa3db51d60e43a718829318060d2e20313e7c6: Status 404 returned error can't find the container with id 9dfcfa193e7da807aee026d705aa3db51d60e43a718829318060d2e20313e7c6
Jan 21 11:58:40 crc kubenswrapper[4881]: I0121 11:58:40.964352 4881 generic.go:334] "Generic (PLEG): container finished" podID="0fc62569-566f-4a73-b58a-93ea02e351d5" containerID="798f33404a294de27316f7ff8b2766e348ebb156993ec085f9ccb178e417a91a" exitCode=0
Jan 21 11:58:40 crc kubenswrapper[4881]: I0121 11:58:40.964419 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wglbm" event={"ID":"0fc62569-566f-4a73-b58a-93ea02e351d5","Type":"ContainerDied","Data":"798f33404a294de27316f7ff8b2766e348ebb156993ec085f9ccb178e417a91a"}
Jan 21 11:58:40 crc kubenswrapper[4881]: I0121 11:58:40.968433 4881 generic.go:334] "Generic (PLEG): container finished" podID="13dde7f6-f493-4ebb-ba1c-2ba924f29e23" containerID="7ec875ee36db270ccd84290368a873a416bf8317eab9b3f2ea99be677c73066a" exitCode=0
Jan 21 11:58:40 crc kubenswrapper[4881]: I0121 11:58:40.968553 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wnld6" event={"ID":"13dde7f6-f493-4ebb-ba1c-2ba924f29e23","Type":"ContainerDied","Data":"7ec875ee36db270ccd84290368a873a416bf8317eab9b3f2ea99be677c73066a"}
Jan 21 11:58:40 crc kubenswrapper[4881]: I0121 11:58:40.968658 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wnld6" event={"ID":"13dde7f6-f493-4ebb-ba1c-2ba924f29e23","Type":"ContainerStarted","Data":"9dfcfa193e7da807aee026d705aa3db51d60e43a718829318060d2e20313e7c6"}
Jan 21 11:58:42 crc kubenswrapper[4881]: I0121 11:58:42.043033 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wnld6" event={"ID":"13dde7f6-f493-4ebb-ba1c-2ba924f29e23","Type":"ContainerStarted","Data":"2ee68717fbc26f32eb264a09574b499020d832f091cbc0024a98d36e8b74228f"}
Jan 21 11:58:42 crc kubenswrapper[4881]: I0121 11:58:42.047924 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wglbm" event={"ID":"0fc62569-566f-4a73-b58a-93ea02e351d5","Type":"ContainerStarted","Data":"14b8577e9b7273c011247d4f6cb6d6cad7ed131b5bf5fd6883614bbacb2dce1f"}
Jan 21 11:58:42 crc kubenswrapper[4881]: I0121 11:58:42.090387 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-wglbm" podStartSLOduration=2.633083076 podStartE2EDuration="5.09036361s" podCreationTimestamp="2026-01-21 11:58:37 +0000 UTC" firstStartedPulling="2026-01-21 11:58:38.942424374 +0000 UTC m=+3706.202380843" lastFinishedPulling="2026-01-21 11:58:41.399704888 +0000 UTC m=+3708.659661377" observedRunningTime="2026-01-21 11:58:42.082641242 +0000 UTC m=+3709.342597711" watchObservedRunningTime="2026-01-21 11:58:42.09036361 +0000 UTC m=+3709.350320079"
Jan 21 11:58:44 crc kubenswrapper[4881]: I0121 11:58:44.073587 4881 generic.go:334] "Generic (PLEG): container finished" podID="13dde7f6-f493-4ebb-ba1c-2ba924f29e23" containerID="2ee68717fbc26f32eb264a09574b499020d832f091cbc0024a98d36e8b74228f" exitCode=0
Jan 21 11:58:44 crc kubenswrapper[4881]: I0121 11:58:44.073680 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wnld6" event={"ID":"13dde7f6-f493-4ebb-ba1c-2ba924f29e23","Type":"ContainerDied","Data":"2ee68717fbc26f32eb264a09574b499020d832f091cbc0024a98d36e8b74228f"}
Jan 21 11:58:45 crc kubenswrapper[4881]: I0121 11:58:45.087755 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wnld6" event={"ID":"13dde7f6-f493-4ebb-ba1c-2ba924f29e23","Type":"ContainerStarted","Data":"290a46cd0f3af9ddfb85613c4e7fbe1f03098a1880c638003eec48d10be28765"}
Jan 21 11:58:45 crc kubenswrapper[4881]: I0121 11:58:45.112352 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-wnld6" podStartSLOduration=2.650694887 podStartE2EDuration="6.112332165s" podCreationTimestamp="2026-01-21 11:58:39 +0000 UTC" firstStartedPulling="2026-01-21 11:58:40.970812618 +0000 UTC m=+3708.230769127" lastFinishedPulling="2026-01-21 11:58:44.432449936 +0000 UTC m=+3711.692406405" observedRunningTime="2026-01-21 11:58:45.107927488 +0000 UTC m=+3712.367883957" watchObservedRunningTime="2026-01-21 11:58:45.112332165 +0000 UTC m=+3712.372288644"
Jan 21 11:58:48 crc kubenswrapper[4881]: I0121 11:58:48.008386 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-wglbm"
Jan 21 11:58:48 crc kubenswrapper[4881]: I0121 11:58:48.009133 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-wglbm"
Jan 21 11:58:48 crc kubenswrapper[4881]: I0121 11:58:48.060753 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-wglbm"
Jan 21 11:58:48 crc kubenswrapper[4881]: I0121 11:58:48.188145 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-wglbm"
Jan 21 11:58:48 crc kubenswrapper[4881]: I0121 11:58:48.655639 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-wglbm"]
Jan 21 11:58:49 crc kubenswrapper[4881]: I0121 11:58:49.405277 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-wnld6"
Jan 21 11:58:49 crc kubenswrapper[4881]: I0121 11:58:49.405679 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-wnld6"
Jan 21 11:58:49 crc kubenswrapper[4881]: I0121 11:58:49.469262 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-wnld6"
Jan 21 11:58:50 crc kubenswrapper[4881]: I0121 11:58:50.151356 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-wglbm" podUID="0fc62569-566f-4a73-b58a-93ea02e351d5" containerName="registry-server" containerID="cri-o://14b8577e9b7273c011247d4f6cb6d6cad7ed131b5bf5fd6883614bbacb2dce1f" gracePeriod=2
Jan 21 11:58:50 crc kubenswrapper[4881]: I0121 11:58:50.200642 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-wnld6"
Jan 21 11:58:50 crc kubenswrapper[4881]: I0121 11:58:50.660905 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wglbm"
Jan 21 11:58:50 crc kubenswrapper[4881]: I0121 11:58:50.797467 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-grng2\" (UniqueName: \"kubernetes.io/projected/0fc62569-566f-4a73-b58a-93ea02e351d5-kube-api-access-grng2\") pod \"0fc62569-566f-4a73-b58a-93ea02e351d5\" (UID: \"0fc62569-566f-4a73-b58a-93ea02e351d5\") "
Jan 21 11:58:50 crc kubenswrapper[4881]: I0121 11:58:50.797571 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0fc62569-566f-4a73-b58a-93ea02e351d5-catalog-content\") pod \"0fc62569-566f-4a73-b58a-93ea02e351d5\" (UID: \"0fc62569-566f-4a73-b58a-93ea02e351d5\") "
Jan 21 11:58:50 crc kubenswrapper[4881]: I0121 11:58:50.797700 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0fc62569-566f-4a73-b58a-93ea02e351d5-utilities\") pod \"0fc62569-566f-4a73-b58a-93ea02e351d5\" (UID: \"0fc62569-566f-4a73-b58a-93ea02e351d5\") "
Jan 21 11:58:50 crc kubenswrapper[4881]: I0121 11:58:50.799128 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0fc62569-566f-4a73-b58a-93ea02e351d5-utilities" (OuterVolumeSpecName: "utilities") pod "0fc62569-566f-4a73-b58a-93ea02e351d5" (UID: "0fc62569-566f-4a73-b58a-93ea02e351d5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 11:58:50 crc kubenswrapper[4881]: I0121 11:58:50.804614 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0fc62569-566f-4a73-b58a-93ea02e351d5-kube-api-access-grng2" (OuterVolumeSpecName: "kube-api-access-grng2") pod "0fc62569-566f-4a73-b58a-93ea02e351d5" (UID: "0fc62569-566f-4a73-b58a-93ea02e351d5"). InnerVolumeSpecName "kube-api-access-grng2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 11:58:50 crc kubenswrapper[4881]: I0121 11:58:50.821706 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0fc62569-566f-4a73-b58a-93ea02e351d5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0fc62569-566f-4a73-b58a-93ea02e351d5" (UID: "0fc62569-566f-4a73-b58a-93ea02e351d5"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 11:58:50 crc kubenswrapper[4881]: I0121 11:58:50.901003 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-grng2\" (UniqueName: \"kubernetes.io/projected/0fc62569-566f-4a73-b58a-93ea02e351d5-kube-api-access-grng2\") on node \"crc\" DevicePath \"\""
Jan 21 11:58:50 crc kubenswrapper[4881]: I0121 11:58:50.901040 4881 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0fc62569-566f-4a73-b58a-93ea02e351d5-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 21 11:58:50 crc kubenswrapper[4881]: I0121 11:58:50.901049 4881 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0fc62569-566f-4a73-b58a-93ea02e351d5-utilities\") on node \"crc\" DevicePath \"\""
Jan 21 11:58:51 crc kubenswrapper[4881]: I0121 11:58:51.162586 4881 generic.go:334] "Generic (PLEG): container finished" podID="0fc62569-566f-4a73-b58a-93ea02e351d5" containerID="14b8577e9b7273c011247d4f6cb6d6cad7ed131b5bf5fd6883614bbacb2dce1f" exitCode=0
Jan 21 11:58:51 crc kubenswrapper[4881]: I0121 11:58:51.162679 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wglbm" event={"ID":"0fc62569-566f-4a73-b58a-93ea02e351d5","Type":"ContainerDied","Data":"14b8577e9b7273c011247d4f6cb6d6cad7ed131b5bf5fd6883614bbacb2dce1f"}
Jan 21 11:58:51 crc kubenswrapper[4881]: I0121 11:58:51.162724 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wglbm" event={"ID":"0fc62569-566f-4a73-b58a-93ea02e351d5","Type":"ContainerDied","Data":"a1fc02f86769f942a6122e618b7b58b486e44dad90ff27a3391a9c93979aff84"}
Jan 21 11:58:51 crc kubenswrapper[4881]: I0121 11:58:51.162741 4881 scope.go:117] "RemoveContainer" containerID="14b8577e9b7273c011247d4f6cb6d6cad7ed131b5bf5fd6883614bbacb2dce1f"
Jan 21 11:58:51 crc kubenswrapper[4881]: I0121 11:58:51.162690 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wglbm"
Jan 21 11:58:51 crc kubenswrapper[4881]: I0121 11:58:51.187600 4881 scope.go:117] "RemoveContainer" containerID="798f33404a294de27316f7ff8b2766e348ebb156993ec085f9ccb178e417a91a"
Jan 21 11:58:51 crc kubenswrapper[4881]: I0121 11:58:51.214824 4881 scope.go:117] "RemoveContainer" containerID="f4477e3fe85c82b0f8c49b858c6a66049488d37fa120cec5dbcd7a7205111dcb"
Jan 21 11:58:51 crc kubenswrapper[4881]: I0121 11:58:51.219298 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-wglbm"]
Jan 21 11:58:51 crc kubenswrapper[4881]: I0121 11:58:51.227932 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-wglbm"]
Jan 21 11:58:51 crc kubenswrapper[4881]: I0121 11:58:51.252058 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-wnld6"]
Jan 21 11:58:51 crc kubenswrapper[4881]: I0121 11:58:51.282615 4881 scope.go:117] "RemoveContainer" containerID="14b8577e9b7273c011247d4f6cb6d6cad7ed131b5bf5fd6883614bbacb2dce1f"
Jan 21 11:58:51 crc kubenswrapper[4881]: E0121 11:58:51.283011 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"14b8577e9b7273c011247d4f6cb6d6cad7ed131b5bf5fd6883614bbacb2dce1f\": container with ID starting with 14b8577e9b7273c011247d4f6cb6d6cad7ed131b5bf5fd6883614bbacb2dce1f not found: ID does not exist" containerID="14b8577e9b7273c011247d4f6cb6d6cad7ed131b5bf5fd6883614bbacb2dce1f"
Jan 21 11:58:51 crc kubenswrapper[4881]: I0121 11:58:51.283049 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"14b8577e9b7273c011247d4f6cb6d6cad7ed131b5bf5fd6883614bbacb2dce1f"} err="failed to get container status \"14b8577e9b7273c011247d4f6cb6d6cad7ed131b5bf5fd6883614bbacb2dce1f\": rpc error: code = NotFound desc = could not find container \"14b8577e9b7273c011247d4f6cb6d6cad7ed131b5bf5fd6883614bbacb2dce1f\": container with ID starting with 14b8577e9b7273c011247d4f6cb6d6cad7ed131b5bf5fd6883614bbacb2dce1f not found: ID does not exist"
Jan 21 11:58:51 crc kubenswrapper[4881]: I0121 11:58:51.283078 4881 scope.go:117] "RemoveContainer" containerID="798f33404a294de27316f7ff8b2766e348ebb156993ec085f9ccb178e417a91a"
Jan 21 11:58:51 crc kubenswrapper[4881]: E0121 11:58:51.283373 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"798f33404a294de27316f7ff8b2766e348ebb156993ec085f9ccb178e417a91a\": container with ID starting with 798f33404a294de27316f7ff8b2766e348ebb156993ec085f9ccb178e417a91a not found: ID does not exist" containerID="798f33404a294de27316f7ff8b2766e348ebb156993ec085f9ccb178e417a91a"
Jan 21 11:58:51 crc kubenswrapper[4881]: I0121 11:58:51.283408 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"798f33404a294de27316f7ff8b2766e348ebb156993ec085f9ccb178e417a91a"} err="failed to get container status \"798f33404a294de27316f7ff8b2766e348ebb156993ec085f9ccb178e417a91a\": rpc error: code = NotFound desc = could not find container \"798f33404a294de27316f7ff8b2766e348ebb156993ec085f9ccb178e417a91a\": container with ID starting with 798f33404a294de27316f7ff8b2766e348ebb156993ec085f9ccb178e417a91a not found: ID does not exist"
Jan 21 11:58:51 crc kubenswrapper[4881]: I0121 11:58:51.283427 4881 scope.go:117] "RemoveContainer" containerID="f4477e3fe85c82b0f8c49b858c6a66049488d37fa120cec5dbcd7a7205111dcb"
Jan 21 11:58:51 crc kubenswrapper[4881]: E0121 11:58:51.283752 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f4477e3fe85c82b0f8c49b858c6a66049488d37fa120cec5dbcd7a7205111dcb\": container with ID starting with f4477e3fe85c82b0f8c49b858c6a66049488d37fa120cec5dbcd7a7205111dcb not found: ID does not exist" containerID="f4477e3fe85c82b0f8c49b858c6a66049488d37fa120cec5dbcd7a7205111dcb"
Jan 21 11:58:51 crc kubenswrapper[4881]: I0121 11:58:51.283829 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f4477e3fe85c82b0f8c49b858c6a66049488d37fa120cec5dbcd7a7205111dcb"} err="failed to get container status \"f4477e3fe85c82b0f8c49b858c6a66049488d37fa120cec5dbcd7a7205111dcb\": rpc error: code = NotFound desc = could not find container \"f4477e3fe85c82b0f8c49b858c6a66049488d37fa120cec5dbcd7a7205111dcb\": container with ID starting with f4477e3fe85c82b0f8c49b858c6a66049488d37fa120cec5dbcd7a7205111dcb not found: ID does not exist"
Jan 21 11:58:51 crc kubenswrapper[4881]: I0121 11:58:51.327658 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0fc62569-566f-4a73-b58a-93ea02e351d5" path="/var/lib/kubelet/pods/0fc62569-566f-4a73-b58a-93ea02e351d5/volumes"
Jan 21 11:58:52 crc kubenswrapper[4881]: I0121 11:58:52.175479 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-wnld6" podUID="13dde7f6-f493-4ebb-ba1c-2ba924f29e23" containerName="registry-server" containerID="cri-o://290a46cd0f3af9ddfb85613c4e7fbe1f03098a1880c638003eec48d10be28765" gracePeriod=2
Jan 21 11:58:52 crc kubenswrapper[4881]: I0121 11:58:52.654971 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wnld6"
Jan 21 11:58:52 crc kubenswrapper[4881]: I0121 11:58:52.846720 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ftfl7\" (UniqueName: \"kubernetes.io/projected/13dde7f6-f493-4ebb-ba1c-2ba924f29e23-kube-api-access-ftfl7\") pod \"13dde7f6-f493-4ebb-ba1c-2ba924f29e23\" (UID: \"13dde7f6-f493-4ebb-ba1c-2ba924f29e23\") "
Jan 21 11:58:52 crc kubenswrapper[4881]: I0121 11:58:52.846967 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/13dde7f6-f493-4ebb-ba1c-2ba924f29e23-catalog-content\") pod \"13dde7f6-f493-4ebb-ba1c-2ba924f29e23\" (UID: \"13dde7f6-f493-4ebb-ba1c-2ba924f29e23\") "
Jan 21 11:58:52 crc kubenswrapper[4881]: I0121 11:58:52.847087 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/13dde7f6-f493-4ebb-ba1c-2ba924f29e23-utilities\") pod \"13dde7f6-f493-4ebb-ba1c-2ba924f29e23\" (UID: \"13dde7f6-f493-4ebb-ba1c-2ba924f29e23\") "
Jan 21 11:58:52 crc kubenswrapper[4881]: I0121 11:58:52.849956 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/13dde7f6-f493-4ebb-ba1c-2ba924f29e23-utilities" (OuterVolumeSpecName: "utilities") pod "13dde7f6-f493-4ebb-ba1c-2ba924f29e23" (UID: "13dde7f6-f493-4ebb-ba1c-2ba924f29e23"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 11:58:52 crc kubenswrapper[4881]: I0121 11:58:52.878966 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/13dde7f6-f493-4ebb-ba1c-2ba924f29e23-kube-api-access-ftfl7" (OuterVolumeSpecName: "kube-api-access-ftfl7") pod "13dde7f6-f493-4ebb-ba1c-2ba924f29e23" (UID: "13dde7f6-f493-4ebb-ba1c-2ba924f29e23"). InnerVolumeSpecName "kube-api-access-ftfl7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 11:58:52 crc kubenswrapper[4881]: I0121 11:58:52.918111 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/13dde7f6-f493-4ebb-ba1c-2ba924f29e23-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "13dde7f6-f493-4ebb-ba1c-2ba924f29e23" (UID: "13dde7f6-f493-4ebb-ba1c-2ba924f29e23"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 11:58:52 crc kubenswrapper[4881]: I0121 11:58:52.949649 4881 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/13dde7f6-f493-4ebb-ba1c-2ba924f29e23-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 21 11:58:52 crc kubenswrapper[4881]: I0121 11:58:52.949685 4881 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/13dde7f6-f493-4ebb-ba1c-2ba924f29e23-utilities\") on node \"crc\" DevicePath \"\""
Jan 21 11:58:52 crc kubenswrapper[4881]: I0121 11:58:52.949698 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ftfl7\" (UniqueName: \"kubernetes.io/projected/13dde7f6-f493-4ebb-ba1c-2ba924f29e23-kube-api-access-ftfl7\") on node \"crc\" DevicePath \"\""
Jan 21 11:58:53 crc kubenswrapper[4881]: I0121 11:58:53.215296 4881 generic.go:334] "Generic (PLEG): container finished" podID="13dde7f6-f493-4ebb-ba1c-2ba924f29e23" containerID="290a46cd0f3af9ddfb85613c4e7fbe1f03098a1880c638003eec48d10be28765" exitCode=0
Jan 21 11:58:53 crc kubenswrapper[4881]: I0121 11:58:53.215339 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wnld6" event={"ID":"13dde7f6-f493-4ebb-ba1c-2ba924f29e23","Type":"ContainerDied","Data":"290a46cd0f3af9ddfb85613c4e7fbe1f03098a1880c638003eec48d10be28765"}
Jan 21 11:58:53 crc kubenswrapper[4881]: I0121 11:58:53.215369 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wnld6" event={"ID":"13dde7f6-f493-4ebb-ba1c-2ba924f29e23","Type":"ContainerDied","Data":"9dfcfa193e7da807aee026d705aa3db51d60e43a718829318060d2e20313e7c6"}
Jan 21 11:58:53 crc kubenswrapper[4881]: I0121 11:58:53.215400 4881 scope.go:117] "RemoveContainer" containerID="290a46cd0f3af9ddfb85613c4e7fbe1f03098a1880c638003eec48d10be28765"
Jan 21 11:58:53 crc kubenswrapper[4881]: I0121 11:58:53.215720 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wnld6"
Jan 21 11:58:53 crc kubenswrapper[4881]: I0121 11:58:53.239176 4881 scope.go:117] "RemoveContainer" containerID="2ee68717fbc26f32eb264a09574b499020d832f091cbc0024a98d36e8b74228f"
Jan 21 11:58:53 crc kubenswrapper[4881]: I0121 11:58:53.266933 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-wnld6"]
Jan 21 11:58:53 crc kubenswrapper[4881]: I0121 11:58:53.284065 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-wnld6"]
Jan 21 11:58:53 crc kubenswrapper[4881]: I0121 11:58:53.287574 4881 scope.go:117] "RemoveContainer" containerID="7ec875ee36db270ccd84290368a873a416bf8317eab9b3f2ea99be677c73066a"
Jan 21 11:58:53 crc kubenswrapper[4881]: I0121 11:58:53.325258 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="13dde7f6-f493-4ebb-ba1c-2ba924f29e23" path="/var/lib/kubelet/pods/13dde7f6-f493-4ebb-ba1c-2ba924f29e23/volumes"
Jan 21 11:58:53 crc kubenswrapper[4881]: I0121 11:58:53.336582 4881 scope.go:117] "RemoveContainer" containerID="290a46cd0f3af9ddfb85613c4e7fbe1f03098a1880c638003eec48d10be28765"
Jan 21 11:58:53 crc kubenswrapper[4881]: E0121 11:58:53.337236 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"290a46cd0f3af9ddfb85613c4e7fbe1f03098a1880c638003eec48d10be28765\": container with ID starting with 290a46cd0f3af9ddfb85613c4e7fbe1f03098a1880c638003eec48d10be28765 not found: ID does not exist" containerID="290a46cd0f3af9ddfb85613c4e7fbe1f03098a1880c638003eec48d10be28765"
Jan 21 11:58:53 crc kubenswrapper[4881]: I0121 11:58:53.337324 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"290a46cd0f3af9ddfb85613c4e7fbe1f03098a1880c638003eec48d10be28765"} err="failed to get container status \"290a46cd0f3af9ddfb85613c4e7fbe1f03098a1880c638003eec48d10be28765\": rpc error: code = NotFound desc = could not find container \"290a46cd0f3af9ddfb85613c4e7fbe1f03098a1880c638003eec48d10be28765\": container with ID starting with 290a46cd0f3af9ddfb85613c4e7fbe1f03098a1880c638003eec48d10be28765 not found: ID does not exist"
Jan 21 11:58:53 crc kubenswrapper[4881]: I0121 11:58:53.337362 4881 scope.go:117] "RemoveContainer" containerID="2ee68717fbc26f32eb264a09574b499020d832f091cbc0024a98d36e8b74228f"
Jan 21 11:58:53 crc kubenswrapper[4881]: E0121 11:58:53.337979 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2ee68717fbc26f32eb264a09574b499020d832f091cbc0024a98d36e8b74228f\": container with ID starting with 2ee68717fbc26f32eb264a09574b499020d832f091cbc0024a98d36e8b74228f not found: ID does not exist" containerID="2ee68717fbc26f32eb264a09574b499020d832f091cbc0024a98d36e8b74228f"
Jan 21 11:58:53 crc kubenswrapper[4881]: I0121 11:58:53.338065 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2ee68717fbc26f32eb264a09574b499020d832f091cbc0024a98d36e8b74228f"} err="failed to get container status \"2ee68717fbc26f32eb264a09574b499020d832f091cbc0024a98d36e8b74228f\": rpc error: code = NotFound desc = could not find container \"2ee68717fbc26f32eb264a09574b499020d832f091cbc0024a98d36e8b74228f\": container with ID starting with 2ee68717fbc26f32eb264a09574b499020d832f091cbc0024a98d36e8b74228f not found: ID does not exist"
Jan 21 11:58:53 crc kubenswrapper[4881]: I0121 11:58:53.338130 4881 scope.go:117] "RemoveContainer" containerID="7ec875ee36db270ccd84290368a873a416bf8317eab9b3f2ea99be677c73066a"
Jan 21 11:58:53 crc kubenswrapper[4881]: E0121 11:58:53.338814 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7ec875ee36db270ccd84290368a873a416bf8317eab9b3f2ea99be677c73066a\": container with ID starting with 7ec875ee36db270ccd84290368a873a416bf8317eab9b3f2ea99be677c73066a not found: ID does not exist" containerID="7ec875ee36db270ccd84290368a873a416bf8317eab9b3f2ea99be677c73066a"
Jan 21 11:58:53 crc kubenswrapper[4881]: I0121 11:58:53.338847 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7ec875ee36db270ccd84290368a873a416bf8317eab9b3f2ea99be677c73066a"} err="failed to get container status \"7ec875ee36db270ccd84290368a873a416bf8317eab9b3f2ea99be677c73066a\": rpc error: code = NotFound desc = could not find container \"7ec875ee36db270ccd84290368a873a416bf8317eab9b3f2ea99be677c73066a\": container with ID starting with 7ec875ee36db270ccd84290368a873a416bf8317eab9b3f2ea99be677c73066a not found: ID does not exist"
Jan 21 11:59:29 crc kubenswrapper[4881]: I0121 11:59:29.851131 4881 patch_prober.go:28] interesting pod/machine-config-daemon-fb4fr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 21 11:59:29 crc kubenswrapper[4881]: I0121 11:59:29.851781 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 21 11:59:59 crc kubenswrapper[4881]: I0121 11:59:59.850716 4881 patch_prober.go:28] interesting pod/machine-config-daemon-fb4fr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 21 11:59:59 crc kubenswrapper[4881]: I0121 11:59:59.851376 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 21 12:00:00 crc kubenswrapper[4881]: I0121 12:00:00.199913 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483280-rl7qn"]
Jan 21 12:00:00 crc kubenswrapper[4881]: E0121 12:00:00.200456 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="13dde7f6-f493-4ebb-ba1c-2ba924f29e23" containerName="extract-content"
Jan 21 12:00:00 crc kubenswrapper[4881]: I0121 12:00:00.200481 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="13dde7f6-f493-4ebb-ba1c-2ba924f29e23" containerName="extract-content"
Jan 21 12:00:00 crc kubenswrapper[4881]: E0121 12:00:00.200520 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="13dde7f6-f493-4ebb-ba1c-2ba924f29e23" containerName="registry-server"
Jan 21 12:00:00 crc kubenswrapper[4881]: I0121 12:00:00.200529 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="13dde7f6-f493-4ebb-ba1c-2ba924f29e23" containerName="registry-server"
Jan 21 12:00:00 crc kubenswrapper[4881]: E0121 12:00:00.200541 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0fc62569-566f-4a73-b58a-93ea02e351d5" containerName="extract-content"
Jan 21 12:00:00 crc kubenswrapper[4881]: I0121 12:00:00.200550 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="0fc62569-566f-4a73-b58a-93ea02e351d5" containerName="extract-content"
Jan 21 12:00:00 crc kubenswrapper[4881]: E0121 12:00:00.200566 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0fc62569-566f-4a73-b58a-93ea02e351d5" containerName="registry-server"
Jan 21 12:00:00 crc kubenswrapper[4881]: I0121 12:00:00.200573 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="0fc62569-566f-4a73-b58a-93ea02e351d5" containerName="registry-server"
Jan 21 12:00:00 crc kubenswrapper[4881]: E0121 12:00:00.200609 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="13dde7f6-f493-4ebb-ba1c-2ba924f29e23" containerName="extract-utilities"
Jan 21 12:00:00 crc kubenswrapper[4881]: I0121 12:00:00.200618 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="13dde7f6-f493-4ebb-ba1c-2ba924f29e23" containerName="extract-utilities"
Jan 21 12:00:00 crc kubenswrapper[4881]: E0121 12:00:00.200634 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0fc62569-566f-4a73-b58a-93ea02e351d5" containerName="extract-utilities"
Jan 21 12:00:00 crc kubenswrapper[4881]: I0121 12:00:00.200642 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="0fc62569-566f-4a73-b58a-93ea02e351d5" containerName="extract-utilities"
Jan 21 12:00:00 crc kubenswrapper[4881]: I0121 12:00:00.200930 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="0fc62569-566f-4a73-b58a-93ea02e351d5" containerName="registry-server"
Jan 21 12:00:00 crc kubenswrapper[4881]: I0121 12:00:00.200965 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="13dde7f6-f493-4ebb-ba1c-2ba924f29e23" containerName="registry-server"
Jan 21 12:00:00 crc kubenswrapper[4881]: I0121 12:00:00.202018 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483280-rl7qn"
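collect-profiles is an OLM CronJob, and the numeric suffix in collect-profiles-29483280 follows the CronJob controller's Job-naming convention: the suffix is the scheduled time expressed in minutes since the Unix epoch. Decoding it reproduces this log's timestamp exactly:

package main

import (
	"fmt"
	"time"
)

func main() {
	// Suffix from the Job name in the log above: collect-profiles-29483280.
	// CronJob-created Jobs are named <cronjob>-<scheduledTime.Unix()/60>.
	const suffix = 29483280
	fmt.Println(time.Unix(suffix*60, 0).UTC()) // 2026-01-21 12:00:00 +0000 UTC
}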
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483280-rl7qn" Jan 21 12:00:00 crc kubenswrapper[4881]: I0121 12:00:00.209975 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 21 12:00:00 crc kubenswrapper[4881]: I0121 12:00:00.210178 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 21 12:00:00 crc kubenswrapper[4881]: I0121 12:00:00.211100 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483280-rl7qn"] Jan 21 12:00:00 crc kubenswrapper[4881]: I0121 12:00:00.220417 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e74d3023-7ad9-4e65-9627-cc8127927f6b-config-volume\") pod \"collect-profiles-29483280-rl7qn\" (UID: \"e74d3023-7ad9-4e65-9627-cc8127927f6b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483280-rl7qn" Jan 21 12:00:00 crc kubenswrapper[4881]: I0121 12:00:00.220636 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dh2lm\" (UniqueName: \"kubernetes.io/projected/e74d3023-7ad9-4e65-9627-cc8127927f6b-kube-api-access-dh2lm\") pod \"collect-profiles-29483280-rl7qn\" (UID: \"e74d3023-7ad9-4e65-9627-cc8127927f6b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483280-rl7qn" Jan 21 12:00:00 crc kubenswrapper[4881]: I0121 12:00:00.220957 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e74d3023-7ad9-4e65-9627-cc8127927f6b-secret-volume\") pod \"collect-profiles-29483280-rl7qn\" (UID: \"e74d3023-7ad9-4e65-9627-cc8127927f6b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483280-rl7qn" Jan 21 12:00:00 crc kubenswrapper[4881]: I0121 12:00:00.323326 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e74d3023-7ad9-4e65-9627-cc8127927f6b-config-volume\") pod \"collect-profiles-29483280-rl7qn\" (UID: \"e74d3023-7ad9-4e65-9627-cc8127927f6b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483280-rl7qn" Jan 21 12:00:00 crc kubenswrapper[4881]: I0121 12:00:00.323405 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dh2lm\" (UniqueName: \"kubernetes.io/projected/e74d3023-7ad9-4e65-9627-cc8127927f6b-kube-api-access-dh2lm\") pod \"collect-profiles-29483280-rl7qn\" (UID: \"e74d3023-7ad9-4e65-9627-cc8127927f6b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483280-rl7qn" Jan 21 12:00:00 crc kubenswrapper[4881]: I0121 12:00:00.323549 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e74d3023-7ad9-4e65-9627-cc8127927f6b-secret-volume\") pod \"collect-profiles-29483280-rl7qn\" (UID: \"e74d3023-7ad9-4e65-9627-cc8127927f6b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483280-rl7qn" Jan 21 12:00:00 crc kubenswrapper[4881]: I0121 12:00:00.325032 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e74d3023-7ad9-4e65-9627-cc8127927f6b-config-volume\") pod 
\"collect-profiles-29483280-rl7qn\" (UID: \"e74d3023-7ad9-4e65-9627-cc8127927f6b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483280-rl7qn" Jan 21 12:00:00 crc kubenswrapper[4881]: I0121 12:00:00.332653 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e74d3023-7ad9-4e65-9627-cc8127927f6b-secret-volume\") pod \"collect-profiles-29483280-rl7qn\" (UID: \"e74d3023-7ad9-4e65-9627-cc8127927f6b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483280-rl7qn" Jan 21 12:00:00 crc kubenswrapper[4881]: I0121 12:00:00.340871 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dh2lm\" (UniqueName: \"kubernetes.io/projected/e74d3023-7ad9-4e65-9627-cc8127927f6b-kube-api-access-dh2lm\") pod \"collect-profiles-29483280-rl7qn\" (UID: \"e74d3023-7ad9-4e65-9627-cc8127927f6b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483280-rl7qn" Jan 21 12:00:00 crc kubenswrapper[4881]: I0121 12:00:00.530638 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483280-rl7qn" Jan 21 12:00:01 crc kubenswrapper[4881]: I0121 12:00:01.026632 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483280-rl7qn"] Jan 21 12:00:01 crc kubenswrapper[4881]: I0121 12:00:01.975977 4881 generic.go:334] "Generic (PLEG): container finished" podID="e74d3023-7ad9-4e65-9627-cc8127927f6b" containerID="f4fa32143b4e9e742c21ea98ab2bdc72498265c13850a532b1a72e716a34316a" exitCode=0 Jan 21 12:00:01 crc kubenswrapper[4881]: I0121 12:00:01.976371 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483280-rl7qn" event={"ID":"e74d3023-7ad9-4e65-9627-cc8127927f6b","Type":"ContainerDied","Data":"f4fa32143b4e9e742c21ea98ab2bdc72498265c13850a532b1a72e716a34316a"} Jan 21 12:00:01 crc kubenswrapper[4881]: I0121 12:00:01.976414 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483280-rl7qn" event={"ID":"e74d3023-7ad9-4e65-9627-cc8127927f6b","Type":"ContainerStarted","Data":"043088683aabf2d418e683c2f01d6f19ffe884d446753df6d19dcbbf4a207932"} Jan 21 12:00:03 crc kubenswrapper[4881]: I0121 12:00:03.385196 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483280-rl7qn" Jan 21 12:00:03 crc kubenswrapper[4881]: I0121 12:00:03.486920 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e74d3023-7ad9-4e65-9627-cc8127927f6b-secret-volume\") pod \"e74d3023-7ad9-4e65-9627-cc8127927f6b\" (UID: \"e74d3023-7ad9-4e65-9627-cc8127927f6b\") " Jan 21 12:00:03 crc kubenswrapper[4881]: I0121 12:00:03.487089 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e74d3023-7ad9-4e65-9627-cc8127927f6b-config-volume\") pod \"e74d3023-7ad9-4e65-9627-cc8127927f6b\" (UID: \"e74d3023-7ad9-4e65-9627-cc8127927f6b\") " Jan 21 12:00:03 crc kubenswrapper[4881]: I0121 12:00:03.487226 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dh2lm\" (UniqueName: \"kubernetes.io/projected/e74d3023-7ad9-4e65-9627-cc8127927f6b-kube-api-access-dh2lm\") pod \"e74d3023-7ad9-4e65-9627-cc8127927f6b\" (UID: \"e74d3023-7ad9-4e65-9627-cc8127927f6b\") " Jan 21 12:00:03 crc kubenswrapper[4881]: I0121 12:00:03.487589 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e74d3023-7ad9-4e65-9627-cc8127927f6b-config-volume" (OuterVolumeSpecName: "config-volume") pod "e74d3023-7ad9-4e65-9627-cc8127927f6b" (UID: "e74d3023-7ad9-4e65-9627-cc8127927f6b"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 12:00:03 crc kubenswrapper[4881]: I0121 12:00:03.488243 4881 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e74d3023-7ad9-4e65-9627-cc8127927f6b-config-volume\") on node \"crc\" DevicePath \"\"" Jan 21 12:00:03 crc kubenswrapper[4881]: I0121 12:00:03.493688 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e74d3023-7ad9-4e65-9627-cc8127927f6b-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "e74d3023-7ad9-4e65-9627-cc8127927f6b" (UID: "e74d3023-7ad9-4e65-9627-cc8127927f6b"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 12:00:03 crc kubenswrapper[4881]: I0121 12:00:03.494695 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e74d3023-7ad9-4e65-9627-cc8127927f6b-kube-api-access-dh2lm" (OuterVolumeSpecName: "kube-api-access-dh2lm") pod "e74d3023-7ad9-4e65-9627-cc8127927f6b" (UID: "e74d3023-7ad9-4e65-9627-cc8127927f6b"). InnerVolumeSpecName "kube-api-access-dh2lm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 12:00:03 crc kubenswrapper[4881]: I0121 12:00:03.589993 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dh2lm\" (UniqueName: \"kubernetes.io/projected/e74d3023-7ad9-4e65-9627-cc8127927f6b-kube-api-access-dh2lm\") on node \"crc\" DevicePath \"\"" Jan 21 12:00:03 crc kubenswrapper[4881]: I0121 12:00:03.590033 4881 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e74d3023-7ad9-4e65-9627-cc8127927f6b-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 21 12:00:03 crc kubenswrapper[4881]: I0121 12:00:03.997544 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483280-rl7qn" event={"ID":"e74d3023-7ad9-4e65-9627-cc8127927f6b","Type":"ContainerDied","Data":"043088683aabf2d418e683c2f01d6f19ffe884d446753df6d19dcbbf4a207932"} Jan 21 12:00:03 crc kubenswrapper[4881]: I0121 12:00:03.997594 4881 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="043088683aabf2d418e683c2f01d6f19ffe884d446753df6d19dcbbf4a207932" Jan 21 12:00:03 crc kubenswrapper[4881]: I0121 12:00:03.997634 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483280-rl7qn" Jan 21 12:00:04 crc kubenswrapper[4881]: I0121 12:00:04.471198 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483235-h6fqb"] Jan 21 12:00:04 crc kubenswrapper[4881]: I0121 12:00:04.480893 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483235-h6fqb"] Jan 21 12:00:05 crc kubenswrapper[4881]: I0121 12:00:05.332278 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c37f0ee6-fcc1-4663-91a3-ab5e47dad851" path="/var/lib/kubelet/pods/c37f0ee6-fcc1-4663-91a3-ab5e47dad851/volumes" Jan 21 12:00:29 crc kubenswrapper[4881]: I0121 12:00:29.851668 4881 patch_prober.go:28] interesting pod/machine-config-daemon-fb4fr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 12:00:29 crc kubenswrapper[4881]: I0121 12:00:29.852586 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 12:00:29 crc kubenswrapper[4881]: I0121 12:00:29.852676 4881 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" Jan 21 12:00:29 crc kubenswrapper[4881]: I0121 12:00:29.854153 4881 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"cc4c2fdba8ce6542705b256b23e1d94b8d7eb0f0ca9e8607445022552d1c7bb9"} pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 12:00:29 crc kubenswrapper[4881]: I0121 12:00:29.854276 4881 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" containerName="machine-config-daemon" containerID="cri-o://cc4c2fdba8ce6542705b256b23e1d94b8d7eb0f0ca9e8607445022552d1c7bb9" gracePeriod=600 Jan 21 12:00:29 crc kubenswrapper[4881]: E0121 12:00:29.986660 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 12:00:30 crc kubenswrapper[4881]: I0121 12:00:30.257179 4881 generic.go:334] "Generic (PLEG): container finished" podID="3687b313-1df2-4274-80db-8c758b51bf2d" containerID="cc4c2fdba8ce6542705b256b23e1d94b8d7eb0f0ca9e8607445022552d1c7bb9" exitCode=0 Jan 21 12:00:30 crc kubenswrapper[4881]: I0121 12:00:30.257242 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" event={"ID":"3687b313-1df2-4274-80db-8c758b51bf2d","Type":"ContainerDied","Data":"cc4c2fdba8ce6542705b256b23e1d94b8d7eb0f0ca9e8607445022552d1c7bb9"} Jan 21 12:00:30 crc kubenswrapper[4881]: I0121 12:00:30.257393 4881 scope.go:117] "RemoveContainer" containerID="0eb49608bbe8f2a16a73771ce3fd5ae654c9692ec1f4885af786d4be3393b51c" Jan 21 12:00:30 crc kubenswrapper[4881]: I0121 12:00:30.259013 4881 scope.go:117] "RemoveContainer" containerID="cc4c2fdba8ce6542705b256b23e1d94b8d7eb0f0ca9e8607445022552d1c7bb9" Jan 21 12:00:30 crc kubenswrapper[4881]: E0121 12:00:30.259511 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 12:00:32 crc kubenswrapper[4881]: I0121 12:00:32.599010 4881 scope.go:117] "RemoveContainer" containerID="4ef110f660eb1c97d787ba6c2683b1ded92c0cd6a25a9dac3c9da2e19fd3d06a" Jan 21 12:00:43 crc kubenswrapper[4881]: I0121 12:00:43.318083 4881 scope.go:117] "RemoveContainer" containerID="cc4c2fdba8ce6542705b256b23e1d94b8d7eb0f0ca9e8607445022552d1c7bb9" Jan 21 12:00:43 crc kubenswrapper[4881]: E0121 12:00:43.319001 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 12:00:57 crc kubenswrapper[4881]: I0121 12:00:57.310768 4881 scope.go:117] "RemoveContainer" containerID="cc4c2fdba8ce6542705b256b23e1d94b8d7eb0f0ca9e8607445022552d1c7bb9" Jan 21 12:00:57 crc kubenswrapper[4881]: E0121 12:00:57.311805 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 12:01:00 crc kubenswrapper[4881]: I0121 12:01:00.182483 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29483281-5vf4h"] Jan 21 12:01:00 crc kubenswrapper[4881]: E0121 12:01:00.184310 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e74d3023-7ad9-4e65-9627-cc8127927f6b" containerName="collect-profiles" Jan 21 12:01:00 crc kubenswrapper[4881]: I0121 12:01:00.184338 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="e74d3023-7ad9-4e65-9627-cc8127927f6b" containerName="collect-profiles" Jan 21 12:01:00 crc kubenswrapper[4881]: I0121 12:01:00.184719 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="e74d3023-7ad9-4e65-9627-cc8127927f6b" containerName="collect-profiles" Jan 21 12:01:00 crc kubenswrapper[4881]: I0121 12:01:00.186242 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29483281-5vf4h" Jan 21 12:01:00 crc kubenswrapper[4881]: I0121 12:01:00.195973 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29483281-5vf4h"] Jan 21 12:01:00 crc kubenswrapper[4881]: I0121 12:01:00.280243 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pcjvz\" (UniqueName: \"kubernetes.io/projected/d4b92750-a75d-44b9-b0ba-75296371fc59-kube-api-access-pcjvz\") pod \"keystone-cron-29483281-5vf4h\" (UID: \"d4b92750-a75d-44b9-b0ba-75296371fc59\") " pod="openstack/keystone-cron-29483281-5vf4h" Jan 21 12:01:00 crc kubenswrapper[4881]: I0121 12:01:00.280428 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4b92750-a75d-44b9-b0ba-75296371fc59-config-data\") pod \"keystone-cron-29483281-5vf4h\" (UID: \"d4b92750-a75d-44b9-b0ba-75296371fc59\") " pod="openstack/keystone-cron-29483281-5vf4h" Jan 21 12:01:00 crc kubenswrapper[4881]: I0121 12:01:00.280509 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4b92750-a75d-44b9-b0ba-75296371fc59-combined-ca-bundle\") pod \"keystone-cron-29483281-5vf4h\" (UID: \"d4b92750-a75d-44b9-b0ba-75296371fc59\") " pod="openstack/keystone-cron-29483281-5vf4h" Jan 21 12:01:00 crc kubenswrapper[4881]: I0121 12:01:00.280638 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d4b92750-a75d-44b9-b0ba-75296371fc59-fernet-keys\") pod \"keystone-cron-29483281-5vf4h\" (UID: \"d4b92750-a75d-44b9-b0ba-75296371fc59\") " pod="openstack/keystone-cron-29483281-5vf4h" Jan 21 12:01:00 crc kubenswrapper[4881]: I0121 12:01:00.382363 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4b92750-a75d-44b9-b0ba-75296371fc59-config-data\") pod \"keystone-cron-29483281-5vf4h\" (UID: \"d4b92750-a75d-44b9-b0ba-75296371fc59\") " pod="openstack/keystone-cron-29483281-5vf4h" Jan 21 12:01:00 crc kubenswrapper[4881]: I0121 12:01:00.382482 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/d4b92750-a75d-44b9-b0ba-75296371fc59-combined-ca-bundle\") pod \"keystone-cron-29483281-5vf4h\" (UID: \"d4b92750-a75d-44b9-b0ba-75296371fc59\") " pod="openstack/keystone-cron-29483281-5vf4h" Jan 21 12:01:00 crc kubenswrapper[4881]: I0121 12:01:00.382560 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d4b92750-a75d-44b9-b0ba-75296371fc59-fernet-keys\") pod \"keystone-cron-29483281-5vf4h\" (UID: \"d4b92750-a75d-44b9-b0ba-75296371fc59\") " pod="openstack/keystone-cron-29483281-5vf4h" Jan 21 12:01:00 crc kubenswrapper[4881]: I0121 12:01:00.382624 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pcjvz\" (UniqueName: \"kubernetes.io/projected/d4b92750-a75d-44b9-b0ba-75296371fc59-kube-api-access-pcjvz\") pod \"keystone-cron-29483281-5vf4h\" (UID: \"d4b92750-a75d-44b9-b0ba-75296371fc59\") " pod="openstack/keystone-cron-29483281-5vf4h" Jan 21 12:01:00 crc kubenswrapper[4881]: I0121 12:01:00.392104 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d4b92750-a75d-44b9-b0ba-75296371fc59-fernet-keys\") pod \"keystone-cron-29483281-5vf4h\" (UID: \"d4b92750-a75d-44b9-b0ba-75296371fc59\") " pod="openstack/keystone-cron-29483281-5vf4h" Jan 21 12:01:00 crc kubenswrapper[4881]: I0121 12:01:00.392170 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4b92750-a75d-44b9-b0ba-75296371fc59-combined-ca-bundle\") pod \"keystone-cron-29483281-5vf4h\" (UID: \"d4b92750-a75d-44b9-b0ba-75296371fc59\") " pod="openstack/keystone-cron-29483281-5vf4h" Jan 21 12:01:00 crc kubenswrapper[4881]: I0121 12:01:00.392213 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4b92750-a75d-44b9-b0ba-75296371fc59-config-data\") pod \"keystone-cron-29483281-5vf4h\" (UID: \"d4b92750-a75d-44b9-b0ba-75296371fc59\") " pod="openstack/keystone-cron-29483281-5vf4h" Jan 21 12:01:00 crc kubenswrapper[4881]: I0121 12:01:00.407076 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pcjvz\" (UniqueName: \"kubernetes.io/projected/d4b92750-a75d-44b9-b0ba-75296371fc59-kube-api-access-pcjvz\") pod \"keystone-cron-29483281-5vf4h\" (UID: \"d4b92750-a75d-44b9-b0ba-75296371fc59\") " pod="openstack/keystone-cron-29483281-5vf4h" Jan 21 12:01:00 crc kubenswrapper[4881]: I0121 12:01:00.505711 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29483281-5vf4h" Jan 21 12:01:01 crc kubenswrapper[4881]: I0121 12:01:01.026400 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29483281-5vf4h"] Jan 21 12:01:01 crc kubenswrapper[4881]: I0121 12:01:01.724488 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29483281-5vf4h" event={"ID":"d4b92750-a75d-44b9-b0ba-75296371fc59","Type":"ContainerStarted","Data":"be33628a74d9a97066f006dffffcfca1b14cc440a7bf9af3ccb2aba1319485a7"} Jan 21 12:01:01 crc kubenswrapper[4881]: I0121 12:01:01.724864 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29483281-5vf4h" event={"ID":"d4b92750-a75d-44b9-b0ba-75296371fc59","Type":"ContainerStarted","Data":"1cc079d49d1423ee4e1244a5c9cc50e50531364c616afdcfe5ffebdfd0abd447"} Jan 21 12:01:01 crc kubenswrapper[4881]: I0121 12:01:01.756910 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29483281-5vf4h" podStartSLOduration=1.756885741 podStartE2EDuration="1.756885741s" podCreationTimestamp="2026-01-21 12:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 12:01:01.746589439 +0000 UTC m=+3849.006545928" watchObservedRunningTime="2026-01-21 12:01:01.756885741 +0000 UTC m=+3849.016842210" Jan 21 12:01:05 crc kubenswrapper[4881]: I0121 12:01:05.768201 4881 generic.go:334] "Generic (PLEG): container finished" podID="d4b92750-a75d-44b9-b0ba-75296371fc59" containerID="be33628a74d9a97066f006dffffcfca1b14cc440a7bf9af3ccb2aba1319485a7" exitCode=0 Jan 21 12:01:05 crc kubenswrapper[4881]: I0121 12:01:05.768273 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29483281-5vf4h" event={"ID":"d4b92750-a75d-44b9-b0ba-75296371fc59","Type":"ContainerDied","Data":"be33628a74d9a97066f006dffffcfca1b14cc440a7bf9af3ccb2aba1319485a7"} Jan 21 12:01:07 crc kubenswrapper[4881]: I0121 12:01:07.213324 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29483281-5vf4h" Jan 21 12:01:07 crc kubenswrapper[4881]: I0121 12:01:07.245727 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d4b92750-a75d-44b9-b0ba-75296371fc59-fernet-keys\") pod \"d4b92750-a75d-44b9-b0ba-75296371fc59\" (UID: \"d4b92750-a75d-44b9-b0ba-75296371fc59\") " Jan 21 12:01:07 crc kubenswrapper[4881]: I0121 12:01:07.246004 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4b92750-a75d-44b9-b0ba-75296371fc59-config-data\") pod \"d4b92750-a75d-44b9-b0ba-75296371fc59\" (UID: \"d4b92750-a75d-44b9-b0ba-75296371fc59\") " Jan 21 12:01:07 crc kubenswrapper[4881]: I0121 12:01:07.246043 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcjvz\" (UniqueName: \"kubernetes.io/projected/d4b92750-a75d-44b9-b0ba-75296371fc59-kube-api-access-pcjvz\") pod \"d4b92750-a75d-44b9-b0ba-75296371fc59\" (UID: \"d4b92750-a75d-44b9-b0ba-75296371fc59\") " Jan 21 12:01:07 crc kubenswrapper[4881]: I0121 12:01:07.246137 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4b92750-a75d-44b9-b0ba-75296371fc59-combined-ca-bundle\") pod \"d4b92750-a75d-44b9-b0ba-75296371fc59\" (UID: \"d4b92750-a75d-44b9-b0ba-75296371fc59\") " Jan 21 12:01:07 crc kubenswrapper[4881]: I0121 12:01:07.265160 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d4b92750-a75d-44b9-b0ba-75296371fc59-kube-api-access-pcjvz" (OuterVolumeSpecName: "kube-api-access-pcjvz") pod "d4b92750-a75d-44b9-b0ba-75296371fc59" (UID: "d4b92750-a75d-44b9-b0ba-75296371fc59"). InnerVolumeSpecName "kube-api-access-pcjvz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 12:01:07 crc kubenswrapper[4881]: I0121 12:01:07.275639 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4b92750-a75d-44b9-b0ba-75296371fc59-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "d4b92750-a75d-44b9-b0ba-75296371fc59" (UID: "d4b92750-a75d-44b9-b0ba-75296371fc59"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 12:01:07 crc kubenswrapper[4881]: I0121 12:01:07.287174 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4b92750-a75d-44b9-b0ba-75296371fc59-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d4b92750-a75d-44b9-b0ba-75296371fc59" (UID: "d4b92750-a75d-44b9-b0ba-75296371fc59"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 12:01:07 crc kubenswrapper[4881]: I0121 12:01:07.319413 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4b92750-a75d-44b9-b0ba-75296371fc59-config-data" (OuterVolumeSpecName: "config-data") pod "d4b92750-a75d-44b9-b0ba-75296371fc59" (UID: "d4b92750-a75d-44b9-b0ba-75296371fc59"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 12:01:07 crc kubenswrapper[4881]: I0121 12:01:07.351147 4881 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4b92750-a75d-44b9-b0ba-75296371fc59-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 12:01:07 crc kubenswrapper[4881]: I0121 12:01:07.351218 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcjvz\" (UniqueName: \"kubernetes.io/projected/d4b92750-a75d-44b9-b0ba-75296371fc59-kube-api-access-pcjvz\") on node \"crc\" DevicePath \"\"" Jan 21 12:01:07 crc kubenswrapper[4881]: I0121 12:01:07.351252 4881 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4b92750-a75d-44b9-b0ba-75296371fc59-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 12:01:07 crc kubenswrapper[4881]: I0121 12:01:07.351284 4881 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d4b92750-a75d-44b9-b0ba-75296371fc59-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 21 12:01:07 crc kubenswrapper[4881]: I0121 12:01:07.790513 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29483281-5vf4h" event={"ID":"d4b92750-a75d-44b9-b0ba-75296371fc59","Type":"ContainerDied","Data":"1cc079d49d1423ee4e1244a5c9cc50e50531364c616afdcfe5ffebdfd0abd447"} Jan 21 12:01:07 crc kubenswrapper[4881]: I0121 12:01:07.790577 4881 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1cc079d49d1423ee4e1244a5c9cc50e50531364c616afdcfe5ffebdfd0abd447" Jan 21 12:01:07 crc kubenswrapper[4881]: I0121 12:01:07.790585 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29483281-5vf4h" Jan 21 12:01:11 crc kubenswrapper[4881]: I0121 12:01:11.310544 4881 scope.go:117] "RemoveContainer" containerID="cc4c2fdba8ce6542705b256b23e1d94b8d7eb0f0ca9e8607445022552d1c7bb9" Jan 21 12:01:11 crc kubenswrapper[4881]: E0121 12:01:11.311314 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 12:01:25 crc kubenswrapper[4881]: I0121 12:01:25.315382 4881 scope.go:117] "RemoveContainer" containerID="cc4c2fdba8ce6542705b256b23e1d94b8d7eb0f0ca9e8607445022552d1c7bb9" Jan 21 12:01:25 crc kubenswrapper[4881]: E0121 12:01:25.316634 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 12:01:37 crc kubenswrapper[4881]: I0121 12:01:37.311238 4881 scope.go:117] "RemoveContainer" containerID="cc4c2fdba8ce6542705b256b23e1d94b8d7eb0f0ca9e8607445022552d1c7bb9" Jan 21 12:01:37 crc kubenswrapper[4881]: E0121 12:01:37.312065 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" 
with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 12:01:52 crc kubenswrapper[4881]: I0121 12:01:52.310567 4881 scope.go:117] "RemoveContainer" containerID="cc4c2fdba8ce6542705b256b23e1d94b8d7eb0f0ca9e8607445022552d1c7bb9" Jan 21 12:01:52 crc kubenswrapper[4881]: E0121 12:01:52.311428 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 12:02:06 crc kubenswrapper[4881]: I0121 12:02:06.312191 4881 scope.go:117] "RemoveContainer" containerID="cc4c2fdba8ce6542705b256b23e1d94b8d7eb0f0ca9e8607445022552d1c7bb9" Jan 21 12:02:06 crc kubenswrapper[4881]: E0121 12:02:06.313271 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 12:02:18 crc kubenswrapper[4881]: I0121 12:02:18.311174 4881 scope.go:117] "RemoveContainer" containerID="cc4c2fdba8ce6542705b256b23e1d94b8d7eb0f0ca9e8607445022552d1c7bb9" Jan 21 12:02:18 crc kubenswrapper[4881]: E0121 12:02:18.311960 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 12:02:33 crc kubenswrapper[4881]: I0121 12:02:33.317107 4881 scope.go:117] "RemoveContainer" containerID="cc4c2fdba8ce6542705b256b23e1d94b8d7eb0f0ca9e8607445022552d1c7bb9" Jan 21 12:02:33 crc kubenswrapper[4881]: E0121 12:02:33.317632 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 12:02:47 crc kubenswrapper[4881]: I0121 12:02:47.311114 4881 scope.go:117] "RemoveContainer" containerID="cc4c2fdba8ce6542705b256b23e1d94b8d7eb0f0ca9e8607445022552d1c7bb9" Jan 21 12:02:47 crc kubenswrapper[4881]: E0121 12:02:47.311928 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 12:03:00 crc kubenswrapper[4881]: I0121 12:03:00.311532 4881 scope.go:117] "RemoveContainer" containerID="cc4c2fdba8ce6542705b256b23e1d94b8d7eb0f0ca9e8607445022552d1c7bb9" Jan 21 12:03:00 crc kubenswrapper[4881]: E0121 12:03:00.312360 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 12:03:12 crc kubenswrapper[4881]: I0121 12:03:12.311335 4881 scope.go:117] "RemoveContainer" containerID="cc4c2fdba8ce6542705b256b23e1d94b8d7eb0f0ca9e8607445022552d1c7bb9" Jan 21 12:03:12 crc kubenswrapper[4881]: E0121 12:03:12.312313 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 12:03:23 crc kubenswrapper[4881]: I0121 12:03:23.317868 4881 scope.go:117] "RemoveContainer" containerID="cc4c2fdba8ce6542705b256b23e1d94b8d7eb0f0ca9e8607445022552d1c7bb9" Jan 21 12:03:23 crc kubenswrapper[4881]: E0121 12:03:23.320483 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 12:03:34 crc kubenswrapper[4881]: I0121 12:03:34.311877 4881 scope.go:117] "RemoveContainer" containerID="cc4c2fdba8ce6542705b256b23e1d94b8d7eb0f0ca9e8607445022552d1c7bb9" Jan 21 12:03:34 crc kubenswrapper[4881]: E0121 12:03:34.312904 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 12:03:47 crc kubenswrapper[4881]: I0121 12:03:47.310285 4881 scope.go:117] "RemoveContainer" containerID="cc4c2fdba8ce6542705b256b23e1d94b8d7eb0f0ca9e8607445022552d1c7bb9" Jan 21 12:03:47 crc kubenswrapper[4881]: E0121 12:03:47.312257 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" 
podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 12:03:58 crc kubenswrapper[4881]: I0121 12:03:58.311683 4881 scope.go:117] "RemoveContainer" containerID="cc4c2fdba8ce6542705b256b23e1d94b8d7eb0f0ca9e8607445022552d1c7bb9" Jan 21 12:03:58 crc kubenswrapper[4881]: E0121 12:03:58.312694 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 12:04:11 crc kubenswrapper[4881]: I0121 12:04:11.311945 4881 scope.go:117] "RemoveContainer" containerID="cc4c2fdba8ce6542705b256b23e1d94b8d7eb0f0ca9e8607445022552d1c7bb9" Jan 21 12:04:11 crc kubenswrapper[4881]: E0121 12:04:11.313043 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 12:04:25 crc kubenswrapper[4881]: I0121 12:04:25.310979 4881 scope.go:117] "RemoveContainer" containerID="cc4c2fdba8ce6542705b256b23e1d94b8d7eb0f0ca9e8607445022552d1c7bb9" Jan 21 12:04:25 crc kubenswrapper[4881]: E0121 12:04:25.312042 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 12:04:36 crc kubenswrapper[4881]: I0121 12:04:36.311230 4881 scope.go:117] "RemoveContainer" containerID="cc4c2fdba8ce6542705b256b23e1d94b8d7eb0f0ca9e8607445022552d1c7bb9" Jan 21 12:04:36 crc kubenswrapper[4881]: E0121 12:04:36.313971 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 12:04:50 crc kubenswrapper[4881]: I0121 12:04:50.311021 4881 scope.go:117] "RemoveContainer" containerID="cc4c2fdba8ce6542705b256b23e1d94b8d7eb0f0ca9e8607445022552d1c7bb9" Jan 21 12:04:50 crc kubenswrapper[4881]: E0121 12:04:50.311750 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 12:05:04 crc kubenswrapper[4881]: I0121 12:05:04.311976 4881 scope.go:117] "RemoveContainer" 
containerID="cc4c2fdba8ce6542705b256b23e1d94b8d7eb0f0ca9e8607445022552d1c7bb9" Jan 21 12:05:04 crc kubenswrapper[4881]: E0121 12:05:04.312951 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 12:05:16 crc kubenswrapper[4881]: I0121 12:05:16.312006 4881 scope.go:117] "RemoveContainer" containerID="cc4c2fdba8ce6542705b256b23e1d94b8d7eb0f0ca9e8607445022552d1c7bb9" Jan 21 12:05:16 crc kubenswrapper[4881]: E0121 12:05:16.313336 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 12:05:27 crc kubenswrapper[4881]: I0121 12:05:27.311176 4881 scope.go:117] "RemoveContainer" containerID="cc4c2fdba8ce6542705b256b23e1d94b8d7eb0f0ca9e8607445022552d1c7bb9" Jan 21 12:05:27 crc kubenswrapper[4881]: E0121 12:05:27.311852 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 12:05:42 crc kubenswrapper[4881]: I0121 12:05:42.310896 4881 scope.go:117] "RemoveContainer" containerID="cc4c2fdba8ce6542705b256b23e1d94b8d7eb0f0ca9e8607445022552d1c7bb9" Jan 21 12:05:43 crc kubenswrapper[4881]: I0121 12:05:43.060303 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" event={"ID":"3687b313-1df2-4274-80db-8c758b51bf2d","Type":"ContainerStarted","Data":"8fa2fcd197247817c68b133d6a51bf7eca2545a597f5deb7e87467827e522318"} Jan 21 12:06:04 crc kubenswrapper[4881]: I0121 12:06:04.076214 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-qnhh2"] Jan 21 12:06:04 crc kubenswrapper[4881]: E0121 12:06:04.077381 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4b92750-a75d-44b9-b0ba-75296371fc59" containerName="keystone-cron" Jan 21 12:06:04 crc kubenswrapper[4881]: I0121 12:06:04.077396 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4b92750-a75d-44b9-b0ba-75296371fc59" containerName="keystone-cron" Jan 21 12:06:04 crc kubenswrapper[4881]: I0121 12:06:04.077623 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="d4b92750-a75d-44b9-b0ba-75296371fc59" containerName="keystone-cron" Jan 21 12:06:04 crc kubenswrapper[4881]: I0121 12:06:04.079392 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-qnhh2" Jan 21 12:06:04 crc kubenswrapper[4881]: I0121 12:06:04.082379 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7b9e1b23-382c-4857-9ffa-0106af9afaa8-catalog-content\") pod \"redhat-operators-qnhh2\" (UID: \"7b9e1b23-382c-4857-9ffa-0106af9afaa8\") " pod="openshift-marketplace/redhat-operators-qnhh2" Jan 21 12:06:04 crc kubenswrapper[4881]: I0121 12:06:04.082682 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qpsmq\" (UniqueName: \"kubernetes.io/projected/7b9e1b23-382c-4857-9ffa-0106af9afaa8-kube-api-access-qpsmq\") pod \"redhat-operators-qnhh2\" (UID: \"7b9e1b23-382c-4857-9ffa-0106af9afaa8\") " pod="openshift-marketplace/redhat-operators-qnhh2" Jan 21 12:06:04 crc kubenswrapper[4881]: I0121 12:06:04.082734 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7b9e1b23-382c-4857-9ffa-0106af9afaa8-utilities\") pod \"redhat-operators-qnhh2\" (UID: \"7b9e1b23-382c-4857-9ffa-0106af9afaa8\") " pod="openshift-marketplace/redhat-operators-qnhh2" Jan 21 12:06:04 crc kubenswrapper[4881]: I0121 12:06:04.094383 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qnhh2"] Jan 21 12:06:04 crc kubenswrapper[4881]: I0121 12:06:04.185551 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7b9e1b23-382c-4857-9ffa-0106af9afaa8-catalog-content\") pod \"redhat-operators-qnhh2\" (UID: \"7b9e1b23-382c-4857-9ffa-0106af9afaa8\") " pod="openshift-marketplace/redhat-operators-qnhh2" Jan 21 12:06:04 crc kubenswrapper[4881]: I0121 12:06:04.185734 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qpsmq\" (UniqueName: \"kubernetes.io/projected/7b9e1b23-382c-4857-9ffa-0106af9afaa8-kube-api-access-qpsmq\") pod \"redhat-operators-qnhh2\" (UID: \"7b9e1b23-382c-4857-9ffa-0106af9afaa8\") " pod="openshift-marketplace/redhat-operators-qnhh2" Jan 21 12:06:04 crc kubenswrapper[4881]: I0121 12:06:04.185768 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7b9e1b23-382c-4857-9ffa-0106af9afaa8-utilities\") pod \"redhat-operators-qnhh2\" (UID: \"7b9e1b23-382c-4857-9ffa-0106af9afaa8\") " pod="openshift-marketplace/redhat-operators-qnhh2" Jan 21 12:06:04 crc kubenswrapper[4881]: I0121 12:06:04.186328 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7b9e1b23-382c-4857-9ffa-0106af9afaa8-catalog-content\") pod \"redhat-operators-qnhh2\" (UID: \"7b9e1b23-382c-4857-9ffa-0106af9afaa8\") " pod="openshift-marketplace/redhat-operators-qnhh2" Jan 21 12:06:04 crc kubenswrapper[4881]: I0121 12:06:04.186462 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7b9e1b23-382c-4857-9ffa-0106af9afaa8-utilities\") pod \"redhat-operators-qnhh2\" (UID: \"7b9e1b23-382c-4857-9ffa-0106af9afaa8\") " pod="openshift-marketplace/redhat-operators-qnhh2" Jan 21 12:06:04 crc kubenswrapper[4881]: I0121 12:06:04.211936 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-qpsmq\" (UniqueName: \"kubernetes.io/projected/7b9e1b23-382c-4857-9ffa-0106af9afaa8-kube-api-access-qpsmq\") pod \"redhat-operators-qnhh2\" (UID: \"7b9e1b23-382c-4857-9ffa-0106af9afaa8\") " pod="openshift-marketplace/redhat-operators-qnhh2" Jan 21 12:06:04 crc kubenswrapper[4881]: I0121 12:06:04.410023 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qnhh2" Jan 21 12:06:04 crc kubenswrapper[4881]: I0121 12:06:04.922883 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qnhh2"] Jan 21 12:06:04 crc kubenswrapper[4881]: W0121 12:06:04.926062 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7b9e1b23_382c_4857_9ffa_0106af9afaa8.slice/crio-25513ebd94ad748a797e6b5332f9cbb867e4bc462face6f0fc3b7ed4e0ed1504 WatchSource:0}: Error finding container 25513ebd94ad748a797e6b5332f9cbb867e4bc462face6f0fc3b7ed4e0ed1504: Status 404 returned error can't find the container with id 25513ebd94ad748a797e6b5332f9cbb867e4bc462face6f0fc3b7ed4e0ed1504 Jan 21 12:06:05 crc kubenswrapper[4881]: I0121 12:06:05.616653 4881 generic.go:334] "Generic (PLEG): container finished" podID="7b9e1b23-382c-4857-9ffa-0106af9afaa8" containerID="309f09412bab91b28cad03a81dc1b53676d2d0eaa20c5596ad91194b47204b65" exitCode=0 Jan 21 12:06:05 crc kubenswrapper[4881]: I0121 12:06:05.616882 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qnhh2" event={"ID":"7b9e1b23-382c-4857-9ffa-0106af9afaa8","Type":"ContainerDied","Data":"309f09412bab91b28cad03a81dc1b53676d2d0eaa20c5596ad91194b47204b65"} Jan 21 12:06:05 crc kubenswrapper[4881]: I0121 12:06:05.616909 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qnhh2" event={"ID":"7b9e1b23-382c-4857-9ffa-0106af9afaa8","Type":"ContainerStarted","Data":"25513ebd94ad748a797e6b5332f9cbb867e4bc462face6f0fc3b7ed4e0ed1504"} Jan 21 12:06:05 crc kubenswrapper[4881]: I0121 12:06:05.619394 4881 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 12:06:06 crc kubenswrapper[4881]: I0121 12:06:06.629875 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qnhh2" event={"ID":"7b9e1b23-382c-4857-9ffa-0106af9afaa8","Type":"ContainerStarted","Data":"fd13d11261e10d806b8fc9a31e08126bcb2b5dabf46be5b7eb671c2157e7d1db"} Jan 21 12:06:10 crc kubenswrapper[4881]: I0121 12:06:10.678123 4881 generic.go:334] "Generic (PLEG): container finished" podID="7b9e1b23-382c-4857-9ffa-0106af9afaa8" containerID="fd13d11261e10d806b8fc9a31e08126bcb2b5dabf46be5b7eb671c2157e7d1db" exitCode=0 Jan 21 12:06:10 crc kubenswrapper[4881]: I0121 12:06:10.678271 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qnhh2" event={"ID":"7b9e1b23-382c-4857-9ffa-0106af9afaa8","Type":"ContainerDied","Data":"fd13d11261e10d806b8fc9a31e08126bcb2b5dabf46be5b7eb671c2157e7d1db"} Jan 21 12:06:11 crc kubenswrapper[4881]: I0121 12:06:11.691799 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qnhh2" event={"ID":"7b9e1b23-382c-4857-9ffa-0106af9afaa8","Type":"ContainerStarted","Data":"2bf2fad5d13f5e5ff97b6e448324df0e93f3fdddee103d48c5ee75aba4b2dd1b"} Jan 21 12:06:11 crc kubenswrapper[4881]: I0121 12:06:11.719180 4881 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openshift-marketplace/redhat-operators-qnhh2" podStartSLOduration=2.283024034 podStartE2EDuration="7.71913413s" podCreationTimestamp="2026-01-21 12:06:04 +0000 UTC" firstStartedPulling="2026-01-21 12:06:05.619135328 +0000 UTC m=+4152.879091797" lastFinishedPulling="2026-01-21 12:06:11.055245384 +0000 UTC m=+4158.315201893" observedRunningTime="2026-01-21 12:06:11.707037309 +0000 UTC m=+4158.966993788" watchObservedRunningTime="2026-01-21 12:06:11.71913413 +0000 UTC m=+4158.979090599" Jan 21 12:06:14 crc kubenswrapper[4881]: I0121 12:06:14.411566 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-qnhh2" Jan 21 12:06:14 crc kubenswrapper[4881]: I0121 12:06:14.412088 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-qnhh2" Jan 21 12:06:15 crc kubenswrapper[4881]: I0121 12:06:15.501859 4881 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-qnhh2" podUID="7b9e1b23-382c-4857-9ffa-0106af9afaa8" containerName="registry-server" probeResult="failure" output=< Jan 21 12:06:15 crc kubenswrapper[4881]: timeout: failed to connect service ":50051" within 1s Jan 21 12:06:15 crc kubenswrapper[4881]: > Jan 21 12:06:24 crc kubenswrapper[4881]: I0121 12:06:24.468727 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-qnhh2" Jan 21 12:06:24 crc kubenswrapper[4881]: I0121 12:06:24.528224 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-qnhh2" Jan 21 12:06:24 crc kubenswrapper[4881]: I0121 12:06:24.714572 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-qnhh2"] Jan 21 12:06:25 crc kubenswrapper[4881]: I0121 12:06:25.840553 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-qnhh2" podUID="7b9e1b23-382c-4857-9ffa-0106af9afaa8" containerName="registry-server" containerID="cri-o://2bf2fad5d13f5e5ff97b6e448324df0e93f3fdddee103d48c5ee75aba4b2dd1b" gracePeriod=2 Jan 21 12:06:26 crc kubenswrapper[4881]: I0121 12:06:26.340715 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-qnhh2" Jan 21 12:06:26 crc kubenswrapper[4881]: I0121 12:06:26.449747 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7b9e1b23-382c-4857-9ffa-0106af9afaa8-catalog-content\") pod \"7b9e1b23-382c-4857-9ffa-0106af9afaa8\" (UID: \"7b9e1b23-382c-4857-9ffa-0106af9afaa8\") " Jan 21 12:06:26 crc kubenswrapper[4881]: I0121 12:06:26.449849 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qpsmq\" (UniqueName: \"kubernetes.io/projected/7b9e1b23-382c-4857-9ffa-0106af9afaa8-kube-api-access-qpsmq\") pod \"7b9e1b23-382c-4857-9ffa-0106af9afaa8\" (UID: \"7b9e1b23-382c-4857-9ffa-0106af9afaa8\") " Jan 21 12:06:26 crc kubenswrapper[4881]: I0121 12:06:26.449879 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7b9e1b23-382c-4857-9ffa-0106af9afaa8-utilities\") pod \"7b9e1b23-382c-4857-9ffa-0106af9afaa8\" (UID: \"7b9e1b23-382c-4857-9ffa-0106af9afaa8\") " Jan 21 12:06:26 crc kubenswrapper[4881]: I0121 12:06:26.451335 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7b9e1b23-382c-4857-9ffa-0106af9afaa8-utilities" (OuterVolumeSpecName: "utilities") pod "7b9e1b23-382c-4857-9ffa-0106af9afaa8" (UID: "7b9e1b23-382c-4857-9ffa-0106af9afaa8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 12:06:26 crc kubenswrapper[4881]: I0121 12:06:26.459880 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7b9e1b23-382c-4857-9ffa-0106af9afaa8-kube-api-access-qpsmq" (OuterVolumeSpecName: "kube-api-access-qpsmq") pod "7b9e1b23-382c-4857-9ffa-0106af9afaa8" (UID: "7b9e1b23-382c-4857-9ffa-0106af9afaa8"). InnerVolumeSpecName "kube-api-access-qpsmq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 12:06:26 crc kubenswrapper[4881]: I0121 12:06:26.556394 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qpsmq\" (UniqueName: \"kubernetes.io/projected/7b9e1b23-382c-4857-9ffa-0106af9afaa8-kube-api-access-qpsmq\") on node \"crc\" DevicePath \"\"" Jan 21 12:06:26 crc kubenswrapper[4881]: I0121 12:06:26.556691 4881 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7b9e1b23-382c-4857-9ffa-0106af9afaa8-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 12:06:26 crc kubenswrapper[4881]: I0121 12:06:26.628167 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7b9e1b23-382c-4857-9ffa-0106af9afaa8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7b9e1b23-382c-4857-9ffa-0106af9afaa8" (UID: "7b9e1b23-382c-4857-9ffa-0106af9afaa8"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 12:06:26 crc kubenswrapper[4881]: I0121 12:06:26.658811 4881 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7b9e1b23-382c-4857-9ffa-0106af9afaa8-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 12:06:26 crc kubenswrapper[4881]: I0121 12:06:26.852141 4881 generic.go:334] "Generic (PLEG): container finished" podID="7b9e1b23-382c-4857-9ffa-0106af9afaa8" containerID="2bf2fad5d13f5e5ff97b6e448324df0e93f3fdddee103d48c5ee75aba4b2dd1b" exitCode=0 Jan 21 12:06:26 crc kubenswrapper[4881]: I0121 12:06:26.852193 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qnhh2" event={"ID":"7b9e1b23-382c-4857-9ffa-0106af9afaa8","Type":"ContainerDied","Data":"2bf2fad5d13f5e5ff97b6e448324df0e93f3fdddee103d48c5ee75aba4b2dd1b"} Jan 21 12:06:26 crc kubenswrapper[4881]: I0121 12:06:26.852240 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qnhh2" event={"ID":"7b9e1b23-382c-4857-9ffa-0106af9afaa8","Type":"ContainerDied","Data":"25513ebd94ad748a797e6b5332f9cbb867e4bc462face6f0fc3b7ed4e0ed1504"} Jan 21 12:06:26 crc kubenswrapper[4881]: I0121 12:06:26.852268 4881 scope.go:117] "RemoveContainer" containerID="2bf2fad5d13f5e5ff97b6e448324df0e93f3fdddee103d48c5ee75aba4b2dd1b" Jan 21 12:06:26 crc kubenswrapper[4881]: I0121 12:06:26.853017 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qnhh2" Jan 21 12:06:26 crc kubenswrapper[4881]: I0121 12:06:26.891652 4881 scope.go:117] "RemoveContainer" containerID="fd13d11261e10d806b8fc9a31e08126bcb2b5dabf46be5b7eb671c2157e7d1db" Jan 21 12:06:26 crc kubenswrapper[4881]: I0121 12:06:26.893082 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-qnhh2"] Jan 21 12:06:26 crc kubenswrapper[4881]: I0121 12:06:26.909148 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-qnhh2"] Jan 21 12:06:26 crc kubenswrapper[4881]: I0121 12:06:26.925425 4881 scope.go:117] "RemoveContainer" containerID="309f09412bab91b28cad03a81dc1b53676d2d0eaa20c5596ad91194b47204b65" Jan 21 12:06:26 crc kubenswrapper[4881]: I0121 12:06:26.968734 4881 scope.go:117] "RemoveContainer" containerID="2bf2fad5d13f5e5ff97b6e448324df0e93f3fdddee103d48c5ee75aba4b2dd1b" Jan 21 12:06:26 crc kubenswrapper[4881]: E0121 12:06:26.969418 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2bf2fad5d13f5e5ff97b6e448324df0e93f3fdddee103d48c5ee75aba4b2dd1b\": container with ID starting with 2bf2fad5d13f5e5ff97b6e448324df0e93f3fdddee103d48c5ee75aba4b2dd1b not found: ID does not exist" containerID="2bf2fad5d13f5e5ff97b6e448324df0e93f3fdddee103d48c5ee75aba4b2dd1b" Jan 21 12:06:26 crc kubenswrapper[4881]: I0121 12:06:26.969542 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2bf2fad5d13f5e5ff97b6e448324df0e93f3fdddee103d48c5ee75aba4b2dd1b"} err="failed to get container status \"2bf2fad5d13f5e5ff97b6e448324df0e93f3fdddee103d48c5ee75aba4b2dd1b\": rpc error: code = NotFound desc = could not find container \"2bf2fad5d13f5e5ff97b6e448324df0e93f3fdddee103d48c5ee75aba4b2dd1b\": container with ID starting with 2bf2fad5d13f5e5ff97b6e448324df0e93f3fdddee103d48c5ee75aba4b2dd1b not found: ID does not exist" Jan 21 12:06:26 crc 
kubenswrapper[4881]: I0121 12:06:26.969644 4881 scope.go:117] "RemoveContainer" containerID="fd13d11261e10d806b8fc9a31e08126bcb2b5dabf46be5b7eb671c2157e7d1db" Jan 21 12:06:26 crc kubenswrapper[4881]: E0121 12:06:26.970182 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fd13d11261e10d806b8fc9a31e08126bcb2b5dabf46be5b7eb671c2157e7d1db\": container with ID starting with fd13d11261e10d806b8fc9a31e08126bcb2b5dabf46be5b7eb671c2157e7d1db not found: ID does not exist" containerID="fd13d11261e10d806b8fc9a31e08126bcb2b5dabf46be5b7eb671c2157e7d1db" Jan 21 12:06:26 crc kubenswrapper[4881]: I0121 12:06:26.970214 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fd13d11261e10d806b8fc9a31e08126bcb2b5dabf46be5b7eb671c2157e7d1db"} err="failed to get container status \"fd13d11261e10d806b8fc9a31e08126bcb2b5dabf46be5b7eb671c2157e7d1db\": rpc error: code = NotFound desc = could not find container \"fd13d11261e10d806b8fc9a31e08126bcb2b5dabf46be5b7eb671c2157e7d1db\": container with ID starting with fd13d11261e10d806b8fc9a31e08126bcb2b5dabf46be5b7eb671c2157e7d1db not found: ID does not exist" Jan 21 12:06:26 crc kubenswrapper[4881]: I0121 12:06:26.970237 4881 scope.go:117] "RemoveContainer" containerID="309f09412bab91b28cad03a81dc1b53676d2d0eaa20c5596ad91194b47204b65" Jan 21 12:06:26 crc kubenswrapper[4881]: E0121 12:06:26.970573 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"309f09412bab91b28cad03a81dc1b53676d2d0eaa20c5596ad91194b47204b65\": container with ID starting with 309f09412bab91b28cad03a81dc1b53676d2d0eaa20c5596ad91194b47204b65 not found: ID does not exist" containerID="309f09412bab91b28cad03a81dc1b53676d2d0eaa20c5596ad91194b47204b65" Jan 21 12:06:26 crc kubenswrapper[4881]: I0121 12:06:26.970674 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"309f09412bab91b28cad03a81dc1b53676d2d0eaa20c5596ad91194b47204b65"} err="failed to get container status \"309f09412bab91b28cad03a81dc1b53676d2d0eaa20c5596ad91194b47204b65\": rpc error: code = NotFound desc = could not find container \"309f09412bab91b28cad03a81dc1b53676d2d0eaa20c5596ad91194b47204b65\": container with ID starting with 309f09412bab91b28cad03a81dc1b53676d2d0eaa20c5596ad91194b47204b65 not found: ID does not exist" Jan 21 12:06:27 crc kubenswrapper[4881]: I0121 12:06:27.322410 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7b9e1b23-382c-4857-9ffa-0106af9afaa8" path="/var/lib/kubelet/pods/7b9e1b23-382c-4857-9ffa-0106af9afaa8/volumes" Jan 21 12:06:38 crc kubenswrapper[4881]: I0121 12:06:38.447057 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-j4cbb"] Jan 21 12:06:38 crc kubenswrapper[4881]: E0121 12:06:38.448121 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b9e1b23-382c-4857-9ffa-0106af9afaa8" containerName="extract-utilities" Jan 21 12:06:38 crc kubenswrapper[4881]: I0121 12:06:38.448142 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b9e1b23-382c-4857-9ffa-0106af9afaa8" containerName="extract-utilities" Jan 21 12:06:38 crc kubenswrapper[4881]: E0121 12:06:38.448206 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b9e1b23-382c-4857-9ffa-0106af9afaa8" containerName="registry-server" Jan 21 12:06:38 crc kubenswrapper[4881]: I0121 12:06:38.448219 4881 
state_mem.go:107] "Deleted CPUSet assignment" podUID="7b9e1b23-382c-4857-9ffa-0106af9afaa8" containerName="registry-server" Jan 21 12:06:38 crc kubenswrapper[4881]: E0121 12:06:38.448257 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b9e1b23-382c-4857-9ffa-0106af9afaa8" containerName="extract-content" Jan 21 12:06:38 crc kubenswrapper[4881]: I0121 12:06:38.448293 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b9e1b23-382c-4857-9ffa-0106af9afaa8" containerName="extract-content" Jan 21 12:06:38 crc kubenswrapper[4881]: I0121 12:06:38.448846 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="7b9e1b23-382c-4857-9ffa-0106af9afaa8" containerName="registry-server" Jan 21 12:06:38 crc kubenswrapper[4881]: I0121 12:06:38.451953 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-j4cbb" Jan 21 12:06:38 crc kubenswrapper[4881]: I0121 12:06:38.468732 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-j4cbb"] Jan 21 12:06:38 crc kubenswrapper[4881]: I0121 12:06:38.636232 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5462fac8-b03c-48c0-bc3d-b1a1b1285cab-catalog-content\") pod \"certified-operators-j4cbb\" (UID: \"5462fac8-b03c-48c0-bc3d-b1a1b1285cab\") " pod="openshift-marketplace/certified-operators-j4cbb" Jan 21 12:06:38 crc kubenswrapper[4881]: I0121 12:06:38.636473 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5462fac8-b03c-48c0-bc3d-b1a1b1285cab-utilities\") pod \"certified-operators-j4cbb\" (UID: \"5462fac8-b03c-48c0-bc3d-b1a1b1285cab\") " pod="openshift-marketplace/certified-operators-j4cbb" Jan 21 12:06:38 crc kubenswrapper[4881]: I0121 12:06:38.636624 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v77g4\" (UniqueName: \"kubernetes.io/projected/5462fac8-b03c-48c0-bc3d-b1a1b1285cab-kube-api-access-v77g4\") pod \"certified-operators-j4cbb\" (UID: \"5462fac8-b03c-48c0-bc3d-b1a1b1285cab\") " pod="openshift-marketplace/certified-operators-j4cbb" Jan 21 12:06:38 crc kubenswrapper[4881]: I0121 12:06:38.738252 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5462fac8-b03c-48c0-bc3d-b1a1b1285cab-utilities\") pod \"certified-operators-j4cbb\" (UID: \"5462fac8-b03c-48c0-bc3d-b1a1b1285cab\") " pod="openshift-marketplace/certified-operators-j4cbb" Jan 21 12:06:38 crc kubenswrapper[4881]: I0121 12:06:38.738433 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v77g4\" (UniqueName: \"kubernetes.io/projected/5462fac8-b03c-48c0-bc3d-b1a1b1285cab-kube-api-access-v77g4\") pod \"certified-operators-j4cbb\" (UID: \"5462fac8-b03c-48c0-bc3d-b1a1b1285cab\") " pod="openshift-marketplace/certified-operators-j4cbb" Jan 21 12:06:38 crc kubenswrapper[4881]: I0121 12:06:38.738780 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5462fac8-b03c-48c0-bc3d-b1a1b1285cab-utilities\") pod \"certified-operators-j4cbb\" (UID: \"5462fac8-b03c-48c0-bc3d-b1a1b1285cab\") " pod="openshift-marketplace/certified-operators-j4cbb" Jan 21 12:06:38 crc kubenswrapper[4881]: 
I0121 12:06:38.739060 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5462fac8-b03c-48c0-bc3d-b1a1b1285cab-catalog-content\") pod \"certified-operators-j4cbb\" (UID: \"5462fac8-b03c-48c0-bc3d-b1a1b1285cab\") " pod="openshift-marketplace/certified-operators-j4cbb" Jan 21 12:06:38 crc kubenswrapper[4881]: I0121 12:06:38.739435 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5462fac8-b03c-48c0-bc3d-b1a1b1285cab-catalog-content\") pod \"certified-operators-j4cbb\" (UID: \"5462fac8-b03c-48c0-bc3d-b1a1b1285cab\") " pod="openshift-marketplace/certified-operators-j4cbb" Jan 21 12:06:38 crc kubenswrapper[4881]: I0121 12:06:38.760368 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v77g4\" (UniqueName: \"kubernetes.io/projected/5462fac8-b03c-48c0-bc3d-b1a1b1285cab-kube-api-access-v77g4\") pod \"certified-operators-j4cbb\" (UID: \"5462fac8-b03c-48c0-bc3d-b1a1b1285cab\") " pod="openshift-marketplace/certified-operators-j4cbb" Jan 21 12:06:38 crc kubenswrapper[4881]: I0121 12:06:38.789156 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-j4cbb" Jan 21 12:06:39 crc kubenswrapper[4881]: I0121 12:06:39.335759 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-j4cbb"] Jan 21 12:06:39 crc kubenswrapper[4881]: W0121 12:06:39.348247 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5462fac8_b03c_48c0_bc3d_b1a1b1285cab.slice/crio-f1af1bbc46ba691c69bc616913a216b385badd2ac173c74fb7757e7c43387e8d WatchSource:0}: Error finding container f1af1bbc46ba691c69bc616913a216b385badd2ac173c74fb7757e7c43387e8d: Status 404 returned error can't find the container with id f1af1bbc46ba691c69bc616913a216b385badd2ac173c74fb7757e7c43387e8d Jan 21 12:06:40 crc kubenswrapper[4881]: I0121 12:06:40.015350 4881 generic.go:334] "Generic (PLEG): container finished" podID="5462fac8-b03c-48c0-bc3d-b1a1b1285cab" containerID="e46841663886567f48ff14137d656646ff12629cc60b1215035b0dd66d9313e9" exitCode=0 Jan 21 12:06:40 crc kubenswrapper[4881]: I0121 12:06:40.015439 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j4cbb" event={"ID":"5462fac8-b03c-48c0-bc3d-b1a1b1285cab","Type":"ContainerDied","Data":"e46841663886567f48ff14137d656646ff12629cc60b1215035b0dd66d9313e9"} Jan 21 12:06:40 crc kubenswrapper[4881]: I0121 12:06:40.015745 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j4cbb" event={"ID":"5462fac8-b03c-48c0-bc3d-b1a1b1285cab","Type":"ContainerStarted","Data":"f1af1bbc46ba691c69bc616913a216b385badd2ac173c74fb7757e7c43387e8d"} Jan 21 12:06:41 crc kubenswrapper[4881]: I0121 12:06:41.029324 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j4cbb" event={"ID":"5462fac8-b03c-48c0-bc3d-b1a1b1285cab","Type":"ContainerStarted","Data":"4fe6a6b0c1166ac85e56fcac7eb34994c07625a7c33d186edad67b8e9cde8084"} Jan 21 12:06:42 crc kubenswrapper[4881]: I0121 12:06:42.043919 4881 generic.go:334] "Generic (PLEG): container finished" podID="5462fac8-b03c-48c0-bc3d-b1a1b1285cab" containerID="4fe6a6b0c1166ac85e56fcac7eb34994c07625a7c33d186edad67b8e9cde8084" exitCode=0 Jan 21 12:06:42 crc 
kubenswrapper[4881]: I0121 12:06:42.044033 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j4cbb" event={"ID":"5462fac8-b03c-48c0-bc3d-b1a1b1285cab","Type":"ContainerDied","Data":"4fe6a6b0c1166ac85e56fcac7eb34994c07625a7c33d186edad67b8e9cde8084"}
Jan 21 12:06:43 crc kubenswrapper[4881]: I0121 12:06:43.058593 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j4cbb" event={"ID":"5462fac8-b03c-48c0-bc3d-b1a1b1285cab","Type":"ContainerStarted","Data":"7bf836af88f96370d65c4e80cff36822d97d322d077dfe06a20fe2ed7714e53e"}
Jan 21 12:06:43 crc kubenswrapper[4881]: I0121 12:06:43.082153 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-j4cbb" podStartSLOduration=2.513594721 podStartE2EDuration="5.08212907s" podCreationTimestamp="2026-01-21 12:06:38 +0000 UTC" firstStartedPulling="2026-01-21 12:06:40.017183714 +0000 UTC m=+4187.277140183" lastFinishedPulling="2026-01-21 12:06:42.585718043 +0000 UTC m=+4189.845674532" observedRunningTime="2026-01-21 12:06:43.08166451 +0000 UTC m=+4190.341620999" watchObservedRunningTime="2026-01-21 12:06:43.08212907 +0000 UTC m=+4190.342085549"
Jan 21 12:06:48 crc kubenswrapper[4881]: I0121 12:06:48.789997 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-j4cbb"
Jan 21 12:06:48 crc kubenswrapper[4881]: I0121 12:06:48.791209 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-j4cbb"
Jan 21 12:06:48 crc kubenswrapper[4881]: I0121 12:06:48.835810 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-j4cbb"
Jan 21 12:06:49 crc kubenswrapper[4881]: I0121 12:06:49.222381 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-j4cbb"
Jan 21 12:06:49 crc kubenswrapper[4881]: I0121 12:06:49.294133 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-j4cbb"]
Jan 21 12:06:51 crc kubenswrapper[4881]: I0121 12:06:51.136610 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-j4cbb" podUID="5462fac8-b03c-48c0-bc3d-b1a1b1285cab" containerName="registry-server" containerID="cri-o://7bf836af88f96370d65c4e80cff36822d97d322d077dfe06a20fe2ed7714e53e" gracePeriod=2
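The pod_startup_latency_tracker entry above is internally consistent: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp (12:06:43.08212907 minus 12:06:38 gives 5.08212907s), and podStartSLOduration is that figure minus the image-pull window (lastFinishedPulling minus firstStartedPulling, 2.568534329s), i.e. the SLO metric excludes time spent pulling images. A minimal sketch of the arithmetic, using the timestamps copied from that entry; the last-digit disagreement with the logged 2.513594721 is sub-microsecond skew between the wall-clock values and the monotonic m=+ readings the kubelet mixes in:

```go
// Sanity-check of the pod_startup_latency_tracker entry above, using the
// timestamps printed in that entry. Prints 5.08212907s end-to-end and
// ~2.513594741s SLO, matching podStartE2EDuration and podStartSLOduration
// up to clock-source rounding.
package main

import (
	"fmt"
	"time"
)

// layout matches Go's default time.Time.String() format, which is how the
// kubelet renders these timestamps in the log.
const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

func mustParse(s string) time.Time {
	t, err := time.Parse(layout, s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	created := mustParse("2026-01-21 12:06:38 +0000 UTC")
	firstPull := mustParse("2026-01-21 12:06:40.017183714 +0000 UTC")
	lastPull := mustParse("2026-01-21 12:06:42.585718043 +0000 UTC")
	running := mustParse("2026-01-21 12:06:43.08212907 +0000 UTC")

	e2e := running.Sub(created)        // creation to observed running: 5.08212907s
	pulling := lastPull.Sub(firstPull) // image-pull window, excluded from the SLO figure
	fmt.Println("e2e:", e2e)
	fmt.Println("slo:", e2e-pulling)
}
```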
Need to start a new one" pod="openshift-marketplace/certified-operators-j4cbb" Jan 21 12:06:51 crc kubenswrapper[4881]: I0121 12:06:51.845679 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5462fac8-b03c-48c0-bc3d-b1a1b1285cab-catalog-content\") pod \"5462fac8-b03c-48c0-bc3d-b1a1b1285cab\" (UID: \"5462fac8-b03c-48c0-bc3d-b1a1b1285cab\") " Jan 21 12:06:51 crc kubenswrapper[4881]: I0121 12:06:51.845876 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v77g4\" (UniqueName: \"kubernetes.io/projected/5462fac8-b03c-48c0-bc3d-b1a1b1285cab-kube-api-access-v77g4\") pod \"5462fac8-b03c-48c0-bc3d-b1a1b1285cab\" (UID: \"5462fac8-b03c-48c0-bc3d-b1a1b1285cab\") " Jan 21 12:06:51 crc kubenswrapper[4881]: I0121 12:06:51.845968 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5462fac8-b03c-48c0-bc3d-b1a1b1285cab-utilities\") pod \"5462fac8-b03c-48c0-bc3d-b1a1b1285cab\" (UID: \"5462fac8-b03c-48c0-bc3d-b1a1b1285cab\") " Jan 21 12:06:51 crc kubenswrapper[4881]: I0121 12:06:51.846955 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5462fac8-b03c-48c0-bc3d-b1a1b1285cab-utilities" (OuterVolumeSpecName: "utilities") pod "5462fac8-b03c-48c0-bc3d-b1a1b1285cab" (UID: "5462fac8-b03c-48c0-bc3d-b1a1b1285cab"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 12:06:51 crc kubenswrapper[4881]: I0121 12:06:51.861290 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5462fac8-b03c-48c0-bc3d-b1a1b1285cab-kube-api-access-v77g4" (OuterVolumeSpecName: "kube-api-access-v77g4") pod "5462fac8-b03c-48c0-bc3d-b1a1b1285cab" (UID: "5462fac8-b03c-48c0-bc3d-b1a1b1285cab"). InnerVolumeSpecName "kube-api-access-v77g4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 12:06:51 crc kubenswrapper[4881]: I0121 12:06:51.947911 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v77g4\" (UniqueName: \"kubernetes.io/projected/5462fac8-b03c-48c0-bc3d-b1a1b1285cab-kube-api-access-v77g4\") on node \"crc\" DevicePath \"\"" Jan 21 12:06:51 crc kubenswrapper[4881]: I0121 12:06:51.947967 4881 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5462fac8-b03c-48c0-bc3d-b1a1b1285cab-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 12:06:52 crc kubenswrapper[4881]: I0121 12:06:52.154404 4881 generic.go:334] "Generic (PLEG): container finished" podID="5462fac8-b03c-48c0-bc3d-b1a1b1285cab" containerID="7bf836af88f96370d65c4e80cff36822d97d322d077dfe06a20fe2ed7714e53e" exitCode=0 Jan 21 12:06:52 crc kubenswrapper[4881]: I0121 12:06:52.154534 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j4cbb" event={"ID":"5462fac8-b03c-48c0-bc3d-b1a1b1285cab","Type":"ContainerDied","Data":"7bf836af88f96370d65c4e80cff36822d97d322d077dfe06a20fe2ed7714e53e"} Jan 21 12:06:52 crc kubenswrapper[4881]: I0121 12:06:52.154564 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-j4cbb" Jan 21 12:06:52 crc kubenswrapper[4881]: I0121 12:06:52.154595 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j4cbb" event={"ID":"5462fac8-b03c-48c0-bc3d-b1a1b1285cab","Type":"ContainerDied","Data":"f1af1bbc46ba691c69bc616913a216b385badd2ac173c74fb7757e7c43387e8d"} Jan 21 12:06:52 crc kubenswrapper[4881]: I0121 12:06:52.154618 4881 scope.go:117] "RemoveContainer" containerID="7bf836af88f96370d65c4e80cff36822d97d322d077dfe06a20fe2ed7714e53e" Jan 21 12:06:52 crc kubenswrapper[4881]: I0121 12:06:52.181374 4881 scope.go:117] "RemoveContainer" containerID="4fe6a6b0c1166ac85e56fcac7eb34994c07625a7c33d186edad67b8e9cde8084" Jan 21 12:06:52 crc kubenswrapper[4881]: I0121 12:06:52.207546 4881 scope.go:117] "RemoveContainer" containerID="e46841663886567f48ff14137d656646ff12629cc60b1215035b0dd66d9313e9" Jan 21 12:06:52 crc kubenswrapper[4881]: I0121 12:06:52.284410 4881 scope.go:117] "RemoveContainer" containerID="7bf836af88f96370d65c4e80cff36822d97d322d077dfe06a20fe2ed7714e53e" Jan 21 12:06:52 crc kubenswrapper[4881]: E0121 12:06:52.284956 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7bf836af88f96370d65c4e80cff36822d97d322d077dfe06a20fe2ed7714e53e\": container with ID starting with 7bf836af88f96370d65c4e80cff36822d97d322d077dfe06a20fe2ed7714e53e not found: ID does not exist" containerID="7bf836af88f96370d65c4e80cff36822d97d322d077dfe06a20fe2ed7714e53e" Jan 21 12:06:52 crc kubenswrapper[4881]: I0121 12:06:52.284999 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7bf836af88f96370d65c4e80cff36822d97d322d077dfe06a20fe2ed7714e53e"} err="failed to get container status \"7bf836af88f96370d65c4e80cff36822d97d322d077dfe06a20fe2ed7714e53e\": rpc error: code = NotFound desc = could not find container \"7bf836af88f96370d65c4e80cff36822d97d322d077dfe06a20fe2ed7714e53e\": container with ID starting with 7bf836af88f96370d65c4e80cff36822d97d322d077dfe06a20fe2ed7714e53e not found: ID does not exist" Jan 21 12:06:52 crc kubenswrapper[4881]: I0121 12:06:52.285029 4881 scope.go:117] "RemoveContainer" containerID="4fe6a6b0c1166ac85e56fcac7eb34994c07625a7c33d186edad67b8e9cde8084" Jan 21 12:06:52 crc kubenswrapper[4881]: E0121 12:06:52.285540 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4fe6a6b0c1166ac85e56fcac7eb34994c07625a7c33d186edad67b8e9cde8084\": container with ID starting with 4fe6a6b0c1166ac85e56fcac7eb34994c07625a7c33d186edad67b8e9cde8084 not found: ID does not exist" containerID="4fe6a6b0c1166ac85e56fcac7eb34994c07625a7c33d186edad67b8e9cde8084" Jan 21 12:06:52 crc kubenswrapper[4881]: I0121 12:06:52.285571 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4fe6a6b0c1166ac85e56fcac7eb34994c07625a7c33d186edad67b8e9cde8084"} err="failed to get container status \"4fe6a6b0c1166ac85e56fcac7eb34994c07625a7c33d186edad67b8e9cde8084\": rpc error: code = NotFound desc = could not find container \"4fe6a6b0c1166ac85e56fcac7eb34994c07625a7c33d186edad67b8e9cde8084\": container with ID starting with 4fe6a6b0c1166ac85e56fcac7eb34994c07625a7c33d186edad67b8e9cde8084 not found: ID does not exist" Jan 21 12:06:52 crc kubenswrapper[4881]: I0121 12:06:52.285595 4881 scope.go:117] "RemoveContainer" 
containerID="e46841663886567f48ff14137d656646ff12629cc60b1215035b0dd66d9313e9" Jan 21 12:06:52 crc kubenswrapper[4881]: E0121 12:06:52.285876 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e46841663886567f48ff14137d656646ff12629cc60b1215035b0dd66d9313e9\": container with ID starting with e46841663886567f48ff14137d656646ff12629cc60b1215035b0dd66d9313e9 not found: ID does not exist" containerID="e46841663886567f48ff14137d656646ff12629cc60b1215035b0dd66d9313e9" Jan 21 12:06:52 crc kubenswrapper[4881]: I0121 12:06:52.285920 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e46841663886567f48ff14137d656646ff12629cc60b1215035b0dd66d9313e9"} err="failed to get container status \"e46841663886567f48ff14137d656646ff12629cc60b1215035b0dd66d9313e9\": rpc error: code = NotFound desc = could not find container \"e46841663886567f48ff14137d656646ff12629cc60b1215035b0dd66d9313e9\": container with ID starting with e46841663886567f48ff14137d656646ff12629cc60b1215035b0dd66d9313e9 not found: ID does not exist" Jan 21 12:06:52 crc kubenswrapper[4881]: I0121 12:06:52.468750 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5462fac8-b03c-48c0-bc3d-b1a1b1285cab-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5462fac8-b03c-48c0-bc3d-b1a1b1285cab" (UID: "5462fac8-b03c-48c0-bc3d-b1a1b1285cab"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 12:06:52 crc kubenswrapper[4881]: I0121 12:06:52.562464 4881 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5462fac8-b03c-48c0-bc3d-b1a1b1285cab-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 12:06:52 crc kubenswrapper[4881]: I0121 12:06:52.792958 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-j4cbb"] Jan 21 12:06:52 crc kubenswrapper[4881]: I0121 12:06:52.801258 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-j4cbb"] Jan 21 12:06:53 crc kubenswrapper[4881]: I0121 12:06:53.327851 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5462fac8-b03c-48c0-bc3d-b1a1b1285cab" path="/var/lib/kubelet/pods/5462fac8-b03c-48c0-bc3d-b1a1b1285cab/volumes" Jan 21 12:07:59 crc kubenswrapper[4881]: I0121 12:07:59.850918 4881 patch_prober.go:28] interesting pod/machine-config-daemon-fb4fr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 12:07:59 crc kubenswrapper[4881]: I0121 12:07:59.852070 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 12:08:29 crc kubenswrapper[4881]: I0121 12:08:29.851633 4881 patch_prober.go:28] interesting pod/machine-config-daemon-fb4fr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 
21 12:08:29 crc kubenswrapper[4881]: I0121 12:08:29.852977 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 12:08:55 crc kubenswrapper[4881]: I0121 12:08:55.096477 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-7w9td"] Jan 21 12:08:55 crc kubenswrapper[4881]: E0121 12:08:55.097500 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5462fac8-b03c-48c0-bc3d-b1a1b1285cab" containerName="extract-content" Jan 21 12:08:55 crc kubenswrapper[4881]: I0121 12:08:55.097516 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="5462fac8-b03c-48c0-bc3d-b1a1b1285cab" containerName="extract-content" Jan 21 12:08:55 crc kubenswrapper[4881]: E0121 12:08:55.097531 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5462fac8-b03c-48c0-bc3d-b1a1b1285cab" containerName="registry-server" Jan 21 12:08:55 crc kubenswrapper[4881]: I0121 12:08:55.097537 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="5462fac8-b03c-48c0-bc3d-b1a1b1285cab" containerName="registry-server" Jan 21 12:08:55 crc kubenswrapper[4881]: E0121 12:08:55.097554 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5462fac8-b03c-48c0-bc3d-b1a1b1285cab" containerName="extract-utilities" Jan 21 12:08:55 crc kubenswrapper[4881]: I0121 12:08:55.097562 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="5462fac8-b03c-48c0-bc3d-b1a1b1285cab" containerName="extract-utilities" Jan 21 12:08:55 crc kubenswrapper[4881]: I0121 12:08:55.097814 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="5462fac8-b03c-48c0-bc3d-b1a1b1285cab" containerName="registry-server" Jan 21 12:08:55 crc kubenswrapper[4881]: I0121 12:08:55.099735 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7w9td" Jan 21 12:08:55 crc kubenswrapper[4881]: I0121 12:08:55.112916 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-7w9td"] Jan 21 12:08:55 crc kubenswrapper[4881]: I0121 12:08:55.213637 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ae7c44c-9f78-4779-bff2-32f7e9246561-utilities\") pod \"redhat-marketplace-7w9td\" (UID: \"9ae7c44c-9f78-4779-bff2-32f7e9246561\") " pod="openshift-marketplace/redhat-marketplace-7w9td" Jan 21 12:08:55 crc kubenswrapper[4881]: I0121 12:08:55.213863 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tnswq\" (UniqueName: \"kubernetes.io/projected/9ae7c44c-9f78-4779-bff2-32f7e9246561-kube-api-access-tnswq\") pod \"redhat-marketplace-7w9td\" (UID: \"9ae7c44c-9f78-4779-bff2-32f7e9246561\") " pod="openshift-marketplace/redhat-marketplace-7w9td" Jan 21 12:08:55 crc kubenswrapper[4881]: I0121 12:08:55.213898 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ae7c44c-9f78-4779-bff2-32f7e9246561-catalog-content\") pod \"redhat-marketplace-7w9td\" (UID: \"9ae7c44c-9f78-4779-bff2-32f7e9246561\") " pod="openshift-marketplace/redhat-marketplace-7w9td" Jan 21 12:08:55 crc kubenswrapper[4881]: I0121 12:08:55.316222 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tnswq\" (UniqueName: \"kubernetes.io/projected/9ae7c44c-9f78-4779-bff2-32f7e9246561-kube-api-access-tnswq\") pod \"redhat-marketplace-7w9td\" (UID: \"9ae7c44c-9f78-4779-bff2-32f7e9246561\") " pod="openshift-marketplace/redhat-marketplace-7w9td" Jan 21 12:08:55 crc kubenswrapper[4881]: I0121 12:08:55.316279 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ae7c44c-9f78-4779-bff2-32f7e9246561-catalog-content\") pod \"redhat-marketplace-7w9td\" (UID: \"9ae7c44c-9f78-4779-bff2-32f7e9246561\") " pod="openshift-marketplace/redhat-marketplace-7w9td" Jan 21 12:08:55 crc kubenswrapper[4881]: I0121 12:08:55.316383 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ae7c44c-9f78-4779-bff2-32f7e9246561-utilities\") pod \"redhat-marketplace-7w9td\" (UID: \"9ae7c44c-9f78-4779-bff2-32f7e9246561\") " pod="openshift-marketplace/redhat-marketplace-7w9td" Jan 21 12:08:55 crc kubenswrapper[4881]: I0121 12:08:55.317150 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ae7c44c-9f78-4779-bff2-32f7e9246561-catalog-content\") pod \"redhat-marketplace-7w9td\" (UID: \"9ae7c44c-9f78-4779-bff2-32f7e9246561\") " pod="openshift-marketplace/redhat-marketplace-7w9td" Jan 21 12:08:55 crc kubenswrapper[4881]: I0121 12:08:55.318392 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ae7c44c-9f78-4779-bff2-32f7e9246561-utilities\") pod \"redhat-marketplace-7w9td\" (UID: \"9ae7c44c-9f78-4779-bff2-32f7e9246561\") " pod="openshift-marketplace/redhat-marketplace-7w9td" Jan 21 12:08:55 crc kubenswrapper[4881]: I0121 12:08:55.348731 4881 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-tnswq\" (UniqueName: \"kubernetes.io/projected/9ae7c44c-9f78-4779-bff2-32f7e9246561-kube-api-access-tnswq\") pod \"redhat-marketplace-7w9td\" (UID: \"9ae7c44c-9f78-4779-bff2-32f7e9246561\") " pod="openshift-marketplace/redhat-marketplace-7w9td" Jan 21 12:08:55 crc kubenswrapper[4881]: I0121 12:08:55.426573 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7w9td" Jan 21 12:08:56 crc kubenswrapper[4881]: W0121 12:08:56.074009 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9ae7c44c_9f78_4779_bff2_32f7e9246561.slice/crio-3ed9f13914506207c9422f141b24b636ccc90d4691f6a58a623a8afce4a6435c WatchSource:0}: Error finding container 3ed9f13914506207c9422f141b24b636ccc90d4691f6a58a623a8afce4a6435c: Status 404 returned error can't find the container with id 3ed9f13914506207c9422f141b24b636ccc90d4691f6a58a623a8afce4a6435c Jan 21 12:08:56 crc kubenswrapper[4881]: I0121 12:08:56.074275 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-7w9td"] Jan 21 12:08:56 crc kubenswrapper[4881]: I0121 12:08:56.575767 4881 generic.go:334] "Generic (PLEG): container finished" podID="9ae7c44c-9f78-4779-bff2-32f7e9246561" containerID="e22a5c92c26572faa88a6270d679c6e86564b96b3bd18b49411f96d82f0edf3a" exitCode=0 Jan 21 12:08:56 crc kubenswrapper[4881]: I0121 12:08:56.575887 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7w9td" event={"ID":"9ae7c44c-9f78-4779-bff2-32f7e9246561","Type":"ContainerDied","Data":"e22a5c92c26572faa88a6270d679c6e86564b96b3bd18b49411f96d82f0edf3a"} Jan 21 12:08:56 crc kubenswrapper[4881]: I0121 12:08:56.577154 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7w9td" event={"ID":"9ae7c44c-9f78-4779-bff2-32f7e9246561","Type":"ContainerStarted","Data":"3ed9f13914506207c9422f141b24b636ccc90d4691f6a58a623a8afce4a6435c"} Jan 21 12:08:57 crc kubenswrapper[4881]: I0121 12:08:57.589354 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7w9td" event={"ID":"9ae7c44c-9f78-4779-bff2-32f7e9246561","Type":"ContainerStarted","Data":"33738adcb837298193b5fcb1b545a05c76ce8e00c4eb51940c56a7d4f8ae54a9"} Jan 21 12:08:58 crc kubenswrapper[4881]: I0121 12:08:58.600768 4881 generic.go:334] "Generic (PLEG): container finished" podID="9ae7c44c-9f78-4779-bff2-32f7e9246561" containerID="33738adcb837298193b5fcb1b545a05c76ce8e00c4eb51940c56a7d4f8ae54a9" exitCode=0 Jan 21 12:08:58 crc kubenswrapper[4881]: I0121 12:08:58.600873 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7w9td" event={"ID":"9ae7c44c-9f78-4779-bff2-32f7e9246561","Type":"ContainerDied","Data":"33738adcb837298193b5fcb1b545a05c76ce8e00c4eb51940c56a7d4f8ae54a9"} Jan 21 12:08:59 crc kubenswrapper[4881]: I0121 12:08:59.462469 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-m69zl"] Jan 21 12:08:59 crc kubenswrapper[4881]: I0121 12:08:59.465202 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-m69zl" Jan 21 12:08:59 crc kubenswrapper[4881]: I0121 12:08:59.474564 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-m69zl"] Jan 21 12:08:59 crc kubenswrapper[4881]: I0121 12:08:59.579359 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ca80118-375e-4587-af3d-453c7aef306d-utilities\") pod \"community-operators-m69zl\" (UID: \"1ca80118-375e-4587-af3d-453c7aef306d\") " pod="openshift-marketplace/community-operators-m69zl" Jan 21 12:08:59 crc kubenswrapper[4881]: I0121 12:08:59.579461 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2p8sg\" (UniqueName: \"kubernetes.io/projected/1ca80118-375e-4587-af3d-453c7aef306d-kube-api-access-2p8sg\") pod \"community-operators-m69zl\" (UID: \"1ca80118-375e-4587-af3d-453c7aef306d\") " pod="openshift-marketplace/community-operators-m69zl" Jan 21 12:08:59 crc kubenswrapper[4881]: I0121 12:08:59.580093 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ca80118-375e-4587-af3d-453c7aef306d-catalog-content\") pod \"community-operators-m69zl\" (UID: \"1ca80118-375e-4587-af3d-453c7aef306d\") " pod="openshift-marketplace/community-operators-m69zl" Jan 21 12:08:59 crc kubenswrapper[4881]: I0121 12:08:59.627286 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7w9td" event={"ID":"9ae7c44c-9f78-4779-bff2-32f7e9246561","Type":"ContainerStarted","Data":"6cbcaf18899b545b2bb3924a1d127bfcef1623f612970032892559e5afff015c"} Jan 21 12:08:59 crc kubenswrapper[4881]: I0121 12:08:59.649758 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-7w9td" podStartSLOduration=2.234592172 podStartE2EDuration="4.649738831s" podCreationTimestamp="2026-01-21 12:08:55 +0000 UTC" firstStartedPulling="2026-01-21 12:08:56.57919603 +0000 UTC m=+4323.839152499" lastFinishedPulling="2026-01-21 12:08:58.994342699 +0000 UTC m=+4326.254299158" observedRunningTime="2026-01-21 12:08:59.645058928 +0000 UTC m=+4326.905015397" watchObservedRunningTime="2026-01-21 12:08:59.649738831 +0000 UTC m=+4326.909695300" Jan 21 12:08:59 crc kubenswrapper[4881]: I0121 12:08:59.682677 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ca80118-375e-4587-af3d-453c7aef306d-catalog-content\") pod \"community-operators-m69zl\" (UID: \"1ca80118-375e-4587-af3d-453c7aef306d\") " pod="openshift-marketplace/community-operators-m69zl" Jan 21 12:08:59 crc kubenswrapper[4881]: I0121 12:08:59.682845 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ca80118-375e-4587-af3d-453c7aef306d-utilities\") pod \"community-operators-m69zl\" (UID: \"1ca80118-375e-4587-af3d-453c7aef306d\") " pod="openshift-marketplace/community-operators-m69zl" Jan 21 12:08:59 crc kubenswrapper[4881]: I0121 12:08:59.682937 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2p8sg\" (UniqueName: \"kubernetes.io/projected/1ca80118-375e-4587-af3d-453c7aef306d-kube-api-access-2p8sg\") pod \"community-operators-m69zl\" (UID: 
\"1ca80118-375e-4587-af3d-453c7aef306d\") " pod="openshift-marketplace/community-operators-m69zl" Jan 21 12:08:59 crc kubenswrapper[4881]: I0121 12:08:59.683324 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ca80118-375e-4587-af3d-453c7aef306d-catalog-content\") pod \"community-operators-m69zl\" (UID: \"1ca80118-375e-4587-af3d-453c7aef306d\") " pod="openshift-marketplace/community-operators-m69zl" Jan 21 12:08:59 crc kubenswrapper[4881]: I0121 12:08:59.683356 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ca80118-375e-4587-af3d-453c7aef306d-utilities\") pod \"community-operators-m69zl\" (UID: \"1ca80118-375e-4587-af3d-453c7aef306d\") " pod="openshift-marketplace/community-operators-m69zl" Jan 21 12:08:59 crc kubenswrapper[4881]: I0121 12:08:59.705768 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2p8sg\" (UniqueName: \"kubernetes.io/projected/1ca80118-375e-4587-af3d-453c7aef306d-kube-api-access-2p8sg\") pod \"community-operators-m69zl\" (UID: \"1ca80118-375e-4587-af3d-453c7aef306d\") " pod="openshift-marketplace/community-operators-m69zl" Jan 21 12:08:59 crc kubenswrapper[4881]: I0121 12:08:59.788083 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-m69zl" Jan 21 12:08:59 crc kubenswrapper[4881]: I0121 12:08:59.855234 4881 patch_prober.go:28] interesting pod/machine-config-daemon-fb4fr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 12:08:59 crc kubenswrapper[4881]: I0121 12:08:59.855306 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 12:08:59 crc kubenswrapper[4881]: I0121 12:08:59.855358 4881 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" Jan 21 12:08:59 crc kubenswrapper[4881]: I0121 12:08:59.856294 4881 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"8fa2fcd197247817c68b133d6a51bf7eca2545a597f5deb7e87467827e522318"} pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 12:08:59 crc kubenswrapper[4881]: I0121 12:08:59.856362 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" containerName="machine-config-daemon" containerID="cri-o://8fa2fcd197247817c68b133d6a51bf7eca2545a597f5deb7e87467827e522318" gracePeriod=600 Jan 21 12:09:00 crc kubenswrapper[4881]: W0121 12:09:00.425001 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1ca80118_375e_4587_af3d_453c7aef306d.slice/crio-31cd2d8ee9e30c576f18af6af28a532b366882fea3d8d9cdfbf767da46a002fb 
Jan 21 12:09:00 crc kubenswrapper[4881]: W0121 12:09:00.425001 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1ca80118_375e_4587_af3d_453c7aef306d.slice/crio-31cd2d8ee9e30c576f18af6af28a532b366882fea3d8d9cdfbf767da46a002fb WatchSource:0}: Error finding container 31cd2d8ee9e30c576f18af6af28a532b366882fea3d8d9cdfbf767da46a002fb: Status 404 returned error can't find the container with id 31cd2d8ee9e30c576f18af6af28a532b366882fea3d8d9cdfbf767da46a002fb
Jan 21 12:09:00 crc kubenswrapper[4881]: I0121 12:09:00.431496 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-m69zl"]
Jan 21 12:09:00 crc kubenswrapper[4881]: I0121 12:09:00.640459 4881 generic.go:334] "Generic (PLEG): container finished" podID="3687b313-1df2-4274-80db-8c758b51bf2d" containerID="8fa2fcd197247817c68b133d6a51bf7eca2545a597f5deb7e87467827e522318" exitCode=0
Jan 21 12:09:00 crc kubenswrapper[4881]: I0121 12:09:00.640665 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" event={"ID":"3687b313-1df2-4274-80db-8c758b51bf2d","Type":"ContainerDied","Data":"8fa2fcd197247817c68b133d6a51bf7eca2545a597f5deb7e87467827e522318"}
Jan 21 12:09:00 crc kubenswrapper[4881]: I0121 12:09:00.640841 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" event={"ID":"3687b313-1df2-4274-80db-8c758b51bf2d","Type":"ContainerStarted","Data":"a0b600abfe841c17453a09b22987c9b1b3e4a3784f4461f5181d18cc06f69f4c"}
Jan 21 12:09:00 crc kubenswrapper[4881]: I0121 12:09:00.640866 4881 scope.go:117] "RemoveContainer" containerID="cc4c2fdba8ce6542705b256b23e1d94b8d7eb0f0ca9e8607445022552d1c7bb9"
Jan 21 12:09:00 crc kubenswrapper[4881]: I0121 12:09:00.644303 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m69zl" event={"ID":"1ca80118-375e-4587-af3d-453c7aef306d","Type":"ContainerStarted","Data":"31cd2d8ee9e30c576f18af6af28a532b366882fea3d8d9cdfbf767da46a002fb"}
Jan 21 12:09:01 crc kubenswrapper[4881]: I0121 12:09:01.658362 4881 generic.go:334] "Generic (PLEG): container finished" podID="1ca80118-375e-4587-af3d-453c7aef306d" containerID="8484a112814abc2dceafcc57020ec2ffd43842766914ec559956998f2341eb55" exitCode=0
Jan 21 12:09:01 crc kubenswrapper[4881]: I0121 12:09:01.658491 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m69zl" event={"ID":"1ca80118-375e-4587-af3d-453c7aef306d","Type":"ContainerDied","Data":"8484a112814abc2dceafcc57020ec2ffd43842766914ec559956998f2341eb55"}
Jan 21 12:09:02 crc kubenswrapper[4881]: I0121 12:09:02.676716 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m69zl" event={"ID":"1ca80118-375e-4587-af3d-453c7aef306d","Type":"ContainerStarted","Data":"6a6d993de2d9eb5fb7e796d408b5595c3142efe400d24742f86195b8bca57d22"}
Jan 21 12:09:04 crc kubenswrapper[4881]: I0121 12:09:04.700769 4881 generic.go:334] "Generic (PLEG): container finished" podID="1ca80118-375e-4587-af3d-453c7aef306d" containerID="6a6d993de2d9eb5fb7e796d408b5595c3142efe400d24742f86195b8bca57d22" exitCode=0
Jan 21 12:09:04 crc kubenswrapper[4881]: I0121 12:09:04.700841 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m69zl" event={"ID":"1ca80118-375e-4587-af3d-453c7aef306d","Type":"ContainerDied","Data":"6a6d993de2d9eb5fb7e796d408b5595c3142efe400d24742f86195b8bca57d22"}
Jan 21 12:09:05 crc kubenswrapper[4881]: I0121 12:09:05.426997 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-7w9td"
Jan 21 12:09:05 crc kubenswrapper[4881]: I0121
12:09:05.427667 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-7w9td" Jan 21 12:09:05 crc kubenswrapper[4881]: I0121 12:09:05.479704 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-7w9td" Jan 21 12:09:05 crc kubenswrapper[4881]: I0121 12:09:05.713132 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m69zl" event={"ID":"1ca80118-375e-4587-af3d-453c7aef306d","Type":"ContainerStarted","Data":"1c08d6247abf9de8a7636b7d6967781604b5af0caca601e2cf2305330c9a007f"} Jan 21 12:09:05 crc kubenswrapper[4881]: I0121 12:09:05.734048 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-m69zl" podStartSLOduration=3.232060341 podStartE2EDuration="6.734029465s" podCreationTimestamp="2026-01-21 12:08:59 +0000 UTC" firstStartedPulling="2026-01-21 12:09:01.660707446 +0000 UTC m=+4328.920663925" lastFinishedPulling="2026-01-21 12:09:05.16267658 +0000 UTC m=+4332.422633049" observedRunningTime="2026-01-21 12:09:05.730247214 +0000 UTC m=+4332.990203683" watchObservedRunningTime="2026-01-21 12:09:05.734029465 +0000 UTC m=+4332.993985934" Jan 21 12:09:05 crc kubenswrapper[4881]: I0121 12:09:05.779770 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-7w9td" Jan 21 12:09:07 crc kubenswrapper[4881]: I0121 12:09:07.858893 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-7w9td"] Jan 21 12:09:07 crc kubenswrapper[4881]: I0121 12:09:07.859651 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-7w9td" podUID="9ae7c44c-9f78-4779-bff2-32f7e9246561" containerName="registry-server" containerID="cri-o://6cbcaf18899b545b2bb3924a1d127bfcef1623f612970032892559e5afff015c" gracePeriod=2 Jan 21 12:09:08 crc kubenswrapper[4881]: I0121 12:09:08.395745 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7w9td" Jan 21 12:09:08 crc kubenswrapper[4881]: I0121 12:09:08.509927 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ae7c44c-9f78-4779-bff2-32f7e9246561-utilities\") pod \"9ae7c44c-9f78-4779-bff2-32f7e9246561\" (UID: \"9ae7c44c-9f78-4779-bff2-32f7e9246561\") " Jan 21 12:09:08 crc kubenswrapper[4881]: I0121 12:09:08.510074 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ae7c44c-9f78-4779-bff2-32f7e9246561-catalog-content\") pod \"9ae7c44c-9f78-4779-bff2-32f7e9246561\" (UID: \"9ae7c44c-9f78-4779-bff2-32f7e9246561\") " Jan 21 12:09:08 crc kubenswrapper[4881]: I0121 12:09:08.510212 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tnswq\" (UniqueName: \"kubernetes.io/projected/9ae7c44c-9f78-4779-bff2-32f7e9246561-kube-api-access-tnswq\") pod \"9ae7c44c-9f78-4779-bff2-32f7e9246561\" (UID: \"9ae7c44c-9f78-4779-bff2-32f7e9246561\") " Jan 21 12:09:08 crc kubenswrapper[4881]: I0121 12:09:08.511076 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9ae7c44c-9f78-4779-bff2-32f7e9246561-utilities" (OuterVolumeSpecName: "utilities") pod "9ae7c44c-9f78-4779-bff2-32f7e9246561" (UID: "9ae7c44c-9f78-4779-bff2-32f7e9246561"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 12:09:08 crc kubenswrapper[4881]: I0121 12:09:08.516154 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9ae7c44c-9f78-4779-bff2-32f7e9246561-kube-api-access-tnswq" (OuterVolumeSpecName: "kube-api-access-tnswq") pod "9ae7c44c-9f78-4779-bff2-32f7e9246561" (UID: "9ae7c44c-9f78-4779-bff2-32f7e9246561"). InnerVolumeSpecName "kube-api-access-tnswq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 12:09:08 crc kubenswrapper[4881]: I0121 12:09:08.539221 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9ae7c44c-9f78-4779-bff2-32f7e9246561-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9ae7c44c-9f78-4779-bff2-32f7e9246561" (UID: "9ae7c44c-9f78-4779-bff2-32f7e9246561"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 12:09:08 crc kubenswrapper[4881]: I0121 12:09:08.612474 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tnswq\" (UniqueName: \"kubernetes.io/projected/9ae7c44c-9f78-4779-bff2-32f7e9246561-kube-api-access-tnswq\") on node \"crc\" DevicePath \"\"" Jan 21 12:09:08 crc kubenswrapper[4881]: I0121 12:09:08.612514 4881 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ae7c44c-9f78-4779-bff2-32f7e9246561-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 12:09:08 crc kubenswrapper[4881]: I0121 12:09:08.612527 4881 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ae7c44c-9f78-4779-bff2-32f7e9246561-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 12:09:08 crc kubenswrapper[4881]: I0121 12:09:08.746533 4881 generic.go:334] "Generic (PLEG): container finished" podID="9ae7c44c-9f78-4779-bff2-32f7e9246561" containerID="6cbcaf18899b545b2bb3924a1d127bfcef1623f612970032892559e5afff015c" exitCode=0 Jan 21 12:09:08 crc kubenswrapper[4881]: I0121 12:09:08.746580 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7w9td" event={"ID":"9ae7c44c-9f78-4779-bff2-32f7e9246561","Type":"ContainerDied","Data":"6cbcaf18899b545b2bb3924a1d127bfcef1623f612970032892559e5afff015c"} Jan 21 12:09:08 crc kubenswrapper[4881]: I0121 12:09:08.746616 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7w9td" event={"ID":"9ae7c44c-9f78-4779-bff2-32f7e9246561","Type":"ContainerDied","Data":"3ed9f13914506207c9422f141b24b636ccc90d4691f6a58a623a8afce4a6435c"} Jan 21 12:09:08 crc kubenswrapper[4881]: I0121 12:09:08.746624 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7w9td" Jan 21 12:09:08 crc kubenswrapper[4881]: I0121 12:09:08.746635 4881 scope.go:117] "RemoveContainer" containerID="6cbcaf18899b545b2bb3924a1d127bfcef1623f612970032892559e5afff015c" Jan 21 12:09:08 crc kubenswrapper[4881]: I0121 12:09:08.776323 4881 scope.go:117] "RemoveContainer" containerID="33738adcb837298193b5fcb1b545a05c76ce8e00c4eb51940c56a7d4f8ae54a9" Jan 21 12:09:08 crc kubenswrapper[4881]: I0121 12:09:08.802969 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-7w9td"] Jan 21 12:09:08 crc kubenswrapper[4881]: I0121 12:09:08.805595 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-7w9td"] Jan 21 12:09:08 crc kubenswrapper[4881]: I0121 12:09:08.825024 4881 scope.go:117] "RemoveContainer" containerID="e22a5c92c26572faa88a6270d679c6e86564b96b3bd18b49411f96d82f0edf3a" Jan 21 12:09:08 crc kubenswrapper[4881]: I0121 12:09:08.860114 4881 scope.go:117] "RemoveContainer" containerID="6cbcaf18899b545b2bb3924a1d127bfcef1623f612970032892559e5afff015c" Jan 21 12:09:08 crc kubenswrapper[4881]: E0121 12:09:08.860595 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6cbcaf18899b545b2bb3924a1d127bfcef1623f612970032892559e5afff015c\": container with ID starting with 6cbcaf18899b545b2bb3924a1d127bfcef1623f612970032892559e5afff015c not found: ID does not exist" containerID="6cbcaf18899b545b2bb3924a1d127bfcef1623f612970032892559e5afff015c" Jan 21 12:09:08 crc kubenswrapper[4881]: I0121 12:09:08.860635 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6cbcaf18899b545b2bb3924a1d127bfcef1623f612970032892559e5afff015c"} err="failed to get container status \"6cbcaf18899b545b2bb3924a1d127bfcef1623f612970032892559e5afff015c\": rpc error: code = NotFound desc = could not find container \"6cbcaf18899b545b2bb3924a1d127bfcef1623f612970032892559e5afff015c\": container with ID starting with 6cbcaf18899b545b2bb3924a1d127bfcef1623f612970032892559e5afff015c not found: ID does not exist" Jan 21 12:09:08 crc kubenswrapper[4881]: I0121 12:09:08.860659 4881 scope.go:117] "RemoveContainer" containerID="33738adcb837298193b5fcb1b545a05c76ce8e00c4eb51940c56a7d4f8ae54a9" Jan 21 12:09:08 crc kubenswrapper[4881]: E0121 12:09:08.861050 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"33738adcb837298193b5fcb1b545a05c76ce8e00c4eb51940c56a7d4f8ae54a9\": container with ID starting with 33738adcb837298193b5fcb1b545a05c76ce8e00c4eb51940c56a7d4f8ae54a9 not found: ID does not exist" containerID="33738adcb837298193b5fcb1b545a05c76ce8e00c4eb51940c56a7d4f8ae54a9" Jan 21 12:09:08 crc kubenswrapper[4881]: I0121 12:09:08.861075 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"33738adcb837298193b5fcb1b545a05c76ce8e00c4eb51940c56a7d4f8ae54a9"} err="failed to get container status \"33738adcb837298193b5fcb1b545a05c76ce8e00c4eb51940c56a7d4f8ae54a9\": rpc error: code = NotFound desc = could not find container \"33738adcb837298193b5fcb1b545a05c76ce8e00c4eb51940c56a7d4f8ae54a9\": container with ID starting with 33738adcb837298193b5fcb1b545a05c76ce8e00c4eb51940c56a7d4f8ae54a9 not found: ID does not exist" Jan 21 12:09:08 crc kubenswrapper[4881]: I0121 12:09:08.861092 4881 scope.go:117] "RemoveContainer" 
containerID="e22a5c92c26572faa88a6270d679c6e86564b96b3bd18b49411f96d82f0edf3a" Jan 21 12:09:08 crc kubenswrapper[4881]: E0121 12:09:08.861349 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e22a5c92c26572faa88a6270d679c6e86564b96b3bd18b49411f96d82f0edf3a\": container with ID starting with e22a5c92c26572faa88a6270d679c6e86564b96b3bd18b49411f96d82f0edf3a not found: ID does not exist" containerID="e22a5c92c26572faa88a6270d679c6e86564b96b3bd18b49411f96d82f0edf3a" Jan 21 12:09:08 crc kubenswrapper[4881]: I0121 12:09:08.861375 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e22a5c92c26572faa88a6270d679c6e86564b96b3bd18b49411f96d82f0edf3a"} err="failed to get container status \"e22a5c92c26572faa88a6270d679c6e86564b96b3bd18b49411f96d82f0edf3a\": rpc error: code = NotFound desc = could not find container \"e22a5c92c26572faa88a6270d679c6e86564b96b3bd18b49411f96d82f0edf3a\": container with ID starting with e22a5c92c26572faa88a6270d679c6e86564b96b3bd18b49411f96d82f0edf3a not found: ID does not exist" Jan 21 12:09:09 crc kubenswrapper[4881]: I0121 12:09:09.322272 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9ae7c44c-9f78-4779-bff2-32f7e9246561" path="/var/lib/kubelet/pods/9ae7c44c-9f78-4779-bff2-32f7e9246561/volumes" Jan 21 12:09:09 crc kubenswrapper[4881]: I0121 12:09:09.789024 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-m69zl" Jan 21 12:09:09 crc kubenswrapper[4881]: I0121 12:09:09.789686 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-m69zl" Jan 21 12:09:09 crc kubenswrapper[4881]: I0121 12:09:09.853543 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-m69zl" Jan 21 12:09:10 crc kubenswrapper[4881]: I0121 12:09:10.877650 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-m69zl" Jan 21 12:09:11 crc kubenswrapper[4881]: I0121 12:09:11.259553 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-m69zl"] Jan 21 12:09:12 crc kubenswrapper[4881]: I0121 12:09:12.791907 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-m69zl" podUID="1ca80118-375e-4587-af3d-453c7aef306d" containerName="registry-server" containerID="cri-o://1c08d6247abf9de8a7636b7d6967781604b5af0caca601e2cf2305330c9a007f" gracePeriod=2 Jan 21 12:09:13 crc kubenswrapper[4881]: I0121 12:09:13.601906 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-m69zl" Jan 21 12:09:13 crc kubenswrapper[4881]: I0121 12:09:13.734459 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ca80118-375e-4587-af3d-453c7aef306d-catalog-content\") pod \"1ca80118-375e-4587-af3d-453c7aef306d\" (UID: \"1ca80118-375e-4587-af3d-453c7aef306d\") " Jan 21 12:09:13 crc kubenswrapper[4881]: I0121 12:09:13.734620 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ca80118-375e-4587-af3d-453c7aef306d-utilities\") pod \"1ca80118-375e-4587-af3d-453c7aef306d\" (UID: \"1ca80118-375e-4587-af3d-453c7aef306d\") " Jan 21 12:09:13 crc kubenswrapper[4881]: I0121 12:09:13.734663 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2p8sg\" (UniqueName: \"kubernetes.io/projected/1ca80118-375e-4587-af3d-453c7aef306d-kube-api-access-2p8sg\") pod \"1ca80118-375e-4587-af3d-453c7aef306d\" (UID: \"1ca80118-375e-4587-af3d-453c7aef306d\") " Jan 21 12:09:13 crc kubenswrapper[4881]: I0121 12:09:13.736050 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1ca80118-375e-4587-af3d-453c7aef306d-utilities" (OuterVolumeSpecName: "utilities") pod "1ca80118-375e-4587-af3d-453c7aef306d" (UID: "1ca80118-375e-4587-af3d-453c7aef306d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 12:09:13 crc kubenswrapper[4881]: I0121 12:09:13.743118 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1ca80118-375e-4587-af3d-453c7aef306d-kube-api-access-2p8sg" (OuterVolumeSpecName: "kube-api-access-2p8sg") pod "1ca80118-375e-4587-af3d-453c7aef306d" (UID: "1ca80118-375e-4587-af3d-453c7aef306d"). InnerVolumeSpecName "kube-api-access-2p8sg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 12:09:13 crc kubenswrapper[4881]: I0121 12:09:13.808025 4881 generic.go:334] "Generic (PLEG): container finished" podID="1ca80118-375e-4587-af3d-453c7aef306d" containerID="1c08d6247abf9de8a7636b7d6967781604b5af0caca601e2cf2305330c9a007f" exitCode=0 Jan 21 12:09:13 crc kubenswrapper[4881]: I0121 12:09:13.808086 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m69zl" event={"ID":"1ca80118-375e-4587-af3d-453c7aef306d","Type":"ContainerDied","Data":"1c08d6247abf9de8a7636b7d6967781604b5af0caca601e2cf2305330c9a007f"} Jan 21 12:09:13 crc kubenswrapper[4881]: I0121 12:09:13.808125 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m69zl" event={"ID":"1ca80118-375e-4587-af3d-453c7aef306d","Type":"ContainerDied","Data":"31cd2d8ee9e30c576f18af6af28a532b366882fea3d8d9cdfbf767da46a002fb"} Jan 21 12:09:13 crc kubenswrapper[4881]: I0121 12:09:13.808149 4881 scope.go:117] "RemoveContainer" containerID="1c08d6247abf9de8a7636b7d6967781604b5af0caca601e2cf2305330c9a007f" Jan 21 12:09:13 crc kubenswrapper[4881]: I0121 12:09:13.808089 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-m69zl" Jan 21 12:09:13 crc kubenswrapper[4881]: I0121 12:09:13.809903 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1ca80118-375e-4587-af3d-453c7aef306d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1ca80118-375e-4587-af3d-453c7aef306d" (UID: "1ca80118-375e-4587-af3d-453c7aef306d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 12:09:13 crc kubenswrapper[4881]: I0121 12:09:13.829007 4881 scope.go:117] "RemoveContainer" containerID="6a6d993de2d9eb5fb7e796d408b5595c3142efe400d24742f86195b8bca57d22" Jan 21 12:09:13 crc kubenswrapper[4881]: I0121 12:09:13.837421 4881 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ca80118-375e-4587-af3d-453c7aef306d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 12:09:13 crc kubenswrapper[4881]: I0121 12:09:13.837464 4881 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ca80118-375e-4587-af3d-453c7aef306d-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 12:09:13 crc kubenswrapper[4881]: I0121 12:09:13.837479 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2p8sg\" (UniqueName: \"kubernetes.io/projected/1ca80118-375e-4587-af3d-453c7aef306d-kube-api-access-2p8sg\") on node \"crc\" DevicePath \"\"" Jan 21 12:09:13 crc kubenswrapper[4881]: I0121 12:09:13.850176 4881 scope.go:117] "RemoveContainer" containerID="8484a112814abc2dceafcc57020ec2ffd43842766914ec559956998f2341eb55" Jan 21 12:09:13 crc kubenswrapper[4881]: I0121 12:09:13.902591 4881 scope.go:117] "RemoveContainer" containerID="1c08d6247abf9de8a7636b7d6967781604b5af0caca601e2cf2305330c9a007f" Jan 21 12:09:13 crc kubenswrapper[4881]: E0121 12:09:13.903446 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1c08d6247abf9de8a7636b7d6967781604b5af0caca601e2cf2305330c9a007f\": container with ID starting with 1c08d6247abf9de8a7636b7d6967781604b5af0caca601e2cf2305330c9a007f not found: ID does not exist" containerID="1c08d6247abf9de8a7636b7d6967781604b5af0caca601e2cf2305330c9a007f" Jan 21 12:09:13 crc kubenswrapper[4881]: I0121 12:09:13.903503 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1c08d6247abf9de8a7636b7d6967781604b5af0caca601e2cf2305330c9a007f"} err="failed to get container status \"1c08d6247abf9de8a7636b7d6967781604b5af0caca601e2cf2305330c9a007f\": rpc error: code = NotFound desc = could not find container \"1c08d6247abf9de8a7636b7d6967781604b5af0caca601e2cf2305330c9a007f\": container with ID starting with 1c08d6247abf9de8a7636b7d6967781604b5af0caca601e2cf2305330c9a007f not found: ID does not exist" Jan 21 12:09:13 crc kubenswrapper[4881]: I0121 12:09:13.903536 4881 scope.go:117] "RemoveContainer" containerID="6a6d993de2d9eb5fb7e796d408b5595c3142efe400d24742f86195b8bca57d22" Jan 21 12:09:13 crc kubenswrapper[4881]: E0121 12:09:13.903954 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6a6d993de2d9eb5fb7e796d408b5595c3142efe400d24742f86195b8bca57d22\": container with ID starting with 6a6d993de2d9eb5fb7e796d408b5595c3142efe400d24742f86195b8bca57d22 not found: ID does not exist" 
containerID="6a6d993de2d9eb5fb7e796d408b5595c3142efe400d24742f86195b8bca57d22" Jan 21 12:09:13 crc kubenswrapper[4881]: I0121 12:09:13.903986 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6a6d993de2d9eb5fb7e796d408b5595c3142efe400d24742f86195b8bca57d22"} err="failed to get container status \"6a6d993de2d9eb5fb7e796d408b5595c3142efe400d24742f86195b8bca57d22\": rpc error: code = NotFound desc = could not find container \"6a6d993de2d9eb5fb7e796d408b5595c3142efe400d24742f86195b8bca57d22\": container with ID starting with 6a6d993de2d9eb5fb7e796d408b5595c3142efe400d24742f86195b8bca57d22 not found: ID does not exist" Jan 21 12:09:13 crc kubenswrapper[4881]: I0121 12:09:13.903999 4881 scope.go:117] "RemoveContainer" containerID="8484a112814abc2dceafcc57020ec2ffd43842766914ec559956998f2341eb55" Jan 21 12:09:13 crc kubenswrapper[4881]: E0121 12:09:13.904308 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8484a112814abc2dceafcc57020ec2ffd43842766914ec559956998f2341eb55\": container with ID starting with 8484a112814abc2dceafcc57020ec2ffd43842766914ec559956998f2341eb55 not found: ID does not exist" containerID="8484a112814abc2dceafcc57020ec2ffd43842766914ec559956998f2341eb55" Jan 21 12:09:13 crc kubenswrapper[4881]: I0121 12:09:13.904347 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8484a112814abc2dceafcc57020ec2ffd43842766914ec559956998f2341eb55"} err="failed to get container status \"8484a112814abc2dceafcc57020ec2ffd43842766914ec559956998f2341eb55\": rpc error: code = NotFound desc = could not find container \"8484a112814abc2dceafcc57020ec2ffd43842766914ec559956998f2341eb55\": container with ID starting with 8484a112814abc2dceafcc57020ec2ffd43842766914ec559956998f2341eb55 not found: ID does not exist" Jan 21 12:09:14 crc kubenswrapper[4881]: I0121 12:09:14.144856 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-m69zl"] Jan 21 12:09:14 crc kubenswrapper[4881]: I0121 12:09:14.156218 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-m69zl"] Jan 21 12:09:15 crc kubenswrapper[4881]: I0121 12:09:15.326610 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1ca80118-375e-4587-af3d-453c7aef306d" path="/var/lib/kubelet/pods/1ca80118-375e-4587-af3d-453c7aef306d/volumes" Jan 21 12:11:29 crc kubenswrapper[4881]: I0121 12:11:29.851611 4881 patch_prober.go:28] interesting pod/machine-config-daemon-fb4fr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 12:11:29 crc kubenswrapper[4881]: I0121 12:11:29.852289 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 12:12:00 crc kubenswrapper[4881]: I0121 12:11:59.852393 4881 patch_prober.go:28] interesting pod/machine-config-daemon-fb4fr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 12:12:00 crc kubenswrapper[4881]: I0121 12:11:59.853059 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 12:12:06 crc kubenswrapper[4881]: I0121 12:12:06.361491 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-operator-index-7vz4j" podUID="0a051fc2-b6e4-463c-bb0a-b565d12b21b4" containerName="registry-server" probeResult="failure" output=< Jan 21 12:12:06 crc kubenswrapper[4881]: timeout: health rpc did not complete within 1s Jan 21 12:12:06 crc kubenswrapper[4881]: > Jan 21 12:12:29 crc kubenswrapper[4881]: I0121 12:12:29.851364 4881 patch_prober.go:28] interesting pod/machine-config-daemon-fb4fr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 12:12:29 crc kubenswrapper[4881]: I0121 12:12:29.851988 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 12:12:29 crc kubenswrapper[4881]: I0121 12:12:29.852050 4881 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" Jan 21 12:12:29 crc kubenswrapper[4881]: I0121 12:12:29.853085 4881 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a0b600abfe841c17453a09b22987c9b1b3e4a3784f4461f5181d18cc06f69f4c"} pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 12:12:29 crc kubenswrapper[4881]: I0121 12:12:29.853157 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" containerName="machine-config-daemon" containerID="cri-o://a0b600abfe841c17453a09b22987c9b1b3e4a3784f4461f5181d18cc06f69f4c" gracePeriod=600 Jan 21 12:12:29 crc kubenswrapper[4881]: E0121 12:12:29.985379 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 12:12:30 crc kubenswrapper[4881]: I0121 12:12:30.674451 4881 generic.go:334] "Generic (PLEG): container finished" podID="3687b313-1df2-4274-80db-8c758b51bf2d" containerID="a0b600abfe841c17453a09b22987c9b1b3e4a3784f4461f5181d18cc06f69f4c" exitCode=0 Jan 21 12:12:30 crc kubenswrapper[4881]: I0121 12:12:30.674677 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" event={"ID":"3687b313-1df2-4274-80db-8c758b51bf2d","Type":"ContainerDied","Data":"a0b600abfe841c17453a09b22987c9b1b3e4a3784f4461f5181d18cc06f69f4c"} Jan 21 12:12:30 crc kubenswrapper[4881]: I0121 12:12:30.675030 4881 scope.go:117] "RemoveContainer" containerID="8fa2fcd197247817c68b133d6a51bf7eca2545a597f5deb7e87467827e522318" Jan 21 12:12:30 crc kubenswrapper[4881]: I0121 12:12:30.676518 4881 scope.go:117] "RemoveContainer" containerID="a0b600abfe841c17453a09b22987c9b1b3e4a3784f4461f5181d18cc06f69f4c" Jan 21 12:12:30 crc kubenswrapper[4881]: E0121 12:12:30.677406 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 12:12:45 crc kubenswrapper[4881]: I0121 12:12:45.315523 4881 scope.go:117] "RemoveContainer" containerID="a0b600abfe841c17453a09b22987c9b1b3e4a3784f4461f5181d18cc06f69f4c" Jan 21 12:12:45 crc kubenswrapper[4881]: E0121 12:12:45.316405 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 12:12:56 crc kubenswrapper[4881]: I0121 12:12:56.313484 4881 scope.go:117] "RemoveContainer" containerID="a0b600abfe841c17453a09b22987c9b1b3e4a3784f4461f5181d18cc06f69f4c" Jan 21 12:12:56 crc kubenswrapper[4881]: E0121 12:12:56.314761 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 12:13:10 crc kubenswrapper[4881]: I0121 12:13:10.395496 4881 scope.go:117] "RemoveContainer" containerID="a0b600abfe841c17453a09b22987c9b1b3e4a3784f4461f5181d18cc06f69f4c" Jan 21 12:13:10 crc kubenswrapper[4881]: E0121 12:13:10.396271 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 12:13:13 crc kubenswrapper[4881]: I0121 12:13:13.767906 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-cell1-galera-0" podUID="cd1973a5-773b-438b-aab7-709fb828324d" containerName="galera" probeResult="failure" output="command timed out" Jan 21 12:13:22 crc kubenswrapper[4881]: I0121 12:13:22.311869 4881 scope.go:117] "RemoveContainer" containerID="a0b600abfe841c17453a09b22987c9b1b3e4a3784f4461f5181d18cc06f69f4c" Jan 21 
12:13:22 crc kubenswrapper[4881]: E0121 12:13:22.313448 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 12:13:33 crc kubenswrapper[4881]: I0121 12:13:33.334150 4881 scope.go:117] "RemoveContainer" containerID="a0b600abfe841c17453a09b22987c9b1b3e4a3784f4461f5181d18cc06f69f4c" Jan 21 12:13:33 crc kubenswrapper[4881]: E0121 12:13:33.335679 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 12:13:44 crc kubenswrapper[4881]: I0121 12:13:44.312811 4881 scope.go:117] "RemoveContainer" containerID="a0b600abfe841c17453a09b22987c9b1b3e4a3784f4461f5181d18cc06f69f4c" Jan 21 12:13:44 crc kubenswrapper[4881]: E0121 12:13:44.313701 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 12:13:56 crc kubenswrapper[4881]: I0121 12:13:56.311705 4881 scope.go:117] "RemoveContainer" containerID="a0b600abfe841c17453a09b22987c9b1b3e4a3784f4461f5181d18cc06f69f4c" Jan 21 12:13:56 crc kubenswrapper[4881]: E0121 12:13:56.312494 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 12:14:11 crc kubenswrapper[4881]: I0121 12:14:11.311047 4881 scope.go:117] "RemoveContainer" containerID="a0b600abfe841c17453a09b22987c9b1b3e4a3784f4461f5181d18cc06f69f4c" Jan 21 12:14:11 crc kubenswrapper[4881]: E0121 12:14:11.311739 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 12:14:24 crc kubenswrapper[4881]: I0121 12:14:24.311945 4881 scope.go:117] "RemoveContainer" containerID="a0b600abfe841c17453a09b22987c9b1b3e4a3784f4461f5181d18cc06f69f4c" Jan 21 12:14:24 crc kubenswrapper[4881]: E0121 12:14:24.312879 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 12:14:37 crc kubenswrapper[4881]: I0121 12:14:37.311422 4881 scope.go:117] "RemoveContainer" containerID="a0b600abfe841c17453a09b22987c9b1b3e4a3784f4461f5181d18cc06f69f4c" Jan 21 12:14:37 crc kubenswrapper[4881]: E0121 12:14:37.312940 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 12:14:48 crc kubenswrapper[4881]: I0121 12:14:48.311430 4881 scope.go:117] "RemoveContainer" containerID="a0b600abfe841c17453a09b22987c9b1b3e4a3784f4461f5181d18cc06f69f4c" Jan 21 12:14:48 crc kubenswrapper[4881]: E0121 12:14:48.312278 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 12:14:59 crc kubenswrapper[4881]: I0121 12:14:59.313864 4881 scope.go:117] "RemoveContainer" containerID="a0b600abfe841c17453a09b22987c9b1b3e4a3784f4461f5181d18cc06f69f4c" Jan 21 12:14:59 crc kubenswrapper[4881]: E0121 12:14:59.314737 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 12:15:00 crc kubenswrapper[4881]: I0121 12:15:00.186895 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483295-8zv6c"] Jan 21 12:15:00 crc kubenswrapper[4881]: E0121 12:15:00.187671 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ca80118-375e-4587-af3d-453c7aef306d" containerName="extract-utilities" Jan 21 12:15:00 crc kubenswrapper[4881]: I0121 12:15:00.187700 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ca80118-375e-4587-af3d-453c7aef306d" containerName="extract-utilities" Jan 21 12:15:00 crc kubenswrapper[4881]: E0121 12:15:00.187714 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ca80118-375e-4587-af3d-453c7aef306d" containerName="registry-server" Jan 21 12:15:00 crc kubenswrapper[4881]: I0121 12:15:00.187720 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ca80118-375e-4587-af3d-453c7aef306d" containerName="registry-server" Jan 21 12:15:00 crc kubenswrapper[4881]: E0121 12:15:00.187742 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ae7c44c-9f78-4779-bff2-32f7e9246561" containerName="extract-utilities" Jan 21 12:15:00 crc 
kubenswrapper[4881]: I0121 12:15:00.187749 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ae7c44c-9f78-4779-bff2-32f7e9246561" containerName="extract-utilities" Jan 21 12:15:00 crc kubenswrapper[4881]: E0121 12:15:00.187771 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ca80118-375e-4587-af3d-453c7aef306d" containerName="extract-content" Jan 21 12:15:00 crc kubenswrapper[4881]: I0121 12:15:00.187777 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ca80118-375e-4587-af3d-453c7aef306d" containerName="extract-content" Jan 21 12:15:00 crc kubenswrapper[4881]: E0121 12:15:00.187786 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ae7c44c-9f78-4779-bff2-32f7e9246561" containerName="registry-server" Jan 21 12:15:00 crc kubenswrapper[4881]: I0121 12:15:00.187793 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ae7c44c-9f78-4779-bff2-32f7e9246561" containerName="registry-server" Jan 21 12:15:00 crc kubenswrapper[4881]: E0121 12:15:00.187833 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ae7c44c-9f78-4779-bff2-32f7e9246561" containerName="extract-content" Jan 21 12:15:00 crc kubenswrapper[4881]: I0121 12:15:00.187843 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ae7c44c-9f78-4779-bff2-32f7e9246561" containerName="extract-content" Jan 21 12:15:00 crc kubenswrapper[4881]: I0121 12:15:00.188136 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="1ca80118-375e-4587-af3d-453c7aef306d" containerName="registry-server" Jan 21 12:15:00 crc kubenswrapper[4881]: I0121 12:15:00.188158 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="9ae7c44c-9f78-4779-bff2-32f7e9246561" containerName="registry-server" Jan 21 12:15:00 crc kubenswrapper[4881]: I0121 12:15:00.189129 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483295-8zv6c" Jan 21 12:15:00 crc kubenswrapper[4881]: I0121 12:15:00.192453 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 21 12:15:00 crc kubenswrapper[4881]: I0121 12:15:00.192619 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 21 12:15:00 crc kubenswrapper[4881]: I0121 12:15:00.198936 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483295-8zv6c"] Jan 21 12:15:00 crc kubenswrapper[4881]: I0121 12:15:00.282306 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/22846423-24bd-4d85-b2da-a5c75401cd25-secret-volume\") pod \"collect-profiles-29483295-8zv6c\" (UID: \"22846423-24bd-4d85-b2da-a5c75401cd25\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483295-8zv6c" Jan 21 12:15:00 crc kubenswrapper[4881]: I0121 12:15:00.282418 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t82zd\" (UniqueName: \"kubernetes.io/projected/22846423-24bd-4d85-b2da-a5c75401cd25-kube-api-access-t82zd\") pod \"collect-profiles-29483295-8zv6c\" (UID: \"22846423-24bd-4d85-b2da-a5c75401cd25\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483295-8zv6c" Jan 21 12:15:00 crc kubenswrapper[4881]: I0121 12:15:00.282452 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/22846423-24bd-4d85-b2da-a5c75401cd25-config-volume\") pod \"collect-profiles-29483295-8zv6c\" (UID: \"22846423-24bd-4d85-b2da-a5c75401cd25\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483295-8zv6c" Jan 21 12:15:00 crc kubenswrapper[4881]: I0121 12:15:00.385753 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/22846423-24bd-4d85-b2da-a5c75401cd25-secret-volume\") pod \"collect-profiles-29483295-8zv6c\" (UID: \"22846423-24bd-4d85-b2da-a5c75401cd25\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483295-8zv6c" Jan 21 12:15:00 crc kubenswrapper[4881]: I0121 12:15:00.385938 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t82zd\" (UniqueName: \"kubernetes.io/projected/22846423-24bd-4d85-b2da-a5c75401cd25-kube-api-access-t82zd\") pod \"collect-profiles-29483295-8zv6c\" (UID: \"22846423-24bd-4d85-b2da-a5c75401cd25\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483295-8zv6c" Jan 21 12:15:00 crc kubenswrapper[4881]: I0121 12:15:00.386004 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/22846423-24bd-4d85-b2da-a5c75401cd25-config-volume\") pod \"collect-profiles-29483295-8zv6c\" (UID: \"22846423-24bd-4d85-b2da-a5c75401cd25\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483295-8zv6c" Jan 21 12:15:00 crc kubenswrapper[4881]: I0121 12:15:00.387356 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/22846423-24bd-4d85-b2da-a5c75401cd25-config-volume\") pod 
\"collect-profiles-29483295-8zv6c\" (UID: \"22846423-24bd-4d85-b2da-a5c75401cd25\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483295-8zv6c" Jan 21 12:15:00 crc kubenswrapper[4881]: I0121 12:15:00.394310 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/22846423-24bd-4d85-b2da-a5c75401cd25-secret-volume\") pod \"collect-profiles-29483295-8zv6c\" (UID: \"22846423-24bd-4d85-b2da-a5c75401cd25\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483295-8zv6c" Jan 21 12:15:00 crc kubenswrapper[4881]: I0121 12:15:00.406778 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t82zd\" (UniqueName: \"kubernetes.io/projected/22846423-24bd-4d85-b2da-a5c75401cd25-kube-api-access-t82zd\") pod \"collect-profiles-29483295-8zv6c\" (UID: \"22846423-24bd-4d85-b2da-a5c75401cd25\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483295-8zv6c" Jan 21 12:15:00 crc kubenswrapper[4881]: I0121 12:15:00.508577 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483295-8zv6c" Jan 21 12:15:01 crc kubenswrapper[4881]: I0121 12:15:01.032637 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483295-8zv6c"] Jan 21 12:15:01 crc kubenswrapper[4881]: I0121 12:15:01.190302 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483295-8zv6c" event={"ID":"22846423-24bd-4d85-b2da-a5c75401cd25","Type":"ContainerStarted","Data":"f965918bb02890baac237dc8df43e156a9095552fde727cfe31938539fdd3625"} Jan 21 12:15:02 crc kubenswrapper[4881]: I0121 12:15:02.203115 4881 generic.go:334] "Generic (PLEG): container finished" podID="22846423-24bd-4d85-b2da-a5c75401cd25" containerID="bf9af12b6f88ac7a2c2f3b75d58737d697a4cfe360d0edd4e874140a2c1b67eb" exitCode=0 Jan 21 12:15:02 crc kubenswrapper[4881]: I0121 12:15:02.203721 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483295-8zv6c" event={"ID":"22846423-24bd-4d85-b2da-a5c75401cd25","Type":"ContainerDied","Data":"bf9af12b6f88ac7a2c2f3b75d58737d697a4cfe360d0edd4e874140a2c1b67eb"} Jan 21 12:15:03 crc kubenswrapper[4881]: I0121 12:15:03.654969 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483295-8zv6c" Jan 21 12:15:03 crc kubenswrapper[4881]: I0121 12:15:03.660851 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t82zd\" (UniqueName: \"kubernetes.io/projected/22846423-24bd-4d85-b2da-a5c75401cd25-kube-api-access-t82zd\") pod \"22846423-24bd-4d85-b2da-a5c75401cd25\" (UID: \"22846423-24bd-4d85-b2da-a5c75401cd25\") " Jan 21 12:15:03 crc kubenswrapper[4881]: I0121 12:15:03.660894 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/22846423-24bd-4d85-b2da-a5c75401cd25-config-volume\") pod \"22846423-24bd-4d85-b2da-a5c75401cd25\" (UID: \"22846423-24bd-4d85-b2da-a5c75401cd25\") " Jan 21 12:15:03 crc kubenswrapper[4881]: I0121 12:15:03.660953 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/22846423-24bd-4d85-b2da-a5c75401cd25-secret-volume\") pod \"22846423-24bd-4d85-b2da-a5c75401cd25\" (UID: \"22846423-24bd-4d85-b2da-a5c75401cd25\") " Jan 21 12:15:03 crc kubenswrapper[4881]: I0121 12:15:03.661837 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22846423-24bd-4d85-b2da-a5c75401cd25-config-volume" (OuterVolumeSpecName: "config-volume") pod "22846423-24bd-4d85-b2da-a5c75401cd25" (UID: "22846423-24bd-4d85-b2da-a5c75401cd25"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 12:15:03 crc kubenswrapper[4881]: I0121 12:15:03.668662 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22846423-24bd-4d85-b2da-a5c75401cd25-kube-api-access-t82zd" (OuterVolumeSpecName: "kube-api-access-t82zd") pod "22846423-24bd-4d85-b2da-a5c75401cd25" (UID: "22846423-24bd-4d85-b2da-a5c75401cd25"). InnerVolumeSpecName "kube-api-access-t82zd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 12:15:03 crc kubenswrapper[4881]: I0121 12:15:03.671932 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22846423-24bd-4d85-b2da-a5c75401cd25-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "22846423-24bd-4d85-b2da-a5c75401cd25" (UID: "22846423-24bd-4d85-b2da-a5c75401cd25"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 12:15:03 crc kubenswrapper[4881]: I0121 12:15:03.763654 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t82zd\" (UniqueName: \"kubernetes.io/projected/22846423-24bd-4d85-b2da-a5c75401cd25-kube-api-access-t82zd\") on node \"crc\" DevicePath \"\"" Jan 21 12:15:03 crc kubenswrapper[4881]: I0121 12:15:03.763701 4881 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/22846423-24bd-4d85-b2da-a5c75401cd25-config-volume\") on node \"crc\" DevicePath \"\"" Jan 21 12:15:03 crc kubenswrapper[4881]: I0121 12:15:03.763715 4881 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/22846423-24bd-4d85-b2da-a5c75401cd25-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 21 12:15:04 crc kubenswrapper[4881]: I0121 12:15:04.228909 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483295-8zv6c" event={"ID":"22846423-24bd-4d85-b2da-a5c75401cd25","Type":"ContainerDied","Data":"f965918bb02890baac237dc8df43e156a9095552fde727cfe31938539fdd3625"} Jan 21 12:15:04 crc kubenswrapper[4881]: I0121 12:15:04.229279 4881 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f965918bb02890baac237dc8df43e156a9095552fde727cfe31938539fdd3625" Jan 21 12:15:04 crc kubenswrapper[4881]: I0121 12:15:04.229050 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483295-8zv6c" Jan 21 12:15:04 crc kubenswrapper[4881]: I0121 12:15:04.766243 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483250-hpz5k"] Jan 21 12:15:04 crc kubenswrapper[4881]: I0121 12:15:04.779006 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483250-hpz5k"] Jan 21 12:15:05 crc kubenswrapper[4881]: I0121 12:15:05.333177 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0563880c-563e-4cc5-93a0-c2af095788cb" path="/var/lib/kubelet/pods/0563880c-563e-4cc5-93a0-c2af095788cb/volumes" Jan 21 12:15:11 crc kubenswrapper[4881]: I0121 12:15:11.311425 4881 scope.go:117] "RemoveContainer" containerID="a0b600abfe841c17453a09b22987c9b1b3e4a3784f4461f5181d18cc06f69f4c" Jan 21 12:15:11 crc kubenswrapper[4881]: E0121 12:15:11.312264 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 12:15:24 crc kubenswrapper[4881]: I0121 12:15:24.311633 4881 scope.go:117] "RemoveContainer" containerID="a0b600abfe841c17453a09b22987c9b1b3e4a3784f4461f5181d18cc06f69f4c" Jan 21 12:15:24 crc kubenswrapper[4881]: E0121 12:15:24.312906 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 12:15:33 crc kubenswrapper[4881]: I0121 12:15:33.042566 4881 scope.go:117] "RemoveContainer" containerID="c97b0fba984ac7ac90aa9867ceabf4a4b1015c378fef6bf95655dcf59a8cdfd7" Jan 21 12:15:36 crc kubenswrapper[4881]: I0121 12:15:36.310618 4881 scope.go:117] "RemoveContainer" containerID="a0b600abfe841c17453a09b22987c9b1b3e4a3784f4461f5181d18cc06f69f4c" Jan 21 12:15:36 crc kubenswrapper[4881]: E0121 12:15:36.311491 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 12:15:47 crc kubenswrapper[4881]: I0121 12:15:47.311182 4881 scope.go:117] "RemoveContainer" containerID="a0b600abfe841c17453a09b22987c9b1b3e4a3784f4461f5181d18cc06f69f4c" Jan 21 12:15:47 crc kubenswrapper[4881]: E0121 12:15:47.312047 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 12:16:01 crc kubenswrapper[4881]: I0121 12:16:01.311486 4881 scope.go:117] "RemoveContainer" containerID="a0b600abfe841c17453a09b22987c9b1b3e4a3784f4461f5181d18cc06f69f4c" Jan 21 12:16:01 crc kubenswrapper[4881]: E0121 12:16:01.313185 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 12:16:13 crc kubenswrapper[4881]: I0121 12:16:13.317573 4881 scope.go:117] "RemoveContainer" containerID="a0b600abfe841c17453a09b22987c9b1b3e4a3784f4461f5181d18cc06f69f4c" Jan 21 12:16:13 crc kubenswrapper[4881]: E0121 12:16:13.318335 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 12:16:24 crc kubenswrapper[4881]: I0121 12:16:24.321498 4881 scope.go:117] "RemoveContainer" containerID="a0b600abfe841c17453a09b22987c9b1b3e4a3784f4461f5181d18cc06f69f4c" Jan 21 12:16:24 crc kubenswrapper[4881]: E0121 12:16:24.323769 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 12:16:37 crc kubenswrapper[4881]: I0121 12:16:37.310927 4881 scope.go:117] "RemoveContainer" containerID="a0b600abfe841c17453a09b22987c9b1b3e4a3784f4461f5181d18cc06f69f4c" Jan 21 12:16:37 crc kubenswrapper[4881]: E0121 12:16:37.314090 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 12:16:51 crc kubenswrapper[4881]: I0121 12:16:51.312064 4881 scope.go:117] "RemoveContainer" containerID="a0b600abfe841c17453a09b22987c9b1b3e4a3784f4461f5181d18cc06f69f4c" Jan 21 12:16:51 crc kubenswrapper[4881]: E0121 12:16:51.312917 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 12:17:02 crc kubenswrapper[4881]: I0121 12:17:02.946199 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-cpvbs"] Jan 21 12:17:02 crc kubenswrapper[4881]: E0121 12:17:02.949535 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="22846423-24bd-4d85-b2da-a5c75401cd25" containerName="collect-profiles" Jan 21 12:17:02 crc kubenswrapper[4881]: I0121 12:17:02.949561 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="22846423-24bd-4d85-b2da-a5c75401cd25" containerName="collect-profiles" Jan 21 12:17:02 crc kubenswrapper[4881]: I0121 12:17:02.949841 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="22846423-24bd-4d85-b2da-a5c75401cd25" containerName="collect-profiles" Jan 21 12:17:02 crc kubenswrapper[4881]: I0121 12:17:02.951811 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-cpvbs" Jan 21 12:17:02 crc kubenswrapper[4881]: I0121 12:17:02.989458 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-cpvbs"] Jan 21 12:17:02 crc kubenswrapper[4881]: I0121 12:17:02.996200 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/02f6c733-139c-44ae-8b73-a6e3057768be-catalog-content\") pod \"certified-operators-cpvbs\" (UID: \"02f6c733-139c-44ae-8b73-a6e3057768be\") " pod="openshift-marketplace/certified-operators-cpvbs" Jan 21 12:17:02 crc kubenswrapper[4881]: I0121 12:17:02.996299 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/02f6c733-139c-44ae-8b73-a6e3057768be-utilities\") pod \"certified-operators-cpvbs\" (UID: \"02f6c733-139c-44ae-8b73-a6e3057768be\") " pod="openshift-marketplace/certified-operators-cpvbs" Jan 21 12:17:02 crc kubenswrapper[4881]: I0121 12:17:02.996369 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v98rg\" (UniqueName: \"kubernetes.io/projected/02f6c733-139c-44ae-8b73-a6e3057768be-kube-api-access-v98rg\") pod \"certified-operators-cpvbs\" (UID: \"02f6c733-139c-44ae-8b73-a6e3057768be\") " pod="openshift-marketplace/certified-operators-cpvbs" Jan 21 12:17:03 crc kubenswrapper[4881]: I0121 12:17:03.098646 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/02f6c733-139c-44ae-8b73-a6e3057768be-catalog-content\") pod \"certified-operators-cpvbs\" (UID: \"02f6c733-139c-44ae-8b73-a6e3057768be\") " pod="openshift-marketplace/certified-operators-cpvbs" Jan 21 12:17:03 crc kubenswrapper[4881]: I0121 12:17:03.099092 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/02f6c733-139c-44ae-8b73-a6e3057768be-utilities\") pod \"certified-operators-cpvbs\" (UID: \"02f6c733-139c-44ae-8b73-a6e3057768be\") " pod="openshift-marketplace/certified-operators-cpvbs" Jan 21 12:17:03 crc kubenswrapper[4881]: I0121 12:17:03.099268 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v98rg\" (UniqueName: \"kubernetes.io/projected/02f6c733-139c-44ae-8b73-a6e3057768be-kube-api-access-v98rg\") pod \"certified-operators-cpvbs\" (UID: \"02f6c733-139c-44ae-8b73-a6e3057768be\") " pod="openshift-marketplace/certified-operators-cpvbs" Jan 21 12:17:03 crc kubenswrapper[4881]: I0121 12:17:03.099606 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/02f6c733-139c-44ae-8b73-a6e3057768be-catalog-content\") pod \"certified-operators-cpvbs\" (UID: \"02f6c733-139c-44ae-8b73-a6e3057768be\") " pod="openshift-marketplace/certified-operators-cpvbs" Jan 21 12:17:03 crc kubenswrapper[4881]: I0121 12:17:03.099663 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/02f6c733-139c-44ae-8b73-a6e3057768be-utilities\") pod \"certified-operators-cpvbs\" (UID: \"02f6c733-139c-44ae-8b73-a6e3057768be\") " pod="openshift-marketplace/certified-operators-cpvbs" Jan 21 12:17:03 crc kubenswrapper[4881]: I0121 12:17:03.118062 4881 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-v98rg\" (UniqueName: \"kubernetes.io/projected/02f6c733-139c-44ae-8b73-a6e3057768be-kube-api-access-v98rg\") pod \"certified-operators-cpvbs\" (UID: \"02f6c733-139c-44ae-8b73-a6e3057768be\") " pod="openshift-marketplace/certified-operators-cpvbs" Jan 21 12:17:03 crc kubenswrapper[4881]: I0121 12:17:03.306657 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-cpvbs" Jan 21 12:17:03 crc kubenswrapper[4881]: I0121 12:17:03.348743 4881 scope.go:117] "RemoveContainer" containerID="a0b600abfe841c17453a09b22987c9b1b3e4a3784f4461f5181d18cc06f69f4c" Jan 21 12:17:03 crc kubenswrapper[4881]: E0121 12:17:03.350121 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 12:17:03 crc kubenswrapper[4881]: I0121 12:17:03.943893 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-cpvbs"] Jan 21 12:17:04 crc kubenswrapper[4881]: I0121 12:17:04.653521 4881 generic.go:334] "Generic (PLEG): container finished" podID="02f6c733-139c-44ae-8b73-a6e3057768be" containerID="a0b87011efc9f857e9c9b7e236d8b6a82ba7e871612d5ca1d16d6da3cb3149b9" exitCode=0 Jan 21 12:17:04 crc kubenswrapper[4881]: I0121 12:17:04.653575 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cpvbs" event={"ID":"02f6c733-139c-44ae-8b73-a6e3057768be","Type":"ContainerDied","Data":"a0b87011efc9f857e9c9b7e236d8b6a82ba7e871612d5ca1d16d6da3cb3149b9"} Jan 21 12:17:04 crc kubenswrapper[4881]: I0121 12:17:04.653607 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cpvbs" event={"ID":"02f6c733-139c-44ae-8b73-a6e3057768be","Type":"ContainerStarted","Data":"60e46a66d15f4fc424d916da6f3a3b1d0bc943c1977338a7d71a92b9ebcd7e0f"} Jan 21 12:17:04 crc kubenswrapper[4881]: I0121 12:17:04.656121 4881 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 12:17:06 crc kubenswrapper[4881]: I0121 12:17:06.677615 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cpvbs" event={"ID":"02f6c733-139c-44ae-8b73-a6e3057768be","Type":"ContainerStarted","Data":"ea75680997b7ad974c558c644f3582b50eefc713815cb4d9b60e64b010e20743"} Jan 21 12:17:07 crc kubenswrapper[4881]: I0121 12:17:07.688918 4881 generic.go:334] "Generic (PLEG): container finished" podID="02f6c733-139c-44ae-8b73-a6e3057768be" containerID="ea75680997b7ad974c558c644f3582b50eefc713815cb4d9b60e64b010e20743" exitCode=0 Jan 21 12:17:07 crc kubenswrapper[4881]: I0121 12:17:07.689004 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cpvbs" event={"ID":"02f6c733-139c-44ae-8b73-a6e3057768be","Type":"ContainerDied","Data":"ea75680997b7ad974c558c644f3582b50eefc713815cb4d9b60e64b010e20743"} Jan 21 12:17:10 crc kubenswrapper[4881]: I0121 12:17:10.723110 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cpvbs" 
event={"ID":"02f6c733-139c-44ae-8b73-a6e3057768be","Type":"ContainerStarted","Data":"efa4780df099e6fc0b25f85730952cac3a1da5dce78d5144ac0a9df0692a392d"} Jan 21 12:17:10 crc kubenswrapper[4881]: I0121 12:17:10.757995 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-cpvbs" podStartSLOduration=3.56108408 podStartE2EDuration="8.757903736s" podCreationTimestamp="2026-01-21 12:17:02 +0000 UTC" firstStartedPulling="2026-01-21 12:17:04.655748062 +0000 UTC m=+4811.915704531" lastFinishedPulling="2026-01-21 12:17:09.852567708 +0000 UTC m=+4817.112524187" observedRunningTime="2026-01-21 12:17:10.741452829 +0000 UTC m=+4818.001409308" watchObservedRunningTime="2026-01-21 12:17:10.757903736 +0000 UTC m=+4818.017860225" Jan 21 12:17:13 crc kubenswrapper[4881]: I0121 12:17:13.308213 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-cpvbs" Jan 21 12:17:13 crc kubenswrapper[4881]: I0121 12:17:13.308710 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-cpvbs" Jan 21 12:17:13 crc kubenswrapper[4881]: I0121 12:17:13.426563 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-cpvbs" Jan 21 12:17:14 crc kubenswrapper[4881]: I0121 12:17:14.310907 4881 scope.go:117] "RemoveContainer" containerID="a0b600abfe841c17453a09b22987c9b1b3e4a3784f4461f5181d18cc06f69f4c" Jan 21 12:17:14 crc kubenswrapper[4881]: E0121 12:17:14.312008 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 12:17:23 crc kubenswrapper[4881]: I0121 12:17:23.374421 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-cpvbs" Jan 21 12:17:23 crc kubenswrapper[4881]: I0121 12:17:23.429471 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-cpvbs"] Jan 21 12:17:23 crc kubenswrapper[4881]: I0121 12:17:23.859825 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-cpvbs" podUID="02f6c733-139c-44ae-8b73-a6e3057768be" containerName="registry-server" containerID="cri-o://efa4780df099e6fc0b25f85730952cac3a1da5dce78d5144ac0a9df0692a392d" gracePeriod=2 Jan 21 12:17:24 crc kubenswrapper[4881]: I0121 12:17:24.880998 4881 generic.go:334] "Generic (PLEG): container finished" podID="02f6c733-139c-44ae-8b73-a6e3057768be" containerID="efa4780df099e6fc0b25f85730952cac3a1da5dce78d5144ac0a9df0692a392d" exitCode=0 Jan 21 12:17:24 crc kubenswrapper[4881]: I0121 12:17:24.881244 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cpvbs" event={"ID":"02f6c733-139c-44ae-8b73-a6e3057768be","Type":"ContainerDied","Data":"efa4780df099e6fc0b25f85730952cac3a1da5dce78d5144ac0a9df0692a392d"} Jan 21 12:17:25 crc kubenswrapper[4881]: I0121 12:17:25.311007 4881 scope.go:117] "RemoveContainer" containerID="a0b600abfe841c17453a09b22987c9b1b3e4a3784f4461f5181d18cc06f69f4c" Jan 21 12:17:25 crc 
kubenswrapper[4881]: E0121 12:17:25.311580 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 12:17:26 crc kubenswrapper[4881]: I0121 12:17:26.512977 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-cpvbs" Jan 21 12:17:26 crc kubenswrapper[4881]: I0121 12:17:26.561859 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/02f6c733-139c-44ae-8b73-a6e3057768be-utilities\") pod \"02f6c733-139c-44ae-8b73-a6e3057768be\" (UID: \"02f6c733-139c-44ae-8b73-a6e3057768be\") " Jan 21 12:17:26 crc kubenswrapper[4881]: I0121 12:17:26.561930 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/02f6c733-139c-44ae-8b73-a6e3057768be-catalog-content\") pod \"02f6c733-139c-44ae-8b73-a6e3057768be\" (UID: \"02f6c733-139c-44ae-8b73-a6e3057768be\") " Jan 21 12:17:26 crc kubenswrapper[4881]: I0121 12:17:26.562052 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v98rg\" (UniqueName: \"kubernetes.io/projected/02f6c733-139c-44ae-8b73-a6e3057768be-kube-api-access-v98rg\") pod \"02f6c733-139c-44ae-8b73-a6e3057768be\" (UID: \"02f6c733-139c-44ae-8b73-a6e3057768be\") " Jan 21 12:17:26 crc kubenswrapper[4881]: I0121 12:17:26.563047 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/02f6c733-139c-44ae-8b73-a6e3057768be-utilities" (OuterVolumeSpecName: "utilities") pod "02f6c733-139c-44ae-8b73-a6e3057768be" (UID: "02f6c733-139c-44ae-8b73-a6e3057768be"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 12:17:26 crc kubenswrapper[4881]: I0121 12:17:26.575092 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/02f6c733-139c-44ae-8b73-a6e3057768be-kube-api-access-v98rg" (OuterVolumeSpecName: "kube-api-access-v98rg") pod "02f6c733-139c-44ae-8b73-a6e3057768be" (UID: "02f6c733-139c-44ae-8b73-a6e3057768be"). InnerVolumeSpecName "kube-api-access-v98rg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 12:17:26 crc kubenswrapper[4881]: I0121 12:17:26.609276 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/02f6c733-139c-44ae-8b73-a6e3057768be-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "02f6c733-139c-44ae-8b73-a6e3057768be" (UID: "02f6c733-139c-44ae-8b73-a6e3057768be"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 12:17:26 crc kubenswrapper[4881]: I0121 12:17:26.664195 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v98rg\" (UniqueName: \"kubernetes.io/projected/02f6c733-139c-44ae-8b73-a6e3057768be-kube-api-access-v98rg\") on node \"crc\" DevicePath \"\"" Jan 21 12:17:26 crc kubenswrapper[4881]: I0121 12:17:26.664235 4881 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/02f6c733-139c-44ae-8b73-a6e3057768be-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 12:17:26 crc kubenswrapper[4881]: I0121 12:17:26.664245 4881 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/02f6c733-139c-44ae-8b73-a6e3057768be-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 12:17:26 crc kubenswrapper[4881]: I0121 12:17:26.903031 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cpvbs" event={"ID":"02f6c733-139c-44ae-8b73-a6e3057768be","Type":"ContainerDied","Data":"60e46a66d15f4fc424d916da6f3a3b1d0bc943c1977338a7d71a92b9ebcd7e0f"} Jan 21 12:17:26 crc kubenswrapper[4881]: I0121 12:17:26.903098 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-cpvbs" Jan 21 12:17:26 crc kubenswrapper[4881]: I0121 12:17:26.903121 4881 scope.go:117] "RemoveContainer" containerID="efa4780df099e6fc0b25f85730952cac3a1da5dce78d5144ac0a9df0692a392d" Jan 21 12:17:26 crc kubenswrapper[4881]: I0121 12:17:26.933829 4881 scope.go:117] "RemoveContainer" containerID="ea75680997b7ad974c558c644f3582b50eefc713815cb4d9b60e64b010e20743" Jan 21 12:17:26 crc kubenswrapper[4881]: I0121 12:17:26.963858 4881 scope.go:117] "RemoveContainer" containerID="a0b87011efc9f857e9c9b7e236d8b6a82ba7e871612d5ca1d16d6da3cb3149b9" Jan 21 12:17:26 crc kubenswrapper[4881]: I0121 12:17:26.972106 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-cpvbs"] Jan 21 12:17:26 crc kubenswrapper[4881]: I0121 12:17:26.989944 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-cpvbs"] Jan 21 12:17:27 crc kubenswrapper[4881]: I0121 12:17:27.324711 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="02f6c733-139c-44ae-8b73-a6e3057768be" path="/var/lib/kubelet/pods/02f6c733-139c-44ae-8b73-a6e3057768be/volumes" Jan 21 12:17:38 crc kubenswrapper[4881]: I0121 12:17:38.311050 4881 scope.go:117] "RemoveContainer" containerID="a0b600abfe841c17453a09b22987c9b1b3e4a3784f4461f5181d18cc06f69f4c" Jan 21 12:17:39 crc kubenswrapper[4881]: I0121 12:17:39.109155 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" event={"ID":"3687b313-1df2-4274-80db-8c758b51bf2d","Type":"ContainerStarted","Data":"51f7deb68e0f4978c7b2866156b4751c1ca416f1a21d198c62277ed590bf5923"} Jan 21 12:19:29 crc kubenswrapper[4881]: I0121 12:19:29.424566 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-gmj66"] Jan 21 12:19:29 crc kubenswrapper[4881]: E0121 12:19:29.425724 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02f6c733-139c-44ae-8b73-a6e3057768be" containerName="extract-utilities" Jan 21 12:19:29 crc kubenswrapper[4881]: I0121 12:19:29.425744 4881 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="02f6c733-139c-44ae-8b73-a6e3057768be" containerName="extract-utilities" Jan 21 12:19:29 crc kubenswrapper[4881]: E0121 12:19:29.425756 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02f6c733-139c-44ae-8b73-a6e3057768be" containerName="registry-server" Jan 21 12:19:29 crc kubenswrapper[4881]: I0121 12:19:29.425764 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="02f6c733-139c-44ae-8b73-a6e3057768be" containerName="registry-server" Jan 21 12:19:29 crc kubenswrapper[4881]: E0121 12:19:29.425781 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02f6c733-139c-44ae-8b73-a6e3057768be" containerName="extract-content" Jan 21 12:19:29 crc kubenswrapper[4881]: I0121 12:19:29.425814 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="02f6c733-139c-44ae-8b73-a6e3057768be" containerName="extract-content" Jan 21 12:19:29 crc kubenswrapper[4881]: I0121 12:19:29.426075 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="02f6c733-139c-44ae-8b73-a6e3057768be" containerName="registry-server" Jan 21 12:19:29 crc kubenswrapper[4881]: I0121 12:19:29.427596 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gmj66" Jan 21 12:19:29 crc kubenswrapper[4881]: I0121 12:19:29.442687 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-gmj66"] Jan 21 12:19:29 crc kubenswrapper[4881]: I0121 12:19:29.558837 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5f1e0f74-1d2a-4465-8563-fbe80d7c3eae-utilities\") pod \"community-operators-gmj66\" (UID: \"5f1e0f74-1d2a-4465-8563-fbe80d7c3eae\") " pod="openshift-marketplace/community-operators-gmj66" Jan 21 12:19:29 crc kubenswrapper[4881]: I0121 12:19:29.559050 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5f1e0f74-1d2a-4465-8563-fbe80d7c3eae-catalog-content\") pod \"community-operators-gmj66\" (UID: \"5f1e0f74-1d2a-4465-8563-fbe80d7c3eae\") " pod="openshift-marketplace/community-operators-gmj66" Jan 21 12:19:29 crc kubenswrapper[4881]: I0121 12:19:29.559396 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8knlw\" (UniqueName: \"kubernetes.io/projected/5f1e0f74-1d2a-4465-8563-fbe80d7c3eae-kube-api-access-8knlw\") pod \"community-operators-gmj66\" (UID: \"5f1e0f74-1d2a-4465-8563-fbe80d7c3eae\") " pod="openshift-marketplace/community-operators-gmj66" Jan 21 12:19:29 crc kubenswrapper[4881]: I0121 12:19:29.662850 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5f1e0f74-1d2a-4465-8563-fbe80d7c3eae-catalog-content\") pod \"community-operators-gmj66\" (UID: \"5f1e0f74-1d2a-4465-8563-fbe80d7c3eae\") " pod="openshift-marketplace/community-operators-gmj66" Jan 21 12:19:29 crc kubenswrapper[4881]: I0121 12:19:29.663267 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8knlw\" (UniqueName: \"kubernetes.io/projected/5f1e0f74-1d2a-4465-8563-fbe80d7c3eae-kube-api-access-8knlw\") pod \"community-operators-gmj66\" (UID: \"5f1e0f74-1d2a-4465-8563-fbe80d7c3eae\") " pod="openshift-marketplace/community-operators-gmj66" Jan 21 12:19:29 crc kubenswrapper[4881]: I0121 
12:19:29.663519 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5f1e0f74-1d2a-4465-8563-fbe80d7c3eae-utilities\") pod \"community-operators-gmj66\" (UID: \"5f1e0f74-1d2a-4465-8563-fbe80d7c3eae\") " pod="openshift-marketplace/community-operators-gmj66" Jan 21 12:19:29 crc kubenswrapper[4881]: I0121 12:19:29.663518 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5f1e0f74-1d2a-4465-8563-fbe80d7c3eae-catalog-content\") pod \"community-operators-gmj66\" (UID: \"5f1e0f74-1d2a-4465-8563-fbe80d7c3eae\") " pod="openshift-marketplace/community-operators-gmj66" Jan 21 12:19:29 crc kubenswrapper[4881]: I0121 12:19:29.666110 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5f1e0f74-1d2a-4465-8563-fbe80d7c3eae-utilities\") pod \"community-operators-gmj66\" (UID: \"5f1e0f74-1d2a-4465-8563-fbe80d7c3eae\") " pod="openshift-marketplace/community-operators-gmj66" Jan 21 12:19:29 crc kubenswrapper[4881]: I0121 12:19:29.687322 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8knlw\" (UniqueName: \"kubernetes.io/projected/5f1e0f74-1d2a-4465-8563-fbe80d7c3eae-kube-api-access-8knlw\") pod \"community-operators-gmj66\" (UID: \"5f1e0f74-1d2a-4465-8563-fbe80d7c3eae\") " pod="openshift-marketplace/community-operators-gmj66" Jan 21 12:19:29 crc kubenswrapper[4881]: I0121 12:19:29.745807 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gmj66" Jan 21 12:19:30 crc kubenswrapper[4881]: I0121 12:19:30.274557 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-gmj66"] Jan 21 12:19:30 crc kubenswrapper[4881]: I0121 12:19:30.940579 4881 generic.go:334] "Generic (PLEG): container finished" podID="5f1e0f74-1d2a-4465-8563-fbe80d7c3eae" containerID="140c746fb595bcdc6444b28c06408889b47367d2f25c5808c0a8fcdbed1f2ac9" exitCode=0 Jan 21 12:19:30 crc kubenswrapper[4881]: I0121 12:19:30.940667 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gmj66" event={"ID":"5f1e0f74-1d2a-4465-8563-fbe80d7c3eae","Type":"ContainerDied","Data":"140c746fb595bcdc6444b28c06408889b47367d2f25c5808c0a8fcdbed1f2ac9"} Jan 21 12:19:30 crc kubenswrapper[4881]: I0121 12:19:30.941225 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gmj66" event={"ID":"5f1e0f74-1d2a-4465-8563-fbe80d7c3eae","Type":"ContainerStarted","Data":"00f9fc65f13c846fdde5e4ff3376ce54ecbfd5bbdaba0e6b34fd2b171a2ee7ea"} Jan 21 12:19:32 crc kubenswrapper[4881]: I0121 12:19:32.975302 4881 generic.go:334] "Generic (PLEG): container finished" podID="5f1e0f74-1d2a-4465-8563-fbe80d7c3eae" containerID="0a34300fe7fd36429f390a68e435ba2f8b3b17330d25fcff261987924f6d2dd6" exitCode=0 Jan 21 12:19:32 crc kubenswrapper[4881]: I0121 12:19:32.975493 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gmj66" event={"ID":"5f1e0f74-1d2a-4465-8563-fbe80d7c3eae","Type":"ContainerDied","Data":"0a34300fe7fd36429f390a68e435ba2f8b3b17330d25fcff261987924f6d2dd6"} Jan 21 12:19:33 crc kubenswrapper[4881]: I0121 12:19:33.987304 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gmj66" 
event={"ID":"5f1e0f74-1d2a-4465-8563-fbe80d7c3eae","Type":"ContainerStarted","Data":"f50bffc57c220ba8a1cb6602165d9dfff61080bfaa79bc9002dd57293d28f35a"} Jan 21 12:19:34 crc kubenswrapper[4881]: I0121 12:19:34.016143 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-gmj66" podStartSLOduration=2.558955824 podStartE2EDuration="5.016120888s" podCreationTimestamp="2026-01-21 12:19:29 +0000 UTC" firstStartedPulling="2026-01-21 12:19:30.943031362 +0000 UTC m=+4958.202987831" lastFinishedPulling="2026-01-21 12:19:33.400196426 +0000 UTC m=+4960.660152895" observedRunningTime="2026-01-21 12:19:34.003939799 +0000 UTC m=+4961.263896268" watchObservedRunningTime="2026-01-21 12:19:34.016120888 +0000 UTC m=+4961.276077357" Jan 21 12:19:39 crc kubenswrapper[4881]: I0121 12:19:39.746605 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-gmj66" Jan 21 12:19:39 crc kubenswrapper[4881]: I0121 12:19:39.747162 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-gmj66" Jan 21 12:19:39 crc kubenswrapper[4881]: I0121 12:19:39.797205 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-gmj66" Jan 21 12:19:40 crc kubenswrapper[4881]: I0121 12:19:40.101334 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-gmj66" Jan 21 12:19:40 crc kubenswrapper[4881]: I0121 12:19:40.160700 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-gmj66"] Jan 21 12:19:42 crc kubenswrapper[4881]: I0121 12:19:42.058225 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-gmj66" podUID="5f1e0f74-1d2a-4465-8563-fbe80d7c3eae" containerName="registry-server" containerID="cri-o://f50bffc57c220ba8a1cb6602165d9dfff61080bfaa79bc9002dd57293d28f35a" gracePeriod=2 Jan 21 12:19:42 crc kubenswrapper[4881]: I0121 12:19:42.614002 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-gmj66" Jan 21 12:19:42 crc kubenswrapper[4881]: I0121 12:19:42.783903 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8knlw\" (UniqueName: \"kubernetes.io/projected/5f1e0f74-1d2a-4465-8563-fbe80d7c3eae-kube-api-access-8knlw\") pod \"5f1e0f74-1d2a-4465-8563-fbe80d7c3eae\" (UID: \"5f1e0f74-1d2a-4465-8563-fbe80d7c3eae\") " Jan 21 12:19:42 crc kubenswrapper[4881]: I0121 12:19:42.784002 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5f1e0f74-1d2a-4465-8563-fbe80d7c3eae-catalog-content\") pod \"5f1e0f74-1d2a-4465-8563-fbe80d7c3eae\" (UID: \"5f1e0f74-1d2a-4465-8563-fbe80d7c3eae\") " Jan 21 12:19:42 crc kubenswrapper[4881]: I0121 12:19:42.784292 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5f1e0f74-1d2a-4465-8563-fbe80d7c3eae-utilities\") pod \"5f1e0f74-1d2a-4465-8563-fbe80d7c3eae\" (UID: \"5f1e0f74-1d2a-4465-8563-fbe80d7c3eae\") " Jan 21 12:19:42 crc kubenswrapper[4881]: I0121 12:19:42.785718 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5f1e0f74-1d2a-4465-8563-fbe80d7c3eae-utilities" (OuterVolumeSpecName: "utilities") pod "5f1e0f74-1d2a-4465-8563-fbe80d7c3eae" (UID: "5f1e0f74-1d2a-4465-8563-fbe80d7c3eae"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 12:19:42 crc kubenswrapper[4881]: I0121 12:19:42.887594 4881 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5f1e0f74-1d2a-4465-8563-fbe80d7c3eae-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 12:19:42 crc kubenswrapper[4881]: I0121 12:19:42.924046 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5f1e0f74-1d2a-4465-8563-fbe80d7c3eae-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5f1e0f74-1d2a-4465-8563-fbe80d7c3eae" (UID: "5f1e0f74-1d2a-4465-8563-fbe80d7c3eae"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 12:19:42 crc kubenswrapper[4881]: I0121 12:19:42.991187 4881 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5f1e0f74-1d2a-4465-8563-fbe80d7c3eae-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 12:19:43 crc kubenswrapper[4881]: I0121 12:19:43.073211 4881 generic.go:334] "Generic (PLEG): container finished" podID="5f1e0f74-1d2a-4465-8563-fbe80d7c3eae" containerID="f50bffc57c220ba8a1cb6602165d9dfff61080bfaa79bc9002dd57293d28f35a" exitCode=0 Jan 21 12:19:43 crc kubenswrapper[4881]: I0121 12:19:43.073268 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gmj66" event={"ID":"5f1e0f74-1d2a-4465-8563-fbe80d7c3eae","Type":"ContainerDied","Data":"f50bffc57c220ba8a1cb6602165d9dfff61080bfaa79bc9002dd57293d28f35a"} Jan 21 12:19:43 crc kubenswrapper[4881]: I0121 12:19:43.073307 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gmj66" event={"ID":"5f1e0f74-1d2a-4465-8563-fbe80d7c3eae","Type":"ContainerDied","Data":"00f9fc65f13c846fdde5e4ff3376ce54ecbfd5bbdaba0e6b34fd2b171a2ee7ea"} Jan 21 12:19:43 crc kubenswrapper[4881]: I0121 12:19:43.073334 4881 scope.go:117] "RemoveContainer" containerID="f50bffc57c220ba8a1cb6602165d9dfff61080bfaa79bc9002dd57293d28f35a" Jan 21 12:19:43 crc kubenswrapper[4881]: I0121 12:19:43.073527 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gmj66" Jan 21 12:19:43 crc kubenswrapper[4881]: I0121 12:19:43.108687 4881 scope.go:117] "RemoveContainer" containerID="0a34300fe7fd36429f390a68e435ba2f8b3b17330d25fcff261987924f6d2dd6" Jan 21 12:19:43 crc kubenswrapper[4881]: I0121 12:19:43.271192 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5f1e0f74-1d2a-4465-8563-fbe80d7c3eae-kube-api-access-8knlw" (OuterVolumeSpecName: "kube-api-access-8knlw") pod "5f1e0f74-1d2a-4465-8563-fbe80d7c3eae" (UID: "5f1e0f74-1d2a-4465-8563-fbe80d7c3eae"). InnerVolumeSpecName "kube-api-access-8knlw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 12:19:43 crc kubenswrapper[4881]: I0121 12:19:43.290131 4881 scope.go:117] "RemoveContainer" containerID="140c746fb595bcdc6444b28c06408889b47367d2f25c5808c0a8fcdbed1f2ac9" Jan 21 12:19:43 crc kubenswrapper[4881]: I0121 12:19:43.298753 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8knlw\" (UniqueName: \"kubernetes.io/projected/5f1e0f74-1d2a-4465-8563-fbe80d7c3eae-kube-api-access-8knlw\") on node \"crc\" DevicePath \"\"" Jan 21 12:19:43 crc kubenswrapper[4881]: I0121 12:19:43.441928 4881 scope.go:117] "RemoveContainer" containerID="f50bffc57c220ba8a1cb6602165d9dfff61080bfaa79bc9002dd57293d28f35a" Jan 21 12:19:43 crc kubenswrapper[4881]: E0121 12:19:43.442488 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f50bffc57c220ba8a1cb6602165d9dfff61080bfaa79bc9002dd57293d28f35a\": container with ID starting with f50bffc57c220ba8a1cb6602165d9dfff61080bfaa79bc9002dd57293d28f35a not found: ID does not exist" containerID="f50bffc57c220ba8a1cb6602165d9dfff61080bfaa79bc9002dd57293d28f35a" Jan 21 12:19:43 crc kubenswrapper[4881]: I0121 12:19:43.442555 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f50bffc57c220ba8a1cb6602165d9dfff61080bfaa79bc9002dd57293d28f35a"} err="failed to get container status \"f50bffc57c220ba8a1cb6602165d9dfff61080bfaa79bc9002dd57293d28f35a\": rpc error: code = NotFound desc = could not find container \"f50bffc57c220ba8a1cb6602165d9dfff61080bfaa79bc9002dd57293d28f35a\": container with ID starting with f50bffc57c220ba8a1cb6602165d9dfff61080bfaa79bc9002dd57293d28f35a not found: ID does not exist" Jan 21 12:19:43 crc kubenswrapper[4881]: I0121 12:19:43.442593 4881 scope.go:117] "RemoveContainer" containerID="0a34300fe7fd36429f390a68e435ba2f8b3b17330d25fcff261987924f6d2dd6" Jan 21 12:19:43 crc kubenswrapper[4881]: E0121 12:19:43.443006 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0a34300fe7fd36429f390a68e435ba2f8b3b17330d25fcff261987924f6d2dd6\": container with ID starting with 0a34300fe7fd36429f390a68e435ba2f8b3b17330d25fcff261987924f6d2dd6 not found: ID does not exist" containerID="0a34300fe7fd36429f390a68e435ba2f8b3b17330d25fcff261987924f6d2dd6" Jan 21 12:19:43 crc kubenswrapper[4881]: I0121 12:19:43.443043 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0a34300fe7fd36429f390a68e435ba2f8b3b17330d25fcff261987924f6d2dd6"} err="failed to get container status \"0a34300fe7fd36429f390a68e435ba2f8b3b17330d25fcff261987924f6d2dd6\": rpc error: code = NotFound desc = could not find container \"0a34300fe7fd36429f390a68e435ba2f8b3b17330d25fcff261987924f6d2dd6\": container with ID starting with 0a34300fe7fd36429f390a68e435ba2f8b3b17330d25fcff261987924f6d2dd6 not found: ID does not exist" Jan 21 12:19:43 crc kubenswrapper[4881]: I0121 12:19:43.443068 4881 scope.go:117] "RemoveContainer" containerID="140c746fb595bcdc6444b28c06408889b47367d2f25c5808c0a8fcdbed1f2ac9" Jan 21 12:19:43 crc kubenswrapper[4881]: E0121 12:19:43.444956 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"140c746fb595bcdc6444b28c06408889b47367d2f25c5808c0a8fcdbed1f2ac9\": container with ID starting with 140c746fb595bcdc6444b28c06408889b47367d2f25c5808c0a8fcdbed1f2ac9 not found: ID does not 
exist" containerID="140c746fb595bcdc6444b28c06408889b47367d2f25c5808c0a8fcdbed1f2ac9" Jan 21 12:19:43 crc kubenswrapper[4881]: I0121 12:19:43.445005 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"140c746fb595bcdc6444b28c06408889b47367d2f25c5808c0a8fcdbed1f2ac9"} err="failed to get container status \"140c746fb595bcdc6444b28c06408889b47367d2f25c5808c0a8fcdbed1f2ac9\": rpc error: code = NotFound desc = could not find container \"140c746fb595bcdc6444b28c06408889b47367d2f25c5808c0a8fcdbed1f2ac9\": container with ID starting with 140c746fb595bcdc6444b28c06408889b47367d2f25c5808c0a8fcdbed1f2ac9 not found: ID does not exist" Jan 21 12:19:43 crc kubenswrapper[4881]: I0121 12:19:43.500571 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-gmj66"] Jan 21 12:19:43 crc kubenswrapper[4881]: I0121 12:19:43.510657 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-gmj66"] Jan 21 12:19:45 crc kubenswrapper[4881]: I0121 12:19:45.325270 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5f1e0f74-1d2a-4465-8563-fbe80d7c3eae" path="/var/lib/kubelet/pods/5f1e0f74-1d2a-4465-8563-fbe80d7c3eae/volumes" Jan 21 12:19:59 crc kubenswrapper[4881]: I0121 12:19:59.851659 4881 patch_prober.go:28] interesting pod/machine-config-daemon-fb4fr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 12:19:59 crc kubenswrapper[4881]: I0121 12:19:59.852394 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 12:20:29 crc kubenswrapper[4881]: I0121 12:20:29.851381 4881 patch_prober.go:28] interesting pod/machine-config-daemon-fb4fr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 12:20:29 crc kubenswrapper[4881]: I0121 12:20:29.851939 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 12:20:59 crc kubenswrapper[4881]: I0121 12:20:59.851718 4881 patch_prober.go:28] interesting pod/machine-config-daemon-fb4fr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 12:20:59 crc kubenswrapper[4881]: I0121 12:20:59.852353 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 12:20:59 crc 
kubenswrapper[4881]: I0121 12:20:59.852416 4881 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" Jan 21 12:20:59 crc kubenswrapper[4881]: I0121 12:20:59.853375 4881 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"51f7deb68e0f4978c7b2866156b4751c1ca416f1a21d198c62277ed590bf5923"} pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 12:20:59 crc kubenswrapper[4881]: I0121 12:20:59.853439 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" containerName="machine-config-daemon" containerID="cri-o://51f7deb68e0f4978c7b2866156b4751c1ca416f1a21d198c62277ed590bf5923" gracePeriod=600 Jan 21 12:21:00 crc kubenswrapper[4881]: I0121 12:21:00.068740 4881 generic.go:334] "Generic (PLEG): container finished" podID="3687b313-1df2-4274-80db-8c758b51bf2d" containerID="51f7deb68e0f4978c7b2866156b4751c1ca416f1a21d198c62277ed590bf5923" exitCode=0 Jan 21 12:21:00 crc kubenswrapper[4881]: I0121 12:21:00.068802 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" event={"ID":"3687b313-1df2-4274-80db-8c758b51bf2d","Type":"ContainerDied","Data":"51f7deb68e0f4978c7b2866156b4751c1ca416f1a21d198c62277ed590bf5923"} Jan 21 12:21:00 crc kubenswrapper[4881]: I0121 12:21:00.068843 4881 scope.go:117] "RemoveContainer" containerID="a0b600abfe841c17453a09b22987c9b1b3e4a3784f4461f5181d18cc06f69f4c" Jan 21 12:21:02 crc kubenswrapper[4881]: I0121 12:21:02.094752 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" event={"ID":"3687b313-1df2-4274-80db-8c758b51bf2d","Type":"ContainerStarted","Data":"8e478b369ca9de619a99750dbc2a4e8ceacdcdd9265b30e32ad54bb5b5205ab0"} Jan 21 12:23:29 crc kubenswrapper[4881]: I0121 12:23:29.851119 4881 patch_prober.go:28] interesting pod/machine-config-daemon-fb4fr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 12:23:29 crc kubenswrapper[4881]: I0121 12:23:29.851748 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 12:23:59 crc kubenswrapper[4881]: I0121 12:23:59.851711 4881 patch_prober.go:28] interesting pod/machine-config-daemon-fb4fr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 12:23:59 crc kubenswrapper[4881]: I0121 12:23:59.852336 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 12:24:10 crc kubenswrapper[4881]: I0121 12:24:10.516245 4881 trace.go:236] Trace[298046428]: "Calculate volume metrics of prometheus-metric-storage-db for pod openstack/prometheus-metric-storage-0" (21-Jan-2026 12:24:03.749) (total time: 6767ms): Jan 21 12:24:10 crc kubenswrapper[4881]: Trace[298046428]: [6.767087145s] [6.767087145s] END Jan 21 12:24:10 crc kubenswrapper[4881]: I0121 12:24:10.768983 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-rm5xm"] Jan 21 12:24:10 crc kubenswrapper[4881]: E0121 12:24:10.774099 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f1e0f74-1d2a-4465-8563-fbe80d7c3eae" containerName="registry-server" Jan 21 12:24:10 crc kubenswrapper[4881]: I0121 12:24:10.774156 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f1e0f74-1d2a-4465-8563-fbe80d7c3eae" containerName="registry-server" Jan 21 12:24:10 crc kubenswrapper[4881]: E0121 12:24:10.774237 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f1e0f74-1d2a-4465-8563-fbe80d7c3eae" containerName="extract-utilities" Jan 21 12:24:10 crc kubenswrapper[4881]: I0121 12:24:10.774246 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f1e0f74-1d2a-4465-8563-fbe80d7c3eae" containerName="extract-utilities" Jan 21 12:24:10 crc kubenswrapper[4881]: E0121 12:24:10.774261 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f1e0f74-1d2a-4465-8563-fbe80d7c3eae" containerName="extract-content" Jan 21 12:24:10 crc kubenswrapper[4881]: I0121 12:24:10.774269 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f1e0f74-1d2a-4465-8563-fbe80d7c3eae" containerName="extract-content" Jan 21 12:24:10 crc kubenswrapper[4881]: I0121 12:24:10.774746 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f1e0f74-1d2a-4465-8563-fbe80d7c3eae" containerName="registry-server" Jan 21 12:24:10 crc kubenswrapper[4881]: I0121 12:24:10.782143 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rm5xm" Jan 21 12:24:10 crc kubenswrapper[4881]: I0121 12:24:10.797878 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rm5xm"] Jan 21 12:24:10 crc kubenswrapper[4881]: I0121 12:24:10.829178 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p8rsd\" (UniqueName: \"kubernetes.io/projected/0ba62402-c750-4507-afb1-a4bc0cbb5659-kube-api-access-p8rsd\") pod \"redhat-marketplace-rm5xm\" (UID: \"0ba62402-c750-4507-afb1-a4bc0cbb5659\") " pod="openshift-marketplace/redhat-marketplace-rm5xm" Jan 21 12:24:10 crc kubenswrapper[4881]: I0121 12:24:10.829473 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0ba62402-c750-4507-afb1-a4bc0cbb5659-utilities\") pod \"redhat-marketplace-rm5xm\" (UID: \"0ba62402-c750-4507-afb1-a4bc0cbb5659\") " pod="openshift-marketplace/redhat-marketplace-rm5xm" Jan 21 12:24:10 crc kubenswrapper[4881]: I0121 12:24:10.829720 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0ba62402-c750-4507-afb1-a4bc0cbb5659-catalog-content\") pod \"redhat-marketplace-rm5xm\" (UID: \"0ba62402-c750-4507-afb1-a4bc0cbb5659\") " pod="openshift-marketplace/redhat-marketplace-rm5xm" Jan 21 12:24:10 crc kubenswrapper[4881]: I0121 12:24:10.930416 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-7696g"] Jan 21 12:24:10 crc kubenswrapper[4881]: I0121 12:24:10.932119 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0ba62402-c750-4507-afb1-a4bc0cbb5659-catalog-content\") pod \"redhat-marketplace-rm5xm\" (UID: \"0ba62402-c750-4507-afb1-a4bc0cbb5659\") " pod="openshift-marketplace/redhat-marketplace-rm5xm" Jan 21 12:24:10 crc kubenswrapper[4881]: I0121 12:24:10.932217 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p8rsd\" (UniqueName: \"kubernetes.io/projected/0ba62402-c750-4507-afb1-a4bc0cbb5659-kube-api-access-p8rsd\") pod \"redhat-marketplace-rm5xm\" (UID: \"0ba62402-c750-4507-afb1-a4bc0cbb5659\") " pod="openshift-marketplace/redhat-marketplace-rm5xm" Jan 21 12:24:10 crc kubenswrapper[4881]: I0121 12:24:10.932277 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0ba62402-c750-4507-afb1-a4bc0cbb5659-utilities\") pod \"redhat-marketplace-rm5xm\" (UID: \"0ba62402-c750-4507-afb1-a4bc0cbb5659\") " pod="openshift-marketplace/redhat-marketplace-rm5xm" Jan 21 12:24:10 crc kubenswrapper[4881]: I0121 12:24:10.932853 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-7696g" Jan 21 12:24:10 crc kubenswrapper[4881]: I0121 12:24:10.933010 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0ba62402-c750-4507-afb1-a4bc0cbb5659-utilities\") pod \"redhat-marketplace-rm5xm\" (UID: \"0ba62402-c750-4507-afb1-a4bc0cbb5659\") " pod="openshift-marketplace/redhat-marketplace-rm5xm" Jan 21 12:24:10 crc kubenswrapper[4881]: I0121 12:24:10.933307 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0ba62402-c750-4507-afb1-a4bc0cbb5659-catalog-content\") pod \"redhat-marketplace-rm5xm\" (UID: \"0ba62402-c750-4507-afb1-a4bc0cbb5659\") " pod="openshift-marketplace/redhat-marketplace-rm5xm" Jan 21 12:24:10 crc kubenswrapper[4881]: I0121 12:24:10.945334 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7696g"] Jan 21 12:24:10 crc kubenswrapper[4881]: I0121 12:24:10.957684 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p8rsd\" (UniqueName: \"kubernetes.io/projected/0ba62402-c750-4507-afb1-a4bc0cbb5659-kube-api-access-p8rsd\") pod \"redhat-marketplace-rm5xm\" (UID: \"0ba62402-c750-4507-afb1-a4bc0cbb5659\") " pod="openshift-marketplace/redhat-marketplace-rm5xm" Jan 21 12:24:11 crc kubenswrapper[4881]: I0121 12:24:11.035005 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/19920016-1549-4841-b51a-4571079dfd12-utilities\") pod \"redhat-operators-7696g\" (UID: \"19920016-1549-4841-b51a-4571079dfd12\") " pod="openshift-marketplace/redhat-operators-7696g" Jan 21 12:24:11 crc kubenswrapper[4881]: I0121 12:24:11.035121 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/19920016-1549-4841-b51a-4571079dfd12-catalog-content\") pod \"redhat-operators-7696g\" (UID: \"19920016-1549-4841-b51a-4571079dfd12\") " pod="openshift-marketplace/redhat-operators-7696g" Jan 21 12:24:11 crc kubenswrapper[4881]: I0121 12:24:11.035209 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x278n\" (UniqueName: \"kubernetes.io/projected/19920016-1549-4841-b51a-4571079dfd12-kube-api-access-x278n\") pod \"redhat-operators-7696g\" (UID: \"19920016-1549-4841-b51a-4571079dfd12\") " pod="openshift-marketplace/redhat-operators-7696g" Jan 21 12:24:11 crc kubenswrapper[4881]: I0121 12:24:11.111314 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rm5xm" Jan 21 12:24:11 crc kubenswrapper[4881]: I0121 12:24:11.143775 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/19920016-1549-4841-b51a-4571079dfd12-catalog-content\") pod \"redhat-operators-7696g\" (UID: \"19920016-1549-4841-b51a-4571079dfd12\") " pod="openshift-marketplace/redhat-operators-7696g" Jan 21 12:24:11 crc kubenswrapper[4881]: I0121 12:24:11.143892 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x278n\" (UniqueName: \"kubernetes.io/projected/19920016-1549-4841-b51a-4571079dfd12-kube-api-access-x278n\") pod \"redhat-operators-7696g\" (UID: \"19920016-1549-4841-b51a-4571079dfd12\") " pod="openshift-marketplace/redhat-operators-7696g" Jan 21 12:24:11 crc kubenswrapper[4881]: I0121 12:24:11.144094 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/19920016-1549-4841-b51a-4571079dfd12-utilities\") pod \"redhat-operators-7696g\" (UID: \"19920016-1549-4841-b51a-4571079dfd12\") " pod="openshift-marketplace/redhat-operators-7696g" Jan 21 12:24:11 crc kubenswrapper[4881]: I0121 12:24:11.144989 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/19920016-1549-4841-b51a-4571079dfd12-catalog-content\") pod \"redhat-operators-7696g\" (UID: \"19920016-1549-4841-b51a-4571079dfd12\") " pod="openshift-marketplace/redhat-operators-7696g" Jan 21 12:24:11 crc kubenswrapper[4881]: I0121 12:24:11.147827 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/19920016-1549-4841-b51a-4571079dfd12-utilities\") pod \"redhat-operators-7696g\" (UID: \"19920016-1549-4841-b51a-4571079dfd12\") " pod="openshift-marketplace/redhat-operators-7696g" Jan 21 12:24:11 crc kubenswrapper[4881]: I0121 12:24:11.177720 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x278n\" (UniqueName: \"kubernetes.io/projected/19920016-1549-4841-b51a-4571079dfd12-kube-api-access-x278n\") pod \"redhat-operators-7696g\" (UID: \"19920016-1549-4841-b51a-4571079dfd12\") " pod="openshift-marketplace/redhat-operators-7696g" Jan 21 12:24:11 crc kubenswrapper[4881]: I0121 12:24:11.258481 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-7696g" Jan 21 12:24:11 crc kubenswrapper[4881]: I0121 12:24:11.867393 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rm5xm"] Jan 21 12:24:11 crc kubenswrapper[4881]: I0121 12:24:11.965594 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7696g"] Jan 21 12:24:12 crc kubenswrapper[4881]: I0121 12:24:12.920257 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7696g" event={"ID":"19920016-1549-4841-b51a-4571079dfd12","Type":"ContainerStarted","Data":"55d842e7bd717974cdf52ec1477da5ecf0227134a5bbda5a2e4ccd1cb867fd3b"} Jan 21 12:24:12 crc kubenswrapper[4881]: I0121 12:24:12.940962 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rm5xm" event={"ID":"0ba62402-c750-4507-afb1-a4bc0cbb5659","Type":"ContainerStarted","Data":"3bf40f3659d32a5324d6f5ded95c6c1fa84643efcf43ead247e37f6b81603f5f"} Jan 21 12:24:13 crc kubenswrapper[4881]: I0121 12:24:13.955230 4881 generic.go:334] "Generic (PLEG): container finished" podID="0ba62402-c750-4507-afb1-a4bc0cbb5659" containerID="37f4380c9f0bded2a8d74e846aa0359ad6632d6691c8866fa1b38a0840862cee" exitCode=0 Jan 21 12:24:13 crc kubenswrapper[4881]: I0121 12:24:13.955292 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rm5xm" event={"ID":"0ba62402-c750-4507-afb1-a4bc0cbb5659","Type":"ContainerDied","Data":"37f4380c9f0bded2a8d74e846aa0359ad6632d6691c8866fa1b38a0840862cee"} Jan 21 12:24:13 crc kubenswrapper[4881]: I0121 12:24:13.958169 4881 generic.go:334] "Generic (PLEG): container finished" podID="19920016-1549-4841-b51a-4571079dfd12" containerID="e96a1cae04cf77c68174148d44645ae46ea9275c3a26364221425c3a279d1888" exitCode=0 Jan 21 12:24:13 crc kubenswrapper[4881]: I0121 12:24:13.958219 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7696g" event={"ID":"19920016-1549-4841-b51a-4571079dfd12","Type":"ContainerDied","Data":"e96a1cae04cf77c68174148d44645ae46ea9275c3a26364221425c3a279d1888"} Jan 21 12:24:13 crc kubenswrapper[4881]: I0121 12:24:13.958265 4881 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 12:24:14 crc kubenswrapper[4881]: I0121 12:24:14.977260 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7696g" event={"ID":"19920016-1549-4841-b51a-4571079dfd12","Type":"ContainerStarted","Data":"654b76266297416bb42449a45940dda64d9c9dce72b47a3a5bad8b637cf06338"} Jan 21 12:24:16 crc kubenswrapper[4881]: I0121 12:24:16.227848 4881 generic.go:334] "Generic (PLEG): container finished" podID="0ba62402-c750-4507-afb1-a4bc0cbb5659" containerID="ffc511f2b91abb8aa0c1b1c2de1899a0ace55f62e734b5204a731da6814cb8ce" exitCode=0 Jan 21 12:24:16 crc kubenswrapper[4881]: I0121 12:24:16.230198 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rm5xm" event={"ID":"0ba62402-c750-4507-afb1-a4bc0cbb5659","Type":"ContainerDied","Data":"ffc511f2b91abb8aa0c1b1c2de1899a0ace55f62e734b5204a731da6814cb8ce"} Jan 21 12:24:19 crc kubenswrapper[4881]: I0121 12:24:19.269499 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rm5xm" 
event={"ID":"0ba62402-c750-4507-afb1-a4bc0cbb5659","Type":"ContainerStarted","Data":"cfe3f359c9da9107984b40eb9353cd42eb19a785d40d76f43671efed2ca5d72d"} Jan 21 12:24:19 crc kubenswrapper[4881]: I0121 12:24:19.300042 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-rm5xm" podStartSLOduration=5.503195362 podStartE2EDuration="9.299998896s" podCreationTimestamp="2026-01-21 12:24:10 +0000 UTC" firstStartedPulling="2026-01-21 12:24:13.957833418 +0000 UTC m=+5241.217789887" lastFinishedPulling="2026-01-21 12:24:17.754636952 +0000 UTC m=+5245.014593421" observedRunningTime="2026-01-21 12:24:19.288034083 +0000 UTC m=+5246.547990562" watchObservedRunningTime="2026-01-21 12:24:19.299998896 +0000 UTC m=+5246.559955365" Jan 21 12:24:20 crc kubenswrapper[4881]: I0121 12:24:20.283012 4881 generic.go:334] "Generic (PLEG): container finished" podID="19920016-1549-4841-b51a-4571079dfd12" containerID="654b76266297416bb42449a45940dda64d9c9dce72b47a3a5bad8b637cf06338" exitCode=0 Jan 21 12:24:20 crc kubenswrapper[4881]: I0121 12:24:20.283083 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7696g" event={"ID":"19920016-1549-4841-b51a-4571079dfd12","Type":"ContainerDied","Data":"654b76266297416bb42449a45940dda64d9c9dce72b47a3a5bad8b637cf06338"} Jan 21 12:24:21 crc kubenswrapper[4881]: I0121 12:24:21.111522 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-rm5xm" Jan 21 12:24:21 crc kubenswrapper[4881]: I0121 12:24:21.111877 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-rm5xm" Jan 21 12:24:21 crc kubenswrapper[4881]: I0121 12:24:21.161922 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-rm5xm" Jan 21 12:24:21 crc kubenswrapper[4881]: I0121 12:24:21.295295 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7696g" event={"ID":"19920016-1549-4841-b51a-4571079dfd12","Type":"ContainerStarted","Data":"ec4a6d88164cb676deacc2160eb5db150bf8626f37c2841b485ba6ee59a8c9fa"} Jan 21 12:24:21 crc kubenswrapper[4881]: I0121 12:24:21.326307 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-7696g" podStartSLOduration=4.518929928 podStartE2EDuration="11.326287337s" podCreationTimestamp="2026-01-21 12:24:10 +0000 UTC" firstStartedPulling="2026-01-21 12:24:13.960177356 +0000 UTC m=+5241.220133835" lastFinishedPulling="2026-01-21 12:24:20.767534775 +0000 UTC m=+5248.027491244" observedRunningTime="2026-01-21 12:24:21.323288634 +0000 UTC m=+5248.583245103" watchObservedRunningTime="2026-01-21 12:24:21.326287337 +0000 UTC m=+5248.586243806" Jan 21 12:24:29 crc kubenswrapper[4881]: I0121 12:24:29.850877 4881 patch_prober.go:28] interesting pod/machine-config-daemon-fb4fr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 12:24:29 crc kubenswrapper[4881]: I0121 12:24:29.851565 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 12:24:29 crc kubenswrapper[4881]: I0121 12:24:29.851642 4881 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" Jan 21 12:24:29 crc kubenswrapper[4881]: I0121 12:24:29.852946 4881 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"8e478b369ca9de619a99750dbc2a4e8ceacdcdd9265b30e32ad54bb5b5205ab0"} pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 12:24:29 crc kubenswrapper[4881]: I0121 12:24:29.853116 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" containerName="machine-config-daemon" containerID="cri-o://8e478b369ca9de619a99750dbc2a4e8ceacdcdd9265b30e32ad54bb5b5205ab0" gracePeriod=600 Jan 21 12:24:29 crc kubenswrapper[4881]: E0121 12:24:29.980516 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 12:24:30 crc kubenswrapper[4881]: I0121 12:24:30.397684 4881 generic.go:334] "Generic (PLEG): container finished" podID="3687b313-1df2-4274-80db-8c758b51bf2d" containerID="8e478b369ca9de619a99750dbc2a4e8ceacdcdd9265b30e32ad54bb5b5205ab0" exitCode=0 Jan 21 12:24:30 crc kubenswrapper[4881]: I0121 12:24:30.397983 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" event={"ID":"3687b313-1df2-4274-80db-8c758b51bf2d","Type":"ContainerDied","Data":"8e478b369ca9de619a99750dbc2a4e8ceacdcdd9265b30e32ad54bb5b5205ab0"} Jan 21 12:24:30 crc kubenswrapper[4881]: I0121 12:24:30.398322 4881 scope.go:117] "RemoveContainer" containerID="51f7deb68e0f4978c7b2866156b4751c1ca416f1a21d198c62277ed590bf5923" Jan 21 12:24:30 crc kubenswrapper[4881]: I0121 12:24:30.399414 4881 scope.go:117] "RemoveContainer" containerID="8e478b369ca9de619a99750dbc2a4e8ceacdcdd9265b30e32ad54bb5b5205ab0" Jan 21 12:24:30 crc kubenswrapper[4881]: E0121 12:24:30.400028 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 12:24:31 crc kubenswrapper[4881]: I0121 12:24:31.167643 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-rm5xm" Jan 21 12:24:31 crc kubenswrapper[4881]: I0121 12:24:31.228911 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rm5xm"] Jan 21 12:24:31 crc kubenswrapper[4881]: I0121 12:24:31.259062 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-marketplace/redhat-operators-7696g" Jan 21 12:24:31 crc kubenswrapper[4881]: I0121 12:24:31.259160 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-7696g" Jan 21 12:24:31 crc kubenswrapper[4881]: I0121 12:24:31.306935 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-7696g" Jan 21 12:24:31 crc kubenswrapper[4881]: I0121 12:24:31.409915 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-rm5xm" podUID="0ba62402-c750-4507-afb1-a4bc0cbb5659" containerName="registry-server" containerID="cri-o://cfe3f359c9da9107984b40eb9353cd42eb19a785d40d76f43671efed2ca5d72d" gracePeriod=2 Jan 21 12:24:31 crc kubenswrapper[4881]: I0121 12:24:31.461415 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-7696g" Jan 21 12:24:31 crc kubenswrapper[4881]: I0121 12:24:31.879531 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rm5xm" Jan 21 12:24:32 crc kubenswrapper[4881]: I0121 12:24:32.072803 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0ba62402-c750-4507-afb1-a4bc0cbb5659-catalog-content\") pod \"0ba62402-c750-4507-afb1-a4bc0cbb5659\" (UID: \"0ba62402-c750-4507-afb1-a4bc0cbb5659\") " Jan 21 12:24:32 crc kubenswrapper[4881]: I0121 12:24:32.073280 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p8rsd\" (UniqueName: \"kubernetes.io/projected/0ba62402-c750-4507-afb1-a4bc0cbb5659-kube-api-access-p8rsd\") pod \"0ba62402-c750-4507-afb1-a4bc0cbb5659\" (UID: \"0ba62402-c750-4507-afb1-a4bc0cbb5659\") " Jan 21 12:24:32 crc kubenswrapper[4881]: I0121 12:24:32.073427 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0ba62402-c750-4507-afb1-a4bc0cbb5659-utilities\") pod \"0ba62402-c750-4507-afb1-a4bc0cbb5659\" (UID: \"0ba62402-c750-4507-afb1-a4bc0cbb5659\") " Jan 21 12:24:32 crc kubenswrapper[4881]: I0121 12:24:32.074138 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0ba62402-c750-4507-afb1-a4bc0cbb5659-utilities" (OuterVolumeSpecName: "utilities") pod "0ba62402-c750-4507-afb1-a4bc0cbb5659" (UID: "0ba62402-c750-4507-afb1-a4bc0cbb5659"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 12:24:32 crc kubenswrapper[4881]: I0121 12:24:32.079432 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0ba62402-c750-4507-afb1-a4bc0cbb5659-kube-api-access-p8rsd" (OuterVolumeSpecName: "kube-api-access-p8rsd") pod "0ba62402-c750-4507-afb1-a4bc0cbb5659" (UID: "0ba62402-c750-4507-afb1-a4bc0cbb5659"). InnerVolumeSpecName "kube-api-access-p8rsd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 12:24:32 crc kubenswrapper[4881]: I0121 12:24:32.110329 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0ba62402-c750-4507-afb1-a4bc0cbb5659-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0ba62402-c750-4507-afb1-a4bc0cbb5659" (UID: "0ba62402-c750-4507-afb1-a4bc0cbb5659"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 12:24:32 crc kubenswrapper[4881]: I0121 12:24:32.175686 4881 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0ba62402-c750-4507-afb1-a4bc0cbb5659-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 12:24:32 crc kubenswrapper[4881]: I0121 12:24:32.175720 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p8rsd\" (UniqueName: \"kubernetes.io/projected/0ba62402-c750-4507-afb1-a4bc0cbb5659-kube-api-access-p8rsd\") on node \"crc\" DevicePath \"\"" Jan 21 12:24:32 crc kubenswrapper[4881]: I0121 12:24:32.175733 4881 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0ba62402-c750-4507-afb1-a4bc0cbb5659-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 12:24:32 crc kubenswrapper[4881]: I0121 12:24:32.424485 4881 generic.go:334] "Generic (PLEG): container finished" podID="0ba62402-c750-4507-afb1-a4bc0cbb5659" containerID="cfe3f359c9da9107984b40eb9353cd42eb19a785d40d76f43671efed2ca5d72d" exitCode=0 Jan 21 12:24:32 crc kubenswrapper[4881]: I0121 12:24:32.424587 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rm5xm" Jan 21 12:24:32 crc kubenswrapper[4881]: I0121 12:24:32.424593 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rm5xm" event={"ID":"0ba62402-c750-4507-afb1-a4bc0cbb5659","Type":"ContainerDied","Data":"cfe3f359c9da9107984b40eb9353cd42eb19a785d40d76f43671efed2ca5d72d"} Jan 21 12:24:32 crc kubenswrapper[4881]: I0121 12:24:32.424657 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rm5xm" event={"ID":"0ba62402-c750-4507-afb1-a4bc0cbb5659","Type":"ContainerDied","Data":"3bf40f3659d32a5324d6f5ded95c6c1fa84643efcf43ead247e37f6b81603f5f"} Jan 21 12:24:32 crc kubenswrapper[4881]: I0121 12:24:32.424682 4881 scope.go:117] "RemoveContainer" containerID="cfe3f359c9da9107984b40eb9353cd42eb19a785d40d76f43671efed2ca5d72d" Jan 21 12:24:32 crc kubenswrapper[4881]: I0121 12:24:32.450282 4881 scope.go:117] "RemoveContainer" containerID="ffc511f2b91abb8aa0c1b1c2de1899a0ace55f62e734b5204a731da6814cb8ce" Jan 21 12:24:32 crc kubenswrapper[4881]: I0121 12:24:32.479716 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rm5xm"] Jan 21 12:24:32 crc kubenswrapper[4881]: I0121 12:24:32.482333 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-rm5xm"] Jan 21 12:24:32 crc kubenswrapper[4881]: I0121 12:24:32.501691 4881 scope.go:117] "RemoveContainer" containerID="37f4380c9f0bded2a8d74e846aa0359ad6632d6691c8866fa1b38a0840862cee" Jan 21 12:24:32 crc kubenswrapper[4881]: I0121 12:24:32.547200 4881 scope.go:117] "RemoveContainer" containerID="cfe3f359c9da9107984b40eb9353cd42eb19a785d40d76f43671efed2ca5d72d" Jan 21 12:24:32 crc kubenswrapper[4881]: E0121 12:24:32.549958 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cfe3f359c9da9107984b40eb9353cd42eb19a785d40d76f43671efed2ca5d72d\": container with ID starting with cfe3f359c9da9107984b40eb9353cd42eb19a785d40d76f43671efed2ca5d72d not found: ID does not exist" containerID="cfe3f359c9da9107984b40eb9353cd42eb19a785d40d76f43671efed2ca5d72d" Jan 21 12:24:32 crc kubenswrapper[4881]: I0121 12:24:32.550046 4881 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cfe3f359c9da9107984b40eb9353cd42eb19a785d40d76f43671efed2ca5d72d"} err="failed to get container status \"cfe3f359c9da9107984b40eb9353cd42eb19a785d40d76f43671efed2ca5d72d\": rpc error: code = NotFound desc = could not find container \"cfe3f359c9da9107984b40eb9353cd42eb19a785d40d76f43671efed2ca5d72d\": container with ID starting with cfe3f359c9da9107984b40eb9353cd42eb19a785d40d76f43671efed2ca5d72d not found: ID does not exist" Jan 21 12:24:32 crc kubenswrapper[4881]: I0121 12:24:32.550087 4881 scope.go:117] "RemoveContainer" containerID="ffc511f2b91abb8aa0c1b1c2de1899a0ace55f62e734b5204a731da6814cb8ce" Jan 21 12:24:32 crc kubenswrapper[4881]: E0121 12:24:32.550503 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ffc511f2b91abb8aa0c1b1c2de1899a0ace55f62e734b5204a731da6814cb8ce\": container with ID starting with ffc511f2b91abb8aa0c1b1c2de1899a0ace55f62e734b5204a731da6814cb8ce not found: ID does not exist" containerID="ffc511f2b91abb8aa0c1b1c2de1899a0ace55f62e734b5204a731da6814cb8ce" Jan 21 12:24:32 crc kubenswrapper[4881]: I0121 12:24:32.550536 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ffc511f2b91abb8aa0c1b1c2de1899a0ace55f62e734b5204a731da6814cb8ce"} err="failed to get container status \"ffc511f2b91abb8aa0c1b1c2de1899a0ace55f62e734b5204a731da6814cb8ce\": rpc error: code = NotFound desc = could not find container \"ffc511f2b91abb8aa0c1b1c2de1899a0ace55f62e734b5204a731da6814cb8ce\": container with ID starting with ffc511f2b91abb8aa0c1b1c2de1899a0ace55f62e734b5204a731da6814cb8ce not found: ID does not exist" Jan 21 12:24:32 crc kubenswrapper[4881]: I0121 12:24:32.550559 4881 scope.go:117] "RemoveContainer" containerID="37f4380c9f0bded2a8d74e846aa0359ad6632d6691c8866fa1b38a0840862cee" Jan 21 12:24:32 crc kubenswrapper[4881]: E0121 12:24:32.551020 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"37f4380c9f0bded2a8d74e846aa0359ad6632d6691c8866fa1b38a0840862cee\": container with ID starting with 37f4380c9f0bded2a8d74e846aa0359ad6632d6691c8866fa1b38a0840862cee not found: ID does not exist" containerID="37f4380c9f0bded2a8d74e846aa0359ad6632d6691c8866fa1b38a0840862cee" Jan 21 12:24:32 crc kubenswrapper[4881]: I0121 12:24:32.551050 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"37f4380c9f0bded2a8d74e846aa0359ad6632d6691c8866fa1b38a0840862cee"} err="failed to get container status \"37f4380c9f0bded2a8d74e846aa0359ad6632d6691c8866fa1b38a0840862cee\": rpc error: code = NotFound desc = could not find container \"37f4380c9f0bded2a8d74e846aa0359ad6632d6691c8866fa1b38a0840862cee\": container with ID starting with 37f4380c9f0bded2a8d74e846aa0359ad6632d6691c8866fa1b38a0840862cee not found: ID does not exist" Jan 21 12:24:33 crc kubenswrapper[4881]: I0121 12:24:33.327738 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0ba62402-c750-4507-afb1-a4bc0cbb5659" path="/var/lib/kubelet/pods/0ba62402-c750-4507-afb1-a4bc0cbb5659/volumes" Jan 21 12:24:33 crc kubenswrapper[4881]: I0121 12:24:33.615933 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-7696g"] Jan 21 12:24:34 crc kubenswrapper[4881]: I0121 12:24:34.506610 4881 kuberuntime_container.go:808] "Killing container with a grace 
period" pod="openshift-marketplace/redhat-operators-7696g" podUID="19920016-1549-4841-b51a-4571079dfd12" containerName="registry-server" containerID="cri-o://ec4a6d88164cb676deacc2160eb5db150bf8626f37c2841b485ba6ee59a8c9fa" gracePeriod=2 Jan 21 12:24:35 crc kubenswrapper[4881]: I0121 12:24:35.004277 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7696g" Jan 21 12:24:35 crc kubenswrapper[4881]: I0121 12:24:35.110400 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x278n\" (UniqueName: \"kubernetes.io/projected/19920016-1549-4841-b51a-4571079dfd12-kube-api-access-x278n\") pod \"19920016-1549-4841-b51a-4571079dfd12\" (UID: \"19920016-1549-4841-b51a-4571079dfd12\") " Jan 21 12:24:35 crc kubenswrapper[4881]: I0121 12:24:35.110461 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/19920016-1549-4841-b51a-4571079dfd12-catalog-content\") pod \"19920016-1549-4841-b51a-4571079dfd12\" (UID: \"19920016-1549-4841-b51a-4571079dfd12\") " Jan 21 12:24:35 crc kubenswrapper[4881]: I0121 12:24:35.110673 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/19920016-1549-4841-b51a-4571079dfd12-utilities\") pod \"19920016-1549-4841-b51a-4571079dfd12\" (UID: \"19920016-1549-4841-b51a-4571079dfd12\") " Jan 21 12:24:35 crc kubenswrapper[4881]: I0121 12:24:35.111698 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/19920016-1549-4841-b51a-4571079dfd12-utilities" (OuterVolumeSpecName: "utilities") pod "19920016-1549-4841-b51a-4571079dfd12" (UID: "19920016-1549-4841-b51a-4571079dfd12"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 12:24:35 crc kubenswrapper[4881]: I0121 12:24:35.118056 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/19920016-1549-4841-b51a-4571079dfd12-kube-api-access-x278n" (OuterVolumeSpecName: "kube-api-access-x278n") pod "19920016-1549-4841-b51a-4571079dfd12" (UID: "19920016-1549-4841-b51a-4571079dfd12"). InnerVolumeSpecName "kube-api-access-x278n". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 12:24:35 crc kubenswrapper[4881]: I0121 12:24:35.214052 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x278n\" (UniqueName: \"kubernetes.io/projected/19920016-1549-4841-b51a-4571079dfd12-kube-api-access-x278n\") on node \"crc\" DevicePath \"\"" Jan 21 12:24:35 crc kubenswrapper[4881]: I0121 12:24:35.214101 4881 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/19920016-1549-4841-b51a-4571079dfd12-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 12:24:35 crc kubenswrapper[4881]: I0121 12:24:35.242845 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/19920016-1549-4841-b51a-4571079dfd12-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "19920016-1549-4841-b51a-4571079dfd12" (UID: "19920016-1549-4841-b51a-4571079dfd12"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 12:24:35 crc kubenswrapper[4881]: I0121 12:24:35.316758 4881 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/19920016-1549-4841-b51a-4571079dfd12-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 12:24:35 crc kubenswrapper[4881]: I0121 12:24:35.522953 4881 generic.go:334] "Generic (PLEG): container finished" podID="19920016-1549-4841-b51a-4571079dfd12" containerID="ec4a6d88164cb676deacc2160eb5db150bf8626f37c2841b485ba6ee59a8c9fa" exitCode=0 Jan 21 12:24:35 crc kubenswrapper[4881]: I0121 12:24:35.523030 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7696g" event={"ID":"19920016-1549-4841-b51a-4571079dfd12","Type":"ContainerDied","Data":"ec4a6d88164cb676deacc2160eb5db150bf8626f37c2841b485ba6ee59a8c9fa"} Jan 21 12:24:35 crc kubenswrapper[4881]: I0121 12:24:35.523215 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7696g" event={"ID":"19920016-1549-4841-b51a-4571079dfd12","Type":"ContainerDied","Data":"55d842e7bd717974cdf52ec1477da5ecf0227134a5bbda5a2e4ccd1cb867fd3b"} Jan 21 12:24:35 crc kubenswrapper[4881]: I0121 12:24:35.523099 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7696g" Jan 21 12:24:35 crc kubenswrapper[4881]: I0121 12:24:35.523244 4881 scope.go:117] "RemoveContainer" containerID="ec4a6d88164cb676deacc2160eb5db150bf8626f37c2841b485ba6ee59a8c9fa" Jan 21 12:24:35 crc kubenswrapper[4881]: I0121 12:24:35.550902 4881 scope.go:117] "RemoveContainer" containerID="654b76266297416bb42449a45940dda64d9c9dce72b47a3a5bad8b637cf06338" Jan 21 12:24:35 crc kubenswrapper[4881]: I0121 12:24:35.557016 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-7696g"] Jan 21 12:24:35 crc kubenswrapper[4881]: I0121 12:24:35.566744 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-7696g"] Jan 21 12:24:35 crc kubenswrapper[4881]: I0121 12:24:35.577189 4881 scope.go:117] "RemoveContainer" containerID="e96a1cae04cf77c68174148d44645ae46ea9275c3a26364221425c3a279d1888" Jan 21 12:24:35 crc kubenswrapper[4881]: I0121 12:24:35.620023 4881 scope.go:117] "RemoveContainer" containerID="ec4a6d88164cb676deacc2160eb5db150bf8626f37c2841b485ba6ee59a8c9fa" Jan 21 12:24:35 crc kubenswrapper[4881]: E0121 12:24:35.620561 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ec4a6d88164cb676deacc2160eb5db150bf8626f37c2841b485ba6ee59a8c9fa\": container with ID starting with ec4a6d88164cb676deacc2160eb5db150bf8626f37c2841b485ba6ee59a8c9fa not found: ID does not exist" containerID="ec4a6d88164cb676deacc2160eb5db150bf8626f37c2841b485ba6ee59a8c9fa" Jan 21 12:24:35 crc kubenswrapper[4881]: I0121 12:24:35.620607 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ec4a6d88164cb676deacc2160eb5db150bf8626f37c2841b485ba6ee59a8c9fa"} err="failed to get container status \"ec4a6d88164cb676deacc2160eb5db150bf8626f37c2841b485ba6ee59a8c9fa\": rpc error: code = NotFound desc = could not find container \"ec4a6d88164cb676deacc2160eb5db150bf8626f37c2841b485ba6ee59a8c9fa\": container with ID starting with ec4a6d88164cb676deacc2160eb5db150bf8626f37c2841b485ba6ee59a8c9fa not found: ID does not exist" Jan 21 12:24:35 crc 
kubenswrapper[4881]: I0121 12:24:35.620638 4881 scope.go:117] "RemoveContainer" containerID="654b76266297416bb42449a45940dda64d9c9dce72b47a3a5bad8b637cf06338" Jan 21 12:24:35 crc kubenswrapper[4881]: E0121 12:24:35.621019 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"654b76266297416bb42449a45940dda64d9c9dce72b47a3a5bad8b637cf06338\": container with ID starting with 654b76266297416bb42449a45940dda64d9c9dce72b47a3a5bad8b637cf06338 not found: ID does not exist" containerID="654b76266297416bb42449a45940dda64d9c9dce72b47a3a5bad8b637cf06338" Jan 21 12:24:35 crc kubenswrapper[4881]: I0121 12:24:35.621087 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"654b76266297416bb42449a45940dda64d9c9dce72b47a3a5bad8b637cf06338"} err="failed to get container status \"654b76266297416bb42449a45940dda64d9c9dce72b47a3a5bad8b637cf06338\": rpc error: code = NotFound desc = could not find container \"654b76266297416bb42449a45940dda64d9c9dce72b47a3a5bad8b637cf06338\": container with ID starting with 654b76266297416bb42449a45940dda64d9c9dce72b47a3a5bad8b637cf06338 not found: ID does not exist" Jan 21 12:24:35 crc kubenswrapper[4881]: I0121 12:24:35.621121 4881 scope.go:117] "RemoveContainer" containerID="e96a1cae04cf77c68174148d44645ae46ea9275c3a26364221425c3a279d1888" Jan 21 12:24:35 crc kubenswrapper[4881]: E0121 12:24:35.621455 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e96a1cae04cf77c68174148d44645ae46ea9275c3a26364221425c3a279d1888\": container with ID starting with e96a1cae04cf77c68174148d44645ae46ea9275c3a26364221425c3a279d1888 not found: ID does not exist" containerID="e96a1cae04cf77c68174148d44645ae46ea9275c3a26364221425c3a279d1888" Jan 21 12:24:35 crc kubenswrapper[4881]: I0121 12:24:35.621513 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e96a1cae04cf77c68174148d44645ae46ea9275c3a26364221425c3a279d1888"} err="failed to get container status \"e96a1cae04cf77c68174148d44645ae46ea9275c3a26364221425c3a279d1888\": rpc error: code = NotFound desc = could not find container \"e96a1cae04cf77c68174148d44645ae46ea9275c3a26364221425c3a279d1888\": container with ID starting with e96a1cae04cf77c68174148d44645ae46ea9275c3a26364221425c3a279d1888 not found: ID does not exist" Jan 21 12:24:37 crc kubenswrapper[4881]: I0121 12:24:37.324179 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="19920016-1549-4841-b51a-4571079dfd12" path="/var/lib/kubelet/pods/19920016-1549-4841-b51a-4571079dfd12/volumes" Jan 21 12:24:44 crc kubenswrapper[4881]: I0121 12:24:44.312100 4881 scope.go:117] "RemoveContainer" containerID="8e478b369ca9de619a99750dbc2a4e8ceacdcdd9265b30e32ad54bb5b5205ab0" Jan 21 12:24:44 crc kubenswrapper[4881]: E0121 12:24:44.313287 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 12:24:55 crc kubenswrapper[4881]: I0121 12:24:55.310946 4881 scope.go:117] "RemoveContainer" containerID="8e478b369ca9de619a99750dbc2a4e8ceacdcdd9265b30e32ad54bb5b5205ab0" 
Jan 21 12:24:55 crc kubenswrapper[4881]: E0121 12:24:55.311843 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d"
Jan 21 12:25:08 crc kubenswrapper[4881]: I0121 12:25:08.311297 4881 scope.go:117] "RemoveContainer" containerID="8e478b369ca9de619a99750dbc2a4e8ceacdcdd9265b30e32ad54bb5b5205ab0"
Jan 21 12:25:08 crc kubenswrapper[4881]: E0121 12:25:08.312300 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d"
Jan 21 12:25:23 crc kubenswrapper[4881]: I0121 12:25:23.323365 4881 scope.go:117] "RemoveContainer" containerID="8e478b369ca9de619a99750dbc2a4e8ceacdcdd9265b30e32ad54bb5b5205ab0"
Jan 21 12:25:23 crc kubenswrapper[4881]: E0121 12:25:23.325522 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d"
Jan 21 12:25:38 crc kubenswrapper[4881]: I0121 12:25:38.310604 4881 scope.go:117] "RemoveContainer" containerID="8e478b369ca9de619a99750dbc2a4e8ceacdcdd9265b30e32ad54bb5b5205ab0"
Jan 21 12:25:38 crc kubenswrapper[4881]: E0121 12:25:38.311683 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d"
Jan 21 12:25:49 crc kubenswrapper[4881]: I0121 12:25:49.311563 4881 scope.go:117] "RemoveContainer" containerID="8e478b369ca9de619a99750dbc2a4e8ceacdcdd9265b30e32ad54bb5b5205ab0"
Jan 21 12:25:49 crc kubenswrapper[4881]: E0121 12:25:49.314054 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d"
Jan 21 12:26:01 crc kubenswrapper[4881]: I0121 12:26:01.311820 4881 scope.go:117] "RemoveContainer" containerID="8e478b369ca9de619a99750dbc2a4e8ceacdcdd9265b30e32ad54bb5b5205ab0"
Jan 21 12:26:01 crc kubenswrapper[4881]: E0121 12:26:01.312683 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d"
Jan 21 12:26:13 crc kubenswrapper[4881]: I0121 12:26:13.320763 4881 scope.go:117] "RemoveContainer" containerID="8e478b369ca9de619a99750dbc2a4e8ceacdcdd9265b30e32ad54bb5b5205ab0"
Jan 21 12:26:13 crc kubenswrapper[4881]: E0121 12:26:13.322768 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d"
Jan 21 12:26:27 crc kubenswrapper[4881]: I0121 12:26:27.311436 4881 scope.go:117] "RemoveContainer" containerID="8e478b369ca9de619a99750dbc2a4e8ceacdcdd9265b30e32ad54bb5b5205ab0"
Jan 21 12:26:27 crc kubenswrapper[4881]: E0121 12:26:27.312383 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d"
Jan 21 12:26:39 crc kubenswrapper[4881]: I0121 12:26:39.311100 4881 scope.go:117] "RemoveContainer" containerID="8e478b369ca9de619a99750dbc2a4e8ceacdcdd9265b30e32ad54bb5b5205ab0"
Jan 21 12:26:39 crc kubenswrapper[4881]: E0121 12:26:39.328088 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d"
Jan 21 12:26:52 crc kubenswrapper[4881]: I0121 12:26:52.312324 4881 scope.go:117] "RemoveContainer" containerID="8e478b369ca9de619a99750dbc2a4e8ceacdcdd9265b30e32ad54bb5b5205ab0"
Jan 21 12:26:52 crc kubenswrapper[4881]: E0121 12:26:52.313243 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d"
Jan 21 12:27:03 crc kubenswrapper[4881]: I0121 12:27:03.335603 4881 scope.go:117] "RemoveContainer" containerID="8e478b369ca9de619a99750dbc2a4e8ceacdcdd9265b30e32ad54bb5b5205ab0"
Jan 21 12:27:03 crc kubenswrapper[4881]: E0121 12:27:03.336833 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d"
Jan 21 12:27:14 crc kubenswrapper[4881]: I0121 12:27:14.311513 4881 scope.go:117] "RemoveContainer" containerID="8e478b369ca9de619a99750dbc2a4e8ceacdcdd9265b30e32ad54bb5b5205ab0"
Jan 21 12:27:14 crc kubenswrapper[4881]: E0121 12:27:14.312385 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d"
Jan 21 12:27:25 crc kubenswrapper[4881]: I0121 12:27:25.311125 4881 scope.go:117] "RemoveContainer" containerID="8e478b369ca9de619a99750dbc2a4e8ceacdcdd9265b30e32ad54bb5b5205ab0"
Jan 21 12:27:25 crc kubenswrapper[4881]: E0121 12:27:25.311949 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d"
Jan 21 12:27:39 crc kubenswrapper[4881]: I0121 12:27:39.311057 4881 scope.go:117] "RemoveContainer" containerID="8e478b369ca9de619a99750dbc2a4e8ceacdcdd9265b30e32ad54bb5b5205ab0"
Jan 21 12:27:39 crc kubenswrapper[4881]: E0121 12:27:39.311923 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d"
Jan 21 12:27:52 crc kubenswrapper[4881]: I0121 12:27:52.311360 4881 scope.go:117] "RemoveContainer" containerID="8e478b369ca9de619a99750dbc2a4e8ceacdcdd9265b30e32ad54bb5b5205ab0"
Jan 21 12:27:52 crc kubenswrapper[4881]: E0121 12:27:52.312125 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d"
Jan 21 12:28:04 crc kubenswrapper[4881]: I0121 12:28:04.313020 4881 scope.go:117] "RemoveContainer" containerID="8e478b369ca9de619a99750dbc2a4e8ceacdcdd9265b30e32ad54bb5b5205ab0"
Jan 21 12:28:04 crc kubenswrapper[4881]: E0121 12:28:04.314150 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d"
Jan 21 12:28:16 crc kubenswrapper[4881]: I0121 12:28:16.311629 4881 scope.go:117] "RemoveContainer" containerID="8e478b369ca9de619a99750dbc2a4e8ceacdcdd9265b30e32ad54bb5b5205ab0"
Jan 21 12:28:16 crc kubenswrapper[4881]: E0121 12:28:16.313476 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d"
Jan 21 12:28:29 crc kubenswrapper[4881]: I0121 12:28:29.312067 4881 scope.go:117] "RemoveContainer" containerID="8e478b369ca9de619a99750dbc2a4e8ceacdcdd9265b30e32ad54bb5b5205ab0"
Jan 21 12:28:29 crc kubenswrapper[4881]: E0121 12:28:29.314200 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d"
Jan 21 12:28:40 crc kubenswrapper[4881]: I0121 12:28:40.312018 4881 scope.go:117] "RemoveContainer" containerID="8e478b369ca9de619a99750dbc2a4e8ceacdcdd9265b30e32ad54bb5b5205ab0"
Jan 21 12:28:40 crc kubenswrapper[4881]: E0121 12:28:40.313007 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d"
Jan 21 12:28:53 crc kubenswrapper[4881]: I0121 12:28:53.321705 4881 scope.go:117] "RemoveContainer" containerID="8e478b369ca9de619a99750dbc2a4e8ceacdcdd9265b30e32ad54bb5b5205ab0"
Jan 21 12:28:53 crc kubenswrapper[4881]: E0121 12:28:53.323131 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d"
Jan 21 12:29:06 crc kubenswrapper[4881]: I0121 12:29:06.310928 4881 scope.go:117] "RemoveContainer" containerID="8e478b369ca9de619a99750dbc2a4e8ceacdcdd9265b30e32ad54bb5b5205ab0"
Jan 21 12:29:06 crc kubenswrapper[4881]: E0121 12:29:06.311719 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d"
Jan 21 12:29:20 crc kubenswrapper[4881]: I0121 12:29:20.311096 4881 scope.go:117] "RemoveContainer" containerID="8e478b369ca9de619a99750dbc2a4e8ceacdcdd9265b30e32ad54bb5b5205ab0"
Jan 21 12:29:20 crc kubenswrapper[4881]: E0121 12:29:20.312985 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d"
Jan 21 12:29:34 crc kubenswrapper[4881]: I0121 12:29:34.312571 4881 scope.go:117] "RemoveContainer" containerID="8e478b369ca9de619a99750dbc2a4e8ceacdcdd9265b30e32ad54bb5b5205ab0"
Jan 21 12:29:34 crc kubenswrapper[4881]: I0121 12:29:34.783544 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" event={"ID":"3687b313-1df2-4274-80db-8c758b51bf2d","Type":"ContainerStarted","Data":"5ce4f2646890b2b0b35075452c84c9194c468c1e2e3c942d6c0c4679e67f5d4f"}
Jan 21 12:30:00 crc kubenswrapper[4881]: I0121 12:30:00.260681 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483310-ntw6g"]
Jan 21 12:30:00 crc kubenswrapper[4881]: E0121 12:30:00.261764 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ba62402-c750-4507-afb1-a4bc0cbb5659" containerName="extract-content"
Jan 21 12:30:00 crc kubenswrapper[4881]: I0121 12:30:00.261802 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ba62402-c750-4507-afb1-a4bc0cbb5659" containerName="extract-content"
Jan 21 12:30:00 crc kubenswrapper[4881]: E0121 12:30:00.261832 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19920016-1549-4841-b51a-4571079dfd12" containerName="extract-content"
Jan 21 12:30:00 crc kubenswrapper[4881]: I0121 12:30:00.261840 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="19920016-1549-4841-b51a-4571079dfd12" containerName="extract-content"
Jan 21 12:30:00 crc kubenswrapper[4881]: E0121 12:30:00.261870 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19920016-1549-4841-b51a-4571079dfd12" containerName="extract-utilities"
Jan 21 12:30:00 crc kubenswrapper[4881]: I0121 12:30:00.261880 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="19920016-1549-4841-b51a-4571079dfd12" containerName="extract-utilities"
Jan 21 12:30:00 crc kubenswrapper[4881]: E0121 12:30:00.261895 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ba62402-c750-4507-afb1-a4bc0cbb5659" containerName="registry-server"
Jan 21 12:30:00 crc kubenswrapper[4881]: I0121 12:30:00.261903 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ba62402-c750-4507-afb1-a4bc0cbb5659" containerName="registry-server"
Jan 21 12:30:00 crc kubenswrapper[4881]: E0121 12:30:00.261929 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19920016-1549-4841-b51a-4571079dfd12" containerName="registry-server"
Jan 21 12:30:00 crc kubenswrapper[4881]: I0121 12:30:00.261936 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="19920016-1549-4841-b51a-4571079dfd12" containerName="registry-server"
Jan 21 12:30:00 crc kubenswrapper[4881]: E0121 12:30:00.261954 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ba62402-c750-4507-afb1-a4bc0cbb5659" containerName="extract-utilities"
Jan 21 12:30:00 crc kubenswrapper[4881]: I0121 12:30:00.261964 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ba62402-c750-4507-afb1-a4bc0cbb5659" containerName="extract-utilities"
Jan 21 12:30:00 crc kubenswrapper[4881]: I0121 12:30:00.262220 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ba62402-c750-4507-afb1-a4bc0cbb5659" containerName="registry-server"
Jan 21 12:30:00 crc kubenswrapper[4881]: I0121 12:30:00.262239 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="19920016-1549-4841-b51a-4571079dfd12" containerName="registry-server"
Jan 21 12:30:00 crc kubenswrapper[4881]: I0121 12:30:00.263217 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483310-ntw6g"
Jan 21 12:30:00 crc kubenswrapper[4881]: I0121 12:30:00.263707 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483310-ntw6g"]
Jan 21 12:30:00 crc kubenswrapper[4881]: I0121 12:30:00.266210 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Jan 21 12:30:00 crc kubenswrapper[4881]: I0121 12:30:00.268026 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Jan 21 12:30:00 crc kubenswrapper[4881]: I0121 12:30:00.445160 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5368d7c4-a23a-46aa-8dea-1fde26f5df53-secret-volume\") pod \"collect-profiles-29483310-ntw6g\" (UID: \"5368d7c4-a23a-46aa-8dea-1fde26f5df53\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483310-ntw6g"
Jan 21 12:30:00 crc kubenswrapper[4881]: I0121 12:30:00.446819 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7wfd4\" (UniqueName: \"kubernetes.io/projected/5368d7c4-a23a-46aa-8dea-1fde26f5df53-kube-api-access-7wfd4\") pod \"collect-profiles-29483310-ntw6g\" (UID: \"5368d7c4-a23a-46aa-8dea-1fde26f5df53\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483310-ntw6g"
Jan 21 12:30:00 crc kubenswrapper[4881]: I0121 12:30:00.446980 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5368d7c4-a23a-46aa-8dea-1fde26f5df53-config-volume\") pod \"collect-profiles-29483310-ntw6g\" (UID: \"5368d7c4-a23a-46aa-8dea-1fde26f5df53\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483310-ntw6g"
Jan 21 12:30:00 crc kubenswrapper[4881]: I0121 12:30:00.548940 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5368d7c4-a23a-46aa-8dea-1fde26f5df53-secret-volume\") pod \"collect-profiles-29483310-ntw6g\" (UID: \"5368d7c4-a23a-46aa-8dea-1fde26f5df53\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483310-ntw6g"
Jan 21 12:30:00 crc kubenswrapper[4881]: I0121 12:30:00.549416 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7wfd4\" (UniqueName: \"kubernetes.io/projected/5368d7c4-a23a-46aa-8dea-1fde26f5df53-kube-api-access-7wfd4\") pod \"collect-profiles-29483310-ntw6g\" (UID: \"5368d7c4-a23a-46aa-8dea-1fde26f5df53\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483310-ntw6g"
Jan 21 12:30:00 crc kubenswrapper[4881]: I0121 12:30:00.549524 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5368d7c4-a23a-46aa-8dea-1fde26f5df53-config-volume\") pod \"collect-profiles-29483310-ntw6g\" (UID: \"5368d7c4-a23a-46aa-8dea-1fde26f5df53\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483310-ntw6g"
Jan 21 12:30:00 crc kubenswrapper[4881]: I0121 12:30:00.550836 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5368d7c4-a23a-46aa-8dea-1fde26f5df53-config-volume\") pod \"collect-profiles-29483310-ntw6g\" (UID: \"5368d7c4-a23a-46aa-8dea-1fde26f5df53\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483310-ntw6g"
Jan 21 12:30:00 crc kubenswrapper[4881]: I0121 12:30:00.557905 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5368d7c4-a23a-46aa-8dea-1fde26f5df53-secret-volume\") pod \"collect-profiles-29483310-ntw6g\" (UID: \"5368d7c4-a23a-46aa-8dea-1fde26f5df53\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483310-ntw6g"
Jan 21 12:30:00 crc kubenswrapper[4881]: I0121 12:30:00.572085 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7wfd4\" (UniqueName: \"kubernetes.io/projected/5368d7c4-a23a-46aa-8dea-1fde26f5df53-kube-api-access-7wfd4\") pod \"collect-profiles-29483310-ntw6g\" (UID: \"5368d7c4-a23a-46aa-8dea-1fde26f5df53\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483310-ntw6g"
Jan 21 12:30:00 crc kubenswrapper[4881]: I0121 12:30:00.585156 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483310-ntw6g"
Jan 21 12:30:01 crc kubenswrapper[4881]: I0121 12:30:01.148684 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483310-ntw6g"]
Jan 21 12:30:01 crc kubenswrapper[4881]: I0121 12:30:01.201389 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483310-ntw6g" event={"ID":"5368d7c4-a23a-46aa-8dea-1fde26f5df53","Type":"ContainerStarted","Data":"4164bf5d4bc11259b2f83b181016e2372dbf6f746c00a5e5d99d2c9e0c84bec1"}
Jan 21 12:30:02 crc kubenswrapper[4881]: I0121 12:30:02.213387 4881 generic.go:334] "Generic (PLEG): container finished" podID="5368d7c4-a23a-46aa-8dea-1fde26f5df53" containerID="b60782b6ad5aeb71531d28ab48543fd988c6726bf0975c069d2238cd6237f3ab" exitCode=0
Jan 21 12:30:02 crc kubenswrapper[4881]: I0121 12:30:02.213559 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483310-ntw6g" event={"ID":"5368d7c4-a23a-46aa-8dea-1fde26f5df53","Type":"ContainerDied","Data":"b60782b6ad5aeb71531d28ab48543fd988c6726bf0975c069d2238cd6237f3ab"}
Jan 21 12:30:03 crc kubenswrapper[4881]: I0121 12:30:03.690856 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483310-ntw6g"
Jan 21 12:30:03 crc kubenswrapper[4881]: I0121 12:30:03.820869 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7wfd4\" (UniqueName: \"kubernetes.io/projected/5368d7c4-a23a-46aa-8dea-1fde26f5df53-kube-api-access-7wfd4\") pod \"5368d7c4-a23a-46aa-8dea-1fde26f5df53\" (UID: \"5368d7c4-a23a-46aa-8dea-1fde26f5df53\") "
Jan 21 12:30:03 crc kubenswrapper[4881]: I0121 12:30:03.821069 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5368d7c4-a23a-46aa-8dea-1fde26f5df53-secret-volume\") pod \"5368d7c4-a23a-46aa-8dea-1fde26f5df53\" (UID: \"5368d7c4-a23a-46aa-8dea-1fde26f5df53\") "
Jan 21 12:30:03 crc kubenswrapper[4881]: I0121 12:30:03.821149 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5368d7c4-a23a-46aa-8dea-1fde26f5df53-config-volume\") pod \"5368d7c4-a23a-46aa-8dea-1fde26f5df53\" (UID: \"5368d7c4-a23a-46aa-8dea-1fde26f5df53\") "
Jan 21 12:30:03 crc kubenswrapper[4881]: I0121 12:30:03.822366 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5368d7c4-a23a-46aa-8dea-1fde26f5df53-config-volume" (OuterVolumeSpecName: "config-volume") pod "5368d7c4-a23a-46aa-8dea-1fde26f5df53" (UID: "5368d7c4-a23a-46aa-8dea-1fde26f5df53"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 12:30:03 crc kubenswrapper[4881]: I0121 12:30:03.828893 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5368d7c4-a23a-46aa-8dea-1fde26f5df53-kube-api-access-7wfd4" (OuterVolumeSpecName: "kube-api-access-7wfd4") pod "5368d7c4-a23a-46aa-8dea-1fde26f5df53" (UID: "5368d7c4-a23a-46aa-8dea-1fde26f5df53"). InnerVolumeSpecName "kube-api-access-7wfd4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 12:30:03 crc kubenswrapper[4881]: I0121 12:30:03.831133 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5368d7c4-a23a-46aa-8dea-1fde26f5df53-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "5368d7c4-a23a-46aa-8dea-1fde26f5df53" (UID: "5368d7c4-a23a-46aa-8dea-1fde26f5df53"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 12:30:03 crc kubenswrapper[4881]: I0121 12:30:03.924150 4881 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5368d7c4-a23a-46aa-8dea-1fde26f5df53-config-volume\") on node \"crc\" DevicePath \"\""
Jan 21 12:30:03 crc kubenswrapper[4881]: I0121 12:30:03.924214 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7wfd4\" (UniqueName: \"kubernetes.io/projected/5368d7c4-a23a-46aa-8dea-1fde26f5df53-kube-api-access-7wfd4\") on node \"crc\" DevicePath \"\""
Jan 21 12:30:03 crc kubenswrapper[4881]: I0121 12:30:03.924232 4881 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5368d7c4-a23a-46aa-8dea-1fde26f5df53-secret-volume\") on node \"crc\" DevicePath \"\""
Jan 21 12:30:04 crc kubenswrapper[4881]: I0121 12:30:04.239977 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483310-ntw6g" event={"ID":"5368d7c4-a23a-46aa-8dea-1fde26f5df53","Type":"ContainerDied","Data":"4164bf5d4bc11259b2f83b181016e2372dbf6f746c00a5e5d99d2c9e0c84bec1"}
Jan 21 12:30:04 crc kubenswrapper[4881]: I0121 12:30:04.240026 4881 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4164bf5d4bc11259b2f83b181016e2372dbf6f746c00a5e5d99d2c9e0c84bec1"
Jan 21 12:30:04 crc kubenswrapper[4881]: I0121 12:30:04.240102 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483310-ntw6g"
Jan 21 12:30:04 crc kubenswrapper[4881]: I0121 12:30:04.779771 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483265-wh6tk"]
Jan 21 12:30:04 crc kubenswrapper[4881]: I0121 12:30:04.796056 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483265-wh6tk"]
Jan 21 12:30:05 crc kubenswrapper[4881]: I0121 12:30:05.325955 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49387e54-5709-46bd-9f76-cd79369d9abe" path="/var/lib/kubelet/pods/49387e54-5709-46bd-9f76-cd79369d9abe/volumes"
Jan 21 12:30:33 crc kubenswrapper[4881]: I0121 12:30:33.632697 4881 scope.go:117] "RemoveContainer" containerID="03feba2a29229654c706a38fc1bff6c4df03df1eca6406a125ce3ee72913286b"
Jan 21 12:31:25 crc kubenswrapper[4881]: I0121 12:31:25.495017 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/metallb-operator-webhook-server-5cd4664cfc-6lg4r" podUID="a194c95e-cbcb-4d7e-a631-d4a14989e985" containerName="webhook-server" probeResult="failure" output="Get \"http://10.217.0.55:7472/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 21 12:31:25 crc kubenswrapper[4881]: I0121 12:31:25.495037 4881 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/metallb-operator-webhook-server-5cd4664cfc-6lg4r" podUID="a194c95e-cbcb-4d7e-a631-d4a14989e985" containerName="webhook-server" probeResult="failure" output="Get \"http://10.217.0.55:7472/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 21 12:31:59 crc kubenswrapper[4881]: I0121 12:31:59.851086 4881 patch_prober.go:28] interesting pod/machine-config-daemon-fb4fr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 21 12:31:59 crc kubenswrapper[4881]: I0121 12:31:59.851591 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 21 12:32:29 crc kubenswrapper[4881]: I0121 12:32:29.851462 4881 patch_prober.go:28] interesting pod/machine-config-daemon-fb4fr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 21 12:32:29 crc kubenswrapper[4881]: I0121 12:32:29.852418 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 21 12:32:59 crc kubenswrapper[4881]: I0121 12:32:59.851254 4881 patch_prober.go:28] interesting pod/machine-config-daemon-fb4fr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 21 12:32:59 crc kubenswrapper[4881]: I0121 12:32:59.851873 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 21 12:32:59 crc kubenswrapper[4881]: I0121 12:32:59.851970 4881 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr"
Jan 21 12:32:59 crc kubenswrapper[4881]: I0121 12:32:59.852961 4881 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5ce4f2646890b2b0b35075452c84c9194c468c1e2e3c942d6c0c4679e67f5d4f"} pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 21 12:32:59 crc kubenswrapper[4881]: I0121 12:32:59.853160 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" containerName="machine-config-daemon" containerID="cri-o://5ce4f2646890b2b0b35075452c84c9194c468c1e2e3c942d6c0c4679e67f5d4f" gracePeriod=600
Jan 21 12:33:00 crc kubenswrapper[4881]: I0121 12:33:00.951816 4881 generic.go:334] "Generic (PLEG): container finished" podID="3687b313-1df2-4274-80db-8c758b51bf2d" containerID="5ce4f2646890b2b0b35075452c84c9194c468c1e2e3c942d6c0c4679e67f5d4f" exitCode=0
Jan 21 12:33:00 crc kubenswrapper[4881]: I0121 12:33:00.951938 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" event={"ID":"3687b313-1df2-4274-80db-8c758b51bf2d","Type":"ContainerDied","Data":"5ce4f2646890b2b0b35075452c84c9194c468c1e2e3c942d6c0c4679e67f5d4f"}
Jan 21 12:33:00 crc kubenswrapper[4881]: I0121 12:33:00.952337 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" event={"ID":"3687b313-1df2-4274-80db-8c758b51bf2d","Type":"ContainerStarted","Data":"58cd0c5668032beb15f336f840c62d17fca5d4719530de4cd0b64bce55dc94e3"}
Jan 21 12:33:00 crc kubenswrapper[4881]: I0121 12:33:00.952409 4881 scope.go:117] "RemoveContainer" containerID="8e478b369ca9de619a99750dbc2a4e8ceacdcdd9265b30e32ad54bb5b5205ab0"
Jan 21 12:34:19 crc kubenswrapper[4881]: I0121 12:34:19.307290 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-vspzs"]
Jan 21 12:34:19 crc kubenswrapper[4881]: E0121 12:34:19.309783 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5368d7c4-a23a-46aa-8dea-1fde26f5df53" containerName="collect-profiles"
Jan 21 12:34:19 crc kubenswrapper[4881]: I0121 12:34:19.309909 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="5368d7c4-a23a-46aa-8dea-1fde26f5df53" containerName="collect-profiles"
Jan 21 12:34:19 crc kubenswrapper[4881]: I0121 12:34:19.310266 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="5368d7c4-a23a-46aa-8dea-1fde26f5df53" containerName="collect-profiles"
Jan 21 12:34:19 crc kubenswrapper[4881]: I0121 12:34:19.312632 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vspzs"
Jan 21 12:34:19 crc kubenswrapper[4881]: I0121 12:34:19.354451 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6e7922fd-c90d-44be-924c-961055910625-catalog-content\") pod \"redhat-marketplace-vspzs\" (UID: \"6e7922fd-c90d-44be-924c-961055910625\") " pod="openshift-marketplace/redhat-marketplace-vspzs"
Jan 21 12:34:19 crc kubenswrapper[4881]: I0121 12:34:19.354856 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-drhbl\" (UniqueName: \"kubernetes.io/projected/6e7922fd-c90d-44be-924c-961055910625-kube-api-access-drhbl\") pod \"redhat-marketplace-vspzs\" (UID: \"6e7922fd-c90d-44be-924c-961055910625\") " pod="openshift-marketplace/redhat-marketplace-vspzs"
Jan 21 12:34:19 crc kubenswrapper[4881]: I0121 12:34:19.355194 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6e7922fd-c90d-44be-924c-961055910625-utilities\") pod \"redhat-marketplace-vspzs\" (UID: \"6e7922fd-c90d-44be-924c-961055910625\") " pod="openshift-marketplace/redhat-marketplace-vspzs"
Jan 21 12:34:19 crc kubenswrapper[4881]: I0121 12:34:19.386148 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vspzs"]
Jan 21 12:34:19 crc kubenswrapper[4881]: I0121 12:34:19.456991 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6e7922fd-c90d-44be-924c-961055910625-catalog-content\") pod \"redhat-marketplace-vspzs\" (UID: \"6e7922fd-c90d-44be-924c-961055910625\") " pod="openshift-marketplace/redhat-marketplace-vspzs"
Jan 21 12:34:19 crc kubenswrapper[4881]: I0121 12:34:19.457097 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-drhbl\" (UniqueName: \"kubernetes.io/projected/6e7922fd-c90d-44be-924c-961055910625-kube-api-access-drhbl\") pod \"redhat-marketplace-vspzs\" (UID: \"6e7922fd-c90d-44be-924c-961055910625\") " pod="openshift-marketplace/redhat-marketplace-vspzs"
Jan 21 12:34:19 crc kubenswrapper[4881]: I0121 12:34:19.457177 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6e7922fd-c90d-44be-924c-961055910625-utilities\") pod \"redhat-marketplace-vspzs\" (UID: \"6e7922fd-c90d-44be-924c-961055910625\") " pod="openshift-marketplace/redhat-marketplace-vspzs"
Jan 21 12:34:19 crc kubenswrapper[4881]: I0121 12:34:19.458161 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6e7922fd-c90d-44be-924c-961055910625-catalog-content\") pod \"redhat-marketplace-vspzs\" (UID: \"6e7922fd-c90d-44be-924c-961055910625\") " pod="openshift-marketplace/redhat-marketplace-vspzs"
Jan 21 12:34:19 crc kubenswrapper[4881]: I0121 12:34:19.458820 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6e7922fd-c90d-44be-924c-961055910625-utilities\") pod \"redhat-marketplace-vspzs\" (UID: \"6e7922fd-c90d-44be-924c-961055910625\") " pod="openshift-marketplace/redhat-marketplace-vspzs"
Jan 21 12:34:19 crc kubenswrapper[4881]: I0121 12:34:19.581628 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-drhbl\" (UniqueName: \"kubernetes.io/projected/6e7922fd-c90d-44be-924c-961055910625-kube-api-access-drhbl\") pod \"redhat-marketplace-vspzs\" (UID: \"6e7922fd-c90d-44be-924c-961055910625\") " pod="openshift-marketplace/redhat-marketplace-vspzs"
Jan 21 12:34:19 crc kubenswrapper[4881]: I0121 12:34:19.653933 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vspzs"
Jan 21 12:34:20 crc kubenswrapper[4881]: I0121 12:34:20.222145 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vspzs"]
Jan 21 12:34:20 crc kubenswrapper[4881]: I0121 12:34:20.850061 4881 generic.go:334] "Generic (PLEG): container finished" podID="6e7922fd-c90d-44be-924c-961055910625" containerID="aa00cccb6838c50f5a442f098a65d1d05eece62c87a5fbef804e514a756f7f64" exitCode=0
Jan 21 12:34:20 crc kubenswrapper[4881]: I0121 12:34:20.850163 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vspzs" event={"ID":"6e7922fd-c90d-44be-924c-961055910625","Type":"ContainerDied","Data":"aa00cccb6838c50f5a442f098a65d1d05eece62c87a5fbef804e514a756f7f64"}
Jan 21 12:34:20 crc kubenswrapper[4881]: I0121 12:34:20.850327 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vspzs" event={"ID":"6e7922fd-c90d-44be-924c-961055910625","Type":"ContainerStarted","Data":"f64906d1b9120521632645eb3d6bcfdbd7f3e7bb5868aa2f3886549c679f4f5f"}
Jan 21 12:34:20 crc kubenswrapper[4881]: I0121 12:34:20.854359 4881 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 21 12:34:22 crc kubenswrapper[4881]: I0121 12:34:22.873588 4881 generic.go:334] "Generic (PLEG): container finished" podID="6e7922fd-c90d-44be-924c-961055910625" containerID="bb827d0c521f710737507c54a40bb3151f05c5326264b9c349f57dd2e400b8ee" exitCode=0
Jan 21 12:34:22 crc kubenswrapper[4881]: I0121 12:34:22.873666 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vspzs" event={"ID":"6e7922fd-c90d-44be-924c-961055910625","Type":"ContainerDied","Data":"bb827d0c521f710737507c54a40bb3151f05c5326264b9c349f57dd2e400b8ee"}
Jan 21 12:34:23 crc kubenswrapper[4881]: I0121 12:34:23.887047 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vspzs" event={"ID":"6e7922fd-c90d-44be-924c-961055910625","Type":"ContainerStarted","Data":"aac79371442850cad994c9e4cb25b92c5cb4ef8f3e9e7cbd47ed7dc0f33169a3"}
Jan 21 12:34:23 crc kubenswrapper[4881]: I0121 12:34:23.916766 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-vspzs" podStartSLOduration=2.504747499 podStartE2EDuration="4.916697727s" podCreationTimestamp="2026-01-21 12:34:19 +0000 UTC" firstStartedPulling="2026-01-21 12:34:20.853531074 +0000 UTC m=+5848.113487583" lastFinishedPulling="2026-01-21 12:34:23.265481342 +0000 UTC m=+5850.525437811" observedRunningTime="2026-01-21 12:34:23.905613935 +0000 UTC m=+5851.165570414" watchObservedRunningTime="2026-01-21 12:34:23.916697727 +0000 UTC m=+5851.176654196"
Jan 21 12:34:29 crc kubenswrapper[4881]: I0121 12:34:29.654749 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-vspzs"
Jan 21 12:34:29 crc kubenswrapper[4881]: I0121 12:34:29.655339 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-vspzs"
Jan 21 12:34:29 crc kubenswrapper[4881]: I0121 12:34:29.706861 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-vspzs"
Jan 21 12:34:29 crc kubenswrapper[4881]: I0121 12:34:29.993712 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-vspzs"
Jan 21 12:34:30 crc kubenswrapper[4881]: I0121 12:34:30.049074 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vspzs"]
Jan 21 12:34:31 crc kubenswrapper[4881]: I0121 12:34:31.963242 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-vspzs" podUID="6e7922fd-c90d-44be-924c-961055910625" containerName="registry-server" containerID="cri-o://aac79371442850cad994c9e4cb25b92c5cb4ef8f3e9e7cbd47ed7dc0f33169a3" gracePeriod=2
Jan 21 12:34:33 crc kubenswrapper[4881]: I0121 12:34:33.034931 4881 generic.go:334] "Generic (PLEG): container finished" podID="6e7922fd-c90d-44be-924c-961055910625" containerID="aac79371442850cad994c9e4cb25b92c5cb4ef8f3e9e7cbd47ed7dc0f33169a3" exitCode=0
Jan 21 12:34:33 crc kubenswrapper[4881]: I0121 12:34:33.035038 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vspzs" event={"ID":"6e7922fd-c90d-44be-924c-961055910625","Type":"ContainerDied","Data":"aac79371442850cad994c9e4cb25b92c5cb4ef8f3e9e7cbd47ed7dc0f33169a3"}
Jan 21 12:34:33 crc kubenswrapper[4881]: I0121 12:34:33.286387 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vspzs"
Jan 21 12:34:33 crc kubenswrapper[4881]: I0121 12:34:33.378726 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6e7922fd-c90d-44be-924c-961055910625-catalog-content\") pod \"6e7922fd-c90d-44be-924c-961055910625\" (UID: \"6e7922fd-c90d-44be-924c-961055910625\") "
Jan 21 12:34:33 crc kubenswrapper[4881]: I0121 12:34:33.378854 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-drhbl\" (UniqueName: \"kubernetes.io/projected/6e7922fd-c90d-44be-924c-961055910625-kube-api-access-drhbl\") pod \"6e7922fd-c90d-44be-924c-961055910625\" (UID: \"6e7922fd-c90d-44be-924c-961055910625\") "
Jan 21 12:34:33 crc kubenswrapper[4881]: I0121 12:34:33.378934 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6e7922fd-c90d-44be-924c-961055910625-utilities\") pod \"6e7922fd-c90d-44be-924c-961055910625\" (UID: \"6e7922fd-c90d-44be-924c-961055910625\") "
Jan 21 12:34:33 crc kubenswrapper[4881]: I0121 12:34:33.380264 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6e7922fd-c90d-44be-924c-961055910625-utilities" (OuterVolumeSpecName: "utilities") pod "6e7922fd-c90d-44be-924c-961055910625" (UID: "6e7922fd-c90d-44be-924c-961055910625"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 12:34:33 crc kubenswrapper[4881]: I0121 12:34:33.389734 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6e7922fd-c90d-44be-924c-961055910625-kube-api-access-drhbl" (OuterVolumeSpecName: "kube-api-access-drhbl") pod "6e7922fd-c90d-44be-924c-961055910625" (UID: "6e7922fd-c90d-44be-924c-961055910625"). InnerVolumeSpecName "kube-api-access-drhbl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 12:34:33 crc kubenswrapper[4881]: I0121 12:34:33.412067 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6e7922fd-c90d-44be-924c-961055910625-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6e7922fd-c90d-44be-924c-961055910625" (UID: "6e7922fd-c90d-44be-924c-961055910625"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 12:34:33 crc kubenswrapper[4881]: I0121 12:34:33.481304 4881 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6e7922fd-c90d-44be-924c-961055910625-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 21 12:34:33 crc kubenswrapper[4881]: I0121 12:34:33.481339 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-drhbl\" (UniqueName: \"kubernetes.io/projected/6e7922fd-c90d-44be-924c-961055910625-kube-api-access-drhbl\") on node \"crc\" DevicePath \"\""
Jan 21 12:34:33 crc kubenswrapper[4881]: I0121 12:34:33.481348 4881 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6e7922fd-c90d-44be-924c-961055910625-utilities\") on node \"crc\" DevicePath \"\""
Jan 21 12:34:34 crc kubenswrapper[4881]: I0121 12:34:34.049111 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vspzs" event={"ID":"6e7922fd-c90d-44be-924c-961055910625","Type":"ContainerDied","Data":"f64906d1b9120521632645eb3d6bcfdbd7f3e7bb5868aa2f3886549c679f4f5f"}
Jan 21 12:34:34 crc kubenswrapper[4881]: I0121 12:34:34.049945 4881 scope.go:117] "RemoveContainer" containerID="aac79371442850cad994c9e4cb25b92c5cb4ef8f3e9e7cbd47ed7dc0f33169a3"
Jan 21 12:34:34 crc kubenswrapper[4881]: I0121 12:34:34.049235 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vspzs"
Jan 21 12:34:34 crc kubenswrapper[4881]: I0121 12:34:34.082044 4881 scope.go:117] "RemoveContainer" containerID="bb827d0c521f710737507c54a40bb3151f05c5326264b9c349f57dd2e400b8ee"
Jan 21 12:34:34 crc kubenswrapper[4881]: I0121 12:34:34.108119 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vspzs"]
Jan 21 12:34:34 crc kubenswrapper[4881]: I0121 12:34:34.118524 4881 scope.go:117] "RemoveContainer" containerID="aa00cccb6838c50f5a442f098a65d1d05eece62c87a5fbef804e514a756f7f64"
Jan 21 12:34:34 crc kubenswrapper[4881]: I0121 12:34:34.119149 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-vspzs"]
Jan 21 12:34:35 crc kubenswrapper[4881]: I0121 12:34:35.323615 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6e7922fd-c90d-44be-924c-961055910625" path="/var/lib/kubelet/pods/6e7922fd-c90d-44be-924c-961055910625/volumes"
Jan 21 12:34:58 crc kubenswrapper[4881]: I0121 12:34:58.337761 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-x8pdp"]
Jan 21 12:34:58 crc kubenswrapper[4881]: E0121 12:34:58.339961 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e7922fd-c90d-44be-924c-961055910625" containerName="extract-utilities"
Jan 21 12:34:58 crc kubenswrapper[4881]: I0121 12:34:58.340061 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e7922fd-c90d-44be-924c-961055910625" containerName="extract-utilities"
Jan 21 12:34:58 crc kubenswrapper[4881]: E0121 12:34:58.340158 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e7922fd-c90d-44be-924c-961055910625" containerName="extract-content"
Jan 21 12:34:58 crc kubenswrapper[4881]: I0121 12:34:58.340261 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e7922fd-c90d-44be-924c-961055910625" containerName="extract-content"
Jan 21 12:34:58 crc kubenswrapper[4881]: E0121 12:34:58.340353 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e7922fd-c90d-44be-924c-961055910625" containerName="registry-server"
Jan 21 12:34:58 crc kubenswrapper[4881]: I0121 12:34:58.340429 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e7922fd-c90d-44be-924c-961055910625" containerName="registry-server"
Jan 21 12:34:58 crc kubenswrapper[4881]: I0121 12:34:58.340809 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e7922fd-c90d-44be-924c-961055910625" containerName="registry-server"
Jan 21 12:34:58 crc kubenswrapper[4881]: I0121 12:34:58.343064 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-x8pdp"
Jan 21 12:34:58 crc kubenswrapper[4881]: I0121 12:34:58.353486 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-x8pdp"]
Jan 21 12:34:58 crc kubenswrapper[4881]: I0121 12:34:58.491200 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/134cc2ce-d598-4f3e-8e4d-0d52621fa050-utilities\") pod \"redhat-operators-x8pdp\" (UID: \"134cc2ce-d598-4f3e-8e4d-0d52621fa050\") " pod="openshift-marketplace/redhat-operators-x8pdp"
Jan 21 12:34:58 crc kubenswrapper[4881]: I0121 12:34:58.491592 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cmf9k\" (UniqueName: \"kubernetes.io/projected/134cc2ce-d598-4f3e-8e4d-0d52621fa050-kube-api-access-cmf9k\") pod \"redhat-operators-x8pdp\" (UID: \"134cc2ce-d598-4f3e-8e4d-0d52621fa050\") " pod="openshift-marketplace/redhat-operators-x8pdp"
Jan 21 12:34:58 crc kubenswrapper[4881]: I0121 12:34:58.491944 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/134cc2ce-d598-4f3e-8e4d-0d52621fa050-catalog-content\") pod \"redhat-operators-x8pdp\" (UID: \"134cc2ce-d598-4f3e-8e4d-0d52621fa050\") " pod="openshift-marketplace/redhat-operators-x8pdp"
Jan 21 12:34:58 crc kubenswrapper[4881]: I0121 12:34:58.594600 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/134cc2ce-d598-4f3e-8e4d-0d52621fa050-utilities\") pod \"redhat-operators-x8pdp\" (UID: \"134cc2ce-d598-4f3e-8e4d-0d52621fa050\") " pod="openshift-marketplace/redhat-operators-x8pdp"
Jan 21 12:34:58 crc kubenswrapper[4881]: I0121 12:34:58.594671 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cmf9k\" (UniqueName: \"kubernetes.io/projected/134cc2ce-d598-4f3e-8e4d-0d52621fa050-kube-api-access-cmf9k\") pod \"redhat-operators-x8pdp\" (UID: \"134cc2ce-d598-4f3e-8e4d-0d52621fa050\") " pod="openshift-marketplace/redhat-operators-x8pdp"
Jan 21 12:34:58 crc kubenswrapper[4881]: I0121 12:34:58.594773 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/134cc2ce-d598-4f3e-8e4d-0d52621fa050-catalog-content\") pod \"redhat-operators-x8pdp\" (UID: \"134cc2ce-d598-4f3e-8e4d-0d52621fa050\") " pod="openshift-marketplace/redhat-operators-x8pdp"
Jan 21 12:34:58 crc kubenswrapper[4881]: I0121 12:34:58.595396 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/134cc2ce-d598-4f3e-8e4d-0d52621fa050-catalog-content\") pod \"redhat-operators-x8pdp\" (UID: \"134cc2ce-d598-4f3e-8e4d-0d52621fa050\") " pod="openshift-marketplace/redhat-operators-x8pdp"
Jan 21 12:34:58 crc kubenswrapper[4881]: I0121 12:34:58.595537 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/134cc2ce-d598-4f3e-8e4d-0d52621fa050-utilities\") pod \"redhat-operators-x8pdp\" (UID: \"134cc2ce-d598-4f3e-8e4d-0d52621fa050\") " pod="openshift-marketplace/redhat-operators-x8pdp"
Jan 21 12:34:58 crc kubenswrapper[4881]: I0121 12:34:58.615903 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-cmf9k\" (UniqueName: \"kubernetes.io/projected/134cc2ce-d598-4f3e-8e4d-0d52621fa050-kube-api-access-cmf9k\") pod \"redhat-operators-x8pdp\" (UID: \"134cc2ce-d598-4f3e-8e4d-0d52621fa050\") " pod="openshift-marketplace/redhat-operators-x8pdp" Jan 21 12:34:58 crc kubenswrapper[4881]: I0121 12:34:58.716541 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-x8pdp" Jan 21 12:34:59 crc kubenswrapper[4881]: I0121 12:34:59.246182 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-x8pdp"] Jan 21 12:34:59 crc kubenswrapper[4881]: I0121 12:34:59.404585 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x8pdp" event={"ID":"134cc2ce-d598-4f3e-8e4d-0d52621fa050","Type":"ContainerStarted","Data":"cb2b1cbb1fbd26965587ad7d26030f5cf1d51c84e4e2def7ab4d1253a5497981"} Jan 21 12:35:00 crc kubenswrapper[4881]: I0121 12:35:00.365025 4881 generic.go:334] "Generic (PLEG): container finished" podID="134cc2ce-d598-4f3e-8e4d-0d52621fa050" containerID="398b4f488091ff49ef189925e727269e874db2445ca7a8ddd47eaae69295ebfc" exitCode=0 Jan 21 12:35:00 crc kubenswrapper[4881]: I0121 12:35:00.365173 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x8pdp" event={"ID":"134cc2ce-d598-4f3e-8e4d-0d52621fa050","Type":"ContainerDied","Data":"398b4f488091ff49ef189925e727269e874db2445ca7a8ddd47eaae69295ebfc"} Jan 21 12:35:02 crc kubenswrapper[4881]: I0121 12:35:02.417282 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x8pdp" event={"ID":"134cc2ce-d598-4f3e-8e4d-0d52621fa050","Type":"ContainerStarted","Data":"1f290cadc3541924844870281ece658c9562f54d57a37cf220fdf38a5bf8d619"} Jan 21 12:35:05 crc kubenswrapper[4881]: I0121 12:35:05.610703 4881 generic.go:334] "Generic (PLEG): container finished" podID="134cc2ce-d598-4f3e-8e4d-0d52621fa050" containerID="1f290cadc3541924844870281ece658c9562f54d57a37cf220fdf38a5bf8d619" exitCode=0 Jan 21 12:35:05 crc kubenswrapper[4881]: I0121 12:35:05.610890 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x8pdp" event={"ID":"134cc2ce-d598-4f3e-8e4d-0d52621fa050","Type":"ContainerDied","Data":"1f290cadc3541924844870281ece658c9562f54d57a37cf220fdf38a5bf8d619"} Jan 21 12:35:07 crc kubenswrapper[4881]: I0121 12:35:07.635804 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x8pdp" event={"ID":"134cc2ce-d598-4f3e-8e4d-0d52621fa050","Type":"ContainerStarted","Data":"819e3d0f6e1c9842f15399003abf3a97022c2ca96f73b6e2e6bb6abc3c30b323"} Jan 21 12:35:07 crc kubenswrapper[4881]: I0121 12:35:07.662223 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-x8pdp" podStartSLOduration=3.285319415 podStartE2EDuration="9.662196441s" podCreationTimestamp="2026-01-21 12:34:58 +0000 UTC" firstStartedPulling="2026-01-21 12:35:00.367149038 +0000 UTC m=+5887.627105507" lastFinishedPulling="2026-01-21 12:35:06.744026044 +0000 UTC m=+5894.003982533" observedRunningTime="2026-01-21 12:35:07.660021727 +0000 UTC m=+5894.919978246" watchObservedRunningTime="2026-01-21 12:35:07.662196441 +0000 UTC m=+5894.922152950" Jan 21 12:35:08 crc kubenswrapper[4881]: I0121 12:35:08.717325 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-x8pdp" Jan 21 
12:35:08 crc kubenswrapper[4881]: I0121 12:35:08.717705 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-x8pdp" Jan 21 12:35:09 crc kubenswrapper[4881]: I0121 12:35:09.815536 4881 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-x8pdp" podUID="134cc2ce-d598-4f3e-8e4d-0d52621fa050" containerName="registry-server" probeResult="failure" output=< Jan 21 12:35:09 crc kubenswrapper[4881]: timeout: failed to connect service ":50051" within 1s Jan 21 12:35:09 crc kubenswrapper[4881]: > Jan 21 12:35:18 crc kubenswrapper[4881]: I0121 12:35:18.766186 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-x8pdp" Jan 21 12:35:18 crc kubenswrapper[4881]: I0121 12:35:18.860622 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-x8pdp" Jan 21 12:35:19 crc kubenswrapper[4881]: I0121 12:35:19.007123 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-x8pdp"] Jan 21 12:35:20 crc kubenswrapper[4881]: I0121 12:35:20.778975 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-x8pdp" podUID="134cc2ce-d598-4f3e-8e4d-0d52621fa050" containerName="registry-server" containerID="cri-o://819e3d0f6e1c9842f15399003abf3a97022c2ca96f73b6e2e6bb6abc3c30b323" gracePeriod=2 Jan 21 12:35:21 crc kubenswrapper[4881]: I0121 12:35:21.411868 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-x8pdp" Jan 21 12:35:21 crc kubenswrapper[4881]: I0121 12:35:21.438554 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/134cc2ce-d598-4f3e-8e4d-0d52621fa050-utilities\") pod \"134cc2ce-d598-4f3e-8e4d-0d52621fa050\" (UID: \"134cc2ce-d598-4f3e-8e4d-0d52621fa050\") " Jan 21 12:35:21 crc kubenswrapper[4881]: I0121 12:35:21.438659 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/134cc2ce-d598-4f3e-8e4d-0d52621fa050-catalog-content\") pod \"134cc2ce-d598-4f3e-8e4d-0d52621fa050\" (UID: \"134cc2ce-d598-4f3e-8e4d-0d52621fa050\") " Jan 21 12:35:21 crc kubenswrapper[4881]: I0121 12:35:21.438701 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cmf9k\" (UniqueName: \"kubernetes.io/projected/134cc2ce-d598-4f3e-8e4d-0d52621fa050-kube-api-access-cmf9k\") pod \"134cc2ce-d598-4f3e-8e4d-0d52621fa050\" (UID: \"134cc2ce-d598-4f3e-8e4d-0d52621fa050\") " Jan 21 12:35:21 crc kubenswrapper[4881]: I0121 12:35:21.439810 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/134cc2ce-d598-4f3e-8e4d-0d52621fa050-utilities" (OuterVolumeSpecName: "utilities") pod "134cc2ce-d598-4f3e-8e4d-0d52621fa050" (UID: "134cc2ce-d598-4f3e-8e4d-0d52621fa050"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 12:35:21 crc kubenswrapper[4881]: I0121 12:35:21.449352 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/134cc2ce-d598-4f3e-8e4d-0d52621fa050-kube-api-access-cmf9k" (OuterVolumeSpecName: "kube-api-access-cmf9k") pod "134cc2ce-d598-4f3e-8e4d-0d52621fa050" (UID: "134cc2ce-d598-4f3e-8e4d-0d52621fa050"). InnerVolumeSpecName "kube-api-access-cmf9k". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 12:35:21 crc kubenswrapper[4881]: I0121 12:35:21.540459 4881 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/134cc2ce-d598-4f3e-8e4d-0d52621fa050-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 12:35:21 crc kubenswrapper[4881]: I0121 12:35:21.540497 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cmf9k\" (UniqueName: \"kubernetes.io/projected/134cc2ce-d598-4f3e-8e4d-0d52621fa050-kube-api-access-cmf9k\") on node \"crc\" DevicePath \"\"" Jan 21 12:35:21 crc kubenswrapper[4881]: I0121 12:35:21.585858 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/134cc2ce-d598-4f3e-8e4d-0d52621fa050-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "134cc2ce-d598-4f3e-8e4d-0d52621fa050" (UID: "134cc2ce-d598-4f3e-8e4d-0d52621fa050"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 12:35:21 crc kubenswrapper[4881]: I0121 12:35:21.642525 4881 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/134cc2ce-d598-4f3e-8e4d-0d52621fa050-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 12:35:21 crc kubenswrapper[4881]: I0121 12:35:21.794665 4881 generic.go:334] "Generic (PLEG): container finished" podID="134cc2ce-d598-4f3e-8e4d-0d52621fa050" containerID="819e3d0f6e1c9842f15399003abf3a97022c2ca96f73b6e2e6bb6abc3c30b323" exitCode=0 Jan 21 12:35:21 crc kubenswrapper[4881]: I0121 12:35:21.794715 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x8pdp" event={"ID":"134cc2ce-d598-4f3e-8e4d-0d52621fa050","Type":"ContainerDied","Data":"819e3d0f6e1c9842f15399003abf3a97022c2ca96f73b6e2e6bb6abc3c30b323"} Jan 21 12:35:21 crc kubenswrapper[4881]: I0121 12:35:21.794749 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x8pdp" event={"ID":"134cc2ce-d598-4f3e-8e4d-0d52621fa050","Type":"ContainerDied","Data":"cb2b1cbb1fbd26965587ad7d26030f5cf1d51c84e4e2def7ab4d1253a5497981"} Jan 21 12:35:21 crc kubenswrapper[4881]: I0121 12:35:21.794765 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-x8pdp" Jan 21 12:35:21 crc kubenswrapper[4881]: I0121 12:35:21.794774 4881 scope.go:117] "RemoveContainer" containerID="819e3d0f6e1c9842f15399003abf3a97022c2ca96f73b6e2e6bb6abc3c30b323" Jan 21 12:35:21 crc kubenswrapper[4881]: I0121 12:35:21.825277 4881 scope.go:117] "RemoveContainer" containerID="1f290cadc3541924844870281ece658c9562f54d57a37cf220fdf38a5bf8d619" Jan 21 12:35:21 crc kubenswrapper[4881]: I0121 12:35:21.846012 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-x8pdp"] Jan 21 12:35:21 crc kubenswrapper[4881]: I0121 12:35:21.853720 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-x8pdp"] Jan 21 12:35:21 crc kubenswrapper[4881]: I0121 12:35:21.860962 4881 scope.go:117] "RemoveContainer" containerID="398b4f488091ff49ef189925e727269e874db2445ca7a8ddd47eaae69295ebfc" Jan 21 12:35:21 crc kubenswrapper[4881]: I0121 12:35:21.912635 4881 scope.go:117] "RemoveContainer" containerID="819e3d0f6e1c9842f15399003abf3a97022c2ca96f73b6e2e6bb6abc3c30b323" Jan 21 12:35:21 crc kubenswrapper[4881]: E0121 12:35:21.913132 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"819e3d0f6e1c9842f15399003abf3a97022c2ca96f73b6e2e6bb6abc3c30b323\": container with ID starting with 819e3d0f6e1c9842f15399003abf3a97022c2ca96f73b6e2e6bb6abc3c30b323 not found: ID does not exist" containerID="819e3d0f6e1c9842f15399003abf3a97022c2ca96f73b6e2e6bb6abc3c30b323" Jan 21 12:35:21 crc kubenswrapper[4881]: I0121 12:35:21.913187 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"819e3d0f6e1c9842f15399003abf3a97022c2ca96f73b6e2e6bb6abc3c30b323"} err="failed to get container status \"819e3d0f6e1c9842f15399003abf3a97022c2ca96f73b6e2e6bb6abc3c30b323\": rpc error: code = NotFound desc = could not find container \"819e3d0f6e1c9842f15399003abf3a97022c2ca96f73b6e2e6bb6abc3c30b323\": container with ID starting with 819e3d0f6e1c9842f15399003abf3a97022c2ca96f73b6e2e6bb6abc3c30b323 not found: ID does not exist" Jan 21 12:35:21 crc kubenswrapper[4881]: I0121 12:35:21.913222 4881 scope.go:117] "RemoveContainer" containerID="1f290cadc3541924844870281ece658c9562f54d57a37cf220fdf38a5bf8d619" Jan 21 12:35:21 crc kubenswrapper[4881]: E0121 12:35:21.913651 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1f290cadc3541924844870281ece658c9562f54d57a37cf220fdf38a5bf8d619\": container with ID starting with 1f290cadc3541924844870281ece658c9562f54d57a37cf220fdf38a5bf8d619 not found: ID does not exist" containerID="1f290cadc3541924844870281ece658c9562f54d57a37cf220fdf38a5bf8d619" Jan 21 12:35:21 crc kubenswrapper[4881]: I0121 12:35:21.913682 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1f290cadc3541924844870281ece658c9562f54d57a37cf220fdf38a5bf8d619"} err="failed to get container status \"1f290cadc3541924844870281ece658c9562f54d57a37cf220fdf38a5bf8d619\": rpc error: code = NotFound desc = could not find container \"1f290cadc3541924844870281ece658c9562f54d57a37cf220fdf38a5bf8d619\": container with ID starting with 1f290cadc3541924844870281ece658c9562f54d57a37cf220fdf38a5bf8d619 not found: ID does not exist" Jan 21 12:35:21 crc kubenswrapper[4881]: I0121 12:35:21.913704 4881 scope.go:117] "RemoveContainer" 
containerID="398b4f488091ff49ef189925e727269e874db2445ca7a8ddd47eaae69295ebfc" Jan 21 12:35:21 crc kubenswrapper[4881]: E0121 12:35:21.913939 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"398b4f488091ff49ef189925e727269e874db2445ca7a8ddd47eaae69295ebfc\": container with ID starting with 398b4f488091ff49ef189925e727269e874db2445ca7a8ddd47eaae69295ebfc not found: ID does not exist" containerID="398b4f488091ff49ef189925e727269e874db2445ca7a8ddd47eaae69295ebfc" Jan 21 12:35:21 crc kubenswrapper[4881]: I0121 12:35:21.913961 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"398b4f488091ff49ef189925e727269e874db2445ca7a8ddd47eaae69295ebfc"} err="failed to get container status \"398b4f488091ff49ef189925e727269e874db2445ca7a8ddd47eaae69295ebfc\": rpc error: code = NotFound desc = could not find container \"398b4f488091ff49ef189925e727269e874db2445ca7a8ddd47eaae69295ebfc\": container with ID starting with 398b4f488091ff49ef189925e727269e874db2445ca7a8ddd47eaae69295ebfc not found: ID does not exist" Jan 21 12:35:23 crc kubenswrapper[4881]: I0121 12:35:23.328085 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="134cc2ce-d598-4f3e-8e4d-0d52621fa050" path="/var/lib/kubelet/pods/134cc2ce-d598-4f3e-8e4d-0d52621fa050/volumes" Jan 21 12:35:29 crc kubenswrapper[4881]: I0121 12:35:29.850738 4881 patch_prober.go:28] interesting pod/machine-config-daemon-fb4fr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 12:35:29 crc kubenswrapper[4881]: I0121 12:35:29.851205 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 12:35:59 crc kubenswrapper[4881]: I0121 12:35:59.850692 4881 patch_prober.go:28] interesting pod/machine-config-daemon-fb4fr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 12:35:59 crc kubenswrapper[4881]: I0121 12:35:59.851267 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 12:36:29 crc kubenswrapper[4881]: I0121 12:36:29.851404 4881 patch_prober.go:28] interesting pod/machine-config-daemon-fb4fr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 12:36:29 crc kubenswrapper[4881]: I0121 12:36:29.853632 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 12:36:29 crc kubenswrapper[4881]: I0121 12:36:29.853839 4881 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" Jan 21 12:36:29 crc kubenswrapper[4881]: I0121 12:36:29.854763 4881 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"58cd0c5668032beb15f336f840c62d17fca5d4719530de4cd0b64bce55dc94e3"} pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 12:36:29 crc kubenswrapper[4881]: I0121 12:36:29.855034 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" containerName="machine-config-daemon" containerID="cri-o://58cd0c5668032beb15f336f840c62d17fca5d4719530de4cd0b64bce55dc94e3" gracePeriod=600 Jan 21 12:36:29 crc kubenswrapper[4881]: E0121 12:36:29.977451 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 12:36:30 crc kubenswrapper[4881]: I0121 12:36:30.701064 4881 generic.go:334] "Generic (PLEG): container finished" podID="3687b313-1df2-4274-80db-8c758b51bf2d" containerID="58cd0c5668032beb15f336f840c62d17fca5d4719530de4cd0b64bce55dc94e3" exitCode=0 Jan 21 12:36:30 crc kubenswrapper[4881]: I0121 12:36:30.701083 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" event={"ID":"3687b313-1df2-4274-80db-8c758b51bf2d","Type":"ContainerDied","Data":"58cd0c5668032beb15f336f840c62d17fca5d4719530de4cd0b64bce55dc94e3"} Jan 21 12:36:30 crc kubenswrapper[4881]: I0121 12:36:30.701198 4881 scope.go:117] "RemoveContainer" containerID="5ce4f2646890b2b0b35075452c84c9194c468c1e2e3c942d6c0c4679e67f5d4f" Jan 21 12:36:30 crc kubenswrapper[4881]: I0121 12:36:30.702122 4881 scope.go:117] "RemoveContainer" containerID="58cd0c5668032beb15f336f840c62d17fca5d4719530de4cd0b64bce55dc94e3" Jan 21 12:36:30 crc kubenswrapper[4881]: E0121 12:36:30.703085 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 12:36:34 crc kubenswrapper[4881]: I0121 12:36:34.507520 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-56jq6"] Jan 21 12:36:34 crc kubenswrapper[4881]: E0121 12:36:34.508587 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="134cc2ce-d598-4f3e-8e4d-0d52621fa050" containerName="extract-content" Jan 21 12:36:34 crc kubenswrapper[4881]: I0121 12:36:34.508602 4881 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="134cc2ce-d598-4f3e-8e4d-0d52621fa050" containerName="extract-content" Jan 21 12:36:34 crc kubenswrapper[4881]: E0121 12:36:34.508614 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="134cc2ce-d598-4f3e-8e4d-0d52621fa050" containerName="registry-server" Jan 21 12:36:34 crc kubenswrapper[4881]: I0121 12:36:34.508620 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="134cc2ce-d598-4f3e-8e4d-0d52621fa050" containerName="registry-server" Jan 21 12:36:34 crc kubenswrapper[4881]: E0121 12:36:34.508630 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="134cc2ce-d598-4f3e-8e4d-0d52621fa050" containerName="extract-utilities" Jan 21 12:36:34 crc kubenswrapper[4881]: I0121 12:36:34.508637 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="134cc2ce-d598-4f3e-8e4d-0d52621fa050" containerName="extract-utilities" Jan 21 12:36:34 crc kubenswrapper[4881]: I0121 12:36:34.508918 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="134cc2ce-d598-4f3e-8e4d-0d52621fa050" containerName="registry-server" Jan 21 12:36:34 crc kubenswrapper[4881]: I0121 12:36:34.510953 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-56jq6" Jan 21 12:36:34 crc kubenswrapper[4881]: I0121 12:36:34.519719 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-56jq6"] Jan 21 12:36:34 crc kubenswrapper[4881]: I0121 12:36:34.635207 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g7jtm\" (UniqueName: \"kubernetes.io/projected/8ef74d66-0c28-4544-849f-27a618c07f25-kube-api-access-g7jtm\") pod \"certified-operators-56jq6\" (UID: \"8ef74d66-0c28-4544-849f-27a618c07f25\") " pod="openshift-marketplace/certified-operators-56jq6" Jan 21 12:36:34 crc kubenswrapper[4881]: I0121 12:36:34.635440 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8ef74d66-0c28-4544-849f-27a618c07f25-utilities\") pod \"certified-operators-56jq6\" (UID: \"8ef74d66-0c28-4544-849f-27a618c07f25\") " pod="openshift-marketplace/certified-operators-56jq6" Jan 21 12:36:34 crc kubenswrapper[4881]: I0121 12:36:34.635851 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8ef74d66-0c28-4544-849f-27a618c07f25-catalog-content\") pod \"certified-operators-56jq6\" (UID: \"8ef74d66-0c28-4544-849f-27a618c07f25\") " pod="openshift-marketplace/certified-operators-56jq6" Jan 21 12:36:34 crc kubenswrapper[4881]: I0121 12:36:34.737975 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8ef74d66-0c28-4544-849f-27a618c07f25-utilities\") pod \"certified-operators-56jq6\" (UID: \"8ef74d66-0c28-4544-849f-27a618c07f25\") " pod="openshift-marketplace/certified-operators-56jq6" Jan 21 12:36:34 crc kubenswrapper[4881]: I0121 12:36:34.738115 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8ef74d66-0c28-4544-849f-27a618c07f25-catalog-content\") pod \"certified-operators-56jq6\" (UID: \"8ef74d66-0c28-4544-849f-27a618c07f25\") " pod="openshift-marketplace/certified-operators-56jq6" Jan 21 12:36:34 crc kubenswrapper[4881]: I0121 12:36:34.738294 4881 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g7jtm\" (UniqueName: \"kubernetes.io/projected/8ef74d66-0c28-4544-849f-27a618c07f25-kube-api-access-g7jtm\") pod \"certified-operators-56jq6\" (UID: \"8ef74d66-0c28-4544-849f-27a618c07f25\") " pod="openshift-marketplace/certified-operators-56jq6" Jan 21 12:36:34 crc kubenswrapper[4881]: I0121 12:36:34.738567 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8ef74d66-0c28-4544-849f-27a618c07f25-catalog-content\") pod \"certified-operators-56jq6\" (UID: \"8ef74d66-0c28-4544-849f-27a618c07f25\") " pod="openshift-marketplace/certified-operators-56jq6" Jan 21 12:36:34 crc kubenswrapper[4881]: I0121 12:36:34.738763 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8ef74d66-0c28-4544-849f-27a618c07f25-utilities\") pod \"certified-operators-56jq6\" (UID: \"8ef74d66-0c28-4544-849f-27a618c07f25\") " pod="openshift-marketplace/certified-operators-56jq6" Jan 21 12:36:34 crc kubenswrapper[4881]: I0121 12:36:34.758712 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g7jtm\" (UniqueName: \"kubernetes.io/projected/8ef74d66-0c28-4544-849f-27a618c07f25-kube-api-access-g7jtm\") pod \"certified-operators-56jq6\" (UID: \"8ef74d66-0c28-4544-849f-27a618c07f25\") " pod="openshift-marketplace/certified-operators-56jq6" Jan 21 12:36:34 crc kubenswrapper[4881]: I0121 12:36:34.835170 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-56jq6" Jan 21 12:36:35 crc kubenswrapper[4881]: I0121 12:36:35.354572 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-56jq6"] Jan 21 12:36:35 crc kubenswrapper[4881]: I0121 12:36:35.774558 4881 generic.go:334] "Generic (PLEG): container finished" podID="8ef74d66-0c28-4544-849f-27a618c07f25" containerID="94358427c0b7aad8c60ccf1f15d3a5bdd6fe48a1d0ce0fffd39e8e43512aae28" exitCode=0 Jan 21 12:36:35 crc kubenswrapper[4881]: I0121 12:36:35.774649 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-56jq6" event={"ID":"8ef74d66-0c28-4544-849f-27a618c07f25","Type":"ContainerDied","Data":"94358427c0b7aad8c60ccf1f15d3a5bdd6fe48a1d0ce0fffd39e8e43512aae28"} Jan 21 12:36:35 crc kubenswrapper[4881]: I0121 12:36:35.775093 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-56jq6" event={"ID":"8ef74d66-0c28-4544-849f-27a618c07f25","Type":"ContainerStarted","Data":"7c991126e180b43f8ed8051ea2a401c78bffed23d0b8cb311f41cf189fbd2dfa"} Jan 21 12:36:36 crc kubenswrapper[4881]: I0121 12:36:36.789212 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-56jq6" event={"ID":"8ef74d66-0c28-4544-849f-27a618c07f25","Type":"ContainerStarted","Data":"d37189c03972c86a5249beff3ff66068254eecbcbd8f696c02ec91aab34478d7"} Jan 21 12:36:37 crc kubenswrapper[4881]: I0121 12:36:37.801487 4881 generic.go:334] "Generic (PLEG): container finished" podID="8ef74d66-0c28-4544-849f-27a618c07f25" containerID="d37189c03972c86a5249beff3ff66068254eecbcbd8f696c02ec91aab34478d7" exitCode=0 Jan 21 12:36:37 crc kubenswrapper[4881]: I0121 12:36:37.801732 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-56jq6" 
event={"ID":"8ef74d66-0c28-4544-849f-27a618c07f25","Type":"ContainerDied","Data":"d37189c03972c86a5249beff3ff66068254eecbcbd8f696c02ec91aab34478d7"} Jan 21 12:36:38 crc kubenswrapper[4881]: I0121 12:36:38.816713 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-56jq6" event={"ID":"8ef74d66-0c28-4544-849f-27a618c07f25","Type":"ContainerStarted","Data":"b23b18c80bd46c2b1574da5ddf36ca2de500862eaba1c7c8da6864b9043b3793"} Jan 21 12:36:38 crc kubenswrapper[4881]: I0121 12:36:38.849599 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-56jq6" podStartSLOduration=2.414744082 podStartE2EDuration="4.849570889s" podCreationTimestamp="2026-01-21 12:36:34 +0000 UTC" firstStartedPulling="2026-01-21 12:36:35.778198326 +0000 UTC m=+5983.038154805" lastFinishedPulling="2026-01-21 12:36:38.213025143 +0000 UTC m=+5985.472981612" observedRunningTime="2026-01-21 12:36:38.838924019 +0000 UTC m=+5986.098880498" watchObservedRunningTime="2026-01-21 12:36:38.849570889 +0000 UTC m=+5986.109527368" Jan 21 12:36:44 crc kubenswrapper[4881]: I0121 12:36:44.311817 4881 scope.go:117] "RemoveContainer" containerID="58cd0c5668032beb15f336f840c62d17fca5d4719530de4cd0b64bce55dc94e3" Jan 21 12:36:44 crc kubenswrapper[4881]: E0121 12:36:44.312936 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 12:36:44 crc kubenswrapper[4881]: I0121 12:36:44.836231 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-56jq6" Jan 21 12:36:44 crc kubenswrapper[4881]: I0121 12:36:44.836307 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-56jq6" Jan 21 12:36:44 crc kubenswrapper[4881]: I0121 12:36:44.884637 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-56jq6" Jan 21 12:36:44 crc kubenswrapper[4881]: I0121 12:36:44.934282 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-56jq6" Jan 21 12:36:45 crc kubenswrapper[4881]: I0121 12:36:45.131330 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-56jq6"] Jan 21 12:36:46 crc kubenswrapper[4881]: I0121 12:36:46.902482 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-56jq6" podUID="8ef74d66-0c28-4544-849f-27a618c07f25" containerName="registry-server" containerID="cri-o://b23b18c80bd46c2b1574da5ddf36ca2de500862eaba1c7c8da6864b9043b3793" gracePeriod=2 Jan 21 12:36:47 crc kubenswrapper[4881]: I0121 12:36:47.919932 4881 generic.go:334] "Generic (PLEG): container finished" podID="8ef74d66-0c28-4544-849f-27a618c07f25" containerID="b23b18c80bd46c2b1574da5ddf36ca2de500862eaba1c7c8da6864b9043b3793" exitCode=0 Jan 21 12:36:47 crc kubenswrapper[4881]: I0121 12:36:47.920006 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-56jq6" 
event={"ID":"8ef74d66-0c28-4544-849f-27a618c07f25","Type":"ContainerDied","Data":"b23b18c80bd46c2b1574da5ddf36ca2de500862eaba1c7c8da6864b9043b3793"} Jan 21 12:36:47 crc kubenswrapper[4881]: I0121 12:36:47.920517 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-56jq6" event={"ID":"8ef74d66-0c28-4544-849f-27a618c07f25","Type":"ContainerDied","Data":"7c991126e180b43f8ed8051ea2a401c78bffed23d0b8cb311f41cf189fbd2dfa"} Jan 21 12:36:47 crc kubenswrapper[4881]: I0121 12:36:47.920537 4881 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7c991126e180b43f8ed8051ea2a401c78bffed23d0b8cb311f41cf189fbd2dfa" Jan 21 12:36:47 crc kubenswrapper[4881]: I0121 12:36:47.922175 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-56jq6" Jan 21 12:36:48 crc kubenswrapper[4881]: I0121 12:36:48.086207 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8ef74d66-0c28-4544-849f-27a618c07f25-utilities\") pod \"8ef74d66-0c28-4544-849f-27a618c07f25\" (UID: \"8ef74d66-0c28-4544-849f-27a618c07f25\") " Jan 21 12:36:48 crc kubenswrapper[4881]: I0121 12:36:48.086448 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8ef74d66-0c28-4544-849f-27a618c07f25-catalog-content\") pod \"8ef74d66-0c28-4544-849f-27a618c07f25\" (UID: \"8ef74d66-0c28-4544-849f-27a618c07f25\") " Jan 21 12:36:48 crc kubenswrapper[4881]: I0121 12:36:48.086600 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g7jtm\" (UniqueName: \"kubernetes.io/projected/8ef74d66-0c28-4544-849f-27a618c07f25-kube-api-access-g7jtm\") pod \"8ef74d66-0c28-4544-849f-27a618c07f25\" (UID: \"8ef74d66-0c28-4544-849f-27a618c07f25\") " Jan 21 12:36:48 crc kubenswrapper[4881]: I0121 12:36:48.087251 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8ef74d66-0c28-4544-849f-27a618c07f25-utilities" (OuterVolumeSpecName: "utilities") pod "8ef74d66-0c28-4544-849f-27a618c07f25" (UID: "8ef74d66-0c28-4544-849f-27a618c07f25"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 12:36:48 crc kubenswrapper[4881]: I0121 12:36:48.092633 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8ef74d66-0c28-4544-849f-27a618c07f25-kube-api-access-g7jtm" (OuterVolumeSpecName: "kube-api-access-g7jtm") pod "8ef74d66-0c28-4544-849f-27a618c07f25" (UID: "8ef74d66-0c28-4544-849f-27a618c07f25"). InnerVolumeSpecName "kube-api-access-g7jtm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 12:36:48 crc kubenswrapper[4881]: I0121 12:36:48.142851 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8ef74d66-0c28-4544-849f-27a618c07f25-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8ef74d66-0c28-4544-849f-27a618c07f25" (UID: "8ef74d66-0c28-4544-849f-27a618c07f25"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 12:36:48 crc kubenswrapper[4881]: I0121 12:36:48.188955 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g7jtm\" (UniqueName: \"kubernetes.io/projected/8ef74d66-0c28-4544-849f-27a618c07f25-kube-api-access-g7jtm\") on node \"crc\" DevicePath \"\"" Jan 21 12:36:48 crc kubenswrapper[4881]: I0121 12:36:48.188983 4881 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8ef74d66-0c28-4544-849f-27a618c07f25-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 12:36:48 crc kubenswrapper[4881]: I0121 12:36:48.188993 4881 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8ef74d66-0c28-4544-849f-27a618c07f25-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 12:36:48 crc kubenswrapper[4881]: I0121 12:36:48.938858 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-56jq6" Jan 21 12:36:48 crc kubenswrapper[4881]: I0121 12:36:48.998976 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-56jq6"] Jan 21 12:36:49 crc kubenswrapper[4881]: I0121 12:36:49.007043 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-56jq6"] Jan 21 12:36:49 crc kubenswrapper[4881]: I0121 12:36:49.324239 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8ef74d66-0c28-4544-849f-27a618c07f25" path="/var/lib/kubelet/pods/8ef74d66-0c28-4544-849f-27a618c07f25/volumes" Jan 21 12:36:57 crc kubenswrapper[4881]: I0121 12:36:57.312259 4881 scope.go:117] "RemoveContainer" containerID="58cd0c5668032beb15f336f840c62d17fca5d4719530de4cd0b64bce55dc94e3" Jan 21 12:36:57 crc kubenswrapper[4881]: E0121 12:36:57.315298 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 12:37:09 crc kubenswrapper[4881]: I0121 12:37:09.568208 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-volume-nfs-0" podUID="8c912ca5-a82b-4083-8579-f0f6f506eebb" containerName="cinder-volume" probeResult="failure" output="Get \"http://10.217.1.6:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 21 12:37:11 crc kubenswrapper[4881]: I0121 12:37:11.323229 4881 scope.go:117] "RemoveContainer" containerID="58cd0c5668032beb15f336f840c62d17fca5d4719530de4cd0b64bce55dc94e3" Jan 21 12:37:11 crc kubenswrapper[4881]: E0121 12:37:11.338627 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 12:37:22 crc kubenswrapper[4881]: I0121 12:37:22.311038 4881 scope.go:117] "RemoveContainer" containerID="58cd0c5668032beb15f336f840c62d17fca5d4719530de4cd0b64bce55dc94e3" 
Jan 21 12:37:22 crc kubenswrapper[4881]: E0121 12:37:22.311956 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 12:37:37 crc kubenswrapper[4881]: I0121 12:37:37.311830 4881 scope.go:117] "RemoveContainer" containerID="58cd0c5668032beb15f336f840c62d17fca5d4719530de4cd0b64bce55dc94e3" Jan 21 12:37:37 crc kubenswrapper[4881]: E0121 12:37:37.313163 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 12:37:48 crc kubenswrapper[4881]: I0121 12:37:48.311186 4881 scope.go:117] "RemoveContainer" containerID="58cd0c5668032beb15f336f840c62d17fca5d4719530de4cd0b64bce55dc94e3" Jan 21 12:37:48 crc kubenswrapper[4881]: E0121 12:37:48.312781 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 12:37:51 crc kubenswrapper[4881]: I0121 12:37:51.928011 4881 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/ironic-operator-controller-manager-78757b4889-5qcms" podUID="d0cafd1d-5f37-499a-a531-547a137aae21" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.82:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 21 12:38:03 crc kubenswrapper[4881]: I0121 12:38:03.322847 4881 scope.go:117] "RemoveContainer" containerID="58cd0c5668032beb15f336f840c62d17fca5d4719530de4cd0b64bce55dc94e3" Jan 21 12:38:03 crc kubenswrapper[4881]: E0121 12:38:03.323769 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 12:38:17 crc kubenswrapper[4881]: I0121 12:38:17.311447 4881 scope.go:117] "RemoveContainer" containerID="58cd0c5668032beb15f336f840c62d17fca5d4719530de4cd0b64bce55dc94e3" Jan 21 12:38:17 crc kubenswrapper[4881]: E0121 12:38:17.312666 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" 
podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 12:38:30 crc kubenswrapper[4881]: I0121 12:38:30.310516 4881 scope.go:117] "RemoveContainer" containerID="58cd0c5668032beb15f336f840c62d17fca5d4719530de4cd0b64bce55dc94e3" Jan 21 12:38:30 crc kubenswrapper[4881]: E0121 12:38:30.312641 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 12:38:43 crc kubenswrapper[4881]: I0121 12:38:43.319732 4881 scope.go:117] "RemoveContainer" containerID="58cd0c5668032beb15f336f840c62d17fca5d4719530de4cd0b64bce55dc94e3" Jan 21 12:38:43 crc kubenswrapper[4881]: E0121 12:38:43.320651 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 12:38:54 crc kubenswrapper[4881]: I0121 12:38:54.312824 4881 scope.go:117] "RemoveContainer" containerID="58cd0c5668032beb15f336f840c62d17fca5d4719530de4cd0b64bce55dc94e3" Jan 21 12:38:54 crc kubenswrapper[4881]: E0121 12:38:54.314937 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 12:39:05 crc kubenswrapper[4881]: I0121 12:39:05.311593 4881 scope.go:117] "RemoveContainer" containerID="58cd0c5668032beb15f336f840c62d17fca5d4719530de4cd0b64bce55dc94e3" Jan 21 12:39:05 crc kubenswrapper[4881]: E0121 12:39:05.312988 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 12:39:06 crc kubenswrapper[4881]: I0121 12:39:06.230654 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-s6n4b"] Jan 21 12:39:06 crc kubenswrapper[4881]: E0121 12:39:06.231596 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ef74d66-0c28-4544-849f-27a618c07f25" containerName="extract-utilities" Jan 21 12:39:06 crc kubenswrapper[4881]: I0121 12:39:06.231633 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ef74d66-0c28-4544-849f-27a618c07f25" containerName="extract-utilities" Jan 21 12:39:06 crc kubenswrapper[4881]: E0121 12:39:06.231676 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ef74d66-0c28-4544-849f-27a618c07f25" containerName="registry-server" Jan 21 12:39:06 crc kubenswrapper[4881]: 
I0121 12:39:06.231689 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ef74d66-0c28-4544-849f-27a618c07f25" containerName="registry-server" Jan 21 12:39:06 crc kubenswrapper[4881]: E0121 12:39:06.231733 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ef74d66-0c28-4544-849f-27a618c07f25" containerName="extract-content" Jan 21 12:39:06 crc kubenswrapper[4881]: I0121 12:39:06.231751 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ef74d66-0c28-4544-849f-27a618c07f25" containerName="extract-content" Jan 21 12:39:06 crc kubenswrapper[4881]: I0121 12:39:06.232178 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ef74d66-0c28-4544-849f-27a618c07f25" containerName="registry-server" Jan 21 12:39:06 crc kubenswrapper[4881]: I0121 12:39:06.235090 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-s6n4b" Jan 21 12:39:06 crc kubenswrapper[4881]: I0121 12:39:06.245937 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-s6n4b"] Jan 21 12:39:06 crc kubenswrapper[4881]: I0121 12:39:06.442129 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wfjw5\" (UniqueName: \"kubernetes.io/projected/7456574a-75d3-47a1-a584-c552d4806d47-kube-api-access-wfjw5\") pod \"community-operators-s6n4b\" (UID: \"7456574a-75d3-47a1-a584-c552d4806d47\") " pod="openshift-marketplace/community-operators-s6n4b" Jan 21 12:39:06 crc kubenswrapper[4881]: I0121 12:39:06.442548 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7456574a-75d3-47a1-a584-c552d4806d47-utilities\") pod \"community-operators-s6n4b\" (UID: \"7456574a-75d3-47a1-a584-c552d4806d47\") " pod="openshift-marketplace/community-operators-s6n4b" Jan 21 12:39:06 crc kubenswrapper[4881]: I0121 12:39:06.443486 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7456574a-75d3-47a1-a584-c552d4806d47-catalog-content\") pod \"community-operators-s6n4b\" (UID: \"7456574a-75d3-47a1-a584-c552d4806d47\") " pod="openshift-marketplace/community-operators-s6n4b" Jan 21 12:39:06 crc kubenswrapper[4881]: I0121 12:39:06.544266 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7456574a-75d3-47a1-a584-c552d4806d47-utilities\") pod \"community-operators-s6n4b\" (UID: \"7456574a-75d3-47a1-a584-c552d4806d47\") " pod="openshift-marketplace/community-operators-s6n4b" Jan 21 12:39:06 crc kubenswrapper[4881]: I0121 12:39:06.544415 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7456574a-75d3-47a1-a584-c552d4806d47-catalog-content\") pod \"community-operators-s6n4b\" (UID: \"7456574a-75d3-47a1-a584-c552d4806d47\") " pod="openshift-marketplace/community-operators-s6n4b" Jan 21 12:39:06 crc kubenswrapper[4881]: I0121 12:39:06.544478 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wfjw5\" (UniqueName: \"kubernetes.io/projected/7456574a-75d3-47a1-a584-c552d4806d47-kube-api-access-wfjw5\") pod \"community-operators-s6n4b\" (UID: \"7456574a-75d3-47a1-a584-c552d4806d47\") " 
pod="openshift-marketplace/community-operators-s6n4b" Jan 21 12:39:06 crc kubenswrapper[4881]: I0121 12:39:06.544852 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7456574a-75d3-47a1-a584-c552d4806d47-utilities\") pod \"community-operators-s6n4b\" (UID: \"7456574a-75d3-47a1-a584-c552d4806d47\") " pod="openshift-marketplace/community-operators-s6n4b" Jan 21 12:39:06 crc kubenswrapper[4881]: I0121 12:39:06.544986 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7456574a-75d3-47a1-a584-c552d4806d47-catalog-content\") pod \"community-operators-s6n4b\" (UID: \"7456574a-75d3-47a1-a584-c552d4806d47\") " pod="openshift-marketplace/community-operators-s6n4b" Jan 21 12:39:06 crc kubenswrapper[4881]: I0121 12:39:06.562544 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wfjw5\" (UniqueName: \"kubernetes.io/projected/7456574a-75d3-47a1-a584-c552d4806d47-kube-api-access-wfjw5\") pod \"community-operators-s6n4b\" (UID: \"7456574a-75d3-47a1-a584-c552d4806d47\") " pod="openshift-marketplace/community-operators-s6n4b" Jan 21 12:39:06 crc kubenswrapper[4881]: I0121 12:39:06.570162 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-s6n4b" Jan 21 12:39:07 crc kubenswrapper[4881]: I0121 12:39:07.146882 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-s6n4b"] Jan 21 12:39:07 crc kubenswrapper[4881]: I0121 12:39:07.702748 4881 generic.go:334] "Generic (PLEG): container finished" podID="7456574a-75d3-47a1-a584-c552d4806d47" containerID="74729fa5cab0891d63e4e5947225d9594300869de530f248fcfa19b346e40c61" exitCode=0 Jan 21 12:39:07 crc kubenswrapper[4881]: I0121 12:39:07.702807 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s6n4b" event={"ID":"7456574a-75d3-47a1-a584-c552d4806d47","Type":"ContainerDied","Data":"74729fa5cab0891d63e4e5947225d9594300869de530f248fcfa19b346e40c61"} Jan 21 12:39:07 crc kubenswrapper[4881]: I0121 12:39:07.703023 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s6n4b" event={"ID":"7456574a-75d3-47a1-a584-c552d4806d47","Type":"ContainerStarted","Data":"ba52beca2b7b22a072d2fac530ed6a3181fc174a60547351bd072b0dd6060fd0"} Jan 21 12:39:08 crc kubenswrapper[4881]: I0121 12:39:08.720883 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s6n4b" event={"ID":"7456574a-75d3-47a1-a584-c552d4806d47","Type":"ContainerStarted","Data":"8f1a88a741efb50c9477fba65c0c2eb70c9c999b88931bb0d8c95e3c35ef4e92"} Jan 21 12:39:09 crc kubenswrapper[4881]: I0121 12:39:09.734601 4881 generic.go:334] "Generic (PLEG): container finished" podID="7456574a-75d3-47a1-a584-c552d4806d47" containerID="8f1a88a741efb50c9477fba65c0c2eb70c9c999b88931bb0d8c95e3c35ef4e92" exitCode=0 Jan 21 12:39:09 crc kubenswrapper[4881]: I0121 12:39:09.734675 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s6n4b" event={"ID":"7456574a-75d3-47a1-a584-c552d4806d47","Type":"ContainerDied","Data":"8f1a88a741efb50c9477fba65c0c2eb70c9c999b88931bb0d8c95e3c35ef4e92"} Jan 21 12:39:10 crc kubenswrapper[4881]: I0121 12:39:10.747899 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-s6n4b" event={"ID":"7456574a-75d3-47a1-a584-c552d4806d47","Type":"ContainerStarted","Data":"2b7974a76c2489b8201ce7ddb4f20b05c989f1e01d022eb6721825ad74d4ebca"} Jan 21 12:39:10 crc kubenswrapper[4881]: I0121 12:39:10.779470 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-s6n4b" podStartSLOduration=2.29504583 podStartE2EDuration="4.779443121s" podCreationTimestamp="2026-01-21 12:39:06 +0000 UTC" firstStartedPulling="2026-01-21 12:39:07.704744586 +0000 UTC m=+6134.964701045" lastFinishedPulling="2026-01-21 12:39:10.189141867 +0000 UTC m=+6137.449098336" observedRunningTime="2026-01-21 12:39:10.772890501 +0000 UTC m=+6138.032847070" watchObservedRunningTime="2026-01-21 12:39:10.779443121 +0000 UTC m=+6138.039399630" Jan 21 12:39:16 crc kubenswrapper[4881]: I0121 12:39:16.311510 4881 scope.go:117] "RemoveContainer" containerID="58cd0c5668032beb15f336f840c62d17fca5d4719530de4cd0b64bce55dc94e3" Jan 21 12:39:16 crc kubenswrapper[4881]: E0121 12:39:16.312832 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 12:39:16 crc kubenswrapper[4881]: I0121 12:39:16.571023 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-s6n4b" Jan 21 12:39:16 crc kubenswrapper[4881]: I0121 12:39:16.571081 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-s6n4b" Jan 21 12:39:16 crc kubenswrapper[4881]: I0121 12:39:16.623158 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-s6n4b" Jan 21 12:39:16 crc kubenswrapper[4881]: I0121 12:39:16.862614 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-s6n4b" Jan 21 12:39:16 crc kubenswrapper[4881]: I0121 12:39:16.922493 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-s6n4b"] Jan 21 12:39:18 crc kubenswrapper[4881]: I0121 12:39:18.835835 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-s6n4b" podUID="7456574a-75d3-47a1-a584-c552d4806d47" containerName="registry-server" containerID="cri-o://2b7974a76c2489b8201ce7ddb4f20b05c989f1e01d022eb6721825ad74d4ebca" gracePeriod=2 Jan 21 12:39:19 crc kubenswrapper[4881]: I0121 12:39:19.395972 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-s6n4b" Jan 21 12:39:19 crc kubenswrapper[4881]: I0121 12:39:19.507876 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7456574a-75d3-47a1-a584-c552d4806d47-utilities\") pod \"7456574a-75d3-47a1-a584-c552d4806d47\" (UID: \"7456574a-75d3-47a1-a584-c552d4806d47\") " Jan 21 12:39:19 crc kubenswrapper[4881]: I0121 12:39:19.508097 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wfjw5\" (UniqueName: \"kubernetes.io/projected/7456574a-75d3-47a1-a584-c552d4806d47-kube-api-access-wfjw5\") pod \"7456574a-75d3-47a1-a584-c552d4806d47\" (UID: \"7456574a-75d3-47a1-a584-c552d4806d47\") " Jan 21 12:39:19 crc kubenswrapper[4881]: I0121 12:39:19.508123 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7456574a-75d3-47a1-a584-c552d4806d47-catalog-content\") pod \"7456574a-75d3-47a1-a584-c552d4806d47\" (UID: \"7456574a-75d3-47a1-a584-c552d4806d47\") " Jan 21 12:39:19 crc kubenswrapper[4881]: I0121 12:39:19.508627 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7456574a-75d3-47a1-a584-c552d4806d47-utilities" (OuterVolumeSpecName: "utilities") pod "7456574a-75d3-47a1-a584-c552d4806d47" (UID: "7456574a-75d3-47a1-a584-c552d4806d47"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 12:39:19 crc kubenswrapper[4881]: I0121 12:39:19.513900 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7456574a-75d3-47a1-a584-c552d4806d47-kube-api-access-wfjw5" (OuterVolumeSpecName: "kube-api-access-wfjw5") pod "7456574a-75d3-47a1-a584-c552d4806d47" (UID: "7456574a-75d3-47a1-a584-c552d4806d47"). InnerVolumeSpecName "kube-api-access-wfjw5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 12:39:19 crc kubenswrapper[4881]: I0121 12:39:19.590540 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7456574a-75d3-47a1-a584-c552d4806d47-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7456574a-75d3-47a1-a584-c552d4806d47" (UID: "7456574a-75d3-47a1-a584-c552d4806d47"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 12:39:19 crc kubenswrapper[4881]: I0121 12:39:19.610743 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wfjw5\" (UniqueName: \"kubernetes.io/projected/7456574a-75d3-47a1-a584-c552d4806d47-kube-api-access-wfjw5\") on node \"crc\" DevicePath \"\"" Jan 21 12:39:19 crc kubenswrapper[4881]: I0121 12:39:19.610775 4881 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7456574a-75d3-47a1-a584-c552d4806d47-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 12:39:19 crc kubenswrapper[4881]: I0121 12:39:19.610803 4881 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7456574a-75d3-47a1-a584-c552d4806d47-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 12:39:19 crc kubenswrapper[4881]: I0121 12:39:19.846960 4881 generic.go:334] "Generic (PLEG): container finished" podID="7456574a-75d3-47a1-a584-c552d4806d47" containerID="2b7974a76c2489b8201ce7ddb4f20b05c989f1e01d022eb6721825ad74d4ebca" exitCode=0 Jan 21 12:39:19 crc kubenswrapper[4881]: I0121 12:39:19.847014 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s6n4b" event={"ID":"7456574a-75d3-47a1-a584-c552d4806d47","Type":"ContainerDied","Data":"2b7974a76c2489b8201ce7ddb4f20b05c989f1e01d022eb6721825ad74d4ebca"} Jan 21 12:39:19 crc kubenswrapper[4881]: I0121 12:39:19.847021 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-s6n4b" Jan 21 12:39:19 crc kubenswrapper[4881]: I0121 12:39:19.847056 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s6n4b" event={"ID":"7456574a-75d3-47a1-a584-c552d4806d47","Type":"ContainerDied","Data":"ba52beca2b7b22a072d2fac530ed6a3181fc174a60547351bd072b0dd6060fd0"} Jan 21 12:39:19 crc kubenswrapper[4881]: I0121 12:39:19.847081 4881 scope.go:117] "RemoveContainer" containerID="2b7974a76c2489b8201ce7ddb4f20b05c989f1e01d022eb6721825ad74d4ebca" Jan 21 12:39:19 crc kubenswrapper[4881]: I0121 12:39:19.881030 4881 scope.go:117] "RemoveContainer" containerID="8f1a88a741efb50c9477fba65c0c2eb70c9c999b88931bb0d8c95e3c35ef4e92" Jan 21 12:39:19 crc kubenswrapper[4881]: I0121 12:39:19.908577 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-s6n4b"] Jan 21 12:39:19 crc kubenswrapper[4881]: I0121 12:39:19.920247 4881 scope.go:117] "RemoveContainer" containerID="74729fa5cab0891d63e4e5947225d9594300869de530f248fcfa19b346e40c61" Jan 21 12:39:19 crc kubenswrapper[4881]: I0121 12:39:19.923983 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-s6n4b"] Jan 21 12:39:19 crc kubenswrapper[4881]: I0121 12:39:19.969836 4881 scope.go:117] "RemoveContainer" containerID="2b7974a76c2489b8201ce7ddb4f20b05c989f1e01d022eb6721825ad74d4ebca" Jan 21 12:39:19 crc kubenswrapper[4881]: E0121 12:39:19.970629 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2b7974a76c2489b8201ce7ddb4f20b05c989f1e01d022eb6721825ad74d4ebca\": container with ID starting with 2b7974a76c2489b8201ce7ddb4f20b05c989f1e01d022eb6721825ad74d4ebca not found: ID does not exist" containerID="2b7974a76c2489b8201ce7ddb4f20b05c989f1e01d022eb6721825ad74d4ebca" Jan 21 12:39:19 crc kubenswrapper[4881]: I0121 12:39:19.970692 
4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2b7974a76c2489b8201ce7ddb4f20b05c989f1e01d022eb6721825ad74d4ebca"} err="failed to get container status \"2b7974a76c2489b8201ce7ddb4f20b05c989f1e01d022eb6721825ad74d4ebca\": rpc error: code = NotFound desc = could not find container \"2b7974a76c2489b8201ce7ddb4f20b05c989f1e01d022eb6721825ad74d4ebca\": container with ID starting with 2b7974a76c2489b8201ce7ddb4f20b05c989f1e01d022eb6721825ad74d4ebca not found: ID does not exist" Jan 21 12:39:19 crc kubenswrapper[4881]: I0121 12:39:19.970731 4881 scope.go:117] "RemoveContainer" containerID="8f1a88a741efb50c9477fba65c0c2eb70c9c999b88931bb0d8c95e3c35ef4e92" Jan 21 12:39:19 crc kubenswrapper[4881]: E0121 12:39:19.971380 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8f1a88a741efb50c9477fba65c0c2eb70c9c999b88931bb0d8c95e3c35ef4e92\": container with ID starting with 8f1a88a741efb50c9477fba65c0c2eb70c9c999b88931bb0d8c95e3c35ef4e92 not found: ID does not exist" containerID="8f1a88a741efb50c9477fba65c0c2eb70c9c999b88931bb0d8c95e3c35ef4e92" Jan 21 12:39:19 crc kubenswrapper[4881]: I0121 12:39:19.971453 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8f1a88a741efb50c9477fba65c0c2eb70c9c999b88931bb0d8c95e3c35ef4e92"} err="failed to get container status \"8f1a88a741efb50c9477fba65c0c2eb70c9c999b88931bb0d8c95e3c35ef4e92\": rpc error: code = NotFound desc = could not find container \"8f1a88a741efb50c9477fba65c0c2eb70c9c999b88931bb0d8c95e3c35ef4e92\": container with ID starting with 8f1a88a741efb50c9477fba65c0c2eb70c9c999b88931bb0d8c95e3c35ef4e92 not found: ID does not exist" Jan 21 12:39:19 crc kubenswrapper[4881]: I0121 12:39:19.971497 4881 scope.go:117] "RemoveContainer" containerID="74729fa5cab0891d63e4e5947225d9594300869de530f248fcfa19b346e40c61" Jan 21 12:39:19 crc kubenswrapper[4881]: E0121 12:39:19.971944 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"74729fa5cab0891d63e4e5947225d9594300869de530f248fcfa19b346e40c61\": container with ID starting with 74729fa5cab0891d63e4e5947225d9594300869de530f248fcfa19b346e40c61 not found: ID does not exist" containerID="74729fa5cab0891d63e4e5947225d9594300869de530f248fcfa19b346e40c61" Jan 21 12:39:19 crc kubenswrapper[4881]: I0121 12:39:19.971986 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"74729fa5cab0891d63e4e5947225d9594300869de530f248fcfa19b346e40c61"} err="failed to get container status \"74729fa5cab0891d63e4e5947225d9594300869de530f248fcfa19b346e40c61\": rpc error: code = NotFound desc = could not find container \"74729fa5cab0891d63e4e5947225d9594300869de530f248fcfa19b346e40c61\": container with ID starting with 74729fa5cab0891d63e4e5947225d9594300869de530f248fcfa19b346e40c61 not found: ID does not exist" Jan 21 12:39:21 crc kubenswrapper[4881]: I0121 12:39:21.324635 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7456574a-75d3-47a1-a584-c552d4806d47" path="/var/lib/kubelet/pods/7456574a-75d3-47a1-a584-c552d4806d47/volumes" Jan 21 12:39:30 crc kubenswrapper[4881]: I0121 12:39:30.313501 4881 scope.go:117] "RemoveContainer" containerID="58cd0c5668032beb15f336f840c62d17fca5d4719530de4cd0b64bce55dc94e3" Jan 21 12:39:30 crc kubenswrapper[4881]: E0121 12:39:30.314964 4881 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 12:39:42 crc kubenswrapper[4881]: I0121 12:39:42.311518 4881 scope.go:117] "RemoveContainer" containerID="58cd0c5668032beb15f336f840c62d17fca5d4719530de4cd0b64bce55dc94e3" Jan 21 12:39:42 crc kubenswrapper[4881]: E0121 12:39:42.312759 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 12:39:54 crc kubenswrapper[4881]: I0121 12:39:54.311530 4881 scope.go:117] "RemoveContainer" containerID="58cd0c5668032beb15f336f840c62d17fca5d4719530de4cd0b64bce55dc94e3" Jan 21 12:39:54 crc kubenswrapper[4881]: E0121 12:39:54.312341 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 12:40:09 crc kubenswrapper[4881]: I0121 12:40:09.311139 4881 scope.go:117] "RemoveContainer" containerID="58cd0c5668032beb15f336f840c62d17fca5d4719530de4cd0b64bce55dc94e3" Jan 21 12:40:09 crc kubenswrapper[4881]: E0121 12:40:09.311976 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 12:40:22 crc kubenswrapper[4881]: I0121 12:40:22.311221 4881 scope.go:117] "RemoveContainer" containerID="58cd0c5668032beb15f336f840c62d17fca5d4719530de4cd0b64bce55dc94e3" Jan 21 12:40:22 crc kubenswrapper[4881]: E0121 12:40:22.313807 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 12:40:35 crc kubenswrapper[4881]: I0121 12:40:35.310630 4881 scope.go:117] "RemoveContainer" containerID="58cd0c5668032beb15f336f840c62d17fca5d4719530de4cd0b64bce55dc94e3" Jan 21 12:40:35 crc kubenswrapper[4881]: E0121 12:40:35.311647 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 12:40:50 crc kubenswrapper[4881]: I0121 12:40:50.311266 4881 scope.go:117] "RemoveContainer" containerID="58cd0c5668032beb15f336f840c62d17fca5d4719530de4cd0b64bce55dc94e3" Jan 21 12:40:50 crc kubenswrapper[4881]: E0121 12:40:50.312012 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 12:41:02 crc kubenswrapper[4881]: I0121 12:41:02.311981 4881 scope.go:117] "RemoveContainer" containerID="58cd0c5668032beb15f336f840c62d17fca5d4719530de4cd0b64bce55dc94e3" Jan 21 12:41:02 crc kubenswrapper[4881]: E0121 12:41:02.313821 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 12:41:13 crc kubenswrapper[4881]: I0121 12:41:13.322694 4881 scope.go:117] "RemoveContainer" containerID="58cd0c5668032beb15f336f840c62d17fca5d4719530de4cd0b64bce55dc94e3" Jan 21 12:41:13 crc kubenswrapper[4881]: E0121 12:41:13.323619 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 12:41:25 crc kubenswrapper[4881]: I0121 12:41:25.322375 4881 scope.go:117] "RemoveContainer" containerID="58cd0c5668032beb15f336f840c62d17fca5d4719530de4cd0b64bce55dc94e3" Jan 21 12:41:25 crc kubenswrapper[4881]: E0121 12:41:25.323753 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 12:41:36 crc kubenswrapper[4881]: I0121 12:41:36.311561 4881 scope.go:117] "RemoveContainer" containerID="58cd0c5668032beb15f336f840c62d17fca5d4719530de4cd0b64bce55dc94e3" Jan 21 12:41:36 crc kubenswrapper[4881]: I0121 12:41:36.704700 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" event={"ID":"3687b313-1df2-4274-80db-8c758b51bf2d","Type":"ContainerStarted","Data":"171b155437f4c8383a0145071a128693d76b7a6e60a851ddb744837ea725325c"} Jan 21 12:43:34 crc kubenswrapper[4881]: I0121 12:43:34.468879 4881 scope.go:117] "RemoveContainer" 
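The "back-off 5m0s" above is the kubelet's crash-loop restart schedule: the delay starts at 10 seconds and doubles on each failed restart until it reaches the 5-minute cap, and the recurring "RemoveContainer" / "Error syncing pod" pairs are sync-loop retries bouncing off that gate, not the schedule itself. A minimal Go sketch of the doubling schedule (the 10 s base and 300 s cap mirror the kubelet defaults; the function name is illustrative):

package main

import (
	"fmt"
	"time"
)

// crashLoopDelay returns the kubelet-style back-off before restart
// attempt n (0-based): 10s base, doubled per failed restart, capped
// at 5 minutes ("back-off 5m0s" in the log above).
func crashLoopDelay(n int) time.Duration {
	const (
		base     = 10 * time.Second
		maxDelay = 5 * time.Minute
	)
	d := base
	for i := 0; i < n; i++ {
		d *= 2
		if d >= maxDelay {
			return maxDelay
		}
	}
	return d
}

func main() {
	// prints: 10s 20s 40s 1m20s 2m40s 5m0s 5m0s
	for n := 0; n < 7; n++ {
		fmt.Printf("restart %d: wait %v\n", n, crashLoopDelay(n))
	}
}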
containerID="d37189c03972c86a5249beff3ff66068254eecbcbd8f696c02ec91aab34478d7" Jan 21 12:43:34 crc kubenswrapper[4881]: I0121 12:43:34.495667 4881 scope.go:117] "RemoveContainer" containerID="94358427c0b7aad8c60ccf1f15d3a5bdd6fe48a1d0ce0fffd39e8e43512aae28" Jan 21 12:43:34 crc kubenswrapper[4881]: I0121 12:43:34.555265 4881 scope.go:117] "RemoveContainer" containerID="b23b18c80bd46c2b1574da5ddf36ca2de500862eaba1c7c8da6864b9043b3793" Jan 21 12:43:59 crc kubenswrapper[4881]: I0121 12:43:59.850694 4881 patch_prober.go:28] interesting pod/machine-config-daemon-fb4fr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 12:43:59 crc kubenswrapper[4881]: I0121 12:43:59.851345 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 12:44:29 crc kubenswrapper[4881]: I0121 12:44:29.851630 4881 patch_prober.go:28] interesting pod/machine-config-daemon-fb4fr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 12:44:29 crc kubenswrapper[4881]: I0121 12:44:29.853995 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 12:44:59 crc kubenswrapper[4881]: I0121 12:44:59.851504 4881 patch_prober.go:28] interesting pod/machine-config-daemon-fb4fr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 12:44:59 crc kubenswrapper[4881]: I0121 12:44:59.852320 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 12:44:59 crc kubenswrapper[4881]: I0121 12:44:59.852455 4881 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" Jan 21 12:44:59 crc kubenswrapper[4881]: I0121 12:44:59.854139 4881 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"171b155437f4c8383a0145071a128693d76b7a6e60a851ddb744837ea725325c"} pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 12:44:59 crc kubenswrapper[4881]: I0121 12:44:59.854333 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" 
podUID="3687b313-1df2-4274-80db-8c758b51bf2d" containerName="machine-config-daemon" containerID="cri-o://171b155437f4c8383a0145071a128693d76b7a6e60a851ddb744837ea725325c" gracePeriod=600 Jan 21 12:45:00 crc kubenswrapper[4881]: I0121 12:45:00.161896 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483325-rzms8"] Jan 21 12:45:00 crc kubenswrapper[4881]: E0121 12:45:00.162677 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7456574a-75d3-47a1-a584-c552d4806d47" containerName="extract-content" Jan 21 12:45:00 crc kubenswrapper[4881]: I0121 12:45:00.162698 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="7456574a-75d3-47a1-a584-c552d4806d47" containerName="extract-content" Jan 21 12:45:00 crc kubenswrapper[4881]: E0121 12:45:00.162733 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7456574a-75d3-47a1-a584-c552d4806d47" containerName="registry-server" Jan 21 12:45:00 crc kubenswrapper[4881]: I0121 12:45:00.162740 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="7456574a-75d3-47a1-a584-c552d4806d47" containerName="registry-server" Jan 21 12:45:00 crc kubenswrapper[4881]: E0121 12:45:00.162756 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7456574a-75d3-47a1-a584-c552d4806d47" containerName="extract-utilities" Jan 21 12:45:00 crc kubenswrapper[4881]: I0121 12:45:00.162764 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="7456574a-75d3-47a1-a584-c552d4806d47" containerName="extract-utilities" Jan 21 12:45:00 crc kubenswrapper[4881]: I0121 12:45:00.162992 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="7456574a-75d3-47a1-a584-c552d4806d47" containerName="registry-server" Jan 21 12:45:00 crc kubenswrapper[4881]: I0121 12:45:00.164794 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483325-rzms8" Jan 21 12:45:00 crc kubenswrapper[4881]: I0121 12:45:00.167894 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 21 12:45:00 crc kubenswrapper[4881]: I0121 12:45:00.169742 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 21 12:45:00 crc kubenswrapper[4881]: I0121 12:45:00.188995 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483325-rzms8"] Jan 21 12:45:00 crc kubenswrapper[4881]: I0121 12:45:00.273832 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e92a1004-4ae7-4c9f-8ed8-1cb1a78dd2b7-secret-volume\") pod \"collect-profiles-29483325-rzms8\" (UID: \"e92a1004-4ae7-4c9f-8ed8-1cb1a78dd2b7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483325-rzms8" Jan 21 12:45:00 crc kubenswrapper[4881]: I0121 12:45:00.273904 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h7jk6\" (UniqueName: \"kubernetes.io/projected/e92a1004-4ae7-4c9f-8ed8-1cb1a78dd2b7-kube-api-access-h7jk6\") pod \"collect-profiles-29483325-rzms8\" (UID: \"e92a1004-4ae7-4c9f-8ed8-1cb1a78dd2b7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483325-rzms8" Jan 21 12:45:00 crc kubenswrapper[4881]: I0121 12:45:00.274211 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e92a1004-4ae7-4c9f-8ed8-1cb1a78dd2b7-config-volume\") pod \"collect-profiles-29483325-rzms8\" (UID: \"e92a1004-4ae7-4c9f-8ed8-1cb1a78dd2b7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483325-rzms8" Jan 21 12:45:00 crc kubenswrapper[4881]: I0121 12:45:00.377188 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e92a1004-4ae7-4c9f-8ed8-1cb1a78dd2b7-config-volume\") pod \"collect-profiles-29483325-rzms8\" (UID: \"e92a1004-4ae7-4c9f-8ed8-1cb1a78dd2b7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483325-rzms8" Jan 21 12:45:00 crc kubenswrapper[4881]: I0121 12:45:00.377354 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e92a1004-4ae7-4c9f-8ed8-1cb1a78dd2b7-secret-volume\") pod \"collect-profiles-29483325-rzms8\" (UID: \"e92a1004-4ae7-4c9f-8ed8-1cb1a78dd2b7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483325-rzms8" Jan 21 12:45:00 crc kubenswrapper[4881]: I0121 12:45:00.377391 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h7jk6\" (UniqueName: \"kubernetes.io/projected/e92a1004-4ae7-4c9f-8ed8-1cb1a78dd2b7-kube-api-access-h7jk6\") pod \"collect-profiles-29483325-rzms8\" (UID: \"e92a1004-4ae7-4c9f-8ed8-1cb1a78dd2b7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483325-rzms8" Jan 21 12:45:00 crc kubenswrapper[4881]: I0121 12:45:00.379840 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e92a1004-4ae7-4c9f-8ed8-1cb1a78dd2b7-config-volume\") pod 
\"collect-profiles-29483325-rzms8\" (UID: \"e92a1004-4ae7-4c9f-8ed8-1cb1a78dd2b7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483325-rzms8" Jan 21 12:45:00 crc kubenswrapper[4881]: I0121 12:45:00.390371 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e92a1004-4ae7-4c9f-8ed8-1cb1a78dd2b7-secret-volume\") pod \"collect-profiles-29483325-rzms8\" (UID: \"e92a1004-4ae7-4c9f-8ed8-1cb1a78dd2b7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483325-rzms8" Jan 21 12:45:00 crc kubenswrapper[4881]: I0121 12:45:00.410319 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h7jk6\" (UniqueName: \"kubernetes.io/projected/e92a1004-4ae7-4c9f-8ed8-1cb1a78dd2b7-kube-api-access-h7jk6\") pod \"collect-profiles-29483325-rzms8\" (UID: \"e92a1004-4ae7-4c9f-8ed8-1cb1a78dd2b7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483325-rzms8" Jan 21 12:45:00 crc kubenswrapper[4881]: I0121 12:45:00.512160 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483325-rzms8" Jan 21 12:45:00 crc kubenswrapper[4881]: I0121 12:45:00.712723 4881 generic.go:334] "Generic (PLEG): container finished" podID="3687b313-1df2-4274-80db-8c758b51bf2d" containerID="171b155437f4c8383a0145071a128693d76b7a6e60a851ddb744837ea725325c" exitCode=0 Jan 21 12:45:00 crc kubenswrapper[4881]: I0121 12:45:00.712776 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" event={"ID":"3687b313-1df2-4274-80db-8c758b51bf2d","Type":"ContainerDied","Data":"171b155437f4c8383a0145071a128693d76b7a6e60a851ddb744837ea725325c"} Jan 21 12:45:00 crc kubenswrapper[4881]: I0121 12:45:00.713032 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" event={"ID":"3687b313-1df2-4274-80db-8c758b51bf2d","Type":"ContainerStarted","Data":"a4dfa7ee98cb04337edaebb516d60cd9c59428107bdb7a432a638ae4c9d89379"} Jan 21 12:45:00 crc kubenswrapper[4881]: I0121 12:45:00.713060 4881 scope.go:117] "RemoveContainer" containerID="58cd0c5668032beb15f336f840c62d17fca5d4719530de4cd0b64bce55dc94e3" Jan 21 12:45:01 crc kubenswrapper[4881]: I0121 12:45:01.030745 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483325-rzms8"] Jan 21 12:45:01 crc kubenswrapper[4881]: W0121 12:45:01.033820 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode92a1004_4ae7_4c9f_8ed8_1cb1a78dd2b7.slice/crio-488e304d3c88c7d810895dc4c77ecce8400601dc4b2a8957145c64a59aee59d1 WatchSource:0}: Error finding container 488e304d3c88c7d810895dc4c77ecce8400601dc4b2a8957145c64a59aee59d1: Status 404 returned error can't find the container with id 488e304d3c88c7d810895dc4c77ecce8400601dc4b2a8957145c64a59aee59d1 Jan 21 12:45:01 crc kubenswrapper[4881]: I0121 12:45:01.723943 4881 generic.go:334] "Generic (PLEG): container finished" podID="e92a1004-4ae7-4c9f-8ed8-1cb1a78dd2b7" containerID="77513d54cf4d9f5496abf1ce9933fa0d7aa3da0530b4c165a7c1ed70ba94b89c" exitCode=0 Jan 21 12:45:01 crc kubenswrapper[4881]: I0121 12:45:01.724010 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483325-rzms8" 
event={"ID":"e92a1004-4ae7-4c9f-8ed8-1cb1a78dd2b7","Type":"ContainerDied","Data":"77513d54cf4d9f5496abf1ce9933fa0d7aa3da0530b4c165a7c1ed70ba94b89c"} Jan 21 12:45:01 crc kubenswrapper[4881]: I0121 12:45:01.724223 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483325-rzms8" event={"ID":"e92a1004-4ae7-4c9f-8ed8-1cb1a78dd2b7","Type":"ContainerStarted","Data":"488e304d3c88c7d810895dc4c77ecce8400601dc4b2a8957145c64a59aee59d1"} Jan 21 12:45:03 crc kubenswrapper[4881]: I0121 12:45:03.142670 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483325-rzms8" Jan 21 12:45:03 crc kubenswrapper[4881]: I0121 12:45:03.250229 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e92a1004-4ae7-4c9f-8ed8-1cb1a78dd2b7-secret-volume\") pod \"e92a1004-4ae7-4c9f-8ed8-1cb1a78dd2b7\" (UID: \"e92a1004-4ae7-4c9f-8ed8-1cb1a78dd2b7\") " Jan 21 12:45:03 crc kubenswrapper[4881]: I0121 12:45:03.250283 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h7jk6\" (UniqueName: \"kubernetes.io/projected/e92a1004-4ae7-4c9f-8ed8-1cb1a78dd2b7-kube-api-access-h7jk6\") pod \"e92a1004-4ae7-4c9f-8ed8-1cb1a78dd2b7\" (UID: \"e92a1004-4ae7-4c9f-8ed8-1cb1a78dd2b7\") " Jan 21 12:45:03 crc kubenswrapper[4881]: I0121 12:45:03.250336 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e92a1004-4ae7-4c9f-8ed8-1cb1a78dd2b7-config-volume\") pod \"e92a1004-4ae7-4c9f-8ed8-1cb1a78dd2b7\" (UID: \"e92a1004-4ae7-4c9f-8ed8-1cb1a78dd2b7\") " Jan 21 12:45:03 crc kubenswrapper[4881]: I0121 12:45:03.251749 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e92a1004-4ae7-4c9f-8ed8-1cb1a78dd2b7-config-volume" (OuterVolumeSpecName: "config-volume") pod "e92a1004-4ae7-4c9f-8ed8-1cb1a78dd2b7" (UID: "e92a1004-4ae7-4c9f-8ed8-1cb1a78dd2b7"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 12:45:03 crc kubenswrapper[4881]: I0121 12:45:03.262661 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e92a1004-4ae7-4c9f-8ed8-1cb1a78dd2b7-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "e92a1004-4ae7-4c9f-8ed8-1cb1a78dd2b7" (UID: "e92a1004-4ae7-4c9f-8ed8-1cb1a78dd2b7"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 12:45:03 crc kubenswrapper[4881]: I0121 12:45:03.264189 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e92a1004-4ae7-4c9f-8ed8-1cb1a78dd2b7-kube-api-access-h7jk6" (OuterVolumeSpecName: "kube-api-access-h7jk6") pod "e92a1004-4ae7-4c9f-8ed8-1cb1a78dd2b7" (UID: "e92a1004-4ae7-4c9f-8ed8-1cb1a78dd2b7"). InnerVolumeSpecName "kube-api-access-h7jk6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 12:45:03 crc kubenswrapper[4881]: I0121 12:45:03.352969 4881 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e92a1004-4ae7-4c9f-8ed8-1cb1a78dd2b7-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 21 12:45:03 crc kubenswrapper[4881]: I0121 12:45:03.353009 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h7jk6\" (UniqueName: \"kubernetes.io/projected/e92a1004-4ae7-4c9f-8ed8-1cb1a78dd2b7-kube-api-access-h7jk6\") on node \"crc\" DevicePath \"\"" Jan 21 12:45:03 crc kubenswrapper[4881]: I0121 12:45:03.353025 4881 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e92a1004-4ae7-4c9f-8ed8-1cb1a78dd2b7-config-volume\") on node \"crc\" DevicePath \"\"" Jan 21 12:45:03 crc kubenswrapper[4881]: I0121 12:45:03.754717 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483325-rzms8" event={"ID":"e92a1004-4ae7-4c9f-8ed8-1cb1a78dd2b7","Type":"ContainerDied","Data":"488e304d3c88c7d810895dc4c77ecce8400601dc4b2a8957145c64a59aee59d1"} Jan 21 12:45:03 crc kubenswrapper[4881]: I0121 12:45:03.754971 4881 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="488e304d3c88c7d810895dc4c77ecce8400601dc4b2a8957145c64a59aee59d1" Jan 21 12:45:03 crc kubenswrapper[4881]: I0121 12:45:03.755229 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483325-rzms8" Jan 21 12:45:04 crc kubenswrapper[4881]: I0121 12:45:04.245084 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483280-rl7qn"] Jan 21 12:45:04 crc kubenswrapper[4881]: I0121 12:45:04.261667 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483280-rl7qn"] Jan 21 12:45:05 crc kubenswrapper[4881]: I0121 12:45:05.329576 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e74d3023-7ad9-4e65-9627-cc8127927f6b" path="/var/lib/kubelet/pods/e74d3023-7ad9-4e65-9627-cc8127927f6b/volumes" Jan 21 12:45:34 crc kubenswrapper[4881]: I0121 12:45:34.671304 4881 scope.go:117] "RemoveContainer" containerID="f4fa32143b4e9e742c21ea98ab2bdc72498265c13850a532b1a72e716a34316a" Jan 21 12:45:53 crc kubenswrapper[4881]: I0121 12:45:53.759827 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-jr6dv"] Jan 21 12:45:53 crc kubenswrapper[4881]: E0121 12:45:53.761010 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e92a1004-4ae7-4c9f-8ed8-1cb1a78dd2b7" containerName="collect-profiles" Jan 21 12:45:53 crc kubenswrapper[4881]: I0121 12:45:53.761030 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="e92a1004-4ae7-4c9f-8ed8-1cb1a78dd2b7" containerName="collect-profiles" Jan 21 12:45:53 crc kubenswrapper[4881]: I0121 12:45:53.761322 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="e92a1004-4ae7-4c9f-8ed8-1cb1a78dd2b7" containerName="collect-profiles" Jan 21 12:45:53 crc kubenswrapper[4881]: I0121 12:45:53.763267 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-jr6dv" Jan 21 12:45:53 crc kubenswrapper[4881]: I0121 12:45:53.791061 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7828a13b-c9c5-4bf7-b3e5-fcf9835417a6-catalog-content\") pod \"redhat-operators-jr6dv\" (UID: \"7828a13b-c9c5-4bf7-b3e5-fcf9835417a6\") " pod="openshift-marketplace/redhat-operators-jr6dv" Jan 21 12:45:53 crc kubenswrapper[4881]: I0121 12:45:53.791209 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7828a13b-c9c5-4bf7-b3e5-fcf9835417a6-utilities\") pod \"redhat-operators-jr6dv\" (UID: \"7828a13b-c9c5-4bf7-b3e5-fcf9835417a6\") " pod="openshift-marketplace/redhat-operators-jr6dv" Jan 21 12:45:53 crc kubenswrapper[4881]: I0121 12:45:53.791268 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mxw6z\" (UniqueName: \"kubernetes.io/projected/7828a13b-c9c5-4bf7-b3e5-fcf9835417a6-kube-api-access-mxw6z\") pod \"redhat-operators-jr6dv\" (UID: \"7828a13b-c9c5-4bf7-b3e5-fcf9835417a6\") " pod="openshift-marketplace/redhat-operators-jr6dv" Jan 21 12:45:53 crc kubenswrapper[4881]: I0121 12:45:53.795627 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jr6dv"] Jan 21 12:45:53 crc kubenswrapper[4881]: I0121 12:45:53.895191 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7828a13b-c9c5-4bf7-b3e5-fcf9835417a6-catalog-content\") pod \"redhat-operators-jr6dv\" (UID: \"7828a13b-c9c5-4bf7-b3e5-fcf9835417a6\") " pod="openshift-marketplace/redhat-operators-jr6dv" Jan 21 12:45:53 crc kubenswrapper[4881]: I0121 12:45:53.895343 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7828a13b-c9c5-4bf7-b3e5-fcf9835417a6-utilities\") pod \"redhat-operators-jr6dv\" (UID: \"7828a13b-c9c5-4bf7-b3e5-fcf9835417a6\") " pod="openshift-marketplace/redhat-operators-jr6dv" Jan 21 12:45:53 crc kubenswrapper[4881]: I0121 12:45:53.895398 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mxw6z\" (UniqueName: \"kubernetes.io/projected/7828a13b-c9c5-4bf7-b3e5-fcf9835417a6-kube-api-access-mxw6z\") pod \"redhat-operators-jr6dv\" (UID: \"7828a13b-c9c5-4bf7-b3e5-fcf9835417a6\") " pod="openshift-marketplace/redhat-operators-jr6dv" Jan 21 12:45:53 crc kubenswrapper[4881]: I0121 12:45:53.896067 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7828a13b-c9c5-4bf7-b3e5-fcf9835417a6-utilities\") pod \"redhat-operators-jr6dv\" (UID: \"7828a13b-c9c5-4bf7-b3e5-fcf9835417a6\") " pod="openshift-marketplace/redhat-operators-jr6dv" Jan 21 12:45:53 crc kubenswrapper[4881]: I0121 12:45:53.896449 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7828a13b-c9c5-4bf7-b3e5-fcf9835417a6-catalog-content\") pod \"redhat-operators-jr6dv\" (UID: \"7828a13b-c9c5-4bf7-b3e5-fcf9835417a6\") " pod="openshift-marketplace/redhat-operators-jr6dv" Jan 21 12:45:53 crc kubenswrapper[4881]: I0121 12:45:53.931815 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-mxw6z\" (UniqueName: \"kubernetes.io/projected/7828a13b-c9c5-4bf7-b3e5-fcf9835417a6-kube-api-access-mxw6z\") pod \"redhat-operators-jr6dv\" (UID: \"7828a13b-c9c5-4bf7-b3e5-fcf9835417a6\") " pod="openshift-marketplace/redhat-operators-jr6dv" Jan 21 12:45:54 crc kubenswrapper[4881]: I0121 12:45:54.101164 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jr6dv" Jan 21 12:45:54 crc kubenswrapper[4881]: I0121 12:45:54.624411 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jr6dv"] Jan 21 12:45:55 crc kubenswrapper[4881]: I0121 12:45:55.365323 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jr6dv" event={"ID":"7828a13b-c9c5-4bf7-b3e5-fcf9835417a6","Type":"ContainerStarted","Data":"2774ac01c095d3eaca53dacf6b3eab5a5a87e1e1faa5a2c821e90ca5b599bf28"} Jan 21 12:45:58 crc kubenswrapper[4881]: I0121 12:45:58.401881 4881 generic.go:334] "Generic (PLEG): container finished" podID="7828a13b-c9c5-4bf7-b3e5-fcf9835417a6" containerID="ff08bbee0e9fe86ebc38c20b8b828d04cc2bec5f3aceb31f9921a64da8bf75af" exitCode=0 Jan 21 12:45:58 crc kubenswrapper[4881]: I0121 12:45:58.401951 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jr6dv" event={"ID":"7828a13b-c9c5-4bf7-b3e5-fcf9835417a6","Type":"ContainerDied","Data":"ff08bbee0e9fe86ebc38c20b8b828d04cc2bec5f3aceb31f9921a64da8bf75af"} Jan 21 12:45:58 crc kubenswrapper[4881]: I0121 12:45:58.407722 4881 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 12:46:03 crc kubenswrapper[4881]: I0121 12:46:03.462071 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jr6dv" event={"ID":"7828a13b-c9c5-4bf7-b3e5-fcf9835417a6","Type":"ContainerStarted","Data":"fa19ae670e3e4e727e7a1290bfa09bdb19f3eed248af5fd0ee01f8baea3b1081"} Jan 21 12:46:10 crc kubenswrapper[4881]: I0121 12:46:10.613708 4881 generic.go:334] "Generic (PLEG): container finished" podID="7828a13b-c9c5-4bf7-b3e5-fcf9835417a6" containerID="fa19ae670e3e4e727e7a1290bfa09bdb19f3eed248af5fd0ee01f8baea3b1081" exitCode=0 Jan 21 12:46:10 crc kubenswrapper[4881]: I0121 12:46:10.613892 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jr6dv" event={"ID":"7828a13b-c9c5-4bf7-b3e5-fcf9835417a6","Type":"ContainerDied","Data":"fa19ae670e3e4e727e7a1290bfa09bdb19f3eed248af5fd0ee01f8baea3b1081"} Jan 21 12:46:15 crc kubenswrapper[4881]: I0121 12:46:15.665067 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jr6dv" event={"ID":"7828a13b-c9c5-4bf7-b3e5-fcf9835417a6","Type":"ContainerStarted","Data":"c7f5ad7d69a3d6952b116d16a27812356bde7f39581517bea0004391a6c274a4"} Jan 21 12:46:15 crc kubenswrapper[4881]: I0121 12:46:15.694133 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-jr6dv" podStartSLOduration=6.6348925560000005 podStartE2EDuration="22.69409097s" podCreationTimestamp="2026-01-21 12:45:53 +0000 UTC" firstStartedPulling="2026-01-21 12:45:58.407263948 +0000 UTC m=+6545.667220417" lastFinishedPulling="2026-01-21 12:46:14.466462342 +0000 UTC m=+6561.726418831" observedRunningTime="2026-01-21 12:46:15.687175272 +0000 UTC m=+6562.947131751" watchObservedRunningTime="2026-01-21 12:46:15.69409097 +0000 UTC m=+6562.954047439" Jan 21 12:46:24 crc 
Jan 21 12:46:24 crc kubenswrapper[4881]: I0121 12:46:24.101834 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-jr6dv"
Jan 21 12:46:24 crc kubenswrapper[4881]: I0121 12:46:24.102360 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-jr6dv"
Jan 21 12:46:24 crc kubenswrapper[4881]: I0121 12:46:24.190804 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-jr6dv"
Jan 21 12:46:24 crc kubenswrapper[4881]: I0121 12:46:24.812987 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-jr6dv"
Jan 21 12:46:24 crc kubenswrapper[4881]: I0121 12:46:24.956736 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-jr6dv"]
Jan 21 12:46:26 crc kubenswrapper[4881]: I0121 12:46:26.778100 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-jr6dv" podUID="7828a13b-c9c5-4bf7-b3e5-fcf9835417a6" containerName="registry-server" containerID="cri-o://c7f5ad7d69a3d6952b116d16a27812356bde7f39581517bea0004391a6c274a4" gracePeriod=2
Jan 21 12:46:27 crc kubenswrapper[4881]: I0121 12:46:27.794762 4881 generic.go:334] "Generic (PLEG): container finished" podID="7828a13b-c9c5-4bf7-b3e5-fcf9835417a6" containerID="c7f5ad7d69a3d6952b116d16a27812356bde7f39581517bea0004391a6c274a4" exitCode=0
Jan 21 12:46:27 crc kubenswrapper[4881]: I0121 12:46:27.794813 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jr6dv" event={"ID":"7828a13b-c9c5-4bf7-b3e5-fcf9835417a6","Type":"ContainerDied","Data":"c7f5ad7d69a3d6952b116d16a27812356bde7f39581517bea0004391a6c274a4"}
Jan 21 12:46:27 crc kubenswrapper[4881]: I0121 12:46:27.920915 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jr6dv"
Jan 21 12:46:28 crc kubenswrapper[4881]: I0121 12:46:28.056447 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7828a13b-c9c5-4bf7-b3e5-fcf9835417a6-catalog-content\") pod \"7828a13b-c9c5-4bf7-b3e5-fcf9835417a6\" (UID: \"7828a13b-c9c5-4bf7-b3e5-fcf9835417a6\") " 
Jan 21 12:46:28 crc kubenswrapper[4881]: I0121 12:46:28.056774 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mxw6z\" (UniqueName: \"kubernetes.io/projected/7828a13b-c9c5-4bf7-b3e5-fcf9835417a6-kube-api-access-mxw6z\") pod \"7828a13b-c9c5-4bf7-b3e5-fcf9835417a6\" (UID: \"7828a13b-c9c5-4bf7-b3e5-fcf9835417a6\") " 
Jan 21 12:46:28 crc kubenswrapper[4881]: I0121 12:46:28.057116 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7828a13b-c9c5-4bf7-b3e5-fcf9835417a6-utilities\") pod \"7828a13b-c9c5-4bf7-b3e5-fcf9835417a6\" (UID: \"7828a13b-c9c5-4bf7-b3e5-fcf9835417a6\") " 
Jan 21 12:46:28 crc kubenswrapper[4881]: I0121 12:46:28.058145 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7828a13b-c9c5-4bf7-b3e5-fcf9835417a6-utilities" (OuterVolumeSpecName: "utilities") pod "7828a13b-c9c5-4bf7-b3e5-fcf9835417a6" (UID: "7828a13b-c9c5-4bf7-b3e5-fcf9835417a6"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 12:46:28 crc kubenswrapper[4881]: I0121 12:46:28.067265 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7828a13b-c9c5-4bf7-b3e5-fcf9835417a6-kube-api-access-mxw6z" (OuterVolumeSpecName: "kube-api-access-mxw6z") pod "7828a13b-c9c5-4bf7-b3e5-fcf9835417a6" (UID: "7828a13b-c9c5-4bf7-b3e5-fcf9835417a6"). InnerVolumeSpecName "kube-api-access-mxw6z". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 12:46:28 crc kubenswrapper[4881]: I0121 12:46:28.160394 4881 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7828a13b-c9c5-4bf7-b3e5-fcf9835417a6-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 12:46:28 crc kubenswrapper[4881]: I0121 12:46:28.160441 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mxw6z\" (UniqueName: \"kubernetes.io/projected/7828a13b-c9c5-4bf7-b3e5-fcf9835417a6-kube-api-access-mxw6z\") on node \"crc\" DevicePath \"\"" Jan 21 12:46:28 crc kubenswrapper[4881]: I0121 12:46:28.191583 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7828a13b-c9c5-4bf7-b3e5-fcf9835417a6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7828a13b-c9c5-4bf7-b3e5-fcf9835417a6" (UID: "7828a13b-c9c5-4bf7-b3e5-fcf9835417a6"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 12:46:28 crc kubenswrapper[4881]: I0121 12:46:28.264828 4881 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7828a13b-c9c5-4bf7-b3e5-fcf9835417a6-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 12:46:28 crc kubenswrapper[4881]: I0121 12:46:28.810627 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jr6dv" event={"ID":"7828a13b-c9c5-4bf7-b3e5-fcf9835417a6","Type":"ContainerDied","Data":"2774ac01c095d3eaca53dacf6b3eab5a5a87e1e1faa5a2c821e90ca5b599bf28"} Jan 21 12:46:28 crc kubenswrapper[4881]: I0121 12:46:28.810701 4881 scope.go:117] "RemoveContainer" containerID="c7f5ad7d69a3d6952b116d16a27812356bde7f39581517bea0004391a6c274a4" Jan 21 12:46:28 crc kubenswrapper[4881]: I0121 12:46:28.811826 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-jr6dv" Jan 21 12:46:28 crc kubenswrapper[4881]: I0121 12:46:28.855543 4881 scope.go:117] "RemoveContainer" containerID="fa19ae670e3e4e727e7a1290bfa09bdb19f3eed248af5fd0ee01f8baea3b1081" Jan 21 12:46:28 crc kubenswrapper[4881]: I0121 12:46:28.900367 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-jr6dv"] Jan 21 12:46:28 crc kubenswrapper[4881]: I0121 12:46:28.917726 4881 scope.go:117] "RemoveContainer" containerID="ff08bbee0e9fe86ebc38c20b8b828d04cc2bec5f3aceb31f9921a64da8bf75af" Jan 21 12:46:28 crc kubenswrapper[4881]: I0121 12:46:28.920165 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-jr6dv"] Jan 21 12:46:29 crc kubenswrapper[4881]: I0121 12:46:29.334630 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7828a13b-c9c5-4bf7-b3e5-fcf9835417a6" path="/var/lib/kubelet/pods/7828a13b-c9c5-4bf7-b3e5-fcf9835417a6/volumes" Jan 21 12:47:29 crc kubenswrapper[4881]: I0121 12:47:29.851073 4881 patch_prober.go:28] interesting pod/machine-config-daemon-fb4fr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 12:47:29 crc kubenswrapper[4881]: I0121 12:47:29.851892 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 12:47:59 crc kubenswrapper[4881]: I0121 12:47:59.851141 4881 patch_prober.go:28] interesting pod/machine-config-daemon-fb4fr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 12:47:59 crc kubenswrapper[4881]: I0121 12:47:59.851666 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 12:48:29 crc kubenswrapper[4881]: I0121 12:48:29.851199 4881 patch_prober.go:28] interesting pod/machine-config-daemon-fb4fr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 12:48:29 crc kubenswrapper[4881]: I0121 12:48:29.852082 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 12:48:29 crc kubenswrapper[4881]: I0121 12:48:29.852157 4881 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" Jan 21 12:48:29 crc kubenswrapper[4881]: I0121 12:48:29.853093 4881 
kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a4dfa7ee98cb04337edaebb516d60cd9c59428107bdb7a432a638ae4c9d89379"} pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 21 12:48:29 crc kubenswrapper[4881]: I0121 12:48:29.853199 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" containerName="machine-config-daemon" containerID="cri-o://a4dfa7ee98cb04337edaebb516d60cd9c59428107bdb7a432a638ae4c9d89379" gracePeriod=600
Jan 21 12:48:29 crc kubenswrapper[4881]: E0121 12:48:29.978360 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d"
Jan 21 12:48:30 crc kubenswrapper[4881]: I0121 12:48:30.585260 4881 generic.go:334] "Generic (PLEG): container finished" podID="3687b313-1df2-4274-80db-8c758b51bf2d" containerID="a4dfa7ee98cb04337edaebb516d60cd9c59428107bdb7a432a638ae4c9d89379" exitCode=0
Jan 21 12:48:30 crc kubenswrapper[4881]: I0121 12:48:30.585329 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" event={"ID":"3687b313-1df2-4274-80db-8c758b51bf2d","Type":"ContainerDied","Data":"a4dfa7ee98cb04337edaebb516d60cd9c59428107bdb7a432a638ae4c9d89379"}
Jan 21 12:48:30 crc kubenswrapper[4881]: I0121 12:48:30.585634 4881 scope.go:117] "RemoveContainer" containerID="171b155437f4c8383a0145071a128693d76b7a6e60a851ddb744837ea725325c"
Jan 21 12:48:30 crc kubenswrapper[4881]: I0121 12:48:30.586529 4881 scope.go:117] "RemoveContainer" containerID="a4dfa7ee98cb04337edaebb516d60cd9c59428107bdb7a432a638ae4c9d89379"
Jan 21 12:48:30 crc kubenswrapper[4881]: E0121 12:48:30.586916 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d"
Jan 21 12:48:45 crc kubenswrapper[4881]: I0121 12:48:45.310865 4881 scope.go:117] "RemoveContainer" containerID="a4dfa7ee98cb04337edaebb516d60cd9c59428107bdb7a432a638ae4c9d89379"
Jan 21 12:48:45 crc kubenswrapper[4881]: E0121 12:48:45.311896 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d"
Jan 21 12:48:59 crc kubenswrapper[4881]: I0121 12:48:59.312032 4881 scope.go:117] "RemoveContainer" containerID="a4dfa7ee98cb04337edaebb516d60cd9c59428107bdb7a432a638ae4c9d89379"
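gracePeriod=600 in the "Killing container with a grace period" entries is the pod's termination grace: the runtime delivers SIGTERM, waits up to that many seconds for the process to exit, then escalates to SIGKILL (here the daemon exits with exitCode=0 well inside the window). A process-level sketch of that contract, not the kubelet's actual CRI call, and Unix-only:

package main

import (
	"fmt"
	"os"
	"syscall"
	"time"
)

// killWithGrace sends SIGTERM, polls for exit until the grace period
// elapses, then escalates to SIGKILL, mirroring gracePeriod=600 above.
func killWithGrace(pid int, grace time.Duration) error {
	proc, err := os.FindProcess(pid)
	if err != nil {
		return err
	}
	if err := proc.Signal(syscall.SIGTERM); err != nil {
		return err
	}
	deadline := time.Now().Add(grace)
	for time.Now().Before(deadline) {
		// Signal 0 only checks liveness; an error means the process is gone.
		if err := proc.Signal(syscall.Signal(0)); err != nil {
			return nil
		}
		time.Sleep(time.Second)
	}
	fmt.Println("grace period elapsed; escalating to SIGKILL")
	return proc.Signal(syscall.SIGKILL)
}

func main() {
	_ = killWithGrace(12345, 600*time.Second) // illustrative PID
}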
Jan 21 12:49:13 crc kubenswrapper[4881]: I0121 12:49:13.318966 4881 scope.go:117] "RemoveContainer" containerID="a4dfa7ee98cb04337edaebb516d60cd9c59428107bdb7a432a638ae4c9d89379"
Jan 21 12:49:13 crc kubenswrapper[4881]: E0121 12:49:13.319959 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d"
Jan 21 12:49:26 crc kubenswrapper[4881]: I0121 12:49:26.310893 4881 scope.go:117] "RemoveContainer" containerID="a4dfa7ee98cb04337edaebb516d60cd9c59428107bdb7a432a638ae4c9d89379"
Jan 21 12:49:26 crc kubenswrapper[4881]: E0121 12:49:26.311871 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d"
Jan 21 12:49:39 crc kubenswrapper[4881]: I0121 12:49:39.311613 4881 scope.go:117] "RemoveContainer" containerID="a4dfa7ee98cb04337edaebb516d60cd9c59428107bdb7a432a638ae4c9d89379"
Jan 21 12:49:39 crc kubenswrapper[4881]: E0121 12:49:39.313251 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d"
Jan 21 12:49:52 crc kubenswrapper[4881]: I0121 12:49:52.311308 4881 scope.go:117] "RemoveContainer" containerID="a4dfa7ee98cb04337edaebb516d60cd9c59428107bdb7a432a638ae4c9d89379"
Jan 21 12:49:52 crc kubenswrapper[4881]: E0121 12:49:52.312214 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d"
Jan 21 12:50:04 crc kubenswrapper[4881]: I0121 12:50:04.313443 4881 scope.go:117] "RemoveContainer" containerID="a4dfa7ee98cb04337edaebb516d60cd9c59428107bdb7a432a638ae4c9d89379"
Jan 21 12:50:04 crc kubenswrapper[4881]: E0121 12:50:04.314819 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d"
Jan 21 12:50:17 crc kubenswrapper[4881]: I0121 12:50:17.311146 4881 scope.go:117] "RemoveContainer" containerID="a4dfa7ee98cb04337edaebb516d60cd9c59428107bdb7a432a638ae4c9d89379"
Jan 21 12:50:17 crc kubenswrapper[4881]: E0121 12:50:17.312277 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d"
Jan 21 12:50:28 crc kubenswrapper[4881]: I0121 12:50:28.310557 4881 scope.go:117] "RemoveContainer" containerID="a4dfa7ee98cb04337edaebb516d60cd9c59428107bdb7a432a638ae4c9d89379"
Jan 21 12:50:28 crc kubenswrapper[4881]: E0121 12:50:28.311260 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d"
Jan 21 12:50:41 crc kubenswrapper[4881]: I0121 12:50:41.312047 4881 scope.go:117] "RemoveContainer" containerID="a4dfa7ee98cb04337edaebb516d60cd9c59428107bdb7a432a638ae4c9d89379"
Jan 21 12:50:41 crc kubenswrapper[4881]: E0121 12:50:41.313315 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d"
Jan 21 12:50:56 crc kubenswrapper[4881]: I0121 12:50:56.310873 4881 scope.go:117] "RemoveContainer" containerID="a4dfa7ee98cb04337edaebb516d60cd9c59428107bdb7a432a638ae4c9d89379"
Jan 21 12:50:56 crc kubenswrapper[4881]: E0121 12:50:56.311560 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d"
Jan 21 12:51:10 crc kubenswrapper[4881]: I0121 12:51:10.567651 4881 scope.go:117] "RemoveContainer" containerID="a4dfa7ee98cb04337edaebb516d60cd9c59428107bdb7a432a638ae4c9d89379"
Jan 21 12:51:10 crc kubenswrapper[4881]: E0121 12:51:10.568571 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d"
pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 12:51:24 crc kubenswrapper[4881]: I0121 12:51:24.311736 4881 scope.go:117] "RemoveContainer" containerID="a4dfa7ee98cb04337edaebb516d60cd9c59428107bdb7a432a638ae4c9d89379" Jan 21 12:51:24 crc kubenswrapper[4881]: E0121 12:51:24.314294 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 12:51:38 crc kubenswrapper[4881]: I0121 12:51:38.311003 4881 scope.go:117] "RemoveContainer" containerID="a4dfa7ee98cb04337edaebb516d60cd9c59428107bdb7a432a638ae4c9d89379" Jan 21 12:51:38 crc kubenswrapper[4881]: E0121 12:51:38.311862 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 12:51:51 crc kubenswrapper[4881]: I0121 12:51:51.311138 4881 scope.go:117] "RemoveContainer" containerID="a4dfa7ee98cb04337edaebb516d60cd9c59428107bdb7a432a638ae4c9d89379" Jan 21 12:51:51 crc kubenswrapper[4881]: E0121 12:51:51.312338 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 12:52:04 crc kubenswrapper[4881]: I0121 12:52:04.314555 4881 scope.go:117] "RemoveContainer" containerID="a4dfa7ee98cb04337edaebb516d60cd9c59428107bdb7a432a638ae4c9d89379" Jan 21 12:52:04 crc kubenswrapper[4881]: E0121 12:52:04.329941 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 12:52:15 crc kubenswrapper[4881]: I0121 12:52:15.311016 4881 scope.go:117] "RemoveContainer" containerID="a4dfa7ee98cb04337edaebb516d60cd9c59428107bdb7a432a638ae4c9d89379" Jan 21 12:52:15 crc kubenswrapper[4881]: E0121 12:52:15.311700 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 12:52:29 crc kubenswrapper[4881]: I0121 12:52:29.319262 4881 
scope.go:117] "RemoveContainer" containerID="a4dfa7ee98cb04337edaebb516d60cd9c59428107bdb7a432a638ae4c9d89379" Jan 21 12:52:29 crc kubenswrapper[4881]: E0121 12:52:29.321321 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 12:52:42 crc kubenswrapper[4881]: I0121 12:52:42.311329 4881 scope.go:117] "RemoveContainer" containerID="a4dfa7ee98cb04337edaebb516d60cd9c59428107bdb7a432a638ae4c9d89379" Jan 21 12:52:42 crc kubenswrapper[4881]: E0121 12:52:42.312276 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 12:52:56 crc kubenswrapper[4881]: I0121 12:52:56.311373 4881 scope.go:117] "RemoveContainer" containerID="a4dfa7ee98cb04337edaebb516d60cd9c59428107bdb7a432a638ae4c9d89379" Jan 21 12:52:56 crc kubenswrapper[4881]: E0121 12:52:56.312324 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 12:53:07 crc kubenswrapper[4881]: I0121 12:53:07.310948 4881 scope.go:117] "RemoveContainer" containerID="a4dfa7ee98cb04337edaebb516d60cd9c59428107bdb7a432a638ae4c9d89379" Jan 21 12:53:07 crc kubenswrapper[4881]: E0121 12:53:07.312080 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 12:53:22 crc kubenswrapper[4881]: I0121 12:53:22.311263 4881 scope.go:117] "RemoveContainer" containerID="a4dfa7ee98cb04337edaebb516d60cd9c59428107bdb7a432a638ae4c9d89379" Jan 21 12:53:22 crc kubenswrapper[4881]: E0121 12:53:22.312086 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 12:53:34 crc kubenswrapper[4881]: I0121 12:53:34.311369 4881 scope.go:117] "RemoveContainer" containerID="a4dfa7ee98cb04337edaebb516d60cd9c59428107bdb7a432a638ae4c9d89379" Jan 21 12:53:35 crc kubenswrapper[4881]: I0121 12:53:35.152719 4881 
Jan 21 12:53:36 crc kubenswrapper[4881]: I0121 12:53:36.741549 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-mtgbb"]
Jan 21 12:53:36 crc kubenswrapper[4881]: E0121 12:53:36.742510 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7828a13b-c9c5-4bf7-b3e5-fcf9835417a6" containerName="extract-utilities"
Jan 21 12:53:36 crc kubenswrapper[4881]: I0121 12:53:36.742525 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="7828a13b-c9c5-4bf7-b3e5-fcf9835417a6" containerName="extract-utilities"
Jan 21 12:53:36 crc kubenswrapper[4881]: E0121 12:53:36.742563 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7828a13b-c9c5-4bf7-b3e5-fcf9835417a6" containerName="registry-server"
Jan 21 12:53:36 crc kubenswrapper[4881]: I0121 12:53:36.742571 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="7828a13b-c9c5-4bf7-b3e5-fcf9835417a6" containerName="registry-server"
Jan 21 12:53:36 crc kubenswrapper[4881]: E0121 12:53:36.742584 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7828a13b-c9c5-4bf7-b3e5-fcf9835417a6" containerName="extract-content"
Jan 21 12:53:36 crc kubenswrapper[4881]: I0121 12:53:36.742590 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="7828a13b-c9c5-4bf7-b3e5-fcf9835417a6" containerName="extract-content"
Jan 21 12:53:36 crc kubenswrapper[4881]: I0121 12:53:36.742816 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="7828a13b-c9c5-4bf7-b3e5-fcf9835417a6" containerName="registry-server"
Jan 21 12:53:36 crc kubenswrapper[4881]: I0121 12:53:36.744367 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mtgbb"
Jan 21 12:53:36 crc kubenswrapper[4881]: I0121 12:53:36.756652 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mtgbb"]
Jan 21 12:53:36 crc kubenswrapper[4881]: I0121 12:53:36.796230 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d01b3ca1-cd4e-42fa-ab27-811b3d2ab26d-utilities\") pod \"redhat-marketplace-mtgbb\" (UID: \"d01b3ca1-cd4e-42fa-ab27-811b3d2ab26d\") " pod="openshift-marketplace/redhat-marketplace-mtgbb"
Jan 21 12:53:36 crc kubenswrapper[4881]: I0121 12:53:36.796274 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qdlnm\" (UniqueName: \"kubernetes.io/projected/d01b3ca1-cd4e-42fa-ab27-811b3d2ab26d-kube-api-access-qdlnm\") pod \"redhat-marketplace-mtgbb\" (UID: \"d01b3ca1-cd4e-42fa-ab27-811b3d2ab26d\") " pod="openshift-marketplace/redhat-marketplace-mtgbb"
Jan 21 12:53:36 crc kubenswrapper[4881]: I0121 12:53:36.796363 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d01b3ca1-cd4e-42fa-ab27-811b3d2ab26d-catalog-content\") pod \"redhat-marketplace-mtgbb\" (UID: \"d01b3ca1-cd4e-42fa-ab27-811b3d2ab26d\") " pod="openshift-marketplace/redhat-marketplace-mtgbb"
Jan 21 12:53:36 crc kubenswrapper[4881]: I0121 12:53:36.899021 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d01b3ca1-cd4e-42fa-ab27-811b3d2ab26d-utilities\") pod \"redhat-marketplace-mtgbb\" (UID: \"d01b3ca1-cd4e-42fa-ab27-811b3d2ab26d\") " pod="openshift-marketplace/redhat-marketplace-mtgbb"
Jan 21 12:53:36 crc kubenswrapper[4881]: I0121 12:53:36.899094 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qdlnm\" (UniqueName: \"kubernetes.io/projected/d01b3ca1-cd4e-42fa-ab27-811b3d2ab26d-kube-api-access-qdlnm\") pod \"redhat-marketplace-mtgbb\" (UID: \"d01b3ca1-cd4e-42fa-ab27-811b3d2ab26d\") " pod="openshift-marketplace/redhat-marketplace-mtgbb"
Jan 21 12:53:36 crc kubenswrapper[4881]: I0121 12:53:36.899190 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d01b3ca1-cd4e-42fa-ab27-811b3d2ab26d-catalog-content\") pod \"redhat-marketplace-mtgbb\" (UID: \"d01b3ca1-cd4e-42fa-ab27-811b3d2ab26d\") " pod="openshift-marketplace/redhat-marketplace-mtgbb"
Jan 21 12:53:36 crc kubenswrapper[4881]: I0121 12:53:36.899822 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d01b3ca1-cd4e-42fa-ab27-811b3d2ab26d-utilities\") pod \"redhat-marketplace-mtgbb\" (UID: \"d01b3ca1-cd4e-42fa-ab27-811b3d2ab26d\") " pod="openshift-marketplace/redhat-marketplace-mtgbb"
Jan 21 12:53:36 crc kubenswrapper[4881]: I0121 12:53:36.899840 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d01b3ca1-cd4e-42fa-ab27-811b3d2ab26d-catalog-content\") pod \"redhat-marketplace-mtgbb\" (UID: \"d01b3ca1-cd4e-42fa-ab27-811b3d2ab26d\") " pod="openshift-marketplace/redhat-marketplace-mtgbb"
Jan 21 12:53:36 crc kubenswrapper[4881]: I0121 12:53:36.919719 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qdlnm\" (UniqueName: \"kubernetes.io/projected/d01b3ca1-cd4e-42fa-ab27-811b3d2ab26d-kube-api-access-qdlnm\") pod \"redhat-marketplace-mtgbb\" (UID: \"d01b3ca1-cd4e-42fa-ab27-811b3d2ab26d\") " pod="openshift-marketplace/redhat-marketplace-mtgbb"
Jan 21 12:53:37 crc kubenswrapper[4881]: I0121 12:53:37.102447 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mtgbb"
Jan 21 12:53:37 crc kubenswrapper[4881]: I0121 12:53:37.660972 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mtgbb"]
Jan 21 12:53:38 crc kubenswrapper[4881]: I0121 12:53:38.200542 4881 generic.go:334] "Generic (PLEG): container finished" podID="d01b3ca1-cd4e-42fa-ab27-811b3d2ab26d" containerID="f2776602f795a7d0db90491031dc97999d1b185014b08b5f9c8ef36a6686ca71" exitCode=0
Jan 21 12:53:38 crc kubenswrapper[4881]: I0121 12:53:38.200606 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mtgbb" event={"ID":"d01b3ca1-cd4e-42fa-ab27-811b3d2ab26d","Type":"ContainerDied","Data":"f2776602f795a7d0db90491031dc97999d1b185014b08b5f9c8ef36a6686ca71"}
Jan 21 12:53:38 crc kubenswrapper[4881]: I0121 12:53:38.200900 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mtgbb" event={"ID":"d01b3ca1-cd4e-42fa-ab27-811b3d2ab26d","Type":"ContainerStarted","Data":"58441862b2c9de7ece6e1d2b0436d4ef9e5c2e523eb21c92187a1291d8b4e708"}
Jan 21 12:53:38 crc kubenswrapper[4881]: I0121 12:53:38.203471 4881 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 21 12:53:39 crc kubenswrapper[4881]: I0121 12:53:39.211775 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mtgbb" event={"ID":"d01b3ca1-cd4e-42fa-ab27-811b3d2ab26d","Type":"ContainerStarted","Data":"46b8246f1ddbb9722b2104b7ec9ef1f064fac477d33848aa9a101199d9fce4e0"}
Jan 21 12:53:40 crc kubenswrapper[4881]: I0121 12:53:40.225140 4881 generic.go:334] "Generic (PLEG): container finished" podID="d01b3ca1-cd4e-42fa-ab27-811b3d2ab26d" containerID="46b8246f1ddbb9722b2104b7ec9ef1f064fac477d33848aa9a101199d9fce4e0" exitCode=0
Jan 21 12:53:40 crc kubenswrapper[4881]: I0121 12:53:40.225207 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mtgbb" event={"ID":"d01b3ca1-cd4e-42fa-ab27-811b3d2ab26d","Type":"ContainerDied","Data":"46b8246f1ddbb9722b2104b7ec9ef1f064fac477d33848aa9a101199d9fce4e0"}
Jan 21 12:53:41 crc kubenswrapper[4881]: I0121 12:53:41.237514 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mtgbb" event={"ID":"d01b3ca1-cd4e-42fa-ab27-811b3d2ab26d","Type":"ContainerStarted","Data":"fe23b6146a83222aa15f4a6ed582038505ddc15dfc14a0c48277a51346feb485"}
Jan 21 12:53:41 crc kubenswrapper[4881]: I0121 12:53:41.268490 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-mtgbb" podStartSLOduration=2.760782728 podStartE2EDuration="5.268453768s" podCreationTimestamp="2026-01-21 12:53:36 +0000 UTC" firstStartedPulling="2026-01-21 12:53:38.203046886 +0000 UTC m=+7005.463003375" lastFinishedPulling="2026-01-21 12:53:40.710717956 +0000 UTC m=+7007.970674415" observedRunningTime="2026-01-21 12:53:41.259814229 +0000 UTC m=+7008.519770708" watchObservedRunningTime="2026-01-21 12:53:41.268453768 +0000 UTC m=+7008.528410237"
Jan 21 12:53:47 crc kubenswrapper[4881]: I0121 12:53:47.103091 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-mtgbb"
Jan 21 12:53:47 crc kubenswrapper[4881]: I0121 12:53:47.103654 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-mtgbb"
Jan 21 12:53:47 crc kubenswrapper[4881]: I0121 12:53:47.178760 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-mtgbb"
Jan 21 12:53:47 crc kubenswrapper[4881]: I0121 12:53:47.352945 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-mtgbb"
Jan 21 12:53:47 crc kubenswrapper[4881]: I0121 12:53:47.418714 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-mtgbb"]
Jan 21 12:53:49 crc kubenswrapper[4881]: I0121 12:53:49.318480 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-mtgbb" podUID="d01b3ca1-cd4e-42fa-ab27-811b3d2ab26d" containerName="registry-server" containerID="cri-o://fe23b6146a83222aa15f4a6ed582038505ddc15dfc14a0c48277a51346feb485" gracePeriod=2
Jan 21 12:53:50 crc kubenswrapper[4881]: I0121 12:53:50.332829 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mtgbb" event={"ID":"d01b3ca1-cd4e-42fa-ab27-811b3d2ab26d","Type":"ContainerDied","Data":"fe23b6146a83222aa15f4a6ed582038505ddc15dfc14a0c48277a51346feb485"}
Jan 21 12:53:50 crc kubenswrapper[4881]: I0121 12:53:50.332890 4881 generic.go:334] "Generic (PLEG): container finished" podID="d01b3ca1-cd4e-42fa-ab27-811b3d2ab26d" containerID="fe23b6146a83222aa15f4a6ed582038505ddc15dfc14a0c48277a51346feb485" exitCode=0
Jan 21 12:53:50 crc kubenswrapper[4881]: I0121 12:53:50.490111 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mtgbb"
Jan 21 12:53:50 crc kubenswrapper[4881]: I0121 12:53:50.663701 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qdlnm\" (UniqueName: \"kubernetes.io/projected/d01b3ca1-cd4e-42fa-ab27-811b3d2ab26d-kube-api-access-qdlnm\") pod \"d01b3ca1-cd4e-42fa-ab27-811b3d2ab26d\" (UID: \"d01b3ca1-cd4e-42fa-ab27-811b3d2ab26d\") "
Jan 21 12:53:50 crc kubenswrapper[4881]: I0121 12:53:50.663995 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d01b3ca1-cd4e-42fa-ab27-811b3d2ab26d-utilities\") pod \"d01b3ca1-cd4e-42fa-ab27-811b3d2ab26d\" (UID: \"d01b3ca1-cd4e-42fa-ab27-811b3d2ab26d\") "
Jan 21 12:53:50 crc kubenswrapper[4881]: I0121 12:53:50.664073 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d01b3ca1-cd4e-42fa-ab27-811b3d2ab26d-catalog-content\") pod \"d01b3ca1-cd4e-42fa-ab27-811b3d2ab26d\" (UID: \"d01b3ca1-cd4e-42fa-ab27-811b3d2ab26d\") "
Jan 21 12:53:50 crc kubenswrapper[4881]: I0121 12:53:50.664884 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d01b3ca1-cd4e-42fa-ab27-811b3d2ab26d-utilities" (OuterVolumeSpecName: "utilities") pod "d01b3ca1-cd4e-42fa-ab27-811b3d2ab26d" (UID: "d01b3ca1-cd4e-42fa-ab27-811b3d2ab26d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 12:53:50 crc kubenswrapper[4881]: I0121 12:53:50.671133 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d01b3ca1-cd4e-42fa-ab27-811b3d2ab26d-kube-api-access-qdlnm" (OuterVolumeSpecName: "kube-api-access-qdlnm") pod "d01b3ca1-cd4e-42fa-ab27-811b3d2ab26d" (UID: "d01b3ca1-cd4e-42fa-ab27-811b3d2ab26d"). InnerVolumeSpecName "kube-api-access-qdlnm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 12:53:50 crc kubenswrapper[4881]: I0121 12:53:50.698483 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d01b3ca1-cd4e-42fa-ab27-811b3d2ab26d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d01b3ca1-cd4e-42fa-ab27-811b3d2ab26d" (UID: "d01b3ca1-cd4e-42fa-ab27-811b3d2ab26d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 12:53:50 crc kubenswrapper[4881]: I0121 12:53:50.766770 4881 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d01b3ca1-cd4e-42fa-ab27-811b3d2ab26d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 12:53:50 crc kubenswrapper[4881]: I0121 12:53:50.766844 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qdlnm\" (UniqueName: \"kubernetes.io/projected/d01b3ca1-cd4e-42fa-ab27-811b3d2ab26d-kube-api-access-qdlnm\") on node \"crc\" DevicePath \"\"" Jan 21 12:53:50 crc kubenswrapper[4881]: I0121 12:53:50.766856 4881 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d01b3ca1-cd4e-42fa-ab27-811b3d2ab26d-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 12:53:51 crc kubenswrapper[4881]: I0121 12:53:51.347363 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mtgbb" event={"ID":"d01b3ca1-cd4e-42fa-ab27-811b3d2ab26d","Type":"ContainerDied","Data":"58441862b2c9de7ece6e1d2b0436d4ef9e5c2e523eb21c92187a1291d8b4e708"} Jan 21 12:53:51 crc kubenswrapper[4881]: I0121 12:53:51.347433 4881 scope.go:117] "RemoveContainer" containerID="fe23b6146a83222aa15f4a6ed582038505ddc15dfc14a0c48277a51346feb485" Jan 21 12:53:51 crc kubenswrapper[4881]: I0121 12:53:51.347473 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mtgbb" Jan 21 12:53:51 crc kubenswrapper[4881]: I0121 12:53:51.378623 4881 scope.go:117] "RemoveContainer" containerID="46b8246f1ddbb9722b2104b7ec9ef1f064fac477d33848aa9a101199d9fce4e0" Jan 21 12:53:51 crc kubenswrapper[4881]: I0121 12:53:51.379353 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-mtgbb"] Jan 21 12:53:51 crc kubenswrapper[4881]: I0121 12:53:51.394623 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-mtgbb"] Jan 21 12:53:51 crc kubenswrapper[4881]: I0121 12:53:51.400512 4881 scope.go:117] "RemoveContainer" containerID="f2776602f795a7d0db90491031dc97999d1b185014b08b5f9c8ef36a6686ca71" Jan 21 12:53:51 crc kubenswrapper[4881]: E0121 12:53:51.561063 4881 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd01b3ca1_cd4e_42fa_ab27_811b3d2ab26d.slice/crio-58441862b2c9de7ece6e1d2b0436d4ef9e5c2e523eb21c92187a1291d8b4e708\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd01b3ca1_cd4e_42fa_ab27_811b3d2ab26d.slice\": RecentStats: unable to find data in memory cache]" Jan 21 12:53:53 crc kubenswrapper[4881]: I0121 12:53:53.324008 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d01b3ca1-cd4e-42fa-ab27-811b3d2ab26d" path="/var/lib/kubelet/pods/d01b3ca1-cd4e-42fa-ab27-811b3d2ab26d/volumes" Jan 21 12:55:22 crc kubenswrapper[4881]: I0121 12:55:22.458846 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-4k5sb"] Jan 21 12:55:22 crc kubenswrapper[4881]: E0121 12:55:22.460093 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d01b3ca1-cd4e-42fa-ab27-811b3d2ab26d" containerName="registry-server" Jan 21 12:55:22 crc kubenswrapper[4881]: I0121 12:55:22.460110 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="d01b3ca1-cd4e-42fa-ab27-811b3d2ab26d" containerName="registry-server" Jan 21 12:55:22 crc kubenswrapper[4881]: E0121 12:55:22.460139 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d01b3ca1-cd4e-42fa-ab27-811b3d2ab26d" containerName="extract-utilities" Jan 21 12:55:22 crc kubenswrapper[4881]: I0121 12:55:22.460148 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="d01b3ca1-cd4e-42fa-ab27-811b3d2ab26d" containerName="extract-utilities" Jan 21 12:55:22 crc kubenswrapper[4881]: E0121 12:55:22.460171 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d01b3ca1-cd4e-42fa-ab27-811b3d2ab26d" containerName="extract-content" Jan 21 12:55:22 crc kubenswrapper[4881]: I0121 12:55:22.460179 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="d01b3ca1-cd4e-42fa-ab27-811b3d2ab26d" containerName="extract-content" Jan 21 12:55:22 crc kubenswrapper[4881]: I0121 12:55:22.460493 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="d01b3ca1-cd4e-42fa-ab27-811b3d2ab26d" containerName="registry-server" Jan 21 12:55:22 crc kubenswrapper[4881]: I0121 12:55:22.462645 4881 util.go:30] "No sandbox for pod can be found. 
Jan 21 12:55:22 crc kubenswrapper[4881]: I0121 12:55:22.472185 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4k5sb"]
Jan 21 12:55:22 crc kubenswrapper[4881]: I0121 12:55:22.519476 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/484fa13a-3d87-4fdb-926a-4bedccfa3140-utilities\") pod \"certified-operators-4k5sb\" (UID: \"484fa13a-3d87-4fdb-926a-4bedccfa3140\") " pod="openshift-marketplace/certified-operators-4k5sb"
Jan 21 12:55:22 crc kubenswrapper[4881]: I0121 12:55:22.519820 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nhcmm\" (UniqueName: \"kubernetes.io/projected/484fa13a-3d87-4fdb-926a-4bedccfa3140-kube-api-access-nhcmm\") pod \"certified-operators-4k5sb\" (UID: \"484fa13a-3d87-4fdb-926a-4bedccfa3140\") " pod="openshift-marketplace/certified-operators-4k5sb"
Jan 21 12:55:22 crc kubenswrapper[4881]: I0121 12:55:22.520030 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/484fa13a-3d87-4fdb-926a-4bedccfa3140-catalog-content\") pod \"certified-operators-4k5sb\" (UID: \"484fa13a-3d87-4fdb-926a-4bedccfa3140\") " pod="openshift-marketplace/certified-operators-4k5sb"
Jan 21 12:55:22 crc kubenswrapper[4881]: I0121 12:55:22.622522 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/484fa13a-3d87-4fdb-926a-4bedccfa3140-catalog-content\") pod \"certified-operators-4k5sb\" (UID: \"484fa13a-3d87-4fdb-926a-4bedccfa3140\") " pod="openshift-marketplace/certified-operators-4k5sb"
Jan 21 12:55:22 crc kubenswrapper[4881]: I0121 12:55:22.622651 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/484fa13a-3d87-4fdb-926a-4bedccfa3140-utilities\") pod \"certified-operators-4k5sb\" (UID: \"484fa13a-3d87-4fdb-926a-4bedccfa3140\") " pod="openshift-marketplace/certified-operators-4k5sb"
Jan 21 12:55:22 crc kubenswrapper[4881]: I0121 12:55:22.622680 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nhcmm\" (UniqueName: \"kubernetes.io/projected/484fa13a-3d87-4fdb-926a-4bedccfa3140-kube-api-access-nhcmm\") pod \"certified-operators-4k5sb\" (UID: \"484fa13a-3d87-4fdb-926a-4bedccfa3140\") " pod="openshift-marketplace/certified-operators-4k5sb"
Jan 21 12:55:22 crc kubenswrapper[4881]: I0121 12:55:22.623650 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/484fa13a-3d87-4fdb-926a-4bedccfa3140-catalog-content\") pod \"certified-operators-4k5sb\" (UID: \"484fa13a-3d87-4fdb-926a-4bedccfa3140\") " pod="openshift-marketplace/certified-operators-4k5sb"
Jan 21 12:55:22 crc kubenswrapper[4881]: I0121 12:55:22.623886 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/484fa13a-3d87-4fdb-926a-4bedccfa3140-utilities\") pod \"certified-operators-4k5sb\" (UID: \"484fa13a-3d87-4fdb-926a-4bedccfa3140\") " pod="openshift-marketplace/certified-operators-4k5sb"
Jan 21 12:55:22 crc kubenswrapper[4881]: I0121 12:55:22.646778 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nhcmm\" (UniqueName: \"kubernetes.io/projected/484fa13a-3d87-4fdb-926a-4bedccfa3140-kube-api-access-nhcmm\") pod \"certified-operators-4k5sb\" (UID: \"484fa13a-3d87-4fdb-926a-4bedccfa3140\") " pod="openshift-marketplace/certified-operators-4k5sb"
Jan 21 12:55:22 crc kubenswrapper[4881]: I0121 12:55:22.788668 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4k5sb"
Jan 21 12:55:23 crc kubenswrapper[4881]: I0121 12:55:23.429299 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4k5sb"]
Jan 21 12:55:23 crc kubenswrapper[4881]: I0121 12:55:23.525990 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4k5sb" event={"ID":"484fa13a-3d87-4fdb-926a-4bedccfa3140","Type":"ContainerStarted","Data":"11ebae3a888dec65882526f80a7bb92025588e02e26de48fd6e89454edb9d249"}
Jan 21 12:55:24 crc kubenswrapper[4881]: I0121 12:55:24.538131 4881 generic.go:334] "Generic (PLEG): container finished" podID="484fa13a-3d87-4fdb-926a-4bedccfa3140" containerID="466753d23890f667d788051ffffcde33823b92cd70a8273eb27f17cf0f2b8907" exitCode=0
Jan 21 12:55:24 crc kubenswrapper[4881]: I0121 12:55:24.538260 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4k5sb" event={"ID":"484fa13a-3d87-4fdb-926a-4bedccfa3140","Type":"ContainerDied","Data":"466753d23890f667d788051ffffcde33823b92cd70a8273eb27f17cf0f2b8907"}
Jan 21 12:55:24 crc kubenswrapper[4881]: I0121 12:55:24.644522 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-qlpzh"]
Jan 21 12:55:24 crc kubenswrapper[4881]: I0121 12:55:24.648351 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qlpzh"
Jan 21 12:55:24 crc kubenswrapper[4881]: I0121 12:55:24.673337 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-qlpzh"]
Jan 21 12:55:24 crc kubenswrapper[4881]: I0121 12:55:24.690512 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bbd14e97-6383-426c-a806-89dc0439e483-catalog-content\") pod \"community-operators-qlpzh\" (UID: \"bbd14e97-6383-426c-a806-89dc0439e483\") " pod="openshift-marketplace/community-operators-qlpzh"
Jan 21 12:55:24 crc kubenswrapper[4881]: I0121 12:55:24.690573 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bbd14e97-6383-426c-a806-89dc0439e483-utilities\") pod \"community-operators-qlpzh\" (UID: \"bbd14e97-6383-426c-a806-89dc0439e483\") " pod="openshift-marketplace/community-operators-qlpzh"
Jan 21 12:55:24 crc kubenswrapper[4881]: I0121 12:55:24.690608 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wj2xn\" (UniqueName: \"kubernetes.io/projected/bbd14e97-6383-426c-a806-89dc0439e483-kube-api-access-wj2xn\") pod \"community-operators-qlpzh\" (UID: \"bbd14e97-6383-426c-a806-89dc0439e483\") " pod="openshift-marketplace/community-operators-qlpzh"
Jan 21 12:55:24 crc kubenswrapper[4881]: I0121 12:55:24.792825 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bbd14e97-6383-426c-a806-89dc0439e483-catalog-content\") pod \"community-operators-qlpzh\" (UID: \"bbd14e97-6383-426c-a806-89dc0439e483\") " pod="openshift-marketplace/community-operators-qlpzh"
Jan 21 12:55:24 crc kubenswrapper[4881]: I0121 12:55:24.792885 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bbd14e97-6383-426c-a806-89dc0439e483-utilities\") pod \"community-operators-qlpzh\" (UID: \"bbd14e97-6383-426c-a806-89dc0439e483\") " pod="openshift-marketplace/community-operators-qlpzh"
Jan 21 12:55:24 crc kubenswrapper[4881]: I0121 12:55:24.792929 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wj2xn\" (UniqueName: \"kubernetes.io/projected/bbd14e97-6383-426c-a806-89dc0439e483-kube-api-access-wj2xn\") pod \"community-operators-qlpzh\" (UID: \"bbd14e97-6383-426c-a806-89dc0439e483\") " pod="openshift-marketplace/community-operators-qlpzh"
Jan 21 12:55:24 crc kubenswrapper[4881]: I0121 12:55:24.793544 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bbd14e97-6383-426c-a806-89dc0439e483-catalog-content\") pod \"community-operators-qlpzh\" (UID: \"bbd14e97-6383-426c-a806-89dc0439e483\") " pod="openshift-marketplace/community-operators-qlpzh"
Jan 21 12:55:24 crc kubenswrapper[4881]: I0121 12:55:24.793765 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bbd14e97-6383-426c-a806-89dc0439e483-utilities\") pod \"community-operators-qlpzh\" (UID: \"bbd14e97-6383-426c-a806-89dc0439e483\") " pod="openshift-marketplace/community-operators-qlpzh"
Jan 21 12:55:24 crc kubenswrapper[4881]: I0121 12:55:24.815708 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wj2xn\" (UniqueName: \"kubernetes.io/projected/bbd14e97-6383-426c-a806-89dc0439e483-kube-api-access-wj2xn\") pod \"community-operators-qlpzh\" (UID: \"bbd14e97-6383-426c-a806-89dc0439e483\") " pod="openshift-marketplace/community-operators-qlpzh"
Jan 21 12:55:24 crc kubenswrapper[4881]: I0121 12:55:24.979085 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qlpzh"
Jan 21 12:55:25 crc kubenswrapper[4881]: I0121 12:55:25.497415 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-qlpzh"]
Jan 21 12:55:25 crc kubenswrapper[4881]: I0121 12:55:25.556280 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4k5sb" event={"ID":"484fa13a-3d87-4fdb-926a-4bedccfa3140","Type":"ContainerStarted","Data":"d45e348ef263b7ba1f529124466d1dc280d898d072af30b46dae32df93da25c3"}
Jan 21 12:55:25 crc kubenswrapper[4881]: I0121 12:55:25.558036 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qlpzh" event={"ID":"bbd14e97-6383-426c-a806-89dc0439e483","Type":"ContainerStarted","Data":"d2d30cea3f4802992aeeddc90e712708eb9ee514be369fa07ba0e9851856d338"}
Jan 21 12:55:26 crc kubenswrapper[4881]: I0121 12:55:26.580951 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qlpzh" event={"ID":"bbd14e97-6383-426c-a806-89dc0439e483","Type":"ContainerStarted","Data":"44aa3e3470b0ca5ec311ca3411b98f9efca17f1a79eb00b5ddc10873f556ea0f"}
Jan 21 12:55:27 crc kubenswrapper[4881]: I0121 12:55:27.599167 4881 generic.go:334] "Generic (PLEG): container finished" podID="bbd14e97-6383-426c-a806-89dc0439e483" containerID="44aa3e3470b0ca5ec311ca3411b98f9efca17f1a79eb00b5ddc10873f556ea0f" exitCode=0
Jan 21 12:55:27 crc kubenswrapper[4881]: I0121 12:55:27.599239 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qlpzh" event={"ID":"bbd14e97-6383-426c-a806-89dc0439e483","Type":"ContainerDied","Data":"44aa3e3470b0ca5ec311ca3411b98f9efca17f1a79eb00b5ddc10873f556ea0f"}
Jan 21 12:55:27 crc kubenswrapper[4881]: I0121 12:55:27.605699 4881 generic.go:334] "Generic (PLEG): container finished" podID="484fa13a-3d87-4fdb-926a-4bedccfa3140" containerID="d45e348ef263b7ba1f529124466d1dc280d898d072af30b46dae32df93da25c3" exitCode=0
Jan 21 12:55:27 crc kubenswrapper[4881]: I0121 12:55:27.605874 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4k5sb" event={"ID":"484fa13a-3d87-4fdb-926a-4bedccfa3140","Type":"ContainerDied","Data":"d45e348ef263b7ba1f529124466d1dc280d898d072af30b46dae32df93da25c3"}
Jan 21 12:55:28 crc kubenswrapper[4881]: I0121 12:55:28.618224 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4k5sb" event={"ID":"484fa13a-3d87-4fdb-926a-4bedccfa3140","Type":"ContainerStarted","Data":"08e670a5efbccbc4f21fbb4986e9798817e396e84f38095931c66cf8c87af21d"}
Jan 21 12:55:28 crc kubenswrapper[4881]: I0121 12:55:28.642944 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-4k5sb" podStartSLOduration=3.151789721 podStartE2EDuration="6.642923115s" podCreationTimestamp="2026-01-21 12:55:22 +0000 UTC" firstStartedPulling="2026-01-21 12:55:24.540298077 +0000 UTC m=+7111.800254546" lastFinishedPulling="2026-01-21 12:55:28.031431471 +0000 UTC m=+7115.291387940" observedRunningTime="2026-01-21 12:55:28.639733528 +0000 UTC m=+7115.899690007" watchObservedRunningTime="2026-01-21 12:55:28.642923115 +0000 UTC m=+7115.902879594"
Jan 21 12:55:29 crc kubenswrapper[4881]: I0121 12:55:29.634507 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qlpzh" event={"ID":"bbd14e97-6383-426c-a806-89dc0439e483","Type":"ContainerStarted","Data":"abcf99b1686ed30d644ac8cc3108a674f3f5c33b08bd1de6310637f6c896a97b"}
Jan 21 12:55:31 crc kubenswrapper[4881]: I0121 12:55:31.666358 4881 generic.go:334] "Generic (PLEG): container finished" podID="bbd14e97-6383-426c-a806-89dc0439e483" containerID="abcf99b1686ed30d644ac8cc3108a674f3f5c33b08bd1de6310637f6c896a97b" exitCode=0
Jan 21 12:55:31 crc kubenswrapper[4881]: I0121 12:55:31.666447 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qlpzh" event={"ID":"bbd14e97-6383-426c-a806-89dc0439e483","Type":"ContainerDied","Data":"abcf99b1686ed30d644ac8cc3108a674f3f5c33b08bd1de6310637f6c896a97b"}
Jan 21 12:55:32 crc kubenswrapper[4881]: I0121 12:55:32.789485 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-4k5sb"
Jan 21 12:55:32 crc kubenswrapper[4881]: I0121 12:55:32.789813 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-4k5sb"
Jan 21 12:55:33 crc kubenswrapper[4881]: I0121 12:55:33.697471 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qlpzh" event={"ID":"bbd14e97-6383-426c-a806-89dc0439e483","Type":"ContainerStarted","Data":"4c2249baf75909213b45ca0d5d8deab257ea7674b612d4fa673ea256f1644b3b"}
Jan 21 12:55:33 crc kubenswrapper[4881]: I0121 12:55:33.723201 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-qlpzh" podStartSLOduration=4.067145268 podStartE2EDuration="9.723180422s" podCreationTimestamp="2026-01-21 12:55:24 +0000 UTC" firstStartedPulling="2026-01-21 12:55:27.603199748 +0000 UTC m=+7114.863156257" lastFinishedPulling="2026-01-21 12:55:33.259234932 +0000 UTC m=+7120.519191411" observedRunningTime="2026-01-21 12:55:33.722956356 +0000 UTC m=+7120.982912885" watchObservedRunningTime="2026-01-21 12:55:33.723180422 +0000 UTC m=+7120.983136891"
Jan 21 12:55:33 crc kubenswrapper[4881]: I0121 12:55:33.851827 4881 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-4k5sb" podUID="484fa13a-3d87-4fdb-926a-4bedccfa3140" containerName="registry-server" probeResult="failure" output=<
Jan 21 12:55:33 crc kubenswrapper[4881]: timeout: failed to connect service ":50051" within 1s
Jan 21 12:55:33 crc kubenswrapper[4881]: >
Jan 21 12:55:34 crc kubenswrapper[4881]: I0121 12:55:34.980640 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-qlpzh"
Jan 21 12:55:34 crc kubenswrapper[4881]: I0121 12:55:34.981602 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-qlpzh"
Jan 21 12:55:36 crc kubenswrapper[4881]: I0121 12:55:36.033085 4881 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-qlpzh" podUID="bbd14e97-6383-426c-a806-89dc0439e483" containerName="registry-server" probeResult="failure" output=<
Jan 21 12:55:36 crc kubenswrapper[4881]: timeout: failed to connect service ":50051" within 1s
Jan 21 12:55:36 crc kubenswrapper[4881]: >
Jan 21 12:55:42 crc kubenswrapper[4881]: I0121 12:55:42.865255 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-4k5sb"
Jan 21 12:55:42 crc kubenswrapper[4881]: I0121 12:55:42.949362 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-4k5sb"
Jan 21 12:55:43 crc kubenswrapper[4881]: I0121 12:55:43.114597 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4k5sb"]
Jan 21 12:55:44 crc kubenswrapper[4881]: I0121 12:55:44.831849 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-4k5sb" podUID="484fa13a-3d87-4fdb-926a-4bedccfa3140" containerName="registry-server" containerID="cri-o://08e670a5efbccbc4f21fbb4986e9798817e396e84f38095931c66cf8c87af21d" gracePeriod=2
Jan 21 12:55:45 crc kubenswrapper[4881]: I0121 12:55:45.034996 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-qlpzh"
Jan 21 12:55:45 crc kubenswrapper[4881]: I0121 12:55:45.102559 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-qlpzh"
Jan 21 12:55:45 crc kubenswrapper[4881]: I0121 12:55:45.344819 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4k5sb"
Jan 21 12:55:45 crc kubenswrapper[4881]: I0121 12:55:45.483014 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/484fa13a-3d87-4fdb-926a-4bedccfa3140-utilities\") pod \"484fa13a-3d87-4fdb-926a-4bedccfa3140\" (UID: \"484fa13a-3d87-4fdb-926a-4bedccfa3140\") "
Jan 21 12:55:45 crc kubenswrapper[4881]: I0121 12:55:45.483400 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nhcmm\" (UniqueName: \"kubernetes.io/projected/484fa13a-3d87-4fdb-926a-4bedccfa3140-kube-api-access-nhcmm\") pod \"484fa13a-3d87-4fdb-926a-4bedccfa3140\" (UID: \"484fa13a-3d87-4fdb-926a-4bedccfa3140\") "
Jan 21 12:55:45 crc kubenswrapper[4881]: I0121 12:55:45.483589 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/484fa13a-3d87-4fdb-926a-4bedccfa3140-catalog-content\") pod \"484fa13a-3d87-4fdb-926a-4bedccfa3140\" (UID: \"484fa13a-3d87-4fdb-926a-4bedccfa3140\") "
Jan 21 12:55:45 crc kubenswrapper[4881]: I0121 12:55:45.487030 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/484fa13a-3d87-4fdb-926a-4bedccfa3140-utilities" (OuterVolumeSpecName: "utilities") pod "484fa13a-3d87-4fdb-926a-4bedccfa3140" (UID: "484fa13a-3d87-4fdb-926a-4bedccfa3140"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 12:55:45 crc kubenswrapper[4881]: I0121 12:55:45.495103 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/484fa13a-3d87-4fdb-926a-4bedccfa3140-kube-api-access-nhcmm" (OuterVolumeSpecName: "kube-api-access-nhcmm") pod "484fa13a-3d87-4fdb-926a-4bedccfa3140" (UID: "484fa13a-3d87-4fdb-926a-4bedccfa3140"). InnerVolumeSpecName "kube-api-access-nhcmm". PluginName "kubernetes.io/projected", VolumeGidValue ""
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 12:55:45 crc kubenswrapper[4881]: I0121 12:55:45.518218 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-qlpzh"] Jan 21 12:55:45 crc kubenswrapper[4881]: I0121 12:55:45.557924 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/484fa13a-3d87-4fdb-926a-4bedccfa3140-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "484fa13a-3d87-4fdb-926a-4bedccfa3140" (UID: "484fa13a-3d87-4fdb-926a-4bedccfa3140"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 12:55:45 crc kubenswrapper[4881]: I0121 12:55:45.588091 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nhcmm\" (UniqueName: \"kubernetes.io/projected/484fa13a-3d87-4fdb-926a-4bedccfa3140-kube-api-access-nhcmm\") on node \"crc\" DevicePath \"\"" Jan 21 12:55:45 crc kubenswrapper[4881]: I0121 12:55:45.588138 4881 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/484fa13a-3d87-4fdb-926a-4bedccfa3140-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 12:55:45 crc kubenswrapper[4881]: I0121 12:55:45.588152 4881 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/484fa13a-3d87-4fdb-926a-4bedccfa3140-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 12:55:45 crc kubenswrapper[4881]: I0121 12:55:45.845120 4881 generic.go:334] "Generic (PLEG): container finished" podID="484fa13a-3d87-4fdb-926a-4bedccfa3140" containerID="08e670a5efbccbc4f21fbb4986e9798817e396e84f38095931c66cf8c87af21d" exitCode=0 Jan 21 12:55:45 crc kubenswrapper[4881]: I0121 12:55:45.845184 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-4k5sb" Jan 21 12:55:45 crc kubenswrapper[4881]: I0121 12:55:45.845295 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4k5sb" event={"ID":"484fa13a-3d87-4fdb-926a-4bedccfa3140","Type":"ContainerDied","Data":"08e670a5efbccbc4f21fbb4986e9798817e396e84f38095931c66cf8c87af21d"} Jan 21 12:55:45 crc kubenswrapper[4881]: I0121 12:55:45.845340 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4k5sb" event={"ID":"484fa13a-3d87-4fdb-926a-4bedccfa3140","Type":"ContainerDied","Data":"11ebae3a888dec65882526f80a7bb92025588e02e26de48fd6e89454edb9d249"} Jan 21 12:55:45 crc kubenswrapper[4881]: I0121 12:55:45.845364 4881 scope.go:117] "RemoveContainer" containerID="08e670a5efbccbc4f21fbb4986e9798817e396e84f38095931c66cf8c87af21d" Jan 21 12:55:45 crc kubenswrapper[4881]: I0121 12:55:45.870955 4881 scope.go:117] "RemoveContainer" containerID="d45e348ef263b7ba1f529124466d1dc280d898d072af30b46dae32df93da25c3" Jan 21 12:55:45 crc kubenswrapper[4881]: I0121 12:55:45.894953 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4k5sb"] Jan 21 12:55:45 crc kubenswrapper[4881]: I0121 12:55:45.896634 4881 scope.go:117] "RemoveContainer" containerID="466753d23890f667d788051ffffcde33823b92cd70a8273eb27f17cf0f2b8907" Jan 21 12:55:45 crc kubenswrapper[4881]: I0121 12:55:45.917926 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-4k5sb"] Jan 21 12:55:45 crc kubenswrapper[4881]: I0121 12:55:45.940692 4881 scope.go:117] "RemoveContainer" containerID="08e670a5efbccbc4f21fbb4986e9798817e396e84f38095931c66cf8c87af21d" Jan 21 12:55:45 crc kubenswrapper[4881]: E0121 12:55:45.941288 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"08e670a5efbccbc4f21fbb4986e9798817e396e84f38095931c66cf8c87af21d\": container with ID starting with 08e670a5efbccbc4f21fbb4986e9798817e396e84f38095931c66cf8c87af21d not found: ID does not exist" containerID="08e670a5efbccbc4f21fbb4986e9798817e396e84f38095931c66cf8c87af21d" Jan 21 12:55:45 crc kubenswrapper[4881]: I0121 12:55:45.941344 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"08e670a5efbccbc4f21fbb4986e9798817e396e84f38095931c66cf8c87af21d"} err="failed to get container status \"08e670a5efbccbc4f21fbb4986e9798817e396e84f38095931c66cf8c87af21d\": rpc error: code = NotFound desc = could not find container \"08e670a5efbccbc4f21fbb4986e9798817e396e84f38095931c66cf8c87af21d\": container with ID starting with 08e670a5efbccbc4f21fbb4986e9798817e396e84f38095931c66cf8c87af21d not found: ID does not exist" Jan 21 12:55:45 crc kubenswrapper[4881]: I0121 12:55:45.941371 4881 scope.go:117] "RemoveContainer" containerID="d45e348ef263b7ba1f529124466d1dc280d898d072af30b46dae32df93da25c3" Jan 21 12:55:45 crc kubenswrapper[4881]: E0121 12:55:45.941618 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d45e348ef263b7ba1f529124466d1dc280d898d072af30b46dae32df93da25c3\": container with ID starting with d45e348ef263b7ba1f529124466d1dc280d898d072af30b46dae32df93da25c3 not found: ID does not exist" containerID="d45e348ef263b7ba1f529124466d1dc280d898d072af30b46dae32df93da25c3" Jan 21 12:55:45 crc kubenswrapper[4881]: I0121 12:55:45.941658 4881 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d45e348ef263b7ba1f529124466d1dc280d898d072af30b46dae32df93da25c3"} err="failed to get container status \"d45e348ef263b7ba1f529124466d1dc280d898d072af30b46dae32df93da25c3\": rpc error: code = NotFound desc = could not find container \"d45e348ef263b7ba1f529124466d1dc280d898d072af30b46dae32df93da25c3\": container with ID starting with d45e348ef263b7ba1f529124466d1dc280d898d072af30b46dae32df93da25c3 not found: ID does not exist" Jan 21 12:55:45 crc kubenswrapper[4881]: I0121 12:55:45.941684 4881 scope.go:117] "RemoveContainer" containerID="466753d23890f667d788051ffffcde33823b92cd70a8273eb27f17cf0f2b8907" Jan 21 12:55:45 crc kubenswrapper[4881]: E0121 12:55:45.942207 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"466753d23890f667d788051ffffcde33823b92cd70a8273eb27f17cf0f2b8907\": container with ID starting with 466753d23890f667d788051ffffcde33823b92cd70a8273eb27f17cf0f2b8907 not found: ID does not exist" containerID="466753d23890f667d788051ffffcde33823b92cd70a8273eb27f17cf0f2b8907" Jan 21 12:55:45 crc kubenswrapper[4881]: I0121 12:55:45.942258 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"466753d23890f667d788051ffffcde33823b92cd70a8273eb27f17cf0f2b8907"} err="failed to get container status \"466753d23890f667d788051ffffcde33823b92cd70a8273eb27f17cf0f2b8907\": rpc error: code = NotFound desc = could not find container \"466753d23890f667d788051ffffcde33823b92cd70a8273eb27f17cf0f2b8907\": container with ID starting with 466753d23890f667d788051ffffcde33823b92cd70a8273eb27f17cf0f2b8907 not found: ID does not exist" Jan 21 12:55:46 crc kubenswrapper[4881]: I0121 12:55:46.859656 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-qlpzh" podUID="bbd14e97-6383-426c-a806-89dc0439e483" containerName="registry-server" containerID="cri-o://4c2249baf75909213b45ca0d5d8deab257ea7674b612d4fa673ea256f1644b3b" gracePeriod=2 Jan 21 12:55:47 crc kubenswrapper[4881]: I0121 12:55:47.331929 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="484fa13a-3d87-4fdb-926a-4bedccfa3140" path="/var/lib/kubelet/pods/484fa13a-3d87-4fdb-926a-4bedccfa3140/volumes" Jan 21 12:55:47 crc kubenswrapper[4881]: I0121 12:55:47.348881 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-qlpzh" Jan 21 12:55:47 crc kubenswrapper[4881]: I0121 12:55:47.360199 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bbd14e97-6383-426c-a806-89dc0439e483-utilities\") pod \"bbd14e97-6383-426c-a806-89dc0439e483\" (UID: \"bbd14e97-6383-426c-a806-89dc0439e483\") " Jan 21 12:55:47 crc kubenswrapper[4881]: I0121 12:55:47.360536 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bbd14e97-6383-426c-a806-89dc0439e483-catalog-content\") pod \"bbd14e97-6383-426c-a806-89dc0439e483\" (UID: \"bbd14e97-6383-426c-a806-89dc0439e483\") " Jan 21 12:55:47 crc kubenswrapper[4881]: I0121 12:55:47.360668 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wj2xn\" (UniqueName: \"kubernetes.io/projected/bbd14e97-6383-426c-a806-89dc0439e483-kube-api-access-wj2xn\") pod \"bbd14e97-6383-426c-a806-89dc0439e483\" (UID: \"bbd14e97-6383-426c-a806-89dc0439e483\") " Jan 21 12:55:47 crc kubenswrapper[4881]: I0121 12:55:47.362672 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bbd14e97-6383-426c-a806-89dc0439e483-utilities" (OuterVolumeSpecName: "utilities") pod "bbd14e97-6383-426c-a806-89dc0439e483" (UID: "bbd14e97-6383-426c-a806-89dc0439e483"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 12:55:47 crc kubenswrapper[4881]: I0121 12:55:47.373080 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bbd14e97-6383-426c-a806-89dc0439e483-kube-api-access-wj2xn" (OuterVolumeSpecName: "kube-api-access-wj2xn") pod "bbd14e97-6383-426c-a806-89dc0439e483" (UID: "bbd14e97-6383-426c-a806-89dc0439e483"). InnerVolumeSpecName "kube-api-access-wj2xn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 12:55:47 crc kubenswrapper[4881]: I0121 12:55:47.434928 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bbd14e97-6383-426c-a806-89dc0439e483-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bbd14e97-6383-426c-a806-89dc0439e483" (UID: "bbd14e97-6383-426c-a806-89dc0439e483"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 12:55:47 crc kubenswrapper[4881]: I0121 12:55:47.463857 4881 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bbd14e97-6383-426c-a806-89dc0439e483-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 12:55:47 crc kubenswrapper[4881]: I0121 12:55:47.464213 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wj2xn\" (UniqueName: \"kubernetes.io/projected/bbd14e97-6383-426c-a806-89dc0439e483-kube-api-access-wj2xn\") on node \"crc\" DevicePath \"\"" Jan 21 12:55:47 crc kubenswrapper[4881]: I0121 12:55:47.464289 4881 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bbd14e97-6383-426c-a806-89dc0439e483-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 12:55:47 crc kubenswrapper[4881]: I0121 12:55:47.880447 4881 generic.go:334] "Generic (PLEG): container finished" podID="bbd14e97-6383-426c-a806-89dc0439e483" containerID="4c2249baf75909213b45ca0d5d8deab257ea7674b612d4fa673ea256f1644b3b" exitCode=0 Jan 21 12:55:47 crc kubenswrapper[4881]: I0121 12:55:47.880487 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qlpzh" event={"ID":"bbd14e97-6383-426c-a806-89dc0439e483","Type":"ContainerDied","Data":"4c2249baf75909213b45ca0d5d8deab257ea7674b612d4fa673ea256f1644b3b"} Jan 21 12:55:47 crc kubenswrapper[4881]: I0121 12:55:47.880535 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qlpzh" event={"ID":"bbd14e97-6383-426c-a806-89dc0439e483","Type":"ContainerDied","Data":"d2d30cea3f4802992aeeddc90e712708eb9ee514be369fa07ba0e9851856d338"} Jan 21 12:55:47 crc kubenswrapper[4881]: I0121 12:55:47.880554 4881 scope.go:117] "RemoveContainer" containerID="4c2249baf75909213b45ca0d5d8deab257ea7674b612d4fa673ea256f1644b3b" Jan 21 12:55:47 crc kubenswrapper[4881]: I0121 12:55:47.880620 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-qlpzh" Jan 21 12:55:47 crc kubenswrapper[4881]: I0121 12:55:47.908140 4881 scope.go:117] "RemoveContainer" containerID="abcf99b1686ed30d644ac8cc3108a674f3f5c33b08bd1de6310637f6c896a97b" Jan 21 12:55:47 crc kubenswrapper[4881]: I0121 12:55:47.940213 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-qlpzh"] Jan 21 12:55:47 crc kubenswrapper[4881]: I0121 12:55:47.948272 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-qlpzh"] Jan 21 12:55:47 crc kubenswrapper[4881]: I0121 12:55:47.963433 4881 scope.go:117] "RemoveContainer" containerID="44aa3e3470b0ca5ec311ca3411b98f9efca17f1a79eb00b5ddc10873f556ea0f" Jan 21 12:55:48 crc kubenswrapper[4881]: I0121 12:55:48.008092 4881 scope.go:117] "RemoveContainer" containerID="4c2249baf75909213b45ca0d5d8deab257ea7674b612d4fa673ea256f1644b3b" Jan 21 12:55:48 crc kubenswrapper[4881]: E0121 12:55:48.008941 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4c2249baf75909213b45ca0d5d8deab257ea7674b612d4fa673ea256f1644b3b\": container with ID starting with 4c2249baf75909213b45ca0d5d8deab257ea7674b612d4fa673ea256f1644b3b not found: ID does not exist" containerID="4c2249baf75909213b45ca0d5d8deab257ea7674b612d4fa673ea256f1644b3b" Jan 21 12:55:48 crc kubenswrapper[4881]: I0121 12:55:48.009009 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4c2249baf75909213b45ca0d5d8deab257ea7674b612d4fa673ea256f1644b3b"} err="failed to get container status \"4c2249baf75909213b45ca0d5d8deab257ea7674b612d4fa673ea256f1644b3b\": rpc error: code = NotFound desc = could not find container \"4c2249baf75909213b45ca0d5d8deab257ea7674b612d4fa673ea256f1644b3b\": container with ID starting with 4c2249baf75909213b45ca0d5d8deab257ea7674b612d4fa673ea256f1644b3b not found: ID does not exist" Jan 21 12:55:48 crc kubenswrapper[4881]: I0121 12:55:48.009054 4881 scope.go:117] "RemoveContainer" containerID="abcf99b1686ed30d644ac8cc3108a674f3f5c33b08bd1de6310637f6c896a97b" Jan 21 12:55:48 crc kubenswrapper[4881]: E0121 12:55:48.009461 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"abcf99b1686ed30d644ac8cc3108a674f3f5c33b08bd1de6310637f6c896a97b\": container with ID starting with abcf99b1686ed30d644ac8cc3108a674f3f5c33b08bd1de6310637f6c896a97b not found: ID does not exist" containerID="abcf99b1686ed30d644ac8cc3108a674f3f5c33b08bd1de6310637f6c896a97b" Jan 21 12:55:48 crc kubenswrapper[4881]: I0121 12:55:48.009496 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"abcf99b1686ed30d644ac8cc3108a674f3f5c33b08bd1de6310637f6c896a97b"} err="failed to get container status \"abcf99b1686ed30d644ac8cc3108a674f3f5c33b08bd1de6310637f6c896a97b\": rpc error: code = NotFound desc = could not find container \"abcf99b1686ed30d644ac8cc3108a674f3f5c33b08bd1de6310637f6c896a97b\": container with ID starting with abcf99b1686ed30d644ac8cc3108a674f3f5c33b08bd1de6310637f6c896a97b not found: ID does not exist" Jan 21 12:55:48 crc kubenswrapper[4881]: I0121 12:55:48.009520 4881 scope.go:117] "RemoveContainer" containerID="44aa3e3470b0ca5ec311ca3411b98f9efca17f1a79eb00b5ddc10873f556ea0f" Jan 21 12:55:48 crc kubenswrapper[4881]: E0121 12:55:48.009821 4881 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"44aa3e3470b0ca5ec311ca3411b98f9efca17f1a79eb00b5ddc10873f556ea0f\": container with ID starting with 44aa3e3470b0ca5ec311ca3411b98f9efca17f1a79eb00b5ddc10873f556ea0f not found: ID does not exist" containerID="44aa3e3470b0ca5ec311ca3411b98f9efca17f1a79eb00b5ddc10873f556ea0f" Jan 21 12:55:48 crc kubenswrapper[4881]: I0121 12:55:48.009867 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"44aa3e3470b0ca5ec311ca3411b98f9efca17f1a79eb00b5ddc10873f556ea0f"} err="failed to get container status \"44aa3e3470b0ca5ec311ca3411b98f9efca17f1a79eb00b5ddc10873f556ea0f\": rpc error: code = NotFound desc = could not find container \"44aa3e3470b0ca5ec311ca3411b98f9efca17f1a79eb00b5ddc10873f556ea0f\": container with ID starting with 44aa3e3470b0ca5ec311ca3411b98f9efca17f1a79eb00b5ddc10873f556ea0f not found: ID does not exist" Jan 21 12:55:49 crc kubenswrapper[4881]: I0121 12:55:49.326347 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bbd14e97-6383-426c-a806-89dc0439e483" path="/var/lib/kubelet/pods/bbd14e97-6383-426c-a806-89dc0439e483/volumes" Jan 21 12:55:59 crc kubenswrapper[4881]: I0121 12:55:59.851689 4881 patch_prober.go:28] interesting pod/machine-config-daemon-fb4fr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 12:55:59 crc kubenswrapper[4881]: I0121 12:55:59.852265 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 12:56:15 crc kubenswrapper[4881]: I0121 12:56:15.820605 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-m25nf"] Jan 21 12:56:15 crc kubenswrapper[4881]: E0121 12:56:15.822242 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bbd14e97-6383-426c-a806-89dc0439e483" containerName="extract-utilities" Jan 21 12:56:15 crc kubenswrapper[4881]: I0121 12:56:15.822275 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="bbd14e97-6383-426c-a806-89dc0439e483" containerName="extract-utilities" Jan 21 12:56:15 crc kubenswrapper[4881]: E0121 12:56:15.822304 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="484fa13a-3d87-4fdb-926a-4bedccfa3140" containerName="extract-utilities" Jan 21 12:56:15 crc kubenswrapper[4881]: I0121 12:56:15.822320 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="484fa13a-3d87-4fdb-926a-4bedccfa3140" containerName="extract-utilities" Jan 21 12:56:15 crc kubenswrapper[4881]: E0121 12:56:15.822352 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="484fa13a-3d87-4fdb-926a-4bedccfa3140" containerName="registry-server" Jan 21 12:56:15 crc kubenswrapper[4881]: I0121 12:56:15.822369 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="484fa13a-3d87-4fdb-926a-4bedccfa3140" containerName="registry-server" Jan 21 12:56:15 crc kubenswrapper[4881]: E0121 12:56:15.822408 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="484fa13a-3d87-4fdb-926a-4bedccfa3140" containerName="extract-content" Jan 21 12:56:15 crc kubenswrapper[4881]: 
I0121 12:56:15.822423 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="484fa13a-3d87-4fdb-926a-4bedccfa3140" containerName="extract-content" Jan 21 12:56:15 crc kubenswrapper[4881]: E0121 12:56:15.822448 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bbd14e97-6383-426c-a806-89dc0439e483" containerName="registry-server" Jan 21 12:56:15 crc kubenswrapper[4881]: I0121 12:56:15.822463 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="bbd14e97-6383-426c-a806-89dc0439e483" containerName="registry-server" Jan 21 12:56:15 crc kubenswrapper[4881]: E0121 12:56:15.822493 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bbd14e97-6383-426c-a806-89dc0439e483" containerName="extract-content" Jan 21 12:56:15 crc kubenswrapper[4881]: I0121 12:56:15.822508 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="bbd14e97-6383-426c-a806-89dc0439e483" containerName="extract-content" Jan 21 12:56:15 crc kubenswrapper[4881]: I0121 12:56:15.823054 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="484fa13a-3d87-4fdb-926a-4bedccfa3140" containerName="registry-server" Jan 21 12:56:15 crc kubenswrapper[4881]: I0121 12:56:15.823110 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="bbd14e97-6383-426c-a806-89dc0439e483" containerName="registry-server" Jan 21 12:56:15 crc kubenswrapper[4881]: I0121 12:56:15.827411 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-m25nf" Jan 21 12:56:15 crc kubenswrapper[4881]: I0121 12:56:15.835499 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-m25nf"] Jan 21 12:56:15 crc kubenswrapper[4881]: I0121 12:56:15.987442 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31c806cc-58cd-40b7-972b-7d4e5a500a8a-utilities\") pod \"redhat-operators-m25nf\" (UID: \"31c806cc-58cd-40b7-972b-7d4e5a500a8a\") " pod="openshift-marketplace/redhat-operators-m25nf" Jan 21 12:56:15 crc kubenswrapper[4881]: I0121 12:56:15.987882 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31c806cc-58cd-40b7-972b-7d4e5a500a8a-catalog-content\") pod \"redhat-operators-m25nf\" (UID: \"31c806cc-58cd-40b7-972b-7d4e5a500a8a\") " pod="openshift-marketplace/redhat-operators-m25nf" Jan 21 12:56:15 crc kubenswrapper[4881]: I0121 12:56:15.987922 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q85ss\" (UniqueName: \"kubernetes.io/projected/31c806cc-58cd-40b7-972b-7d4e5a500a8a-kube-api-access-q85ss\") pod \"redhat-operators-m25nf\" (UID: \"31c806cc-58cd-40b7-972b-7d4e5a500a8a\") " pod="openshift-marketplace/redhat-operators-m25nf" Jan 21 12:56:16 crc kubenswrapper[4881]: I0121 12:56:16.090415 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31c806cc-58cd-40b7-972b-7d4e5a500a8a-utilities\") pod \"redhat-operators-m25nf\" (UID: \"31c806cc-58cd-40b7-972b-7d4e5a500a8a\") " pod="openshift-marketplace/redhat-operators-m25nf" Jan 21 12:56:16 crc kubenswrapper[4881]: I0121 12:56:16.090491 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/31c806cc-58cd-40b7-972b-7d4e5a500a8a-catalog-content\") pod \"redhat-operators-m25nf\" (UID: \"31c806cc-58cd-40b7-972b-7d4e5a500a8a\") " pod="openshift-marketplace/redhat-operators-m25nf" Jan 21 12:56:16 crc kubenswrapper[4881]: I0121 12:56:16.090521 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q85ss\" (UniqueName: \"kubernetes.io/projected/31c806cc-58cd-40b7-972b-7d4e5a500a8a-kube-api-access-q85ss\") pod \"redhat-operators-m25nf\" (UID: \"31c806cc-58cd-40b7-972b-7d4e5a500a8a\") " pod="openshift-marketplace/redhat-operators-m25nf" Jan 21 12:56:16 crc kubenswrapper[4881]: I0121 12:56:16.091021 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31c806cc-58cd-40b7-972b-7d4e5a500a8a-utilities\") pod \"redhat-operators-m25nf\" (UID: \"31c806cc-58cd-40b7-972b-7d4e5a500a8a\") " pod="openshift-marketplace/redhat-operators-m25nf" Jan 21 12:56:16 crc kubenswrapper[4881]: I0121 12:56:16.091425 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31c806cc-58cd-40b7-972b-7d4e5a500a8a-catalog-content\") pod \"redhat-operators-m25nf\" (UID: \"31c806cc-58cd-40b7-972b-7d4e5a500a8a\") " pod="openshift-marketplace/redhat-operators-m25nf" Jan 21 12:56:16 crc kubenswrapper[4881]: I0121 12:56:16.116757 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q85ss\" (UniqueName: \"kubernetes.io/projected/31c806cc-58cd-40b7-972b-7d4e5a500a8a-kube-api-access-q85ss\") pod \"redhat-operators-m25nf\" (UID: \"31c806cc-58cd-40b7-972b-7d4e5a500a8a\") " pod="openshift-marketplace/redhat-operators-m25nf" Jan 21 12:56:16 crc kubenswrapper[4881]: I0121 12:56:16.166254 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-m25nf" Jan 21 12:56:16 crc kubenswrapper[4881]: I0121 12:56:16.706428 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-m25nf"] Jan 21 12:56:16 crc kubenswrapper[4881]: W0121 12:56:16.718871 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod31c806cc_58cd_40b7_972b_7d4e5a500a8a.slice/crio-6ddbba52e6459507301326b72602f0700b0b72510f5829c2bc18824b919047dc WatchSource:0}: Error finding container 6ddbba52e6459507301326b72602f0700b0b72510f5829c2bc18824b919047dc: Status 404 returned error can't find the container with id 6ddbba52e6459507301326b72602f0700b0b72510f5829c2bc18824b919047dc Jan 21 12:56:17 crc kubenswrapper[4881]: I0121 12:56:17.300898 4881 generic.go:334] "Generic (PLEG): container finished" podID="31c806cc-58cd-40b7-972b-7d4e5a500a8a" containerID="84f8677f23cd5adefb850b16e1f11b4936586bf112d2cd38c8ec2e95645b2016" exitCode=0 Jan 21 12:56:17 crc kubenswrapper[4881]: I0121 12:56:17.301087 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m25nf" event={"ID":"31c806cc-58cd-40b7-972b-7d4e5a500a8a","Type":"ContainerDied","Data":"84f8677f23cd5adefb850b16e1f11b4936586bf112d2cd38c8ec2e95645b2016"} Jan 21 12:56:17 crc kubenswrapper[4881]: I0121 12:56:17.301231 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m25nf" event={"ID":"31c806cc-58cd-40b7-972b-7d4e5a500a8a","Type":"ContainerStarted","Data":"6ddbba52e6459507301326b72602f0700b0b72510f5829c2bc18824b919047dc"} Jan 21 12:56:19 crc kubenswrapper[4881]: I0121 12:56:19.332685 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m25nf" event={"ID":"31c806cc-58cd-40b7-972b-7d4e5a500a8a","Type":"ContainerStarted","Data":"44f12e1fe5e9c07616fdf345aff4d2db44db03a9ce1d6f3c3d04a8507e177c6d"} Jan 21 12:56:24 crc kubenswrapper[4881]: I0121 12:56:24.387880 4881 generic.go:334] "Generic (PLEG): container finished" podID="31c806cc-58cd-40b7-972b-7d4e5a500a8a" containerID="44f12e1fe5e9c07616fdf345aff4d2db44db03a9ce1d6f3c3d04a8507e177c6d" exitCode=0 Jan 21 12:56:24 crc kubenswrapper[4881]: I0121 12:56:24.387983 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m25nf" event={"ID":"31c806cc-58cd-40b7-972b-7d4e5a500a8a","Type":"ContainerDied","Data":"44f12e1fe5e9c07616fdf345aff4d2db44db03a9ce1d6f3c3d04a8507e177c6d"} Jan 21 12:56:25 crc kubenswrapper[4881]: I0121 12:56:25.401419 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m25nf" event={"ID":"31c806cc-58cd-40b7-972b-7d4e5a500a8a","Type":"ContainerStarted","Data":"701baa1d1fb5a942cb71b2fb4f5f8ca5e51da8cb32fde1934ed0f0163b92777b"} Jan 21 12:56:25 crc kubenswrapper[4881]: I0121 12:56:25.423761 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-m25nf" podStartSLOduration=2.812697162 podStartE2EDuration="10.423731895s" podCreationTimestamp="2026-01-21 12:56:15 +0000 UTC" firstStartedPulling="2026-01-21 12:56:17.303004944 +0000 UTC m=+7164.562961413" lastFinishedPulling="2026-01-21 12:56:24.914039667 +0000 UTC m=+7172.173996146" observedRunningTime="2026-01-21 12:56:25.421361407 +0000 UTC m=+7172.681317916" watchObservedRunningTime="2026-01-21 12:56:25.423731895 +0000 UTC m=+7172.683688374" Jan 21 12:56:26 crc 
kubenswrapper[4881]: I0121 12:56:26.167561 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-m25nf" Jan 21 12:56:26 crc kubenswrapper[4881]: I0121 12:56:26.167889 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-m25nf" Jan 21 12:56:27 crc kubenswrapper[4881]: I0121 12:56:27.470976 4881 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-m25nf" podUID="31c806cc-58cd-40b7-972b-7d4e5a500a8a" containerName="registry-server" probeResult="failure" output=< Jan 21 12:56:27 crc kubenswrapper[4881]: timeout: failed to connect service ":50051" within 1s Jan 21 12:56:27 crc kubenswrapper[4881]: > Jan 21 12:56:29 crc kubenswrapper[4881]: I0121 12:56:29.850925 4881 patch_prober.go:28] interesting pod/machine-config-daemon-fb4fr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 12:56:29 crc kubenswrapper[4881]: I0121 12:56:29.851242 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 12:56:36 crc kubenswrapper[4881]: I0121 12:56:36.249637 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-m25nf" Jan 21 12:56:36 crc kubenswrapper[4881]: I0121 12:56:36.325958 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-m25nf" Jan 21 12:56:39 crc kubenswrapper[4881]: I0121 12:56:39.770626 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-m25nf"] Jan 21 12:56:39 crc kubenswrapper[4881]: I0121 12:56:39.772236 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-m25nf" podUID="31c806cc-58cd-40b7-972b-7d4e5a500a8a" containerName="registry-server" containerID="cri-o://701baa1d1fb5a942cb71b2fb4f5f8ca5e51da8cb32fde1934ed0f0163b92777b" gracePeriod=2 Jan 21 12:56:40 crc kubenswrapper[4881]: I0121 12:56:40.257230 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-m25nf" Jan 21 12:56:40 crc kubenswrapper[4881]: I0121 12:56:40.345573 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31c806cc-58cd-40b7-972b-7d4e5a500a8a-catalog-content\") pod \"31c806cc-58cd-40b7-972b-7d4e5a500a8a\" (UID: \"31c806cc-58cd-40b7-972b-7d4e5a500a8a\") " Jan 21 12:56:40 crc kubenswrapper[4881]: I0121 12:56:40.345688 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q85ss\" (UniqueName: \"kubernetes.io/projected/31c806cc-58cd-40b7-972b-7d4e5a500a8a-kube-api-access-q85ss\") pod \"31c806cc-58cd-40b7-972b-7d4e5a500a8a\" (UID: \"31c806cc-58cd-40b7-972b-7d4e5a500a8a\") " Jan 21 12:56:40 crc kubenswrapper[4881]: I0121 12:56:40.345802 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31c806cc-58cd-40b7-972b-7d4e5a500a8a-utilities\") pod \"31c806cc-58cd-40b7-972b-7d4e5a500a8a\" (UID: \"31c806cc-58cd-40b7-972b-7d4e5a500a8a\") " Jan 21 12:56:40 crc kubenswrapper[4881]: I0121 12:56:40.346675 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/31c806cc-58cd-40b7-972b-7d4e5a500a8a-utilities" (OuterVolumeSpecName: "utilities") pod "31c806cc-58cd-40b7-972b-7d4e5a500a8a" (UID: "31c806cc-58cd-40b7-972b-7d4e5a500a8a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 12:56:40 crc kubenswrapper[4881]: I0121 12:56:40.354621 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31c806cc-58cd-40b7-972b-7d4e5a500a8a-kube-api-access-q85ss" (OuterVolumeSpecName: "kube-api-access-q85ss") pod "31c806cc-58cd-40b7-972b-7d4e5a500a8a" (UID: "31c806cc-58cd-40b7-972b-7d4e5a500a8a"). InnerVolumeSpecName "kube-api-access-q85ss". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 12:56:40 crc kubenswrapper[4881]: I0121 12:56:40.448827 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q85ss\" (UniqueName: \"kubernetes.io/projected/31c806cc-58cd-40b7-972b-7d4e5a500a8a-kube-api-access-q85ss\") on node \"crc\" DevicePath \"\"" Jan 21 12:56:40 crc kubenswrapper[4881]: I0121 12:56:40.448870 4881 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31c806cc-58cd-40b7-972b-7d4e5a500a8a-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 12:56:40 crc kubenswrapper[4881]: I0121 12:56:40.513654 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/31c806cc-58cd-40b7-972b-7d4e5a500a8a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "31c806cc-58cd-40b7-972b-7d4e5a500a8a" (UID: "31c806cc-58cd-40b7-972b-7d4e5a500a8a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 12:56:40 crc kubenswrapper[4881]: I0121 12:56:40.550227 4881 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31c806cc-58cd-40b7-972b-7d4e5a500a8a-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 12:56:40 crc kubenswrapper[4881]: I0121 12:56:40.607390 4881 generic.go:334] "Generic (PLEG): container finished" podID="31c806cc-58cd-40b7-972b-7d4e5a500a8a" containerID="701baa1d1fb5a942cb71b2fb4f5f8ca5e51da8cb32fde1934ed0f0163b92777b" exitCode=0 Jan 21 12:56:40 crc kubenswrapper[4881]: I0121 12:56:40.607440 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m25nf" event={"ID":"31c806cc-58cd-40b7-972b-7d4e5a500a8a","Type":"ContainerDied","Data":"701baa1d1fb5a942cb71b2fb4f5f8ca5e51da8cb32fde1934ed0f0163b92777b"} Jan 21 12:56:40 crc kubenswrapper[4881]: I0121 12:56:40.607486 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m25nf" event={"ID":"31c806cc-58cd-40b7-972b-7d4e5a500a8a","Type":"ContainerDied","Data":"6ddbba52e6459507301326b72602f0700b0b72510f5829c2bc18824b919047dc"} Jan 21 12:56:40 crc kubenswrapper[4881]: I0121 12:56:40.607510 4881 scope.go:117] "RemoveContainer" containerID="701baa1d1fb5a942cb71b2fb4f5f8ca5e51da8cb32fde1934ed0f0163b92777b" Jan 21 12:56:40 crc kubenswrapper[4881]: I0121 12:56:40.607564 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-m25nf" Jan 21 12:56:40 crc kubenswrapper[4881]: I0121 12:56:40.667140 4881 scope.go:117] "RemoveContainer" containerID="44f12e1fe5e9c07616fdf345aff4d2db44db03a9ce1d6f3c3d04a8507e177c6d" Jan 21 12:56:40 crc kubenswrapper[4881]: I0121 12:56:40.673647 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-m25nf"] Jan 21 12:56:40 crc kubenswrapper[4881]: I0121 12:56:40.684947 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-m25nf"] Jan 21 12:56:40 crc kubenswrapper[4881]: I0121 12:56:40.697716 4881 scope.go:117] "RemoveContainer" containerID="84f8677f23cd5adefb850b16e1f11b4936586bf112d2cd38c8ec2e95645b2016" Jan 21 12:56:40 crc kubenswrapper[4881]: I0121 12:56:40.743073 4881 scope.go:117] "RemoveContainer" containerID="701baa1d1fb5a942cb71b2fb4f5f8ca5e51da8cb32fde1934ed0f0163b92777b" Jan 21 12:56:40 crc kubenswrapper[4881]: E0121 12:56:40.743520 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"701baa1d1fb5a942cb71b2fb4f5f8ca5e51da8cb32fde1934ed0f0163b92777b\": container with ID starting with 701baa1d1fb5a942cb71b2fb4f5f8ca5e51da8cb32fde1934ed0f0163b92777b not found: ID does not exist" containerID="701baa1d1fb5a942cb71b2fb4f5f8ca5e51da8cb32fde1934ed0f0163b92777b" Jan 21 12:56:40 crc kubenswrapper[4881]: I0121 12:56:40.743563 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"701baa1d1fb5a942cb71b2fb4f5f8ca5e51da8cb32fde1934ed0f0163b92777b"} err="failed to get container status \"701baa1d1fb5a942cb71b2fb4f5f8ca5e51da8cb32fde1934ed0f0163b92777b\": rpc error: code = NotFound desc = could not find container \"701baa1d1fb5a942cb71b2fb4f5f8ca5e51da8cb32fde1934ed0f0163b92777b\": container with ID starting with 701baa1d1fb5a942cb71b2fb4f5f8ca5e51da8cb32fde1934ed0f0163b92777b not found: ID does not exist" Jan 21 12:56:40 crc 
kubenswrapper[4881]: I0121 12:56:40.743592 4881 scope.go:117] "RemoveContainer" containerID="44f12e1fe5e9c07616fdf345aff4d2db44db03a9ce1d6f3c3d04a8507e177c6d" Jan 21 12:56:40 crc kubenswrapper[4881]: E0121 12:56:40.743947 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"44f12e1fe5e9c07616fdf345aff4d2db44db03a9ce1d6f3c3d04a8507e177c6d\": container with ID starting with 44f12e1fe5e9c07616fdf345aff4d2db44db03a9ce1d6f3c3d04a8507e177c6d not found: ID does not exist" containerID="44f12e1fe5e9c07616fdf345aff4d2db44db03a9ce1d6f3c3d04a8507e177c6d" Jan 21 12:56:40 crc kubenswrapper[4881]: I0121 12:56:40.744001 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"44f12e1fe5e9c07616fdf345aff4d2db44db03a9ce1d6f3c3d04a8507e177c6d"} err="failed to get container status \"44f12e1fe5e9c07616fdf345aff4d2db44db03a9ce1d6f3c3d04a8507e177c6d\": rpc error: code = NotFound desc = could not find container \"44f12e1fe5e9c07616fdf345aff4d2db44db03a9ce1d6f3c3d04a8507e177c6d\": container with ID starting with 44f12e1fe5e9c07616fdf345aff4d2db44db03a9ce1d6f3c3d04a8507e177c6d not found: ID does not exist" Jan 21 12:56:40 crc kubenswrapper[4881]: I0121 12:56:40.744038 4881 scope.go:117] "RemoveContainer" containerID="84f8677f23cd5adefb850b16e1f11b4936586bf112d2cd38c8ec2e95645b2016" Jan 21 12:56:40 crc kubenswrapper[4881]: E0121 12:56:40.744479 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"84f8677f23cd5adefb850b16e1f11b4936586bf112d2cd38c8ec2e95645b2016\": container with ID starting with 84f8677f23cd5adefb850b16e1f11b4936586bf112d2cd38c8ec2e95645b2016 not found: ID does not exist" containerID="84f8677f23cd5adefb850b16e1f11b4936586bf112d2cd38c8ec2e95645b2016" Jan 21 12:56:40 crc kubenswrapper[4881]: I0121 12:56:40.744529 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"84f8677f23cd5adefb850b16e1f11b4936586bf112d2cd38c8ec2e95645b2016"} err="failed to get container status \"84f8677f23cd5adefb850b16e1f11b4936586bf112d2cd38c8ec2e95645b2016\": rpc error: code = NotFound desc = could not find container \"84f8677f23cd5adefb850b16e1f11b4936586bf112d2cd38c8ec2e95645b2016\": container with ID starting with 84f8677f23cd5adefb850b16e1f11b4936586bf112d2cd38c8ec2e95645b2016 not found: ID does not exist" Jan 21 12:56:41 crc kubenswrapper[4881]: I0121 12:56:41.327314 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31c806cc-58cd-40b7-972b-7d4e5a500a8a" path="/var/lib/kubelet/pods/31c806cc-58cd-40b7-972b-7d4e5a500a8a/volumes" Jan 21 12:56:59 crc kubenswrapper[4881]: I0121 12:56:59.899020 4881 patch_prober.go:28] interesting pod/machine-config-daemon-fb4fr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 12:56:59 crc kubenswrapper[4881]: I0121 12:56:59.899536 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 12:56:59 crc kubenswrapper[4881]: I0121 12:56:59.899592 4881 kubelet.go:2542] "SyncLoop (probe)" 
probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" Jan 21 12:56:59 crc kubenswrapper[4881]: I0121 12:56:59.900452 4881 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"dedb540716d32e2d9c1d7422b582f5eca19a8a8f41fc5f2cec024d263d91f035"} pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 12:56:59 crc kubenswrapper[4881]: I0121 12:56:59.900504 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" containerName="machine-config-daemon" containerID="cri-o://dedb540716d32e2d9c1d7422b582f5eca19a8a8f41fc5f2cec024d263d91f035" gracePeriod=600 Jan 21 12:57:00 crc kubenswrapper[4881]: I0121 12:57:00.939256 4881 generic.go:334] "Generic (PLEG): container finished" podID="3687b313-1df2-4274-80db-8c758b51bf2d" containerID="dedb540716d32e2d9c1d7422b582f5eca19a8a8f41fc5f2cec024d263d91f035" exitCode=0 Jan 21 12:57:00 crc kubenswrapper[4881]: I0121 12:57:00.939334 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" event={"ID":"3687b313-1df2-4274-80db-8c758b51bf2d","Type":"ContainerDied","Data":"dedb540716d32e2d9c1d7422b582f5eca19a8a8f41fc5f2cec024d263d91f035"} Jan 21 12:57:00 crc kubenswrapper[4881]: I0121 12:57:00.940096 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" event={"ID":"3687b313-1df2-4274-80db-8c758b51bf2d","Type":"ContainerStarted","Data":"c02cffb330d8abf35eb1054cc9e801224a9ea6598c8173b5cd69e9d284d2327a"} Jan 21 12:57:00 crc kubenswrapper[4881]: I0121 12:57:00.940151 4881 scope.go:117] "RemoveContainer" containerID="a4dfa7ee98cb04337edaebb516d60cd9c59428107bdb7a432a638ae4c9d89379" Jan 21 12:58:14 crc kubenswrapper[4881]: I0121 12:58:14.437492 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-pd9dn/must-gather-wjn9v"] Jan 21 12:58:14 crc kubenswrapper[4881]: E0121 12:58:14.438398 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31c806cc-58cd-40b7-972b-7d4e5a500a8a" containerName="extract-content" Jan 21 12:58:14 crc kubenswrapper[4881]: I0121 12:58:14.438424 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="31c806cc-58cd-40b7-972b-7d4e5a500a8a" containerName="extract-content" Jan 21 12:58:14 crc kubenswrapper[4881]: E0121 12:58:14.438446 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31c806cc-58cd-40b7-972b-7d4e5a500a8a" containerName="extract-utilities" Jan 21 12:58:14 crc kubenswrapper[4881]: I0121 12:58:14.438452 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="31c806cc-58cd-40b7-972b-7d4e5a500a8a" containerName="extract-utilities" Jan 21 12:58:14 crc kubenswrapper[4881]: E0121 12:58:14.438480 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31c806cc-58cd-40b7-972b-7d4e5a500a8a" containerName="registry-server" Jan 21 12:58:14 crc kubenswrapper[4881]: I0121 12:58:14.438487 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="31c806cc-58cd-40b7-972b-7d4e5a500a8a" containerName="registry-server" Jan 21 12:58:14 crc kubenswrapper[4881]: I0121 12:58:14.438744 4881 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="31c806cc-58cd-40b7-972b-7d4e5a500a8a" containerName="registry-server" Jan 21 12:58:14 crc kubenswrapper[4881]: I0121 12:58:14.442439 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-pd9dn/must-gather-wjn9v" Jan 21 12:58:14 crc kubenswrapper[4881]: I0121 12:58:14.449434 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-pd9dn"/"kube-root-ca.crt" Jan 21 12:58:14 crc kubenswrapper[4881]: I0121 12:58:14.449738 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-pd9dn"/"openshift-service-ca.crt" Jan 21 12:58:14 crc kubenswrapper[4881]: I0121 12:58:14.450014 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-pd9dn"/"default-dockercfg-8m7sq" Jan 21 12:58:14 crc kubenswrapper[4881]: I0121 12:58:14.452619 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-pd9dn/must-gather-wjn9v"] Jan 21 12:58:14 crc kubenswrapper[4881]: I0121 12:58:14.573464 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wvpvb\" (UniqueName: \"kubernetes.io/projected/ec6c7413-f699-442c-b92e-bbe40326dcb1-kube-api-access-wvpvb\") pod \"must-gather-wjn9v\" (UID: \"ec6c7413-f699-442c-b92e-bbe40326dcb1\") " pod="openshift-must-gather-pd9dn/must-gather-wjn9v" Jan 21 12:58:14 crc kubenswrapper[4881]: I0121 12:58:14.573520 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/ec6c7413-f699-442c-b92e-bbe40326dcb1-must-gather-output\") pod \"must-gather-wjn9v\" (UID: \"ec6c7413-f699-442c-b92e-bbe40326dcb1\") " pod="openshift-must-gather-pd9dn/must-gather-wjn9v" Jan 21 12:58:14 crc kubenswrapper[4881]: I0121 12:58:14.676650 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wvpvb\" (UniqueName: \"kubernetes.io/projected/ec6c7413-f699-442c-b92e-bbe40326dcb1-kube-api-access-wvpvb\") pod \"must-gather-wjn9v\" (UID: \"ec6c7413-f699-442c-b92e-bbe40326dcb1\") " pod="openshift-must-gather-pd9dn/must-gather-wjn9v" Jan 21 12:58:14 crc kubenswrapper[4881]: I0121 12:58:14.677028 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/ec6c7413-f699-442c-b92e-bbe40326dcb1-must-gather-output\") pod \"must-gather-wjn9v\" (UID: \"ec6c7413-f699-442c-b92e-bbe40326dcb1\") " pod="openshift-must-gather-pd9dn/must-gather-wjn9v" Jan 21 12:58:14 crc kubenswrapper[4881]: I0121 12:58:14.677483 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/ec6c7413-f699-442c-b92e-bbe40326dcb1-must-gather-output\") pod \"must-gather-wjn9v\" (UID: \"ec6c7413-f699-442c-b92e-bbe40326dcb1\") " pod="openshift-must-gather-pd9dn/must-gather-wjn9v" Jan 21 12:58:14 crc kubenswrapper[4881]: I0121 12:58:14.697673 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wvpvb\" (UniqueName: \"kubernetes.io/projected/ec6c7413-f699-442c-b92e-bbe40326dcb1-kube-api-access-wvpvb\") pod \"must-gather-wjn9v\" (UID: \"ec6c7413-f699-442c-b92e-bbe40326dcb1\") " pod="openshift-must-gather-pd9dn/must-gather-wjn9v" Jan 21 12:58:14 crc kubenswrapper[4881]: I0121 12:58:14.771385 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-pd9dn/must-gather-wjn9v" Jan 21 12:58:15 crc kubenswrapper[4881]: I0121 12:58:15.255155 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-pd9dn/must-gather-wjn9v"] Jan 21 12:58:16 crc kubenswrapper[4881]: I0121 12:58:16.205358 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-pd9dn/must-gather-wjn9v" event={"ID":"ec6c7413-f699-442c-b92e-bbe40326dcb1","Type":"ContainerStarted","Data":"f8435432aef52b19bad8a8cd808ccfc704ccacec42b64e9e84020e60e34cf08a"} Jan 21 12:58:28 crc kubenswrapper[4881]: I0121 12:58:28.559622 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-pd9dn/must-gather-wjn9v" event={"ID":"ec6c7413-f699-442c-b92e-bbe40326dcb1","Type":"ContainerStarted","Data":"a4e2dbbd606e451b55b6b34e41cf24c5d9baf413c001fc8ed6b035bceeebfbb1"} Jan 21 12:58:28 crc kubenswrapper[4881]: I0121 12:58:28.560391 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-pd9dn/must-gather-wjn9v" event={"ID":"ec6c7413-f699-442c-b92e-bbe40326dcb1","Type":"ContainerStarted","Data":"9cd18be4060450a8e8728911060acaac370f6470f67553eea3230920b13495f5"} Jan 21 12:58:29 crc kubenswrapper[4881]: I0121 12:58:29.588593 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-pd9dn/must-gather-wjn9v" podStartSLOduration=3.207365374 podStartE2EDuration="15.588553757s" podCreationTimestamp="2026-01-21 12:58:14 +0000 UTC" firstStartedPulling="2026-01-21 12:58:15.265857029 +0000 UTC m=+7282.525813498" lastFinishedPulling="2026-01-21 12:58:27.647045412 +0000 UTC m=+7294.907001881" observedRunningTime="2026-01-21 12:58:29.584772705 +0000 UTC m=+7296.844729184" watchObservedRunningTime="2026-01-21 12:58:29.588553757 +0000 UTC m=+7296.848510226" Jan 21 12:58:32 crc kubenswrapper[4881]: I0121 12:58:32.997765 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-pd9dn/crc-debug-r7kk4"] Jan 21 12:58:33 crc kubenswrapper[4881]: I0121 12:58:33.000175 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-pd9dn/crc-debug-r7kk4" Jan 21 12:58:33 crc kubenswrapper[4881]: I0121 12:58:33.177704 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/32ea9833-e257-4601-8be7-dcf0882d25ff-host\") pod \"crc-debug-r7kk4\" (UID: \"32ea9833-e257-4601-8be7-dcf0882d25ff\") " pod="openshift-must-gather-pd9dn/crc-debug-r7kk4" Jan 21 12:58:33 crc kubenswrapper[4881]: I0121 12:58:33.178045 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hfvbq\" (UniqueName: \"kubernetes.io/projected/32ea9833-e257-4601-8be7-dcf0882d25ff-kube-api-access-hfvbq\") pod \"crc-debug-r7kk4\" (UID: \"32ea9833-e257-4601-8be7-dcf0882d25ff\") " pod="openshift-must-gather-pd9dn/crc-debug-r7kk4" Jan 21 12:58:33 crc kubenswrapper[4881]: I0121 12:58:33.280292 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/32ea9833-e257-4601-8be7-dcf0882d25ff-host\") pod \"crc-debug-r7kk4\" (UID: \"32ea9833-e257-4601-8be7-dcf0882d25ff\") " pod="openshift-must-gather-pd9dn/crc-debug-r7kk4" Jan 21 12:58:33 crc kubenswrapper[4881]: I0121 12:58:33.280342 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hfvbq\" (UniqueName: \"kubernetes.io/projected/32ea9833-e257-4601-8be7-dcf0882d25ff-kube-api-access-hfvbq\") pod \"crc-debug-r7kk4\" (UID: \"32ea9833-e257-4601-8be7-dcf0882d25ff\") " pod="openshift-must-gather-pd9dn/crc-debug-r7kk4" Jan 21 12:58:33 crc kubenswrapper[4881]: I0121 12:58:33.280810 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/32ea9833-e257-4601-8be7-dcf0882d25ff-host\") pod \"crc-debug-r7kk4\" (UID: \"32ea9833-e257-4601-8be7-dcf0882d25ff\") " pod="openshift-must-gather-pd9dn/crc-debug-r7kk4" Jan 21 12:58:33 crc kubenswrapper[4881]: I0121 12:58:33.304648 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hfvbq\" (UniqueName: \"kubernetes.io/projected/32ea9833-e257-4601-8be7-dcf0882d25ff-kube-api-access-hfvbq\") pod \"crc-debug-r7kk4\" (UID: \"32ea9833-e257-4601-8be7-dcf0882d25ff\") " pod="openshift-must-gather-pd9dn/crc-debug-r7kk4" Jan 21 12:58:33 crc kubenswrapper[4881]: I0121 12:58:33.327796 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-pd9dn/crc-debug-r7kk4" Jan 21 12:58:33 crc kubenswrapper[4881]: W0121 12:58:33.368280 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod32ea9833_e257_4601_8be7_dcf0882d25ff.slice/crio-5015ec3924ce05a87e752db51205b1697e3330ac046050ee395aa7729f42795a WatchSource:0}: Error finding container 5015ec3924ce05a87e752db51205b1697e3330ac046050ee395aa7729f42795a: Status 404 returned error can't find the container with id 5015ec3924ce05a87e752db51205b1697e3330ac046050ee395aa7729f42795a Jan 21 12:58:33 crc kubenswrapper[4881]: I0121 12:58:33.611021 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-pd9dn/crc-debug-r7kk4" event={"ID":"32ea9833-e257-4601-8be7-dcf0882d25ff","Type":"ContainerStarted","Data":"5015ec3924ce05a87e752db51205b1697e3330ac046050ee395aa7729f42795a"} Jan 21 12:58:36 crc kubenswrapper[4881]: I0121 12:58:36.380738 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-7d6f7f4cc8-c4tt4_9bc5ed6a-2607-4a28-8bd3-949b0f0c761d/barbican-api-log/0.log" Jan 21 12:58:36 crc kubenswrapper[4881]: I0121 12:58:36.395028 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-7d6f7f4cc8-c4tt4_9bc5ed6a-2607-4a28-8bd3-949b0f0c761d/barbican-api/0.log" Jan 21 12:58:36 crc kubenswrapper[4881]: I0121 12:58:36.625698 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-54f549c774-rnptw_6e80f53a-8873-4c07-b738-2854d9b8b089/barbican-keystone-listener-log/0.log" Jan 21 12:58:36 crc kubenswrapper[4881]: I0121 12:58:36.635264 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-54f549c774-rnptw_6e80f53a-8873-4c07-b738-2854d9b8b089/barbican-keystone-listener/0.log" Jan 21 12:58:36 crc kubenswrapper[4881]: I0121 12:58:36.746658 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-55755579c5-csgz2_90253f07-2dfb-48b3-9b75-34a653836589/barbican-worker-log/0.log" Jan 21 12:58:36 crc kubenswrapper[4881]: I0121 12:58:36.756042 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-55755579c5-csgz2_90253f07-2dfb-48b3-9b75-34a653836589/barbican-worker/0.log" Jan 21 12:58:36 crc kubenswrapper[4881]: I0121 12:58:36.814071 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-xscl5_5930ee4f-c104-4ac5-9440-2a24d110fae5/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Jan 21 12:58:37 crc kubenswrapper[4881]: I0121 12:58:37.194547 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_5926a818-11da-4b6b-bae0-79e6d9e10728/ceilometer-central-agent/0.log" Jan 21 12:58:37 crc kubenswrapper[4881]: I0121 12:58:37.479906 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_5926a818-11da-4b6b-bae0-79e6d9e10728/ceilometer-notification-agent/0.log" Jan 21 12:58:37 crc kubenswrapper[4881]: I0121 12:58:37.487083 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_5926a818-11da-4b6b-bae0-79e6d9e10728/sg-core/0.log" Jan 21 12:58:37 crc kubenswrapper[4881]: I0121 12:58:37.512993 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_5926a818-11da-4b6b-bae0-79e6d9e10728/proxy-httpd/0.log" Jan 21 12:58:37 crc kubenswrapper[4881]: I0121 12:58:37.882025 
4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_ae53e440-5bd5-41e3-8339-57eebaef03d2/cinder-api-log/0.log" Jan 21 12:58:38 crc kubenswrapper[4881]: I0121 12:58:38.276088 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_ae53e440-5bd5-41e3-8339-57eebaef03d2/cinder-api/0.log" Jan 21 12:58:38 crc kubenswrapper[4881]: I0121 12:58:38.709153 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-backup-0_306aceba-6a20-4b47-a19a-fb193a27e2bd/cinder-backup/0.log" Jan 21 12:58:39 crc kubenswrapper[4881]: I0121 12:58:39.204380 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-backup-0_306aceba-6a20-4b47-a19a-fb193a27e2bd/probe/0.log" Jan 21 12:58:39 crc kubenswrapper[4881]: I0121 12:58:39.331203 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_ab676e77-1ab3-4cab-9960-a00babfe74fb/cinder-scheduler/0.log" Jan 21 12:58:39 crc kubenswrapper[4881]: I0121 12:58:39.398172 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_ab676e77-1ab3-4cab-9960-a00babfe74fb/probe/0.log" Jan 21 12:58:39 crc kubenswrapper[4881]: I0121 12:58:39.482186 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-nfs-0_8c912ca5-a82b-4083-8579-f0f6f506eebb/cinder-volume/0.log" Jan 21 12:58:39 crc kubenswrapper[4881]: I0121 12:58:39.807757 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-nfs-0_8c912ca5-a82b-4083-8579-f0f6f506eebb/probe/0.log" Jan 21 12:58:39 crc kubenswrapper[4881]: I0121 12:58:39.925592 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-nfs-2-0_112f53db-2aaa-4a3d-bc89-fd86952639ab/cinder-volume/0.log" Jan 21 12:58:39 crc kubenswrapper[4881]: I0121 12:58:39.990374 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-nfs-2-0_112f53db-2aaa-4a3d-bc89-fd86952639ab/probe/0.log" Jan 21 12:58:40 crc kubenswrapper[4881]: I0121 12:58:40.085751 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-4wdq6_24a093f9-cd67-48f9-a18b-48d1a79a8aa0/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Jan 21 12:58:40 crc kubenswrapper[4881]: I0121 12:58:40.123139 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-c995r_f96dcee4-7734-4166-9a01-443c6ee66f86/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 21 12:58:40 crc kubenswrapper[4881]: I0121 12:58:40.282059 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-59596cff49-cpxcq_a08dbd57-125f-4ca2-b166-434068ee9432/dnsmasq-dns/0.log" Jan 21 12:58:40 crc kubenswrapper[4881]: I0121 12:58:40.301984 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-59596cff49-cpxcq_a08dbd57-125f-4ca2-b166-434068ee9432/init/0.log" Jan 21 12:58:40 crc kubenswrapper[4881]: I0121 12:58:40.348476 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-2gnxt_01f76bc7-59dc-4fd0-8ca8-90ce72cb6f45/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Jan 21 12:58:40 crc kubenswrapper[4881]: I0121 12:58:40.362941 4881 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_glance-default-external-api-0_3e7b52fc-b295-475c-bef6-074b1cb2a2f5/glance-log/0.log" Jan 21 12:58:40 crc kubenswrapper[4881]: I0121 12:58:40.463232 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_3e7b52fc-b295-475c-bef6-074b1cb2a2f5/glance-httpd/0.log" Jan 21 12:58:40 crc kubenswrapper[4881]: I0121 12:58:40.479059 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_ec8e0779-1552-4ebb-88d7-95a49e734b55/glance-log/0.log" Jan 21 12:58:40 crc kubenswrapper[4881]: I0121 12:58:40.521558 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_ec8e0779-1552-4ebb-88d7-95a49e734b55/glance-httpd/0.log" Jan 21 12:58:41 crc kubenswrapper[4881]: I0121 12:58:41.931329 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-68b447d964-6llq5_07cdf1a8-aec4-42ca-a564-c91e7132663d/horizon-log/0.log" Jan 21 12:58:42 crc kubenswrapper[4881]: I0121 12:58:42.035718 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-68b447d964-6llq5_07cdf1a8-aec4-42ca-a564-c91e7132663d/horizon/0.log" Jan 21 12:58:42 crc kubenswrapper[4881]: I0121 12:58:42.069037 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-5l99l_1ef84c59-8554-4369-9f9f-877505b3b952/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Jan 21 12:58:42 crc kubenswrapper[4881]: I0121 12:58:42.153677 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-6khfl_3880ebda-d882-4e35-89e7-ef739a423a7d/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 21 12:58:42 crc kubenswrapper[4881]: I0121 12:58:42.512358 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-857c5cc966-ggkc4_cacf36ac-8c52-43a6-9fcb-2cfc5b27a952/keystone-api/0.log" Jan 21 12:58:42 crc kubenswrapper[4881]: I0121 12:58:42.523544 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29483281-5vf4h_d4b92750-a75d-44b9-b0ba-75296371fc59/keystone-cron/0.log" Jan 21 12:58:42 crc kubenswrapper[4881]: I0121 12:58:42.774977 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_0e33ff3f-b508-4ac4-9a60-6189a65be2a6/kube-state-metrics/0.log" Jan 21 12:58:42 crc kubenswrapper[4881]: I0121 12:58:42.833306 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-sxlnq_38ac646b-177b-488d-853b-e04b22f267a4/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Jan 21 12:58:47 crc kubenswrapper[4881]: I0121 12:58:47.837148 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-pd9dn/crc-debug-r7kk4" event={"ID":"32ea9833-e257-4601-8be7-dcf0882d25ff","Type":"ContainerStarted","Data":"0818ec9313f2fc50a748108c2a7b4170d06db46eb9b811376ec620220e592ebc"} Jan 21 12:58:47 crc kubenswrapper[4881]: I0121 12:58:47.868680 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-pd9dn/crc-debug-r7kk4" podStartSLOduration=1.998225284 podStartE2EDuration="15.868648755s" podCreationTimestamp="2026-01-21 12:58:32 +0000 UTC" firstStartedPulling="2026-01-21 12:58:33.370769403 +0000 UTC m=+7300.630725872" lastFinishedPulling="2026-01-21 12:58:47.241192874 +0000 UTC m=+7314.501149343" observedRunningTime="2026-01-21 
12:58:47.863384797 +0000 UTC m=+7315.123341276" watchObservedRunningTime="2026-01-21 12:58:47.868648755 +0000 UTC m=+7315.128605224" Jan 21 12:59:00 crc kubenswrapper[4881]: I0121 12:59:00.736027 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-dmwlt_c4a109b4-26ee-4a46-9333-989cf87c0ff7/controller/0.log" Jan 21 12:59:00 crc kubenswrapper[4881]: I0121 12:59:00.756822 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-dmwlt_c4a109b4-26ee-4a46-9333-989cf87c0ff7/kube-rbac-proxy/0.log" Jan 21 12:59:00 crc kubenswrapper[4881]: I0121 12:59:00.808276 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-lm54h_d055f37b-fab0-4fd0-b683-4a7974b21ad5/controller/0.log" Jan 21 12:59:04 crc kubenswrapper[4881]: I0121 12:59:04.110978 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-lm54h_d055f37b-fab0-4fd0-b683-4a7974b21ad5/frr/0.log" Jan 21 12:59:04 crc kubenswrapper[4881]: I0121 12:59:04.124845 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-lm54h_d055f37b-fab0-4fd0-b683-4a7974b21ad5/reloader/0.log" Jan 21 12:59:04 crc kubenswrapper[4881]: I0121 12:59:04.130225 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-lm54h_d055f37b-fab0-4fd0-b683-4a7974b21ad5/frr-metrics/0.log" Jan 21 12:59:04 crc kubenswrapper[4881]: I0121 12:59:04.150816 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-lm54h_d055f37b-fab0-4fd0-b683-4a7974b21ad5/kube-rbac-proxy/0.log" Jan 21 12:59:04 crc kubenswrapper[4881]: I0121 12:59:04.161391 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-lm54h_d055f37b-fab0-4fd0-b683-4a7974b21ad5/kube-rbac-proxy-frr/0.log" Jan 21 12:59:04 crc kubenswrapper[4881]: I0121 12:59:04.178065 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-lm54h_d055f37b-fab0-4fd0-b683-4a7974b21ad5/cp-frr-files/0.log" Jan 21 12:59:04 crc kubenswrapper[4881]: I0121 12:59:04.190223 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-lm54h_d055f37b-fab0-4fd0-b683-4a7974b21ad5/cp-reloader/0.log" Jan 21 12:59:04 crc kubenswrapper[4881]: I0121 12:59:04.203642 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-lm54h_d055f37b-fab0-4fd0-b683-4a7974b21ad5/cp-metrics/0.log" Jan 21 12:59:04 crc kubenswrapper[4881]: I0121 12:59:04.230056 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-tzxpk_eaaea696-21d8-4963-8364-82fa7bbb0e19/frr-k8s-webhook-server/0.log" Jan 21 12:59:04 crc kubenswrapper[4881]: I0121 12:59:04.276587 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-58bd8f8bd-8k4c9_769e47b6-bd47-489d-9b99-4f2f0e30c4fd/manager/0.log" Jan 21 12:59:04 crc kubenswrapper[4881]: I0121 12:59:04.286395 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-5cd4664cfc-6lg4r_a194c95e-cbcb-4d7e-a631-d4a14989e985/webhook-server/0.log" Jan 21 12:59:05 crc kubenswrapper[4881]: I0121 12:59:05.471178 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-697j4_f265a6e2-ea90-45ea-89c0-178d25243784/speaker/0.log" Jan 21 12:59:05 crc kubenswrapper[4881]: I0121 12:59:05.478941 4881 log.go:25] "Finished parsing log 
file" path="/var/log/pods/metallb-system_speaker-697j4_f265a6e2-ea90-45ea-89c0-178d25243784/kube-rbac-proxy/0.log" Jan 21 12:59:15 crc kubenswrapper[4881]: I0121 12:59:15.800433 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_7960c16a-de64-4154-9072-aee49e3bd573/memcached/0.log" Jan 21 12:59:15 crc kubenswrapper[4881]: I0121 12:59:15.933476 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-667d9dbbbc-pcbhd_3bad57e2-bab6-4a19-a223-ec9bc2c3c9f9/neutron-api/0.log" Jan 21 12:59:16 crc kubenswrapper[4881]: I0121 12:59:16.019877 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-667d9dbbbc-pcbhd_3bad57e2-bab6-4a19-a223-ec9bc2c3c9f9/neutron-httpd/0.log" Jan 21 12:59:16 crc kubenswrapper[4881]: I0121 12:59:16.046081 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-fkxjp_0e428246-daf9-40a4-9049-74281259f82c/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Jan 21 12:59:16 crc kubenswrapper[4881]: I0121 12:59:16.578980 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_1188227a-462c-4c61-ae6e-96b55ffacd71/nova-api-log/0.log" Jan 21 12:59:17 crc kubenswrapper[4881]: I0121 12:59:17.452895 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_1188227a-462c-4c61-ae6e-96b55ffacd71/nova-api-api/0.log" Jan 21 12:59:17 crc kubenswrapper[4881]: I0121 12:59:17.583830 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_dc5fb029-b5fa-4065-adb2-af2e634785fc/nova-cell0-conductor-conductor/0.log" Jan 21 12:59:17 crc kubenswrapper[4881]: I0121 12:59:17.685326 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_161c46d2-7b98-4a9e-a648-ce25b966f589/nova-cell1-conductor-conductor/0.log" Jan 21 12:59:17 crc kubenswrapper[4881]: I0121 12:59:17.786185 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_b9ce9000-94ef-4f6e-8bc7-97feca616b9e/nova-cell1-novncproxy-novncproxy/0.log" Jan 21 12:59:17 crc kubenswrapper[4881]: I0121 12:59:17.858973 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-t495m_bfc5a115-aedb-4364-8b0d-59b8379346cb/nova-edpm-deployment-openstack-edpm-ipam/0.log" Jan 21 12:59:17 crc kubenswrapper[4881]: I0121 12:59:17.953009 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_ba03e9fe-3ad6-4c52-bde7-bd41fca63834/nova-metadata-log/0.log" Jan 21 12:59:20 crc kubenswrapper[4881]: I0121 12:59:20.386210 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_ba03e9fe-3ad6-4c52-bde7-bd41fca63834/nova-metadata-metadata/0.log" Jan 21 12:59:20 crc kubenswrapper[4881]: I0121 12:59:20.566702 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_6f6e9d1b-902e-450b-8202-337c04c265ba/nova-scheduler-scheduler/0.log" Jan 21 12:59:20 crc kubenswrapper[4881]: I0121 12:59:20.597059 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_cd1973a5-773b-438b-aab7-709fb828324d/galera/0.log" Jan 21 12:59:20 crc kubenswrapper[4881]: I0121 12:59:20.608316 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_cd1973a5-773b-438b-aab7-709fb828324d/mysql-bootstrap/0.log" Jan 21 12:59:20 crc 
kubenswrapper[4881]: I0121 12:59:20.639665 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_197dd5bf-f68a-4d9d-b75c-de87a54ed46b/galera/0.log" Jan 21 12:59:20 crc kubenswrapper[4881]: I0121 12:59:20.653717 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_197dd5bf-f68a-4d9d-b75c-de87a54ed46b/mysql-bootstrap/0.log" Jan 21 12:59:20 crc kubenswrapper[4881]: I0121 12:59:20.663853 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_b0b6ce2c-5ae8-496f-9374-d3069bc3d149/openstackclient/0.log" Jan 21 12:59:20 crc kubenswrapper[4881]: I0121 12:59:20.676036 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-5dzhr_b9bd229b-588d-477e-8501-cd976b539e3a/openstack-network-exporter/0.log" Jan 21 12:59:20 crc kubenswrapper[4881]: I0121 12:59:20.690199 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-2rtl8_9ff4a63e-40e5-4133-967e-9ba083f3603b/ovsdb-server/0.log" Jan 21 12:59:20 crc kubenswrapper[4881]: I0121 12:59:20.903858 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-2rtl8_9ff4a63e-40e5-4133-967e-9ba083f3603b/ovs-vswitchd/0.log" Jan 21 12:59:20 crc kubenswrapper[4881]: I0121 12:59:20.911434 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-2rtl8_9ff4a63e-40e5-4133-967e-9ba083f3603b/ovsdb-server-init/0.log" Jan 21 12:59:20 crc kubenswrapper[4881]: I0121 12:59:20.929586 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-s642n_256e0b4a-baac-415c-94c6-09f08fa09c7c/ovn-controller/0.log" Jan 21 12:59:20 crc kubenswrapper[4881]: I0121 12:59:20.997868 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-d4sgg_11ba18fa-d69e-4a6b-9796-e92d95d702ec/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Jan 21 12:59:21 crc kubenswrapper[4881]: I0121 12:59:21.215706 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_b3882b01-10ce-4832-ae71-676a8b65b086/ovn-northd/0.log" Jan 21 12:59:21 crc kubenswrapper[4881]: I0121 12:59:21.232080 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_b3882b01-10ce-4832-ae71-676a8b65b086/openstack-network-exporter/0.log" Jan 21 12:59:21 crc kubenswrapper[4881]: I0121 12:59:21.251452 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_24136f67-aca3-4e40-b3c2-b36b7623475f/ovsdbserver-nb/0.log" Jan 21 12:59:21 crc kubenswrapper[4881]: I0121 12:59:21.261069 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_24136f67-aca3-4e40-b3c2-b36b7623475f/openstack-network-exporter/0.log" Jan 21 12:59:21 crc kubenswrapper[4881]: I0121 12:59:21.283430 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_c3884c64-25d6-42b5-a309-7eafa170719e/ovsdbserver-sb/0.log" Jan 21 12:59:21 crc kubenswrapper[4881]: I0121 12:59:21.292549 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_c3884c64-25d6-42b5-a309-7eafa170719e/openstack-network-exporter/0.log" Jan 21 12:59:21 crc kubenswrapper[4881]: I0121 12:59:21.455205 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-59bf6c8c7b-wvc46_9358f706-24c3-46c5-8490-89402a85e9a4/placement-log/0.log" Jan 21 12:59:21 crc 
kubenswrapper[4881]: I0121 12:59:21.597139 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-59bf6c8c7b-wvc46_9358f706-24c3-46c5-8490-89402a85e9a4/placement-api/0.log" Jan 21 12:59:21 crc kubenswrapper[4881]: I0121 12:59:21.616044 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_4a412b1e-29ac-4420-920d-6054e2c03d53/prometheus/0.log" Jan 21 12:59:21 crc kubenswrapper[4881]: I0121 12:59:21.623474 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_4a412b1e-29ac-4420-920d-6054e2c03d53/config-reloader/0.log" Jan 21 12:59:21 crc kubenswrapper[4881]: I0121 12:59:21.633843 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_4a412b1e-29ac-4420-920d-6054e2c03d53/thanos-sidecar/0.log" Jan 21 12:59:21 crc kubenswrapper[4881]: I0121 12:59:21.640890 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_4a412b1e-29ac-4420-920d-6054e2c03d53/init-config-reloader/0.log" Jan 21 12:59:21 crc kubenswrapper[4881]: I0121 12:59:21.683277 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_de7ea801-d184-48cf-a602-c82ff20892ff/rabbitmq/0.log" Jan 21 12:59:21 crc kubenswrapper[4881]: I0121 12:59:21.691085 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_de7ea801-d184-48cf-a602-c82ff20892ff/setup-container/0.log" Jan 21 12:59:21 crc kubenswrapper[4881]: I0121 12:59:21.722083 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-notifications-server-0_44bcf219-3358-4596-9d1e-88a51c415266/rabbitmq/0.log" Jan 21 12:59:21 crc kubenswrapper[4881]: I0121 12:59:21.728859 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-notifications-server-0_44bcf219-3358-4596-9d1e-88a51c415266/setup-container/0.log" Jan 21 12:59:21 crc kubenswrapper[4881]: I0121 12:59:21.776675 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_35a19b99-eed0-4383-bea5-cf43d84a5a3e/rabbitmq/0.log" Jan 21 12:59:21 crc kubenswrapper[4881]: I0121 12:59:21.781642 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_35a19b99-eed0-4383-bea5-cf43d84a5a3e/setup-container/0.log" Jan 21 12:59:21 crc kubenswrapper[4881]: I0121 12:59:21.802641 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-rdchn_828bd055-053d-43b7-b76f-746438bb9b41/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 21 12:59:21 crc kubenswrapper[4881]: I0121 12:59:21.813389 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-vqzdk_dd495475-04cc-47b2-ad0e-7e3b83917ece/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Jan 21 12:59:21 crc kubenswrapper[4881]: I0121 12:59:21.830865 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-whk2c_4a9e212c-bc4b-4dae-9c97-cbc48686c8fc/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Jan 21 12:59:21 crc kubenswrapper[4881]: I0121 12:59:21.842280 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-7xfqr_af647318-40b6-4ce3-8f5b-c3af4c8dcb0d/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 21 12:59:21 crc 
kubenswrapper[4881]: I0121 12:59:21.857153 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-dd2hk_157a809f-f6fa-43dc-b73d-380976da1312/ssh-known-hosts-edpm-deployment/0.log" Jan 21 12:59:22 crc kubenswrapper[4881]: I0121 12:59:22.091291 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-7564f958f5-jmdx2_86a11f48-404e-4c5e-8ff4-5033a6411956/proxy-httpd/0.log" Jan 21 12:59:22 crc kubenswrapper[4881]: I0121 12:59:22.112948 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-7564f958f5-jmdx2_86a11f48-404e-4c5e-8ff4-5033a6411956/proxy-server/0.log" Jan 21 12:59:22 crc kubenswrapper[4881]: I0121 12:59:22.130292 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-j29v8_27451133-57c8-4991-aae0-ec0a82432176/swift-ring-rebalance/0.log" Jan 21 12:59:22 crc kubenswrapper[4881]: I0121 12:59:22.176895 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_eafb725b-4d8c-44b6-8966-4c611d4897d8/account-server/0.log" Jan 21 12:59:22 crc kubenswrapper[4881]: I0121 12:59:22.221606 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_eafb725b-4d8c-44b6-8966-4c611d4897d8/account-replicator/0.log" Jan 21 12:59:22 crc kubenswrapper[4881]: I0121 12:59:22.230961 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_eafb725b-4d8c-44b6-8966-4c611d4897d8/account-auditor/0.log" Jan 21 12:59:22 crc kubenswrapper[4881]: I0121 12:59:22.239379 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_eafb725b-4d8c-44b6-8966-4c611d4897d8/account-reaper/0.log" Jan 21 12:59:22 crc kubenswrapper[4881]: I0121 12:59:22.249024 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_eafb725b-4d8c-44b6-8966-4c611d4897d8/container-server/0.log" Jan 21 12:59:22 crc kubenswrapper[4881]: I0121 12:59:22.312024 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_eafb725b-4d8c-44b6-8966-4c611d4897d8/container-replicator/0.log" Jan 21 12:59:22 crc kubenswrapper[4881]: I0121 12:59:22.329091 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_eafb725b-4d8c-44b6-8966-4c611d4897d8/container-auditor/0.log" Jan 21 12:59:22 crc kubenswrapper[4881]: I0121 12:59:22.340310 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_eafb725b-4d8c-44b6-8966-4c611d4897d8/container-updater/0.log" Jan 21 12:59:22 crc kubenswrapper[4881]: I0121 12:59:22.361927 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_eafb725b-4d8c-44b6-8966-4c611d4897d8/object-server/0.log" Jan 21 12:59:22 crc kubenswrapper[4881]: I0121 12:59:22.399836 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_eafb725b-4d8c-44b6-8966-4c611d4897d8/object-replicator/0.log" Jan 21 12:59:22 crc kubenswrapper[4881]: I0121 12:59:22.437321 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_eafb725b-4d8c-44b6-8966-4c611d4897d8/object-auditor/0.log" Jan 21 12:59:22 crc kubenswrapper[4881]: I0121 12:59:22.454443 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_eafb725b-4d8c-44b6-8966-4c611d4897d8/object-updater/0.log" Jan 21 12:59:22 crc kubenswrapper[4881]: I0121 12:59:22.466340 4881 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_eafb725b-4d8c-44b6-8966-4c611d4897d8/object-expirer/0.log" Jan 21 12:59:22 crc kubenswrapper[4881]: I0121 12:59:22.472828 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_eafb725b-4d8c-44b6-8966-4c611d4897d8/rsync/0.log" Jan 21 12:59:22 crc kubenswrapper[4881]: I0121 12:59:22.482274 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_eafb725b-4d8c-44b6-8966-4c611d4897d8/swift-recon-cron/0.log" Jan 21 12:59:22 crc kubenswrapper[4881]: I0121 12:59:22.547212 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-hwcnr_2f9f4763-a2f6-4558-82fa-be718012fc12/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Jan 21 12:59:22 crc kubenswrapper[4881]: I0121 12:59:22.800394 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_b482979e-7a9e-4b89-846c-f50400adcf1b/tempest-tests-tempest-tests-runner/0.log" Jan 21 12:59:22 crc kubenswrapper[4881]: I0121 12:59:22.818644 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-gl8zp_ec204ea7-b207-409b-8fa0-ff2847f7400a/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Jan 21 12:59:23 crc kubenswrapper[4881]: I0121 12:59:23.462097 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_watcher-api-0_bf14e65c-4c95-4766-a2e2-57b040e9f192/watcher-api-log/0.log" Jan 21 12:59:28 crc kubenswrapper[4881]: I0121 12:59:28.189422 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_watcher-api-0_bf14e65c-4c95-4766-a2e2-57b040e9f192/watcher-api/0.log" Jan 21 12:59:28 crc kubenswrapper[4881]: I0121 12:59:28.386648 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_watcher-applier-0_937bcc33-ee83-4f94-ab76-84f534cfd05a/watcher-applier/0.log" Jan 21 12:59:29 crc kubenswrapper[4881]: I0121 12:59:29.799410 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_watcher-decision-engine-0_1a227ee4-7a4c-4cb6-991c-d137119a2a6e/watcher-decision-engine/0.log" Jan 21 12:59:29 crc kubenswrapper[4881]: I0121 12:59:29.851571 4881 patch_prober.go:28] interesting pod/machine-config-daemon-fb4fr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 12:59:29 crc kubenswrapper[4881]: I0121 12:59:29.851684 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 12:59:35 crc kubenswrapper[4881]: I0121 12:59:35.081796 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_23550c8618544ac9ea89afd4ce99cda9256ff69faea7c95bed8068d414dwp7l_1c737afe-a2ad-4075-acd6-9f73aada0e4b/extract/0.log" Jan 21 12:59:35 crc kubenswrapper[4881]: I0121 12:59:35.092616 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_23550c8618544ac9ea89afd4ce99cda9256ff69faea7c95bed8068d414dwp7l_1c737afe-a2ad-4075-acd6-9f73aada0e4b/util/0.log" Jan 21 12:59:35 crc kubenswrapper[4881]: I0121 
12:59:35.123899 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_23550c8618544ac9ea89afd4ce99cda9256ff69faea7c95bed8068d414dwp7l_1c737afe-a2ad-4075-acd6-9f73aada0e4b/pull/0.log" Jan 21 12:59:35 crc kubenswrapper[4881]: I0121 12:59:35.224300 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7ddb5c749-svq8w_848fd8db-3bd5-4e22-96ca-f69b181e48be/manager/0.log" Jan 21 12:59:35 crc kubenswrapper[4881]: I0121 12:59:35.289403 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-9b68f5989-7qgck_a028dcae-6b9d-414d-8bab-652f301de541/manager/0.log" Jan 21 12:59:35 crc kubenswrapper[4881]: I0121 12:59:35.326909 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-9f958b845-4wmln_36e5ddfe-67a4-4721-9ef5-b9459c64bf5c/manager/0.log" Jan 21 12:59:35 crc kubenswrapper[4881]: I0121 12:59:35.408229 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-c6994669c-jv7cr_1f795f92-d385-49bc-bc91-5109734f4d5a/manager/0.log" Jan 21 12:59:35 crc kubenswrapper[4881]: I0121 12:59:35.418473 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-594c8c9d5d-zmgll_efb259b7-934f-4bc3-b502-633472d1a1c5/manager/0.log" Jan 21 12:59:35 crc kubenswrapper[4881]: I0121 12:59:35.459504 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-77d5c5b54f-bv8wz_bb9b2c3f-4f68-44fc-addf-2cf4421be015/manager/0.log" Jan 21 12:59:35 crc kubenswrapper[4881]: I0121 12:59:35.816212 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-77c48c7859-klgq4_2fe210a4-2adf-4b55-9a43-c1c390f51b0e/manager/0.log" Jan 21 12:59:35 crc kubenswrapper[4881]: I0121 12:59:35.831835 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-78757b4889-5qcms_d0cafd1d-5f37-499a-a531-547a137aae21/manager/0.log" Jan 21 12:59:35 crc kubenswrapper[4881]: I0121 12:59:35.914997 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-767fdc4f47-9zp7h_ba9a1249-fc58-4809-a472-d199afa9b52b/manager/0.log" Jan 21 12:59:35 crc kubenswrapper[4881]: I0121 12:59:35.925050 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-864f6b75bf-h6dr4_b72b2323-5329-4145-9cee-b447d9e2a304/manager/0.log" Jan 21 12:59:35 crc kubenswrapper[4881]: I0121 12:59:35.969305 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-c87fff755-s6gm8_4c2550fe-b3eb-4eef-8ffc-ebb4a9ce1b5f/manager/0.log" Jan 21 12:59:36 crc kubenswrapper[4881]: I0121 12:59:36.021001 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-cb4666565-ncnww_c3b86204-5389-4b6a-bd45-fb6ee23b784e/manager/0.log" Jan 21 12:59:36 crc kubenswrapper[4881]: I0121 12:59:36.108803 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-65849867d6-798zt_761a1a49-e01e-4674-b1f4-da732e1def98/manager/0.log" Jan 21 12:59:36 crc kubenswrapper[4881]: I0121 12:59:36.124852 4881 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-7fc9b76cf6-n7kgd_340257c4-9218-49b0-8a75-b2a4e0231fe3/manager/0.log" Jan 21 12:59:36 crc kubenswrapper[4881]: I0121 12:59:36.147368 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-6b68b8b8544795q_b1b17be2-e382-4916-8e53-a68c85b5bfc2/manager/0.log" Jan 21 12:59:36 crc kubenswrapper[4881]: I0121 12:59:36.304172 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-766b56994f-7hsc6_3a9a96af-4c4b-45b4-ade0-688a9029cf7b/operator/0.log" Jan 21 12:59:37 crc kubenswrapper[4881]: I0121 12:59:37.727936 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-87d6d564b-ktcf8_a55fdb43-cd6c-4415-8ef6-07f6c7da6272/manager/0.log" Jan 21 12:59:37 crc kubenswrapper[4881]: I0121 12:59:37.737416 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-7vz4j_0a051fc2-b6e4-463c-bb0a-b565d12b21b4/registry-server/0.log" Jan 21 12:59:37 crc kubenswrapper[4881]: I0121 12:59:37.792085 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-55db956ddc-vpqw4_50cfdf18-6a7e-4b3c-bb0f-5260fc3d42eb/manager/0.log" Jan 21 12:59:37 crc kubenswrapper[4881]: I0121 12:59:37.815370 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-686df47fcb-jh4z9_e8e6f423-a07b-4a22-9e39-efa8de22747e/manager/0.log" Jan 21 12:59:37 crc kubenswrapper[4881]: I0121 12:59:37.849520 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-76qxc_8c8feeec-377c-499a-b666-895010f8ebeb/operator/0.log" Jan 21 12:59:37 crc kubenswrapper[4881]: I0121 12:59:37.891243 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-85dd56d4cc-rk8l8_8c504afd-e4e0-4676-b292-b569b638a7dd/manager/0.log" Jan 21 12:59:38 crc kubenswrapper[4881]: I0121 12:59:38.055386 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-5f8f495fcf-fcht4_55ce5ee6-47f4-4874-92dc-6ab78f2ce212/manager/0.log" Jan 21 12:59:38 crc kubenswrapper[4881]: I0121 12:59:38.072642 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-7cd8bc9dbb-tttcz_2aac430e-3ac8-4624-8575-66386b5c2df3/manager/0.log" Jan 21 12:59:38 crc kubenswrapper[4881]: I0121 12:59:38.142581 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-849fd9b886-k9t7q_1cebbaaf-6189-409a-8f25-43d7fac77f95/manager/0.log" Jan 21 12:59:42 crc kubenswrapper[4881]: I0121 12:59:42.476213 4881 generic.go:334] "Generic (PLEG): container finished" podID="32ea9833-e257-4601-8be7-dcf0882d25ff" containerID="0818ec9313f2fc50a748108c2a7b4170d06db46eb9b811376ec620220e592ebc" exitCode=0 Jan 21 12:59:42 crc kubenswrapper[4881]: I0121 12:59:42.476313 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-pd9dn/crc-debug-r7kk4" event={"ID":"32ea9833-e257-4601-8be7-dcf0882d25ff","Type":"ContainerDied","Data":"0818ec9313f2fc50a748108c2a7b4170d06db46eb9b811376ec620220e592ebc"} Jan 21 12:59:43 crc kubenswrapper[4881]: 
I0121 12:59:43.296040 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-hfc8p_bc38f0b5-944c-40ae-aed0-50ca39ea2627/control-plane-machine-set-operator/0.log" Jan 21 12:59:43 crc kubenswrapper[4881]: I0121 12:59:43.324261 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-cclnc_8465162e-dd9f-45b4-83a6-94666ac2b87b/kube-rbac-proxy/0.log" Jan 21 12:59:43 crc kubenswrapper[4881]: I0121 12:59:43.335061 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-cclnc_8465162e-dd9f-45b4-83a6-94666ac2b87b/machine-api-operator/0.log" Jan 21 12:59:43 crc kubenswrapper[4881]: I0121 12:59:43.621252 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-pd9dn/crc-debug-r7kk4" Jan 21 12:59:43 crc kubenswrapper[4881]: I0121 12:59:43.663250 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-pd9dn/crc-debug-r7kk4"] Jan 21 12:59:43 crc kubenswrapper[4881]: I0121 12:59:43.683619 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-pd9dn/crc-debug-r7kk4"] Jan 21 12:59:43 crc kubenswrapper[4881]: I0121 12:59:43.705021 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/32ea9833-e257-4601-8be7-dcf0882d25ff-host\") pod \"32ea9833-e257-4601-8be7-dcf0882d25ff\" (UID: \"32ea9833-e257-4601-8be7-dcf0882d25ff\") " Jan 21 12:59:43 crc kubenswrapper[4881]: I0121 12:59:43.705160 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hfvbq\" (UniqueName: \"kubernetes.io/projected/32ea9833-e257-4601-8be7-dcf0882d25ff-kube-api-access-hfvbq\") pod \"32ea9833-e257-4601-8be7-dcf0882d25ff\" (UID: \"32ea9833-e257-4601-8be7-dcf0882d25ff\") " Jan 21 12:59:43 crc kubenswrapper[4881]: I0121 12:59:43.705289 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/32ea9833-e257-4601-8be7-dcf0882d25ff-host" (OuterVolumeSpecName: "host") pod "32ea9833-e257-4601-8be7-dcf0882d25ff" (UID: "32ea9833-e257-4601-8be7-dcf0882d25ff"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 12:59:43 crc kubenswrapper[4881]: I0121 12:59:43.705995 4881 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/32ea9833-e257-4601-8be7-dcf0882d25ff-host\") on node \"crc\" DevicePath \"\"" Jan 21 12:59:43 crc kubenswrapper[4881]: I0121 12:59:43.714528 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/32ea9833-e257-4601-8be7-dcf0882d25ff-kube-api-access-hfvbq" (OuterVolumeSpecName: "kube-api-access-hfvbq") pod "32ea9833-e257-4601-8be7-dcf0882d25ff" (UID: "32ea9833-e257-4601-8be7-dcf0882d25ff"). InnerVolumeSpecName "kube-api-access-hfvbq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 12:59:43 crc kubenswrapper[4881]: I0121 12:59:43.808138 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hfvbq\" (UniqueName: \"kubernetes.io/projected/32ea9833-e257-4601-8be7-dcf0882d25ff-kube-api-access-hfvbq\") on node \"crc\" DevicePath \"\"" Jan 21 12:59:44 crc kubenswrapper[4881]: I0121 12:59:44.495392 4881 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5015ec3924ce05a87e752db51205b1697e3330ac046050ee395aa7729f42795a" Jan 21 12:59:44 crc kubenswrapper[4881]: I0121 12:59:44.495399 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-pd9dn/crc-debug-r7kk4" Jan 21 12:59:44 crc kubenswrapper[4881]: I0121 12:59:44.941583 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-pd9dn/crc-debug-tvg8c"] Jan 21 12:59:44 crc kubenswrapper[4881]: E0121 12:59:44.942134 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32ea9833-e257-4601-8be7-dcf0882d25ff" containerName="container-00" Jan 21 12:59:44 crc kubenswrapper[4881]: I0121 12:59:44.942147 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="32ea9833-e257-4601-8be7-dcf0882d25ff" containerName="container-00" Jan 21 12:59:44 crc kubenswrapper[4881]: I0121 12:59:44.950187 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="32ea9833-e257-4601-8be7-dcf0882d25ff" containerName="container-00" Jan 21 12:59:44 crc kubenswrapper[4881]: I0121 12:59:44.951826 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-pd9dn/crc-debug-tvg8c" Jan 21 12:59:45 crc kubenswrapper[4881]: I0121 12:59:45.029981 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kg9cv\" (UniqueName: \"kubernetes.io/projected/617c663a-e61a-41e8-92f1-a847b84c7b5b-kube-api-access-kg9cv\") pod \"crc-debug-tvg8c\" (UID: \"617c663a-e61a-41e8-92f1-a847b84c7b5b\") " pod="openshift-must-gather-pd9dn/crc-debug-tvg8c" Jan 21 12:59:45 crc kubenswrapper[4881]: I0121 12:59:45.030078 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/617c663a-e61a-41e8-92f1-a847b84c7b5b-host\") pod \"crc-debug-tvg8c\" (UID: \"617c663a-e61a-41e8-92f1-a847b84c7b5b\") " pod="openshift-must-gather-pd9dn/crc-debug-tvg8c" Jan 21 12:59:45 crc kubenswrapper[4881]: I0121 12:59:45.132346 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/617c663a-e61a-41e8-92f1-a847b84c7b5b-host\") pod \"crc-debug-tvg8c\" (UID: \"617c663a-e61a-41e8-92f1-a847b84c7b5b\") " pod="openshift-must-gather-pd9dn/crc-debug-tvg8c" Jan 21 12:59:45 crc kubenswrapper[4881]: I0121 12:59:45.132583 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/617c663a-e61a-41e8-92f1-a847b84c7b5b-host\") pod \"crc-debug-tvg8c\" (UID: \"617c663a-e61a-41e8-92f1-a847b84c7b5b\") " pod="openshift-must-gather-pd9dn/crc-debug-tvg8c" Jan 21 12:59:45 crc kubenswrapper[4881]: I0121 12:59:45.132642 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kg9cv\" (UniqueName: \"kubernetes.io/projected/617c663a-e61a-41e8-92f1-a847b84c7b5b-kube-api-access-kg9cv\") pod \"crc-debug-tvg8c\" (UID: \"617c663a-e61a-41e8-92f1-a847b84c7b5b\") " 
pod="openshift-must-gather-pd9dn/crc-debug-tvg8c" Jan 21 12:59:45 crc kubenswrapper[4881]: I0121 12:59:45.160454 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kg9cv\" (UniqueName: \"kubernetes.io/projected/617c663a-e61a-41e8-92f1-a847b84c7b5b-kube-api-access-kg9cv\") pod \"crc-debug-tvg8c\" (UID: \"617c663a-e61a-41e8-92f1-a847b84c7b5b\") " pod="openshift-must-gather-pd9dn/crc-debug-tvg8c" Jan 21 12:59:45 crc kubenswrapper[4881]: I0121 12:59:45.284370 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-pd9dn/crc-debug-tvg8c" Jan 21 12:59:45 crc kubenswrapper[4881]: I0121 12:59:45.336820 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="32ea9833-e257-4601-8be7-dcf0882d25ff" path="/var/lib/kubelet/pods/32ea9833-e257-4601-8be7-dcf0882d25ff/volumes" Jan 21 12:59:45 crc kubenswrapper[4881]: W0121 12:59:45.351741 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod617c663a_e61a_41e8_92f1_a847b84c7b5b.slice/crio-afb63b538bb9c15b601a888881bf38b207d75c919b5799ce399b386d20730cc3 WatchSource:0}: Error finding container afb63b538bb9c15b601a888881bf38b207d75c919b5799ce399b386d20730cc3: Status 404 returned error can't find the container with id afb63b538bb9c15b601a888881bf38b207d75c919b5799ce399b386d20730cc3 Jan 21 12:59:45 crc kubenswrapper[4881]: I0121 12:59:45.504436 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-pd9dn/crc-debug-tvg8c" event={"ID":"617c663a-e61a-41e8-92f1-a847b84c7b5b","Type":"ContainerStarted","Data":"afb63b538bb9c15b601a888881bf38b207d75c919b5799ce399b386d20730cc3"} Jan 21 12:59:46 crc kubenswrapper[4881]: I0121 12:59:46.514973 4881 generic.go:334] "Generic (PLEG): container finished" podID="617c663a-e61a-41e8-92f1-a847b84c7b5b" containerID="adc0b5280c47db093a6ec180a9e5726fbeb5b4a901615e6f06978e816e37c4a2" exitCode=0 Jan 21 12:59:46 crc kubenswrapper[4881]: I0121 12:59:46.515183 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-pd9dn/crc-debug-tvg8c" event={"ID":"617c663a-e61a-41e8-92f1-a847b84c7b5b","Type":"ContainerDied","Data":"adc0b5280c47db093a6ec180a9e5726fbeb5b4a901615e6f06978e816e37c4a2"} Jan 21 12:59:47 crc kubenswrapper[4881]: I0121 12:59:47.679966 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-pd9dn/crc-debug-tvg8c" Jan 21 12:59:47 crc kubenswrapper[4881]: I0121 12:59:47.790011 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/617c663a-e61a-41e8-92f1-a847b84c7b5b-host\") pod \"617c663a-e61a-41e8-92f1-a847b84c7b5b\" (UID: \"617c663a-e61a-41e8-92f1-a847b84c7b5b\") " Jan 21 12:59:47 crc kubenswrapper[4881]: I0121 12:59:47.790099 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/617c663a-e61a-41e8-92f1-a847b84c7b5b-host" (OuterVolumeSpecName: "host") pod "617c663a-e61a-41e8-92f1-a847b84c7b5b" (UID: "617c663a-e61a-41e8-92f1-a847b84c7b5b"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 12:59:47 crc kubenswrapper[4881]: I0121 12:59:47.790195 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kg9cv\" (UniqueName: \"kubernetes.io/projected/617c663a-e61a-41e8-92f1-a847b84c7b5b-kube-api-access-kg9cv\") pod \"617c663a-e61a-41e8-92f1-a847b84c7b5b\" (UID: \"617c663a-e61a-41e8-92f1-a847b84c7b5b\") " Jan 21 12:59:47 crc kubenswrapper[4881]: I0121 12:59:47.790712 4881 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/617c663a-e61a-41e8-92f1-a847b84c7b5b-host\") on node \"crc\" DevicePath \"\"" Jan 21 12:59:47 crc kubenswrapper[4881]: I0121 12:59:47.804990 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/617c663a-e61a-41e8-92f1-a847b84c7b5b-kube-api-access-kg9cv" (OuterVolumeSpecName: "kube-api-access-kg9cv") pod "617c663a-e61a-41e8-92f1-a847b84c7b5b" (UID: "617c663a-e61a-41e8-92f1-a847b84c7b5b"). InnerVolumeSpecName "kube-api-access-kg9cv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 12:59:47 crc kubenswrapper[4881]: I0121 12:59:47.892519 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kg9cv\" (UniqueName: \"kubernetes.io/projected/617c663a-e61a-41e8-92f1-a847b84c7b5b-kube-api-access-kg9cv\") on node \"crc\" DevicePath \"\"" Jan 21 12:59:48 crc kubenswrapper[4881]: I0121 12:59:48.534020 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-pd9dn/crc-debug-tvg8c" event={"ID":"617c663a-e61a-41e8-92f1-a847b84c7b5b","Type":"ContainerDied","Data":"afb63b538bb9c15b601a888881bf38b207d75c919b5799ce399b386d20730cc3"} Jan 21 12:59:48 crc kubenswrapper[4881]: I0121 12:59:48.534075 4881 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="afb63b538bb9c15b601a888881bf38b207d75c919b5799ce399b386d20730cc3" Jan 21 12:59:48 crc kubenswrapper[4881]: I0121 12:59:48.534102 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-pd9dn/crc-debug-tvg8c" Jan 21 12:59:48 crc kubenswrapper[4881]: I0121 12:59:48.860756 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-pd9dn/crc-debug-tvg8c"] Jan 21 12:59:48 crc kubenswrapper[4881]: I0121 12:59:48.870333 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-pd9dn/crc-debug-tvg8c"] Jan 21 12:59:49 crc kubenswrapper[4881]: I0121 12:59:49.329937 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="617c663a-e61a-41e8-92f1-a847b84c7b5b" path="/var/lib/kubelet/pods/617c663a-e61a-41e8-92f1-a847b84c7b5b/volumes" Jan 21 12:59:50 crc kubenswrapper[4881]: I0121 12:59:50.071002 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-pd9dn/crc-debug-56wrj"] Jan 21 12:59:50 crc kubenswrapper[4881]: E0121 12:59:50.072484 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="617c663a-e61a-41e8-92f1-a847b84c7b5b" containerName="container-00" Jan 21 12:59:50 crc kubenswrapper[4881]: I0121 12:59:50.072507 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="617c663a-e61a-41e8-92f1-a847b84c7b5b" containerName="container-00" Jan 21 12:59:50 crc kubenswrapper[4881]: I0121 12:59:50.072738 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="617c663a-e61a-41e8-92f1-a847b84c7b5b" containerName="container-00" Jan 21 12:59:50 crc kubenswrapper[4881]: I0121 12:59:50.073540 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-pd9dn/crc-debug-56wrj" Jan 21 12:59:50 crc kubenswrapper[4881]: I0121 12:59:50.142631 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ecc6c59e-4b85-45d4-a592-46e269e622ee-host\") pod \"crc-debug-56wrj\" (UID: \"ecc6c59e-4b85-45d4-a592-46e269e622ee\") " pod="openshift-must-gather-pd9dn/crc-debug-56wrj" Jan 21 12:59:50 crc kubenswrapper[4881]: I0121 12:59:50.142735 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p5ttc\" (UniqueName: \"kubernetes.io/projected/ecc6c59e-4b85-45d4-a592-46e269e622ee-kube-api-access-p5ttc\") pod \"crc-debug-56wrj\" (UID: \"ecc6c59e-4b85-45d4-a592-46e269e622ee\") " pod="openshift-must-gather-pd9dn/crc-debug-56wrj" Jan 21 12:59:50 crc kubenswrapper[4881]: I0121 12:59:50.245106 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ecc6c59e-4b85-45d4-a592-46e269e622ee-host\") pod \"crc-debug-56wrj\" (UID: \"ecc6c59e-4b85-45d4-a592-46e269e622ee\") " pod="openshift-must-gather-pd9dn/crc-debug-56wrj" Jan 21 12:59:50 crc kubenswrapper[4881]: I0121 12:59:50.245188 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p5ttc\" (UniqueName: \"kubernetes.io/projected/ecc6c59e-4b85-45d4-a592-46e269e622ee-kube-api-access-p5ttc\") pod \"crc-debug-56wrj\" (UID: \"ecc6c59e-4b85-45d4-a592-46e269e622ee\") " pod="openshift-must-gather-pd9dn/crc-debug-56wrj" Jan 21 12:59:50 crc kubenswrapper[4881]: I0121 12:59:50.245208 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ecc6c59e-4b85-45d4-a592-46e269e622ee-host\") pod \"crc-debug-56wrj\" (UID: \"ecc6c59e-4b85-45d4-a592-46e269e622ee\") " pod="openshift-must-gather-pd9dn/crc-debug-56wrj" Jan 21 12:59:50 crc kubenswrapper[4881]: 
I0121 12:59:50.271999 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p5ttc\" (UniqueName: \"kubernetes.io/projected/ecc6c59e-4b85-45d4-a592-46e269e622ee-kube-api-access-p5ttc\") pod \"crc-debug-56wrj\" (UID: \"ecc6c59e-4b85-45d4-a592-46e269e622ee\") " pod="openshift-must-gather-pd9dn/crc-debug-56wrj" Jan 21 12:59:50 crc kubenswrapper[4881]: I0121 12:59:50.395996 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-pd9dn/crc-debug-56wrj" Jan 21 12:59:50 crc kubenswrapper[4881]: W0121 12:59:50.428607 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podecc6c59e_4b85_45d4_a592_46e269e622ee.slice/crio-49b0e7d6ab3de89535e08864f3dc88a4d76792539d3db9ddb7ab991ef1e1229d WatchSource:0}: Error finding container 49b0e7d6ab3de89535e08864f3dc88a4d76792539d3db9ddb7ab991ef1e1229d: Status 404 returned error can't find the container with id 49b0e7d6ab3de89535e08864f3dc88a4d76792539d3db9ddb7ab991ef1e1229d Jan 21 12:59:50 crc kubenswrapper[4881]: I0121 12:59:50.551323 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-pd9dn/crc-debug-56wrj" event={"ID":"ecc6c59e-4b85-45d4-a592-46e269e622ee","Type":"ContainerStarted","Data":"49b0e7d6ab3de89535e08864f3dc88a4d76792539d3db9ddb7ab991ef1e1229d"} Jan 21 12:59:51 crc kubenswrapper[4881]: I0121 12:59:51.844596 4881 generic.go:334] "Generic (PLEG): container finished" podID="ecc6c59e-4b85-45d4-a592-46e269e622ee" containerID="d7393ff190dc0d36007c0eef8e475ccef4c110168796bf46e5bdb722b58eff4e" exitCode=0 Jan 21 12:59:51 crc kubenswrapper[4881]: I0121 12:59:51.844810 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-pd9dn/crc-debug-56wrj" event={"ID":"ecc6c59e-4b85-45d4-a592-46e269e622ee","Type":"ContainerDied","Data":"d7393ff190dc0d36007c0eef8e475ccef4c110168796bf46e5bdb722b58eff4e"} Jan 21 12:59:51 crc kubenswrapper[4881]: I0121 12:59:51.901971 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-pd9dn/crc-debug-56wrj"] Jan 21 12:59:51 crc kubenswrapper[4881]: I0121 12:59:51.911590 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-pd9dn/crc-debug-56wrj"] Jan 21 12:59:52 crc kubenswrapper[4881]: I0121 12:59:52.482014 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-h2ttp_faf7e95d-07e7-4d3d-936b-26b187fc0b0c/cert-manager-controller/0.log" Jan 21 12:59:52 crc kubenswrapper[4881]: I0121 12:59:52.506507 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-cdm4s_1d8014cf-8827-449d-b5fa-d0c098cc377e/cert-manager-cainjector/0.log" Jan 21 12:59:52 crc kubenswrapper[4881]: I0121 12:59:52.517010 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-csqtv_2aeab03b-23ac-4cc2-8f0f-db1111ef2cc4/cert-manager-webhook/0.log" Jan 21 12:59:52 crc kubenswrapper[4881]: I0121 12:59:52.981906 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-pd9dn/crc-debug-56wrj" Jan 21 12:59:53 crc kubenswrapper[4881]: I0121 12:59:53.168340 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ecc6c59e-4b85-45d4-a592-46e269e622ee-host\") pod \"ecc6c59e-4b85-45d4-a592-46e269e622ee\" (UID: \"ecc6c59e-4b85-45d4-a592-46e269e622ee\") " Jan 21 12:59:53 crc kubenswrapper[4881]: I0121 12:59:53.168415 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p5ttc\" (UniqueName: \"kubernetes.io/projected/ecc6c59e-4b85-45d4-a592-46e269e622ee-kube-api-access-p5ttc\") pod \"ecc6c59e-4b85-45d4-a592-46e269e622ee\" (UID: \"ecc6c59e-4b85-45d4-a592-46e269e622ee\") " Jan 21 12:59:53 crc kubenswrapper[4881]: I0121 12:59:53.168478 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ecc6c59e-4b85-45d4-a592-46e269e622ee-host" (OuterVolumeSpecName: "host") pod "ecc6c59e-4b85-45d4-a592-46e269e622ee" (UID: "ecc6c59e-4b85-45d4-a592-46e269e622ee"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 12:59:53 crc kubenswrapper[4881]: I0121 12:59:53.169141 4881 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ecc6c59e-4b85-45d4-a592-46e269e622ee-host\") on node \"crc\" DevicePath \"\"" Jan 21 12:59:53 crc kubenswrapper[4881]: I0121 12:59:53.182050 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ecc6c59e-4b85-45d4-a592-46e269e622ee-kube-api-access-p5ttc" (OuterVolumeSpecName: "kube-api-access-p5ttc") pod "ecc6c59e-4b85-45d4-a592-46e269e622ee" (UID: "ecc6c59e-4b85-45d4-a592-46e269e622ee"). InnerVolumeSpecName "kube-api-access-p5ttc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 12:59:53 crc kubenswrapper[4881]: I0121 12:59:53.271035 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p5ttc\" (UniqueName: \"kubernetes.io/projected/ecc6c59e-4b85-45d4-a592-46e269e622ee-kube-api-access-p5ttc\") on node \"crc\" DevicePath \"\"" Jan 21 12:59:53 crc kubenswrapper[4881]: I0121 12:59:53.323919 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ecc6c59e-4b85-45d4-a592-46e269e622ee" path="/var/lib/kubelet/pods/ecc6c59e-4b85-45d4-a592-46e269e622ee/volumes" Jan 21 12:59:53 crc kubenswrapper[4881]: I0121 12:59:53.872432 4881 scope.go:117] "RemoveContainer" containerID="d7393ff190dc0d36007c0eef8e475ccef4c110168796bf46e5bdb722b58eff4e" Jan 21 12:59:53 crc kubenswrapper[4881]: I0121 12:59:53.872484 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-pd9dn/crc-debug-56wrj" Jan 21 12:59:58 crc kubenswrapper[4881]: I0121 12:59:58.526060 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7754f76f8b-lgdjc_fcdadd73-568f-4ae0-a7bb-9330b2feb835/nmstate-console-plugin/0.log" Jan 21 12:59:58 crc kubenswrapper[4881]: I0121 12:59:58.580150 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-b9rcw_5c705c83-efa0-436f-a0b5-9164dbb6b1df/nmstate-handler/0.log" Jan 21 12:59:58 crc kubenswrapper[4881]: I0121 12:59:58.589894 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-ft48b_f68408aa-3450-42af-a6f8-b5260973f272/nmstate-metrics/0.log" Jan 21 12:59:58 crc kubenswrapper[4881]: I0121 12:59:58.599368 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-ft48b_f68408aa-3450-42af-a6f8-b5260973f272/kube-rbac-proxy/0.log" Jan 21 12:59:58 crc kubenswrapper[4881]: I0121 12:59:58.612795 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-646758c888-zlxs9_14878b0e-37cc-4c03-89df-ba23d94589a0/nmstate-operator/0.log" Jan 21 12:59:58 crc kubenswrapper[4881]: I0121 12:59:58.647729 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-8474b5b9d8-qmv5k_b6262b8c-2531-4008-9bb8-c3beeb66a3ed/nmstate-webhook/0.log" Jan 21 12:59:59 crc kubenswrapper[4881]: I0121 12:59:59.850639 4881 patch_prober.go:28] interesting pod/machine-config-daemon-fb4fr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 12:59:59 crc kubenswrapper[4881]: I0121 12:59:59.851211 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 13:00:00 crc kubenswrapper[4881]: I0121 13:00:00.187923 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483340-9mvx4"] Jan 21 13:00:00 crc kubenswrapper[4881]: E0121 13:00:00.188457 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ecc6c59e-4b85-45d4-a592-46e269e622ee" containerName="container-00" Jan 21 13:00:00 crc kubenswrapper[4881]: I0121 13:00:00.188477 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="ecc6c59e-4b85-45d4-a592-46e269e622ee" containerName="container-00" Jan 21 13:00:00 crc kubenswrapper[4881]: I0121 13:00:00.188699 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="ecc6c59e-4b85-45d4-a592-46e269e622ee" containerName="container-00" Jan 21 13:00:00 crc kubenswrapper[4881]: I0121 13:00:00.189567 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483340-9mvx4" Jan 21 13:00:00 crc kubenswrapper[4881]: I0121 13:00:00.195259 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 21 13:00:00 crc kubenswrapper[4881]: I0121 13:00:00.200302 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 21 13:00:00 crc kubenswrapper[4881]: I0121 13:00:00.201357 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483340-9mvx4"] Jan 21 13:00:00 crc kubenswrapper[4881]: I0121 13:00:00.345003 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z78wd\" (UniqueName: \"kubernetes.io/projected/a3d03c94-fe93-4321-a2a8-44fc4e42cecf-kube-api-access-z78wd\") pod \"collect-profiles-29483340-9mvx4\" (UID: \"a3d03c94-fe93-4321-a2a8-44fc4e42cecf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483340-9mvx4" Jan 21 13:00:00 crc kubenswrapper[4881]: I0121 13:00:00.345264 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a3d03c94-fe93-4321-a2a8-44fc4e42cecf-secret-volume\") pod \"collect-profiles-29483340-9mvx4\" (UID: \"a3d03c94-fe93-4321-a2a8-44fc4e42cecf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483340-9mvx4" Jan 21 13:00:00 crc kubenswrapper[4881]: I0121 13:00:00.345555 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a3d03c94-fe93-4321-a2a8-44fc4e42cecf-config-volume\") pod \"collect-profiles-29483340-9mvx4\" (UID: \"a3d03c94-fe93-4321-a2a8-44fc4e42cecf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483340-9mvx4" Jan 21 13:00:00 crc kubenswrapper[4881]: I0121 13:00:00.447661 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z78wd\" (UniqueName: \"kubernetes.io/projected/a3d03c94-fe93-4321-a2a8-44fc4e42cecf-kube-api-access-z78wd\") pod \"collect-profiles-29483340-9mvx4\" (UID: \"a3d03c94-fe93-4321-a2a8-44fc4e42cecf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483340-9mvx4" Jan 21 13:00:00 crc kubenswrapper[4881]: I0121 13:00:00.447779 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a3d03c94-fe93-4321-a2a8-44fc4e42cecf-secret-volume\") pod \"collect-profiles-29483340-9mvx4\" (UID: \"a3d03c94-fe93-4321-a2a8-44fc4e42cecf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483340-9mvx4" Jan 21 13:00:00 crc kubenswrapper[4881]: I0121 13:00:00.447852 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a3d03c94-fe93-4321-a2a8-44fc4e42cecf-config-volume\") pod \"collect-profiles-29483340-9mvx4\" (UID: \"a3d03c94-fe93-4321-a2a8-44fc4e42cecf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483340-9mvx4" Jan 21 13:00:00 crc kubenswrapper[4881]: I0121 13:00:00.448945 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a3d03c94-fe93-4321-a2a8-44fc4e42cecf-config-volume\") pod 
\"collect-profiles-29483340-9mvx4\" (UID: \"a3d03c94-fe93-4321-a2a8-44fc4e42cecf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483340-9mvx4" Jan 21 13:00:00 crc kubenswrapper[4881]: I0121 13:00:00.453817 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a3d03c94-fe93-4321-a2a8-44fc4e42cecf-secret-volume\") pod \"collect-profiles-29483340-9mvx4\" (UID: \"a3d03c94-fe93-4321-a2a8-44fc4e42cecf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483340-9mvx4" Jan 21 13:00:00 crc kubenswrapper[4881]: I0121 13:00:00.467507 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z78wd\" (UniqueName: \"kubernetes.io/projected/a3d03c94-fe93-4321-a2a8-44fc4e42cecf-kube-api-access-z78wd\") pod \"collect-profiles-29483340-9mvx4\" (UID: \"a3d03c94-fe93-4321-a2a8-44fc4e42cecf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483340-9mvx4" Jan 21 13:00:00 crc kubenswrapper[4881]: I0121 13:00:00.519750 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483340-9mvx4" Jan 21 13:00:01 crc kubenswrapper[4881]: I0121 13:00:01.102190 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483340-9mvx4"] Jan 21 13:00:01 crc kubenswrapper[4881]: E0121 13:00:01.945450 4881 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda3d03c94_fe93_4321_a2a8_44fc4e42cecf.slice/crio-c991ea82acb208ee5146cd2f274afea24486b30d08f10d3df4a9a9be6e57a12c.scope\": RecentStats: unable to find data in memory cache]" Jan 21 13:00:01 crc kubenswrapper[4881]: I0121 13:00:01.951943 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483340-9mvx4" event={"ID":"a3d03c94-fe93-4321-a2a8-44fc4e42cecf","Type":"ContainerStarted","Data":"c991ea82acb208ee5146cd2f274afea24486b30d08f10d3df4a9a9be6e57a12c"} Jan 21 13:00:01 crc kubenswrapper[4881]: I0121 13:00:01.951987 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483340-9mvx4" event={"ID":"a3d03c94-fe93-4321-a2a8-44fc4e42cecf","Type":"ContainerStarted","Data":"ca539054649ad7498aa368328f6ff7d3f04b6d41dd101ce5698d9930259deeae"} Jan 21 13:00:01 crc kubenswrapper[4881]: I0121 13:00:01.979410 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29483340-9mvx4" podStartSLOduration=1.9793712289999998 podStartE2EDuration="1.979371229s" podCreationTimestamp="2026-01-21 13:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 13:00:01.973671361 +0000 UTC m=+7389.233627830" watchObservedRunningTime="2026-01-21 13:00:01.979371229 +0000 UTC m=+7389.239327698" Jan 21 13:00:02 crc kubenswrapper[4881]: I0121 13:00:02.965671 4881 generic.go:334] "Generic (PLEG): container finished" podID="a3d03c94-fe93-4321-a2a8-44fc4e42cecf" containerID="c991ea82acb208ee5146cd2f274afea24486b30d08f10d3df4a9a9be6e57a12c" exitCode=0 Jan 21 13:00:02 crc kubenswrapper[4881]: I0121 13:00:02.965754 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/collect-profiles-29483340-9mvx4" event={"ID":"a3d03c94-fe93-4321-a2a8-44fc4e42cecf","Type":"ContainerDied","Data":"c991ea82acb208ee5146cd2f274afea24486b30d08f10d3df4a9a9be6e57a12c"} Jan 21 13:00:04 crc kubenswrapper[4881]: I0121 13:00:04.427474 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483340-9mvx4" Jan 21 13:00:04 crc kubenswrapper[4881]: I0121 13:00:04.469912 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a3d03c94-fe93-4321-a2a8-44fc4e42cecf-secret-volume\") pod \"a3d03c94-fe93-4321-a2a8-44fc4e42cecf\" (UID: \"a3d03c94-fe93-4321-a2a8-44fc4e42cecf\") " Jan 21 13:00:04 crc kubenswrapper[4881]: I0121 13:00:04.471698 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a3d03c94-fe93-4321-a2a8-44fc4e42cecf-config-volume\") pod \"a3d03c94-fe93-4321-a2a8-44fc4e42cecf\" (UID: \"a3d03c94-fe93-4321-a2a8-44fc4e42cecf\") " Jan 21 13:00:04 crc kubenswrapper[4881]: I0121 13:00:04.473751 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z78wd\" (UniqueName: \"kubernetes.io/projected/a3d03c94-fe93-4321-a2a8-44fc4e42cecf-kube-api-access-z78wd\") pod \"a3d03c94-fe93-4321-a2a8-44fc4e42cecf\" (UID: \"a3d03c94-fe93-4321-a2a8-44fc4e42cecf\") " Jan 21 13:00:04 crc kubenswrapper[4881]: I0121 13:00:04.475237 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a3d03c94-fe93-4321-a2a8-44fc4e42cecf-config-volume" (OuterVolumeSpecName: "config-volume") pod "a3d03c94-fe93-4321-a2a8-44fc4e42cecf" (UID: "a3d03c94-fe93-4321-a2a8-44fc4e42cecf"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:00:04 crc kubenswrapper[4881]: I0121 13:00:04.479761 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a3d03c94-fe93-4321-a2a8-44fc4e42cecf-kube-api-access-z78wd" (OuterVolumeSpecName: "kube-api-access-z78wd") pod "a3d03c94-fe93-4321-a2a8-44fc4e42cecf" (UID: "a3d03c94-fe93-4321-a2a8-44fc4e42cecf"). InnerVolumeSpecName "kube-api-access-z78wd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:00:04 crc kubenswrapper[4881]: I0121 13:00:04.497883 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a3d03c94-fe93-4321-a2a8-44fc4e42cecf-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "a3d03c94-fe93-4321-a2a8-44fc4e42cecf" (UID: "a3d03c94-fe93-4321-a2a8-44fc4e42cecf"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:00:04 crc kubenswrapper[4881]: I0121 13:00:04.577366 4881 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a3d03c94-fe93-4321-a2a8-44fc4e42cecf-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 21 13:00:04 crc kubenswrapper[4881]: I0121 13:00:04.577842 4881 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a3d03c94-fe93-4321-a2a8-44fc4e42cecf-config-volume\") on node \"crc\" DevicePath \"\"" Jan 21 13:00:04 crc kubenswrapper[4881]: I0121 13:00:04.577856 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z78wd\" (UniqueName: \"kubernetes.io/projected/a3d03c94-fe93-4321-a2a8-44fc4e42cecf-kube-api-access-z78wd\") on node \"crc\" DevicePath \"\"" Jan 21 13:00:04 crc kubenswrapper[4881]: I0121 13:00:04.986988 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483340-9mvx4" event={"ID":"a3d03c94-fe93-4321-a2a8-44fc4e42cecf","Type":"ContainerDied","Data":"ca539054649ad7498aa368328f6ff7d3f04b6d41dd101ce5698d9930259deeae"} Jan 21 13:00:04 crc kubenswrapper[4881]: I0121 13:00:04.987043 4881 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ca539054649ad7498aa368328f6ff7d3f04b6d41dd101ce5698d9930259deeae" Jan 21 13:00:04 crc kubenswrapper[4881]: I0121 13:00:04.987067 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483340-9mvx4" Jan 21 13:00:05 crc kubenswrapper[4881]: I0121 13:00:05.037428 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-rp92p_999c36a2-9f08-4da1-b14a-859ac888ae38/prometheus-operator/0.log" Jan 21 13:00:05 crc kubenswrapper[4881]: I0121 13:00:05.077932 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483295-8zv6c"] Jan 21 13:00:05 crc kubenswrapper[4881]: I0121 13:00:05.085600 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-75db897d97-h5vzg_c2181303-fd96-43e5-b6f2-158cca65c0b4/prometheus-operator-admission-webhook/0.log" Jan 21 13:00:05 crc kubenswrapper[4881]: I0121 13:00:05.097238 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483295-8zv6c"] Jan 21 13:00:05 crc kubenswrapper[4881]: I0121 13:00:05.097911 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-75db897d97-n5xvb_952218f5-7dfc-40d5-a1df-2c462e1e4dcc/prometheus-operator-admission-webhook/0.log" Jan 21 13:00:05 crc kubenswrapper[4881]: I0121 13:00:05.142653 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-tfzsc_19be64a6-6795-4219-8d58-47f744ef8e13/operator/0.log" Jan 21 13:00:05 crc kubenswrapper[4881]: I0121 13:00:05.154435 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-6srxm_1cfbfa78-5e7c-4a57-9d98-e11fb36d0f50/perses-operator/0.log" Jan 21 13:00:05 crc kubenswrapper[4881]: I0121 13:00:05.402985 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22846423-24bd-4d85-b2da-a5c75401cd25" 
path="/var/lib/kubelet/pods/22846423-24bd-4d85-b2da-a5c75401cd25/volumes" Jan 21 13:00:11 crc kubenswrapper[4881]: I0121 13:00:11.300822 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-dmwlt_c4a109b4-26ee-4a46-9333-989cf87c0ff7/controller/0.log" Jan 21 13:00:11 crc kubenswrapper[4881]: I0121 13:00:11.308567 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-dmwlt_c4a109b4-26ee-4a46-9333-989cf87c0ff7/kube-rbac-proxy/0.log" Jan 21 13:00:11 crc kubenswrapper[4881]: I0121 13:00:11.334656 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-lm54h_d055f37b-fab0-4fd0-b683-4a7974b21ad5/controller/0.log" Jan 21 13:00:13 crc kubenswrapper[4881]: I0121 13:00:13.200755 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-lm54h_d055f37b-fab0-4fd0-b683-4a7974b21ad5/frr/0.log" Jan 21 13:00:13 crc kubenswrapper[4881]: I0121 13:00:13.212322 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-lm54h_d055f37b-fab0-4fd0-b683-4a7974b21ad5/reloader/0.log" Jan 21 13:00:13 crc kubenswrapper[4881]: I0121 13:00:13.217191 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-lm54h_d055f37b-fab0-4fd0-b683-4a7974b21ad5/frr-metrics/0.log" Jan 21 13:00:13 crc kubenswrapper[4881]: I0121 13:00:13.223303 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-lm54h_d055f37b-fab0-4fd0-b683-4a7974b21ad5/kube-rbac-proxy/0.log" Jan 21 13:00:13 crc kubenswrapper[4881]: I0121 13:00:13.232182 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-lm54h_d055f37b-fab0-4fd0-b683-4a7974b21ad5/kube-rbac-proxy-frr/0.log" Jan 21 13:00:13 crc kubenswrapper[4881]: I0121 13:00:13.238758 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-lm54h_d055f37b-fab0-4fd0-b683-4a7974b21ad5/cp-frr-files/0.log" Jan 21 13:00:13 crc kubenswrapper[4881]: I0121 13:00:13.245640 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-lm54h_d055f37b-fab0-4fd0-b683-4a7974b21ad5/cp-reloader/0.log" Jan 21 13:00:13 crc kubenswrapper[4881]: I0121 13:00:13.252355 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-lm54h_d055f37b-fab0-4fd0-b683-4a7974b21ad5/cp-metrics/0.log" Jan 21 13:00:13 crc kubenswrapper[4881]: I0121 13:00:13.267798 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-tzxpk_eaaea696-21d8-4963-8364-82fa7bbb0e19/frr-k8s-webhook-server/0.log" Jan 21 13:00:13 crc kubenswrapper[4881]: I0121 13:00:13.292727 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-58bd8f8bd-8k4c9_769e47b6-bd47-489d-9b99-4f2f0e30c4fd/manager/0.log" Jan 21 13:00:13 crc kubenswrapper[4881]: I0121 13:00:13.301419 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-5cd4664cfc-6lg4r_a194c95e-cbcb-4d7e-a631-d4a14989e985/webhook-server/0.log" Jan 21 13:00:13 crc kubenswrapper[4881]: I0121 13:00:13.699063 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-697j4_f265a6e2-ea90-45ea-89c0-178d25243784/speaker/0.log" Jan 21 13:00:13 crc kubenswrapper[4881]: I0121 13:00:13.709972 4881 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_speaker-697j4_f265a6e2-ea90-45ea-89c0-178d25243784/kube-rbac-proxy/0.log" Jan 21 13:00:17 crc kubenswrapper[4881]: I0121 13:00:17.492673 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcp2lp6_5c9dc897-764d-4f6c-ade8-99d7aa2d8d60/extract/0.log" Jan 21 13:00:17 crc kubenswrapper[4881]: I0121 13:00:17.503120 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcp2lp6_5c9dc897-764d-4f6c-ade8-99d7aa2d8d60/util/0.log" Jan 21 13:00:17 crc kubenswrapper[4881]: I0121 13:00:17.513305 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcp2lp6_5c9dc897-764d-4f6c-ade8-99d7aa2d8d60/pull/0.log" Jan 21 13:00:17 crc kubenswrapper[4881]: I0121 13:00:17.524558 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713bpbkq_1bb22c78-c1fd-422e-900a-52c4b73fb451/extract/0.log" Jan 21 13:00:17 crc kubenswrapper[4881]: I0121 13:00:17.536747 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713bpbkq_1bb22c78-c1fd-422e-900a-52c4b73fb451/util/0.log" Jan 21 13:00:17 crc kubenswrapper[4881]: I0121 13:00:17.559756 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713bpbkq_1bb22c78-c1fd-422e-900a-52c4b73fb451/pull/0.log" Jan 21 13:00:17 crc kubenswrapper[4881]: I0121 13:00:17.577546 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08dld2x_31ed4736-a43c-4891-aeb4-e09d573a30b3/extract/0.log" Jan 21 13:00:17 crc kubenswrapper[4881]: I0121 13:00:17.588616 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08dld2x_31ed4736-a43c-4891-aeb4-e09d573a30b3/util/0.log" Jan 21 13:00:17 crc kubenswrapper[4881]: I0121 13:00:17.596320 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08dld2x_31ed4736-a43c-4891-aeb4-e09d573a30b3/pull/0.log" Jan 21 13:00:18 crc kubenswrapper[4881]: I0121 13:00:18.805458 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-7wxr8_6e9defc7-ad37-4742-b149-cb71d7ea177a/registry-server/0.log" Jan 21 13:00:18 crc kubenswrapper[4881]: I0121 13:00:18.812535 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-7wxr8_6e9defc7-ad37-4742-b149-cb71d7ea177a/extract-utilities/0.log" Jan 21 13:00:18 crc kubenswrapper[4881]: I0121 13:00:18.819172 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-7wxr8_6e9defc7-ad37-4742-b149-cb71d7ea177a/extract-content/0.log" Jan 21 13:00:19 crc kubenswrapper[4881]: I0121 13:00:19.956600 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-bn24k_cb2faf64-08ef-4413-84f0-10e88dcb7a8f/registry-server/0.log" Jan 21 13:00:19 crc kubenswrapper[4881]: I0121 13:00:19.962613 4881 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-bn24k_cb2faf64-08ef-4413-84f0-10e88dcb7a8f/extract-utilities/0.log" Jan 21 13:00:19 crc kubenswrapper[4881]: I0121 13:00:19.969247 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-bn24k_cb2faf64-08ef-4413-84f0-10e88dcb7a8f/extract-content/0.log" Jan 21 13:00:19 crc kubenswrapper[4881]: I0121 13:00:19.983745 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-vrcvz_98f0e6fe-f27f-4d75-9149-6238b2220849/marketplace-operator/0.log" Jan 21 13:00:20 crc kubenswrapper[4881]: I0121 13:00:20.294389 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-rs9gj_c6d87675-513f-412d-a34c-d789cce5b4e8/registry-server/0.log" Jan 21 13:00:20 crc kubenswrapper[4881]: I0121 13:00:20.301477 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-rs9gj_c6d87675-513f-412d-a34c-d789cce5b4e8/extract-utilities/0.log" Jan 21 13:00:20 crc kubenswrapper[4881]: I0121 13:00:20.307604 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-rs9gj_c6d87675-513f-412d-a34c-d789cce5b4e8/extract-content/0.log" Jan 21 13:00:21 crc kubenswrapper[4881]: I0121 13:00:21.336159 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-kfzl8_8ab3938c-6614-4877-a94c-75b90f339523/registry-server/0.log" Jan 21 13:00:21 crc kubenswrapper[4881]: I0121 13:00:21.341898 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-kfzl8_8ab3938c-6614-4877-a94c-75b90f339523/extract-utilities/0.log" Jan 21 13:00:21 crc kubenswrapper[4881]: I0121 13:00:21.349934 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-kfzl8_8ab3938c-6614-4877-a94c-75b90f339523/extract-content/0.log" Jan 21 13:00:24 crc kubenswrapper[4881]: I0121 13:00:24.348993 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-rp92p_999c36a2-9f08-4da1-b14a-859ac888ae38/prometheus-operator/0.log" Jan 21 13:00:24 crc kubenswrapper[4881]: I0121 13:00:24.377182 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-75db897d97-h5vzg_c2181303-fd96-43e5-b6f2-158cca65c0b4/prometheus-operator-admission-webhook/0.log" Jan 21 13:00:24 crc kubenswrapper[4881]: I0121 13:00:24.386584 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-75db897d97-n5xvb_952218f5-7dfc-40d5-a1df-2c462e1e4dcc/prometheus-operator-admission-webhook/0.log" Jan 21 13:00:24 crc kubenswrapper[4881]: I0121 13:00:24.426221 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-tfzsc_19be64a6-6795-4219-8d58-47f744ef8e13/operator/0.log" Jan 21 13:00:24 crc kubenswrapper[4881]: I0121 13:00:24.435941 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-6srxm_1cfbfa78-5e7c-4a57-9d98-e11fb36d0f50/perses-operator/0.log" Jan 21 13:00:29 crc kubenswrapper[4881]: I0121 13:00:29.851921 4881 patch_prober.go:28] interesting pod/machine-config-daemon-fb4fr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe 
status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 13:00:29 crc kubenswrapper[4881]: I0121 13:00:29.852573 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 13:00:29 crc kubenswrapper[4881]: I0121 13:00:29.852640 4881 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" Jan 21 13:00:29 crc kubenswrapper[4881]: I0121 13:00:29.853698 4881 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c02cffb330d8abf35eb1054cc9e801224a9ea6598c8173b5cd69e9d284d2327a"} pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 13:00:29 crc kubenswrapper[4881]: I0121 13:00:29.853840 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" containerName="machine-config-daemon" containerID="cri-o://c02cffb330d8abf35eb1054cc9e801224a9ea6598c8173b5cd69e9d284d2327a" gracePeriod=600 Jan 21 13:00:29 crc kubenswrapper[4881]: E0121 13:00:29.982088 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 13:00:30 crc kubenswrapper[4881]: I0121 13:00:30.331615 4881 generic.go:334] "Generic (PLEG): container finished" podID="3687b313-1df2-4274-80db-8c758b51bf2d" containerID="c02cffb330d8abf35eb1054cc9e801224a9ea6598c8173b5cd69e9d284d2327a" exitCode=0 Jan 21 13:00:30 crc kubenswrapper[4881]: I0121 13:00:30.331678 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" event={"ID":"3687b313-1df2-4274-80db-8c758b51bf2d","Type":"ContainerDied","Data":"c02cffb330d8abf35eb1054cc9e801224a9ea6598c8173b5cd69e9d284d2327a"} Jan 21 13:00:30 crc kubenswrapper[4881]: I0121 13:00:30.331729 4881 scope.go:117] "RemoveContainer" containerID="dedb540716d32e2d9c1d7422b582f5eca19a8a8f41fc5f2cec024d263d91f035" Jan 21 13:00:30 crc kubenswrapper[4881]: I0121 13:00:30.332711 4881 scope.go:117] "RemoveContainer" containerID="c02cffb330d8abf35eb1054cc9e801224a9ea6598c8173b5cd69e9d284d2327a" Jan 21 13:00:30 crc kubenswrapper[4881]: E0121 13:00:30.333203 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 13:00:35 crc kubenswrapper[4881]: I0121 13:00:35.140816 
4881 scope.go:117] "RemoveContainer" containerID="bf9af12b6f88ac7a2c2f3b75d58737d697a4cfe360d0edd4e874140a2c1b67eb" Jan 21 13:00:45 crc kubenswrapper[4881]: I0121 13:00:45.310710 4881 scope.go:117] "RemoveContainer" containerID="c02cffb330d8abf35eb1054cc9e801224a9ea6598c8173b5cd69e9d284d2327a" Jan 21 13:00:45 crc kubenswrapper[4881]: E0121 13:00:45.311660 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 13:00:57 crc kubenswrapper[4881]: I0121 13:00:57.315292 4881 scope.go:117] "RemoveContainer" containerID="c02cffb330d8abf35eb1054cc9e801224a9ea6598c8173b5cd69e9d284d2327a" Jan 21 13:00:57 crc kubenswrapper[4881]: E0121 13:00:57.316102 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 13:01:00 crc kubenswrapper[4881]: I0121 13:01:00.154510 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29483341-vfrqn"] Jan 21 13:01:00 crc kubenswrapper[4881]: E0121 13:01:00.155532 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a3d03c94-fe93-4321-a2a8-44fc4e42cecf" containerName="collect-profiles" Jan 21 13:01:00 crc kubenswrapper[4881]: I0121 13:01:00.155546 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3d03c94-fe93-4321-a2a8-44fc4e42cecf" containerName="collect-profiles" Jan 21 13:01:00 crc kubenswrapper[4881]: I0121 13:01:00.155802 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="a3d03c94-fe93-4321-a2a8-44fc4e42cecf" containerName="collect-profiles" Jan 21 13:01:00 crc kubenswrapper[4881]: I0121 13:01:00.156797 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29483341-vfrqn" Jan 21 13:01:00 crc kubenswrapper[4881]: I0121 13:01:00.172508 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29483341-vfrqn"] Jan 21 13:01:00 crc kubenswrapper[4881]: I0121 13:01:00.322832 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dn59g\" (UniqueName: \"kubernetes.io/projected/31661525-070b-49cf-aacb-1c845c697019-kube-api-access-dn59g\") pod \"keystone-cron-29483341-vfrqn\" (UID: \"31661525-070b-49cf-aacb-1c845c697019\") " pod="openstack/keystone-cron-29483341-vfrqn" Jan 21 13:01:00 crc kubenswrapper[4881]: I0121 13:01:00.323417 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/31661525-070b-49cf-aacb-1c845c697019-config-data\") pod \"keystone-cron-29483341-vfrqn\" (UID: \"31661525-070b-49cf-aacb-1c845c697019\") " pod="openstack/keystone-cron-29483341-vfrqn" Jan 21 13:01:00 crc kubenswrapper[4881]: I0121 13:01:00.324857 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31661525-070b-49cf-aacb-1c845c697019-combined-ca-bundle\") pod \"keystone-cron-29483341-vfrqn\" (UID: \"31661525-070b-49cf-aacb-1c845c697019\") " pod="openstack/keystone-cron-29483341-vfrqn" Jan 21 13:01:00 crc kubenswrapper[4881]: I0121 13:01:00.325327 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/31661525-070b-49cf-aacb-1c845c697019-fernet-keys\") pod \"keystone-cron-29483341-vfrqn\" (UID: \"31661525-070b-49cf-aacb-1c845c697019\") " pod="openstack/keystone-cron-29483341-vfrqn" Jan 21 13:01:00 crc kubenswrapper[4881]: I0121 13:01:00.427717 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/31661525-070b-49cf-aacb-1c845c697019-config-data\") pod \"keystone-cron-29483341-vfrqn\" (UID: \"31661525-070b-49cf-aacb-1c845c697019\") " pod="openstack/keystone-cron-29483341-vfrqn" Jan 21 13:01:00 crc kubenswrapper[4881]: I0121 13:01:00.427851 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31661525-070b-49cf-aacb-1c845c697019-combined-ca-bundle\") pod \"keystone-cron-29483341-vfrqn\" (UID: \"31661525-070b-49cf-aacb-1c845c697019\") " pod="openstack/keystone-cron-29483341-vfrqn" Jan 21 13:01:00 crc kubenswrapper[4881]: I0121 13:01:00.428031 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/31661525-070b-49cf-aacb-1c845c697019-fernet-keys\") pod \"keystone-cron-29483341-vfrqn\" (UID: \"31661525-070b-49cf-aacb-1c845c697019\") " pod="openstack/keystone-cron-29483341-vfrqn" Jan 21 13:01:00 crc kubenswrapper[4881]: I0121 13:01:00.428133 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dn59g\" (UniqueName: \"kubernetes.io/projected/31661525-070b-49cf-aacb-1c845c697019-kube-api-access-dn59g\") pod \"keystone-cron-29483341-vfrqn\" (UID: \"31661525-070b-49cf-aacb-1c845c697019\") " pod="openstack/keystone-cron-29483341-vfrqn" Jan 21 13:01:00 crc kubenswrapper[4881]: I0121 13:01:00.437532 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31661525-070b-49cf-aacb-1c845c697019-combined-ca-bundle\") pod \"keystone-cron-29483341-vfrqn\" (UID: \"31661525-070b-49cf-aacb-1c845c697019\") " pod="openstack/keystone-cron-29483341-vfrqn" Jan 21 13:01:00 crc kubenswrapper[4881]: I0121 13:01:00.438896 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/31661525-070b-49cf-aacb-1c845c697019-fernet-keys\") pod \"keystone-cron-29483341-vfrqn\" (UID: \"31661525-070b-49cf-aacb-1c845c697019\") " pod="openstack/keystone-cron-29483341-vfrqn" Jan 21 13:01:00 crc kubenswrapper[4881]: I0121 13:01:00.441357 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/31661525-070b-49cf-aacb-1c845c697019-config-data\") pod \"keystone-cron-29483341-vfrqn\" (UID: \"31661525-070b-49cf-aacb-1c845c697019\") " pod="openstack/keystone-cron-29483341-vfrqn" Jan 21 13:01:00 crc kubenswrapper[4881]: I0121 13:01:00.462810 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dn59g\" (UniqueName: \"kubernetes.io/projected/31661525-070b-49cf-aacb-1c845c697019-kube-api-access-dn59g\") pod \"keystone-cron-29483341-vfrqn\" (UID: \"31661525-070b-49cf-aacb-1c845c697019\") " pod="openstack/keystone-cron-29483341-vfrqn" Jan 21 13:01:00 crc kubenswrapper[4881]: I0121 13:01:00.488151 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29483341-vfrqn" Jan 21 13:01:01 crc kubenswrapper[4881]: I0121 13:01:01.020853 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29483341-vfrqn"] Jan 21 13:01:01 crc kubenswrapper[4881]: I0121 13:01:01.680521 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29483341-vfrqn" event={"ID":"31661525-070b-49cf-aacb-1c845c697019","Type":"ContainerStarted","Data":"ba793499a48deef1e2360f820f6470dfc6c8e5503512124453c90760305db802"} Jan 21 13:01:01 crc kubenswrapper[4881]: I0121 13:01:01.680840 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29483341-vfrqn" event={"ID":"31661525-070b-49cf-aacb-1c845c697019","Type":"ContainerStarted","Data":"4f1e0945a56b36d21713ae3bdeed7a4a2e74eb2ddbd92c68658409cb2bfbca03"} Jan 21 13:01:01 crc kubenswrapper[4881]: I0121 13:01:01.705878 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29483341-vfrqn" podStartSLOduration=1.705850256 podStartE2EDuration="1.705850256s" podCreationTimestamp="2026-01-21 13:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 13:01:01.697135715 +0000 UTC m=+7448.957092184" watchObservedRunningTime="2026-01-21 13:01:01.705850256 +0000 UTC m=+7448.965806725" Jan 21 13:01:06 crc kubenswrapper[4881]: I0121 13:01:06.732029 4881 generic.go:334] "Generic (PLEG): container finished" podID="31661525-070b-49cf-aacb-1c845c697019" containerID="ba793499a48deef1e2360f820f6470dfc6c8e5503512124453c90760305db802" exitCode=0 Jan 21 13:01:06 crc kubenswrapper[4881]: I0121 13:01:06.732119 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29483341-vfrqn" event={"ID":"31661525-070b-49cf-aacb-1c845c697019","Type":"ContainerDied","Data":"ba793499a48deef1e2360f820f6470dfc6c8e5503512124453c90760305db802"} Jan 21 13:01:08 crc kubenswrapper[4881]: 
I0121 13:01:08.208950 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29483341-vfrqn" Jan 21 13:01:08 crc kubenswrapper[4881]: I0121 13:01:08.319039 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31661525-070b-49cf-aacb-1c845c697019-combined-ca-bundle\") pod \"31661525-070b-49cf-aacb-1c845c697019\" (UID: \"31661525-070b-49cf-aacb-1c845c697019\") " Jan 21 13:01:08 crc kubenswrapper[4881]: I0121 13:01:08.319110 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/31661525-070b-49cf-aacb-1c845c697019-config-data\") pod \"31661525-070b-49cf-aacb-1c845c697019\" (UID: \"31661525-070b-49cf-aacb-1c845c697019\") " Jan 21 13:01:08 crc kubenswrapper[4881]: I0121 13:01:08.319280 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dn59g\" (UniqueName: \"kubernetes.io/projected/31661525-070b-49cf-aacb-1c845c697019-kube-api-access-dn59g\") pod \"31661525-070b-49cf-aacb-1c845c697019\" (UID: \"31661525-070b-49cf-aacb-1c845c697019\") " Jan 21 13:01:08 crc kubenswrapper[4881]: I0121 13:01:08.319420 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/31661525-070b-49cf-aacb-1c845c697019-fernet-keys\") pod \"31661525-070b-49cf-aacb-1c845c697019\" (UID: \"31661525-070b-49cf-aacb-1c845c697019\") " Jan 21 13:01:08 crc kubenswrapper[4881]: I0121 13:01:08.340199 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31661525-070b-49cf-aacb-1c845c697019-kube-api-access-dn59g" (OuterVolumeSpecName: "kube-api-access-dn59g") pod "31661525-070b-49cf-aacb-1c845c697019" (UID: "31661525-070b-49cf-aacb-1c845c697019"). InnerVolumeSpecName "kube-api-access-dn59g". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:01:08 crc kubenswrapper[4881]: I0121 13:01:08.340361 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31661525-070b-49cf-aacb-1c845c697019-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "31661525-070b-49cf-aacb-1c845c697019" (UID: "31661525-070b-49cf-aacb-1c845c697019"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:01:08 crc kubenswrapper[4881]: I0121 13:01:08.354966 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31661525-070b-49cf-aacb-1c845c697019-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "31661525-070b-49cf-aacb-1c845c697019" (UID: "31661525-070b-49cf-aacb-1c845c697019"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:01:08 crc kubenswrapper[4881]: I0121 13:01:08.383090 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31661525-070b-49cf-aacb-1c845c697019-config-data" (OuterVolumeSpecName: "config-data") pod "31661525-070b-49cf-aacb-1c845c697019" (UID: "31661525-070b-49cf-aacb-1c845c697019"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:01:08 crc kubenswrapper[4881]: I0121 13:01:08.422822 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dn59g\" (UniqueName: \"kubernetes.io/projected/31661525-070b-49cf-aacb-1c845c697019-kube-api-access-dn59g\") on node \"crc\" DevicePath \"\"" Jan 21 13:01:08 crc kubenswrapper[4881]: I0121 13:01:08.422865 4881 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/31661525-070b-49cf-aacb-1c845c697019-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 21 13:01:08 crc kubenswrapper[4881]: I0121 13:01:08.422875 4881 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31661525-070b-49cf-aacb-1c845c697019-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 13:01:08 crc kubenswrapper[4881]: I0121 13:01:08.422884 4881 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/31661525-070b-49cf-aacb-1c845c697019-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 13:01:08 crc kubenswrapper[4881]: I0121 13:01:08.755855 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29483341-vfrqn" event={"ID":"31661525-070b-49cf-aacb-1c845c697019","Type":"ContainerDied","Data":"4f1e0945a56b36d21713ae3bdeed7a4a2e74eb2ddbd92c68658409cb2bfbca03"} Jan 21 13:01:08 crc kubenswrapper[4881]: I0121 13:01:08.755891 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29483341-vfrqn" Jan 21 13:01:08 crc kubenswrapper[4881]: I0121 13:01:08.755901 4881 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4f1e0945a56b36d21713ae3bdeed7a4a2e74eb2ddbd92c68658409cb2bfbca03" Jan 21 13:01:10 crc kubenswrapper[4881]: I0121 13:01:10.311166 4881 scope.go:117] "RemoveContainer" containerID="c02cffb330d8abf35eb1054cc9e801224a9ea6598c8173b5cd69e9d284d2327a" Jan 21 13:01:10 crc kubenswrapper[4881]: E0121 13:01:10.311852 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 13:01:21 crc kubenswrapper[4881]: I0121 13:01:21.310899 4881 scope.go:117] "RemoveContainer" containerID="c02cffb330d8abf35eb1054cc9e801224a9ea6598c8173b5cd69e9d284d2327a" Jan 21 13:01:21 crc kubenswrapper[4881]: E0121 13:01:21.311646 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 13:01:34 crc kubenswrapper[4881]: I0121 13:01:34.313201 4881 scope.go:117] "RemoveContainer" containerID="c02cffb330d8abf35eb1054cc9e801224a9ea6598c8173b5cd69e9d284d2327a" Jan 21 13:01:34 crc kubenswrapper[4881]: E0121 13:01:34.313910 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" 
with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 13:01:45 crc kubenswrapper[4881]: I0121 13:01:45.312933 4881 scope.go:117] "RemoveContainer" containerID="c02cffb330d8abf35eb1054cc9e801224a9ea6598c8173b5cd69e9d284d2327a" Jan 21 13:01:45 crc kubenswrapper[4881]: E0121 13:01:45.313840 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 13:01:52 crc kubenswrapper[4881]: I0121 13:01:52.445090 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-rp92p_999c36a2-9f08-4da1-b14a-859ac888ae38/prometheus-operator/0.log" Jan 21 13:01:52 crc kubenswrapper[4881]: I0121 13:01:52.464808 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-75db897d97-h5vzg_c2181303-fd96-43e5-b6f2-158cca65c0b4/prometheus-operator-admission-webhook/0.log" Jan 21 13:01:52 crc kubenswrapper[4881]: I0121 13:01:52.479544 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-75db897d97-n5xvb_952218f5-7dfc-40d5-a1df-2c462e1e4dcc/prometheus-operator-admission-webhook/0.log" Jan 21 13:01:52 crc kubenswrapper[4881]: I0121 13:01:52.515570 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-tfzsc_19be64a6-6795-4219-8d58-47f744ef8e13/operator/0.log" Jan 21 13:01:52 crc kubenswrapper[4881]: I0121 13:01:52.530466 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-6srxm_1cfbfa78-5e7c-4a57-9d98-e11fb36d0f50/perses-operator/0.log" Jan 21 13:01:52 crc kubenswrapper[4881]: I0121 13:01:52.742998 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-h2ttp_faf7e95d-07e7-4d3d-936b-26b187fc0b0c/cert-manager-controller/0.log" Jan 21 13:01:52 crc kubenswrapper[4881]: I0121 13:01:52.756735 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-cdm4s_1d8014cf-8827-449d-b5fa-d0c098cc377e/cert-manager-cainjector/0.log" Jan 21 13:01:52 crc kubenswrapper[4881]: I0121 13:01:52.767194 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-csqtv_2aeab03b-23ac-4cc2-8f0f-db1111ef2cc4/cert-manager-webhook/0.log" Jan 21 13:01:53 crc kubenswrapper[4881]: I0121 13:01:53.713183 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-dmwlt_c4a109b4-26ee-4a46-9333-989cf87c0ff7/controller/0.log" Jan 21 13:01:53 crc kubenswrapper[4881]: I0121 13:01:53.719493 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-dmwlt_c4a109b4-26ee-4a46-9333-989cf87c0ff7/kube-rbac-proxy/0.log" Jan 21 13:01:53 crc kubenswrapper[4881]: I0121 13:01:53.741961 4881 log.go:25] 
"Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-lm54h_d055f37b-fab0-4fd0-b683-4a7974b21ad5/controller/0.log" Jan 21 13:01:53 crc kubenswrapper[4881]: I0121 13:01:53.743643 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_23550c8618544ac9ea89afd4ce99cda9256ff69faea7c95bed8068d414dwp7l_1c737afe-a2ad-4075-acd6-9f73aada0e4b/extract/0.log" Jan 21 13:01:53 crc kubenswrapper[4881]: I0121 13:01:53.751161 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_23550c8618544ac9ea89afd4ce99cda9256ff69faea7c95bed8068d414dwp7l_1c737afe-a2ad-4075-acd6-9f73aada0e4b/util/0.log" Jan 21 13:01:53 crc kubenswrapper[4881]: I0121 13:01:53.762771 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_23550c8618544ac9ea89afd4ce99cda9256ff69faea7c95bed8068d414dwp7l_1c737afe-a2ad-4075-acd6-9f73aada0e4b/pull/0.log" Jan 21 13:01:53 crc kubenswrapper[4881]: I0121 13:01:53.968722 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7ddb5c749-svq8w_848fd8db-3bd5-4e22-96ca-f69b181e48be/manager/0.log" Jan 21 13:01:54 crc kubenswrapper[4881]: I0121 13:01:54.064087 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-9b68f5989-7qgck_a028dcae-6b9d-414d-8bab-652f301de541/manager/0.log" Jan 21 13:01:54 crc kubenswrapper[4881]: I0121 13:01:54.076593 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-9f958b845-4wmln_36e5ddfe-67a4-4721-9ef5-b9459c64bf5c/manager/0.log" Jan 21 13:01:54 crc kubenswrapper[4881]: I0121 13:01:54.193728 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-c6994669c-jv7cr_1f795f92-d385-49bc-bc91-5109734f4d5a/manager/0.log" Jan 21 13:01:54 crc kubenswrapper[4881]: I0121 13:01:54.205142 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-594c8c9d5d-zmgll_efb259b7-934f-4bc3-b502-633472d1a1c5/manager/0.log" Jan 21 13:01:54 crc kubenswrapper[4881]: I0121 13:01:54.254500 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-77d5c5b54f-bv8wz_bb9b2c3f-4f68-44fc-addf-2cf4421be015/manager/0.log" Jan 21 13:01:54 crc kubenswrapper[4881]: I0121 13:01:54.869428 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-77c48c7859-klgq4_2fe210a4-2adf-4b55-9a43-c1c390f51b0e/manager/0.log" Jan 21 13:01:54 crc kubenswrapper[4881]: I0121 13:01:54.885657 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-78757b4889-5qcms_d0cafd1d-5f37-499a-a531-547a137aae21/manager/0.log" Jan 21 13:01:55 crc kubenswrapper[4881]: I0121 13:01:55.082190 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-767fdc4f47-9zp7h_ba9a1249-fc58-4809-a472-d199afa9b52b/manager/0.log" Jan 21 13:01:55 crc kubenswrapper[4881]: I0121 13:01:55.097897 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-864f6b75bf-h6dr4_b72b2323-5329-4145-9cee-b447d9e2a304/manager/0.log" Jan 21 13:01:55 crc kubenswrapper[4881]: I0121 13:01:55.159207 4881 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-c87fff755-s6gm8_4c2550fe-b3eb-4eef-8ffc-ebb4a9ce1b5f/manager/0.log" Jan 21 13:01:55 crc kubenswrapper[4881]: I0121 13:01:55.242226 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-cb4666565-ncnww_c3b86204-5389-4b6a-bd45-fb6ee23b784e/manager/0.log" Jan 21 13:01:55 crc kubenswrapper[4881]: I0121 13:01:55.451564 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-65849867d6-798zt_761a1a49-e01e-4674-b1f4-da732e1def98/manager/0.log" Jan 21 13:01:55 crc kubenswrapper[4881]: I0121 13:01:55.462539 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-7fc9b76cf6-n7kgd_340257c4-9218-49b0-8a75-b2a4e0231fe3/manager/0.log" Jan 21 13:01:55 crc kubenswrapper[4881]: I0121 13:01:55.487668 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-6b68b8b8544795q_b1b17be2-e382-4916-8e53-a68c85b5bfc2/manager/0.log" Jan 21 13:01:55 crc kubenswrapper[4881]: I0121 13:01:55.826137 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-766b56994f-7hsc6_3a9a96af-4c4b-45b4-ade0-688a9029cf7b/operator/0.log" Jan 21 13:01:56 crc kubenswrapper[4881]: I0121 13:01:56.823313 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-lm54h_d055f37b-fab0-4fd0-b683-4a7974b21ad5/frr/0.log" Jan 21 13:01:56 crc kubenswrapper[4881]: I0121 13:01:56.835626 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-lm54h_d055f37b-fab0-4fd0-b683-4a7974b21ad5/reloader/0.log" Jan 21 13:01:56 crc kubenswrapper[4881]: I0121 13:01:56.841118 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-lm54h_d055f37b-fab0-4fd0-b683-4a7974b21ad5/frr-metrics/0.log" Jan 21 13:01:56 crc kubenswrapper[4881]: I0121 13:01:56.853761 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-lm54h_d055f37b-fab0-4fd0-b683-4a7974b21ad5/kube-rbac-proxy/0.log" Jan 21 13:01:56 crc kubenswrapper[4881]: I0121 13:01:56.863829 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-lm54h_d055f37b-fab0-4fd0-b683-4a7974b21ad5/kube-rbac-proxy-frr/0.log" Jan 21 13:01:56 crc kubenswrapper[4881]: I0121 13:01:56.870613 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-lm54h_d055f37b-fab0-4fd0-b683-4a7974b21ad5/cp-frr-files/0.log" Jan 21 13:01:56 crc kubenswrapper[4881]: I0121 13:01:56.881708 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-lm54h_d055f37b-fab0-4fd0-b683-4a7974b21ad5/cp-reloader/0.log" Jan 21 13:01:56 crc kubenswrapper[4881]: I0121 13:01:56.890332 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-lm54h_d055f37b-fab0-4fd0-b683-4a7974b21ad5/cp-metrics/0.log" Jan 21 13:01:56 crc kubenswrapper[4881]: I0121 13:01:56.907644 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-tzxpk_eaaea696-21d8-4963-8364-82fa7bbb0e19/frr-k8s-webhook-server/0.log" Jan 21 13:01:56 crc kubenswrapper[4881]: I0121 13:01:56.939494 4881 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_metallb-operator-controller-manager-58bd8f8bd-8k4c9_769e47b6-bd47-489d-9b99-4f2f0e30c4fd/manager/0.log" Jan 21 13:01:56 crc kubenswrapper[4881]: I0121 13:01:56.949074 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-5cd4664cfc-6lg4r_a194c95e-cbcb-4d7e-a631-d4a14989e985/webhook-server/0.log" Jan 21 13:01:57 crc kubenswrapper[4881]: I0121 13:01:57.550272 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-697j4_f265a6e2-ea90-45ea-89c0-178d25243784/speaker/0.log" Jan 21 13:01:57 crc kubenswrapper[4881]: I0121 13:01:57.559459 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-697j4_f265a6e2-ea90-45ea-89c0-178d25243784/kube-rbac-proxy/0.log" Jan 21 13:01:57 crc kubenswrapper[4881]: I0121 13:01:57.667132 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-87d6d564b-ktcf8_a55fdb43-cd6c-4415-8ef6-07f6c7da6272/manager/0.log" Jan 21 13:01:57 crc kubenswrapper[4881]: I0121 13:01:57.680643 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-7vz4j_0a051fc2-b6e4-463c-bb0a-b565d12b21b4/registry-server/0.log" Jan 21 13:01:57 crc kubenswrapper[4881]: I0121 13:01:57.742277 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-55db956ddc-vpqw4_50cfdf18-6a7e-4b3c-bb0f-5260fc3d42eb/manager/0.log" Jan 21 13:01:57 crc kubenswrapper[4881]: I0121 13:01:57.767522 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-686df47fcb-jh4z9_e8e6f423-a07b-4a22-9e39-efa8de22747e/manager/0.log" Jan 21 13:01:57 crc kubenswrapper[4881]: I0121 13:01:57.800142 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-76qxc_8c8feeec-377c-499a-b666-895010f8ebeb/operator/0.log" Jan 21 13:01:57 crc kubenswrapper[4881]: I0121 13:01:57.834361 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-85dd56d4cc-rk8l8_8c504afd-e4e0-4676-b292-b569b638a7dd/manager/0.log" Jan 21 13:01:58 crc kubenswrapper[4881]: I0121 13:01:58.062824 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-5f8f495fcf-fcht4_55ce5ee6-47f4-4874-92dc-6ab78f2ce212/manager/0.log" Jan 21 13:01:58 crc kubenswrapper[4881]: I0121 13:01:58.080292 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-7cd8bc9dbb-tttcz_2aac430e-3ac8-4624-8575-66386b5c2df3/manager/0.log" Jan 21 13:01:58 crc kubenswrapper[4881]: I0121 13:01:58.163380 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-849fd9b886-k9t7q_1cebbaaf-6189-409a-8f25-43d7fac77f95/manager/0.log" Jan 21 13:01:58 crc kubenswrapper[4881]: I0121 13:01:58.312747 4881 scope.go:117] "RemoveContainer" containerID="c02cffb330d8abf35eb1054cc9e801224a9ea6598c8173b5cd69e9d284d2327a" Jan 21 13:01:58 crc kubenswrapper[4881]: E0121 13:01:58.313275 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 13:01:58 crc kubenswrapper[4881]: I0121 13:01:58.533704 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-h2ttp_faf7e95d-07e7-4d3d-936b-26b187fc0b0c/cert-manager-controller/0.log" Jan 21 13:01:58 crc kubenswrapper[4881]: I0121 13:01:58.548893 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-cdm4s_1d8014cf-8827-449d-b5fa-d0c098cc377e/cert-manager-cainjector/0.log" Jan 21 13:01:58 crc kubenswrapper[4881]: I0121 13:01:58.558403 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-csqtv_2aeab03b-23ac-4cc2-8f0f-db1111ef2cc4/cert-manager-webhook/0.log" Jan 21 13:01:59 crc kubenswrapper[4881]: I0121 13:01:59.160586 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-hfc8p_bc38f0b5-944c-40ae-aed0-50ca39ea2627/control-plane-machine-set-operator/0.log" Jan 21 13:01:59 crc kubenswrapper[4881]: I0121 13:01:59.181043 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-cclnc_8465162e-dd9f-45b4-83a6-94666ac2b87b/kube-rbac-proxy/0.log" Jan 21 13:01:59 crc kubenswrapper[4881]: I0121 13:01:59.194160 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-cclnc_8465162e-dd9f-45b4-83a6-94666ac2b87b/machine-api-operator/0.log" Jan 21 13:01:59 crc kubenswrapper[4881]: I0121 13:01:59.582941 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7754f76f8b-lgdjc_fcdadd73-568f-4ae0-a7bb-9330b2feb835/nmstate-console-plugin/0.log" Jan 21 13:01:59 crc kubenswrapper[4881]: I0121 13:01:59.603738 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-b9rcw_5c705c83-efa0-436f-a0b5-9164dbb6b1df/nmstate-handler/0.log" Jan 21 13:01:59 crc kubenswrapper[4881]: I0121 13:01:59.617533 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-ft48b_f68408aa-3450-42af-a6f8-b5260973f272/nmstate-metrics/0.log" Jan 21 13:01:59 crc kubenswrapper[4881]: I0121 13:01:59.636461 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-ft48b_f68408aa-3450-42af-a6f8-b5260973f272/kube-rbac-proxy/0.log" Jan 21 13:01:59 crc kubenswrapper[4881]: I0121 13:01:59.650260 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-646758c888-zlxs9_14878b0e-37cc-4c03-89df-ba23d94589a0/nmstate-operator/0.log" Jan 21 13:01:59 crc kubenswrapper[4881]: I0121 13:01:59.707561 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-8474b5b9d8-qmv5k_b6262b8c-2531-4008-9bb8-c3beeb66a3ed/nmstate-webhook/0.log" Jan 21 13:02:00 crc kubenswrapper[4881]: I0121 13:02:00.087655 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_23550c8618544ac9ea89afd4ce99cda9256ff69faea7c95bed8068d414dwp7l_1c737afe-a2ad-4075-acd6-9f73aada0e4b/extract/0.log" Jan 21 13:02:00 crc kubenswrapper[4881]: I0121 13:02:00.092988 4881 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_23550c8618544ac9ea89afd4ce99cda9256ff69faea7c95bed8068d414dwp7l_1c737afe-a2ad-4075-acd6-9f73aada0e4b/util/0.log" Jan 21 13:02:00 crc kubenswrapper[4881]: I0121 13:02:00.103836 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_23550c8618544ac9ea89afd4ce99cda9256ff69faea7c95bed8068d414dwp7l_1c737afe-a2ad-4075-acd6-9f73aada0e4b/pull/0.log" Jan 21 13:02:00 crc kubenswrapper[4881]: I0121 13:02:00.174636 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7ddb5c749-svq8w_848fd8db-3bd5-4e22-96ca-f69b181e48be/manager/0.log" Jan 21 13:02:00 crc kubenswrapper[4881]: I0121 13:02:00.248373 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-9b68f5989-7qgck_a028dcae-6b9d-414d-8bab-652f301de541/manager/0.log" Jan 21 13:02:00 crc kubenswrapper[4881]: I0121 13:02:00.263290 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-9f958b845-4wmln_36e5ddfe-67a4-4721-9ef5-b9459c64bf5c/manager/0.log" Jan 21 13:02:00 crc kubenswrapper[4881]: I0121 13:02:00.330845 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-c6994669c-jv7cr_1f795f92-d385-49bc-bc91-5109734f4d5a/manager/0.log" Jan 21 13:02:00 crc kubenswrapper[4881]: I0121 13:02:00.406995 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-594c8c9d5d-zmgll_efb259b7-934f-4bc3-b502-633472d1a1c5/manager/0.log" Jan 21 13:02:00 crc kubenswrapper[4881]: I0121 13:02:00.545916 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-77d5c5b54f-bv8wz_bb9b2c3f-4f68-44fc-addf-2cf4421be015/manager/0.log" Jan 21 13:02:00 crc kubenswrapper[4881]: I0121 13:02:00.834805 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-77c48c7859-klgq4_2fe210a4-2adf-4b55-9a43-c1c390f51b0e/manager/0.log" Jan 21 13:02:00 crc kubenswrapper[4881]: I0121 13:02:00.858146 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-78757b4889-5qcms_d0cafd1d-5f37-499a-a531-547a137aae21/manager/0.log" Jan 21 13:02:00 crc kubenswrapper[4881]: I0121 13:02:00.948235 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-767fdc4f47-9zp7h_ba9a1249-fc58-4809-a472-d199afa9b52b/manager/0.log" Jan 21 13:02:00 crc kubenswrapper[4881]: I0121 13:02:00.981348 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-864f6b75bf-h6dr4_b72b2323-5329-4145-9cee-b447d9e2a304/manager/0.log" Jan 21 13:02:01 crc kubenswrapper[4881]: I0121 13:02:01.062674 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-c87fff755-s6gm8_4c2550fe-b3eb-4eef-8ffc-ebb4a9ce1b5f/manager/0.log" Jan 21 13:02:01 crc kubenswrapper[4881]: I0121 13:02:01.105744 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-cb4666565-ncnww_c3b86204-5389-4b6a-bd45-fb6ee23b784e/manager/0.log" Jan 21 13:02:01 crc kubenswrapper[4881]: I0121 13:02:01.170921 4881 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_nova-operator-controller-manager-65849867d6-798zt_761a1a49-e01e-4674-b1f4-da732e1def98/manager/0.log" Jan 21 13:02:01 crc kubenswrapper[4881]: I0121 13:02:01.184889 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-7fc9b76cf6-n7kgd_340257c4-9218-49b0-8a75-b2a4e0231fe3/manager/0.log" Jan 21 13:02:01 crc kubenswrapper[4881]: I0121 13:02:01.213935 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-6b68b8b8544795q_b1b17be2-e382-4916-8e53-a68c85b5bfc2/manager/0.log" Jan 21 13:02:01 crc kubenswrapper[4881]: I0121 13:02:01.399026 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-766b56994f-7hsc6_3a9a96af-4c4b-45b4-ade0-688a9029cf7b/operator/0.log" Jan 21 13:02:02 crc kubenswrapper[4881]: I0121 13:02:02.379423 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-87d6d564b-ktcf8_a55fdb43-cd6c-4415-8ef6-07f6c7da6272/manager/0.log" Jan 21 13:02:02 crc kubenswrapper[4881]: I0121 13:02:02.409409 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-7vz4j_0a051fc2-b6e4-463c-bb0a-b565d12b21b4/registry-server/0.log" Jan 21 13:02:02 crc kubenswrapper[4881]: I0121 13:02:02.489993 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-55db956ddc-vpqw4_50cfdf18-6a7e-4b3c-bb0f-5260fc3d42eb/manager/0.log" Jan 21 13:02:02 crc kubenswrapper[4881]: I0121 13:02:02.543571 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-686df47fcb-jh4z9_e8e6f423-a07b-4a22-9e39-efa8de22747e/manager/0.log" Jan 21 13:02:02 crc kubenswrapper[4881]: I0121 13:02:02.582824 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-76qxc_8c8feeec-377c-499a-b666-895010f8ebeb/operator/0.log" Jan 21 13:02:02 crc kubenswrapper[4881]: I0121 13:02:02.669330 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-85dd56d4cc-rk8l8_8c504afd-e4e0-4676-b292-b569b638a7dd/manager/0.log" Jan 21 13:02:02 crc kubenswrapper[4881]: I0121 13:02:02.842193 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-5f8f495fcf-fcht4_55ce5ee6-47f4-4874-92dc-6ab78f2ce212/manager/0.log" Jan 21 13:02:02 crc kubenswrapper[4881]: I0121 13:02:02.858038 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-7cd8bc9dbb-tttcz_2aac430e-3ac8-4624-8575-66386b5c2df3/manager/0.log" Jan 21 13:02:02 crc kubenswrapper[4881]: I0121 13:02:02.926618 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-849fd9b886-k9t7q_1cebbaaf-6189-409a-8f25-43d7fac77f95/manager/0.log" Jan 21 13:02:03 crc kubenswrapper[4881]: I0121 13:02:03.085393 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 21 13:02:05 crc kubenswrapper[4881]: I0121 13:02:05.023956 4881 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-v4wxp_c14980d7-1b3b-463b-8f57-f1e1afbd258c/kube-multus-additional-cni-plugins/0.log" Jan 21 13:02:05 crc kubenswrapper[4881]: I0121 13:02:05.034974 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-v4wxp_c14980d7-1b3b-463b-8f57-f1e1afbd258c/egress-router-binary-copy/0.log" Jan 21 13:02:05 crc kubenswrapper[4881]: I0121 13:02:05.043933 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-v4wxp_c14980d7-1b3b-463b-8f57-f1e1afbd258c/cni-plugins/0.log" Jan 21 13:02:05 crc kubenswrapper[4881]: I0121 13:02:05.053107 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-v4wxp_c14980d7-1b3b-463b-8f57-f1e1afbd258c/bond-cni-plugin/0.log" Jan 21 13:02:05 crc kubenswrapper[4881]: I0121 13:02:05.060648 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-v4wxp_c14980d7-1b3b-463b-8f57-f1e1afbd258c/routeoverride-cni/0.log" Jan 21 13:02:05 crc kubenswrapper[4881]: I0121 13:02:05.069046 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-v4wxp_c14980d7-1b3b-463b-8f57-f1e1afbd258c/whereabouts-cni-bincopy/0.log" Jan 21 13:02:05 crc kubenswrapper[4881]: I0121 13:02:05.079806 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-v4wxp_c14980d7-1b3b-463b-8f57-f1e1afbd258c/whereabouts-cni/0.log" Jan 21 13:02:05 crc kubenswrapper[4881]: I0121 13:02:05.120827 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-admission-controller-857f4d67dd-j4s5w_6742e18f-a187-4a77-a734-bdec89bd89e0/multus-admission-controller/0.log" Jan 21 13:02:05 crc kubenswrapper[4881]: I0121 13:02:05.127574 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-admission-controller-857f4d67dd-j4s5w_6742e18f-a187-4a77-a734-bdec89bd89e0/kube-rbac-proxy/0.log" Jan 21 13:02:05 crc kubenswrapper[4881]: I0121 13:02:05.190411 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-fs42r_09da9e14-f6d5-4346-a4a0-c17711e3b603/kube-multus/1.log" Jan 21 13:02:05 crc kubenswrapper[4881]: I0121 13:02:05.283329 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-fs42r_09da9e14-f6d5-4346-a4a0-c17711e3b603/kube-multus/2.log" Jan 21 13:02:05 crc kubenswrapper[4881]: I0121 13:02:05.322482 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_network-metrics-daemon-dtv4t_3552adbd-011f-4552-9e69-233b92c554c8/network-metrics-daemon/0.log" Jan 21 13:02:05 crc kubenswrapper[4881]: I0121 13:02:05.329180 4881 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_network-metrics-daemon-dtv4t_3552adbd-011f-4552-9e69-233b92c554c8/kube-rbac-proxy/0.log" Jan 21 13:02:10 crc kubenswrapper[4881]: I0121 13:02:10.312511 4881 scope.go:117] "RemoveContainer" containerID="c02cffb330d8abf35eb1054cc9e801224a9ea6598c8173b5cd69e9d284d2327a" Jan 21 13:02:10 crc kubenswrapper[4881]: E0121 13:02:10.314234 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 13:02:24 crc kubenswrapper[4881]: I0121 13:02:24.312085 4881 scope.go:117] "RemoveContainer" containerID="c02cffb330d8abf35eb1054cc9e801224a9ea6598c8173b5cd69e9d284d2327a" Jan 21 13:02:24 crc kubenswrapper[4881]: E0121 13:02:24.313257 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 13:02:39 crc kubenswrapper[4881]: I0121 13:02:39.312075 4881 scope.go:117] "RemoveContainer" containerID="c02cffb330d8abf35eb1054cc9e801224a9ea6598c8173b5cd69e9d284d2327a" Jan 21 13:02:39 crc kubenswrapper[4881]: E0121 13:02:39.313101 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 13:02:51 crc kubenswrapper[4881]: I0121 13:02:51.471948 4881 scope.go:117] "RemoveContainer" containerID="c02cffb330d8abf35eb1054cc9e801224a9ea6598c8173b5cd69e9d284d2327a" Jan 21 13:02:51 crc kubenswrapper[4881]: E0121 13:02:51.472605 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 13:03:03 crc kubenswrapper[4881]: I0121 13:03:03.320923 4881 scope.go:117] "RemoveContainer" containerID="c02cffb330d8abf35eb1054cc9e801224a9ea6598c8173b5cd69e9d284d2327a" Jan 21 13:03:03 crc kubenswrapper[4881]: E0121 13:03:03.324443 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 13:03:14 crc kubenswrapper[4881]: I0121 13:03:14.311376 4881 scope.go:117] "RemoveContainer" containerID="c02cffb330d8abf35eb1054cc9e801224a9ea6598c8173b5cd69e9d284d2327a" Jan 21 13:03:14 crc kubenswrapper[4881]: E0121 13:03:14.312183 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" 
podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 13:03:26 crc kubenswrapper[4881]: I0121 13:03:26.311718 4881 scope.go:117] "RemoveContainer" containerID="c02cffb330d8abf35eb1054cc9e801224a9ea6598c8173b5cd69e9d284d2327a" Jan 21 13:03:26 crc kubenswrapper[4881]: E0121 13:03:26.312732 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 13:03:38 crc kubenswrapper[4881]: I0121 13:03:38.311329 4881 scope.go:117] "RemoveContainer" containerID="c02cffb330d8abf35eb1054cc9e801224a9ea6598c8173b5cd69e9d284d2327a" Jan 21 13:03:38 crc kubenswrapper[4881]: E0121 13:03:38.312158 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 13:03:49 crc kubenswrapper[4881]: I0121 13:03:49.311194 4881 scope.go:117] "RemoveContainer" containerID="c02cffb330d8abf35eb1054cc9e801224a9ea6598c8173b5cd69e9d284d2327a" Jan 21 13:03:49 crc kubenswrapper[4881]: E0121 13:03:49.311839 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 13:04:02 crc kubenswrapper[4881]: I0121 13:04:02.310811 4881 scope.go:117] "RemoveContainer" containerID="c02cffb330d8abf35eb1054cc9e801224a9ea6598c8173b5cd69e9d284d2327a" Jan 21 13:04:02 crc kubenswrapper[4881]: E0121 13:04:02.311564 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 13:04:16 crc kubenswrapper[4881]: I0121 13:04:16.311244 4881 scope.go:117] "RemoveContainer" containerID="c02cffb330d8abf35eb1054cc9e801224a9ea6598c8173b5cd69e9d284d2327a" Jan 21 13:04:16 crc kubenswrapper[4881]: E0121 13:04:16.312585 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 13:04:17 crc kubenswrapper[4881]: I0121 13:04:17.163854 4881 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/redhat-marketplace-kbtdr"] Jan 21 13:04:17 crc kubenswrapper[4881]: E0121 13:04:17.164487 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31661525-070b-49cf-aacb-1c845c697019" containerName="keystone-cron" Jan 21 13:04:17 crc kubenswrapper[4881]: I0121 13:04:17.164508 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="31661525-070b-49cf-aacb-1c845c697019" containerName="keystone-cron" Jan 21 13:04:17 crc kubenswrapper[4881]: I0121 13:04:17.164803 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="31661525-070b-49cf-aacb-1c845c697019" containerName="keystone-cron" Jan 21 13:04:17 crc kubenswrapper[4881]: I0121 13:04:17.166857 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kbtdr" Jan 21 13:04:17 crc kubenswrapper[4881]: I0121 13:04:17.187401 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-kbtdr"] Jan 21 13:04:17 crc kubenswrapper[4881]: I0121 13:04:17.328585 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bec100bc-3f06-4e9f-92c8-d2150746c720-catalog-content\") pod \"redhat-marketplace-kbtdr\" (UID: \"bec100bc-3f06-4e9f-92c8-d2150746c720\") " pod="openshift-marketplace/redhat-marketplace-kbtdr" Jan 21 13:04:17 crc kubenswrapper[4881]: I0121 13:04:17.328677 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bec100bc-3f06-4e9f-92c8-d2150746c720-utilities\") pod \"redhat-marketplace-kbtdr\" (UID: \"bec100bc-3f06-4e9f-92c8-d2150746c720\") " pod="openshift-marketplace/redhat-marketplace-kbtdr" Jan 21 13:04:17 crc kubenswrapper[4881]: I0121 13:04:17.328718 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ff5v7\" (UniqueName: \"kubernetes.io/projected/bec100bc-3f06-4e9f-92c8-d2150746c720-kube-api-access-ff5v7\") pod \"redhat-marketplace-kbtdr\" (UID: \"bec100bc-3f06-4e9f-92c8-d2150746c720\") " pod="openshift-marketplace/redhat-marketplace-kbtdr" Jan 21 13:04:17 crc kubenswrapper[4881]: I0121 13:04:17.430509 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bec100bc-3f06-4e9f-92c8-d2150746c720-catalog-content\") pod \"redhat-marketplace-kbtdr\" (UID: \"bec100bc-3f06-4e9f-92c8-d2150746c720\") " pod="openshift-marketplace/redhat-marketplace-kbtdr" Jan 21 13:04:17 crc kubenswrapper[4881]: I0121 13:04:17.430644 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bec100bc-3f06-4e9f-92c8-d2150746c720-utilities\") pod \"redhat-marketplace-kbtdr\" (UID: \"bec100bc-3f06-4e9f-92c8-d2150746c720\") " pod="openshift-marketplace/redhat-marketplace-kbtdr" Jan 21 13:04:17 crc kubenswrapper[4881]: I0121 13:04:17.430728 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ff5v7\" (UniqueName: \"kubernetes.io/projected/bec100bc-3f06-4e9f-92c8-d2150746c720-kube-api-access-ff5v7\") pod \"redhat-marketplace-kbtdr\" (UID: \"bec100bc-3f06-4e9f-92c8-d2150746c720\") " pod="openshift-marketplace/redhat-marketplace-kbtdr" Jan 21 13:04:17 crc kubenswrapper[4881]: I0121 13:04:17.431196 4881 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bec100bc-3f06-4e9f-92c8-d2150746c720-catalog-content\") pod \"redhat-marketplace-kbtdr\" (UID: \"bec100bc-3f06-4e9f-92c8-d2150746c720\") " pod="openshift-marketplace/redhat-marketplace-kbtdr" Jan 21 13:04:17 crc kubenswrapper[4881]: I0121 13:04:17.431258 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bec100bc-3f06-4e9f-92c8-d2150746c720-utilities\") pod \"redhat-marketplace-kbtdr\" (UID: \"bec100bc-3f06-4e9f-92c8-d2150746c720\") " pod="openshift-marketplace/redhat-marketplace-kbtdr" Jan 21 13:04:17 crc kubenswrapper[4881]: I0121 13:04:17.454887 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ff5v7\" (UniqueName: \"kubernetes.io/projected/bec100bc-3f06-4e9f-92c8-d2150746c720-kube-api-access-ff5v7\") pod \"redhat-marketplace-kbtdr\" (UID: \"bec100bc-3f06-4e9f-92c8-d2150746c720\") " pod="openshift-marketplace/redhat-marketplace-kbtdr" Jan 21 13:04:17 crc kubenswrapper[4881]: I0121 13:04:17.499776 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kbtdr" Jan 21 13:04:18 crc kubenswrapper[4881]: I0121 13:04:18.000060 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-kbtdr"] Jan 21 13:04:18 crc kubenswrapper[4881]: W0121 13:04:18.015537 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbec100bc_3f06_4e9f_92c8_d2150746c720.slice/crio-148a08f2d0886418890f59c8a5b8966ff6652d5c4935149a9f98df1736464a3a WatchSource:0}: Error finding container 148a08f2d0886418890f59c8a5b8966ff6652d5c4935149a9f98df1736464a3a: Status 404 returned error can't find the container with id 148a08f2d0886418890f59c8a5b8966ff6652d5c4935149a9f98df1736464a3a Jan 21 13:04:18 crc kubenswrapper[4881]: I0121 13:04:18.725824 4881 generic.go:334] "Generic (PLEG): container finished" podID="bec100bc-3f06-4e9f-92c8-d2150746c720" containerID="7ba9268affb7b36ede0c95f07ffb37c2eedb4287b3034bd5ca41d251a17b650e" exitCode=0 Jan 21 13:04:18 crc kubenswrapper[4881]: I0121 13:04:18.726056 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kbtdr" event={"ID":"bec100bc-3f06-4e9f-92c8-d2150746c720","Type":"ContainerDied","Data":"7ba9268affb7b36ede0c95f07ffb37c2eedb4287b3034bd5ca41d251a17b650e"} Jan 21 13:04:18 crc kubenswrapper[4881]: I0121 13:04:18.729140 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kbtdr" event={"ID":"bec100bc-3f06-4e9f-92c8-d2150746c720","Type":"ContainerStarted","Data":"148a08f2d0886418890f59c8a5b8966ff6652d5c4935149a9f98df1736464a3a"} Jan 21 13:04:18 crc kubenswrapper[4881]: I0121 13:04:18.729253 4881 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 13:04:19 crc kubenswrapper[4881]: I0121 13:04:19.746337 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kbtdr" event={"ID":"bec100bc-3f06-4e9f-92c8-d2150746c720","Type":"ContainerStarted","Data":"52d6e3407218ada320893735ba478f1369a2a54d0c437542b8c2fab3e35c4b65"} Jan 21 13:04:20 crc kubenswrapper[4881]: I0121 13:04:20.764240 4881 generic.go:334] "Generic (PLEG): container finished" podID="bec100bc-3f06-4e9f-92c8-d2150746c720" 
containerID="52d6e3407218ada320893735ba478f1369a2a54d0c437542b8c2fab3e35c4b65" exitCode=0 Jan 21 13:04:20 crc kubenswrapper[4881]: I0121 13:04:20.764300 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kbtdr" event={"ID":"bec100bc-3f06-4e9f-92c8-d2150746c720","Type":"ContainerDied","Data":"52d6e3407218ada320893735ba478f1369a2a54d0c437542b8c2fab3e35c4b65"} Jan 21 13:04:21 crc kubenswrapper[4881]: I0121 13:04:21.774451 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kbtdr" event={"ID":"bec100bc-3f06-4e9f-92c8-d2150746c720","Type":"ContainerStarted","Data":"a613acb4af5b4ff0151733e528bac6fafdfcaaa1c659f0a6b2cc1730debc40e3"} Jan 21 13:04:21 crc kubenswrapper[4881]: I0121 13:04:21.798751 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-kbtdr" podStartSLOduration=2.352642249 podStartE2EDuration="4.798732157s" podCreationTimestamp="2026-01-21 13:04:17 +0000 UTC" firstStartedPulling="2026-01-21 13:04:18.72852271 +0000 UTC m=+7645.988479219" lastFinishedPulling="2026-01-21 13:04:21.174612608 +0000 UTC m=+7648.434569127" observedRunningTime="2026-01-21 13:04:21.791923623 +0000 UTC m=+7649.051880092" watchObservedRunningTime="2026-01-21 13:04:21.798732157 +0000 UTC m=+7649.058688626" Jan 21 13:04:27 crc kubenswrapper[4881]: I0121 13:04:27.313985 4881 scope.go:117] "RemoveContainer" containerID="c02cffb330d8abf35eb1054cc9e801224a9ea6598c8173b5cd69e9d284d2327a" Jan 21 13:04:27 crc kubenswrapper[4881]: E0121 13:04:27.314693 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 13:04:27 crc kubenswrapper[4881]: I0121 13:04:27.500647 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-kbtdr" Jan 21 13:04:27 crc kubenswrapper[4881]: I0121 13:04:27.500737 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-kbtdr" Jan 21 13:04:27 crc kubenswrapper[4881]: I0121 13:04:27.564253 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-kbtdr" Jan 21 13:04:27 crc kubenswrapper[4881]: I0121 13:04:27.913056 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-kbtdr" Jan 21 13:04:27 crc kubenswrapper[4881]: I0121 13:04:27.969644 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-kbtdr"] Jan 21 13:04:29 crc kubenswrapper[4881]: I0121 13:04:29.885523 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-kbtdr" podUID="bec100bc-3f06-4e9f-92c8-d2150746c720" containerName="registry-server" containerID="cri-o://a613acb4af5b4ff0151733e528bac6fafdfcaaa1c659f0a6b2cc1730debc40e3" gracePeriod=2 Jan 21 13:04:30 crc kubenswrapper[4881]: I0121 13:04:30.904757 4881 generic.go:334] "Generic (PLEG): container finished" podID="bec100bc-3f06-4e9f-92c8-d2150746c720" 
containerID="a613acb4af5b4ff0151733e528bac6fafdfcaaa1c659f0a6b2cc1730debc40e3" exitCode=0 Jan 21 13:04:30 crc kubenswrapper[4881]: I0121 13:04:30.904880 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kbtdr" event={"ID":"bec100bc-3f06-4e9f-92c8-d2150746c720","Type":"ContainerDied","Data":"a613acb4af5b4ff0151733e528bac6fafdfcaaa1c659f0a6b2cc1730debc40e3"} Jan 21 13:04:30 crc kubenswrapper[4881]: I0121 13:04:30.906058 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kbtdr" event={"ID":"bec100bc-3f06-4e9f-92c8-d2150746c720","Type":"ContainerDied","Data":"148a08f2d0886418890f59c8a5b8966ff6652d5c4935149a9f98df1736464a3a"} Jan 21 13:04:30 crc kubenswrapper[4881]: I0121 13:04:30.906083 4881 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="148a08f2d0886418890f59c8a5b8966ff6652d5c4935149a9f98df1736464a3a" Jan 21 13:04:30 crc kubenswrapper[4881]: I0121 13:04:30.967019 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kbtdr" Jan 21 13:04:30 crc kubenswrapper[4881]: I0121 13:04:30.983773 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bec100bc-3f06-4e9f-92c8-d2150746c720-utilities\") pod \"bec100bc-3f06-4e9f-92c8-d2150746c720\" (UID: \"bec100bc-3f06-4e9f-92c8-d2150746c720\") " Jan 21 13:04:30 crc kubenswrapper[4881]: I0121 13:04:30.984059 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bec100bc-3f06-4e9f-92c8-d2150746c720-catalog-content\") pod \"bec100bc-3f06-4e9f-92c8-d2150746c720\" (UID: \"bec100bc-3f06-4e9f-92c8-d2150746c720\") " Jan 21 13:04:30 crc kubenswrapper[4881]: I0121 13:04:30.984163 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ff5v7\" (UniqueName: \"kubernetes.io/projected/bec100bc-3f06-4e9f-92c8-d2150746c720-kube-api-access-ff5v7\") pod \"bec100bc-3f06-4e9f-92c8-d2150746c720\" (UID: \"bec100bc-3f06-4e9f-92c8-d2150746c720\") " Jan 21 13:04:30 crc kubenswrapper[4881]: I0121 13:04:30.985492 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bec100bc-3f06-4e9f-92c8-d2150746c720-utilities" (OuterVolumeSpecName: "utilities") pod "bec100bc-3f06-4e9f-92c8-d2150746c720" (UID: "bec100bc-3f06-4e9f-92c8-d2150746c720"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 13:04:30 crc kubenswrapper[4881]: I0121 13:04:30.990018 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bec100bc-3f06-4e9f-92c8-d2150746c720-kube-api-access-ff5v7" (OuterVolumeSpecName: "kube-api-access-ff5v7") pod "bec100bc-3f06-4e9f-92c8-d2150746c720" (UID: "bec100bc-3f06-4e9f-92c8-d2150746c720"). InnerVolumeSpecName "kube-api-access-ff5v7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:04:31 crc kubenswrapper[4881]: I0121 13:04:31.014733 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bec100bc-3f06-4e9f-92c8-d2150746c720-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bec100bc-3f06-4e9f-92c8-d2150746c720" (UID: "bec100bc-3f06-4e9f-92c8-d2150746c720"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 13:04:31 crc kubenswrapper[4881]: I0121 13:04:31.086511 4881 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bec100bc-3f06-4e9f-92c8-d2150746c720-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 13:04:31 crc kubenswrapper[4881]: I0121 13:04:31.086550 4881 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bec100bc-3f06-4e9f-92c8-d2150746c720-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 13:04:31 crc kubenswrapper[4881]: I0121 13:04:31.086563 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ff5v7\" (UniqueName: \"kubernetes.io/projected/bec100bc-3f06-4e9f-92c8-d2150746c720-kube-api-access-ff5v7\") on node \"crc\" DevicePath \"\"" Jan 21 13:04:31 crc kubenswrapper[4881]: I0121 13:04:31.919679 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kbtdr" Jan 21 13:04:31 crc kubenswrapper[4881]: I0121 13:04:31.961864 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-kbtdr"] Jan 21 13:04:31 crc kubenswrapper[4881]: I0121 13:04:31.975079 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-kbtdr"] Jan 21 13:04:33 crc kubenswrapper[4881]: I0121 13:04:33.353381 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bec100bc-3f06-4e9f-92c8-d2150746c720" path="/var/lib/kubelet/pods/bec100bc-3f06-4e9f-92c8-d2150746c720/volumes" Jan 21 13:04:38 crc kubenswrapper[4881]: I0121 13:04:38.312414 4881 scope.go:117] "RemoveContainer" containerID="c02cffb330d8abf35eb1054cc9e801224a9ea6598c8173b5cd69e9d284d2327a" Jan 21 13:04:38 crc kubenswrapper[4881]: E0121 13:04:38.313224 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 13:04:53 crc kubenswrapper[4881]: I0121 13:04:53.324333 4881 scope.go:117] "RemoveContainer" containerID="c02cffb330d8abf35eb1054cc9e801224a9ea6598c8173b5cd69e9d284d2327a" Jan 21 13:04:53 crc kubenswrapper[4881]: E0121 13:04:53.325241 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 13:05:07 crc kubenswrapper[4881]: I0121 13:05:07.311450 4881 scope.go:117] "RemoveContainer" containerID="c02cffb330d8abf35eb1054cc9e801224a9ea6598c8173b5cd69e9d284d2327a" Jan 21 13:05:07 crc kubenswrapper[4881]: E0121 13:05:07.312363 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 13:05:21 crc kubenswrapper[4881]: I0121 13:05:21.311710 4881 scope.go:117] "RemoveContainer" containerID="c02cffb330d8abf35eb1054cc9e801224a9ea6598c8173b5cd69e9d284d2327a" Jan 21 13:05:21 crc kubenswrapper[4881]: E0121 13:05:21.312859 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 13:05:35 crc kubenswrapper[4881]: I0121 13:05:35.312149 4881 scope.go:117] "RemoveContainer" containerID="c02cffb330d8abf35eb1054cc9e801224a9ea6598c8173b5cd69e9d284d2327a" Jan 21 13:05:35 crc kubenswrapper[4881]: I0121 13:05:35.343424 4881 scope.go:117] "RemoveContainer" containerID="0818ec9313f2fc50a748108c2a7b4170d06db46eb9b811376ec620220e592ebc" Jan 21 13:05:35 crc kubenswrapper[4881]: I0121 13:05:35.718187 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" event={"ID":"3687b313-1df2-4274-80db-8c758b51bf2d","Type":"ContainerStarted","Data":"982d26bca9ae8535bd5c23122103aa1521012b2265c5406dc793a0fdc4c46b01"} Jan 21 13:05:46 crc kubenswrapper[4881]: I0121 13:05:46.713090 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-klj4j"] Jan 21 13:05:46 crc kubenswrapper[4881]: E0121 13:05:46.713987 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bec100bc-3f06-4e9f-92c8-d2150746c720" containerName="registry-server" Jan 21 13:05:46 crc kubenswrapper[4881]: I0121 13:05:46.714003 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="bec100bc-3f06-4e9f-92c8-d2150746c720" containerName="registry-server" Jan 21 13:05:46 crc kubenswrapper[4881]: E0121 13:05:46.714028 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bec100bc-3f06-4e9f-92c8-d2150746c720" containerName="extract-utilities" Jan 21 13:05:46 crc kubenswrapper[4881]: I0121 13:05:46.714035 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="bec100bc-3f06-4e9f-92c8-d2150746c720" containerName="extract-utilities" Jan 21 13:05:46 crc kubenswrapper[4881]: E0121 13:05:46.714053 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bec100bc-3f06-4e9f-92c8-d2150746c720" containerName="extract-content" Jan 21 13:05:46 crc kubenswrapper[4881]: I0121 13:05:46.714063 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="bec100bc-3f06-4e9f-92c8-d2150746c720" containerName="extract-content" Jan 21 13:05:46 crc kubenswrapper[4881]: I0121 13:05:46.714287 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="bec100bc-3f06-4e9f-92c8-d2150746c720" containerName="registry-server" Jan 21 13:05:46 crc kubenswrapper[4881]: I0121 13:05:46.716108 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-klj4j" Jan 21 13:05:46 crc kubenswrapper[4881]: I0121 13:05:46.741078 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-klj4j"] Jan 21 13:05:46 crc kubenswrapper[4881]: I0121 13:05:46.759865 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1c1f2821-4561-4775-afd7-f995c7794eb9-utilities\") pod \"certified-operators-klj4j\" (UID: \"1c1f2821-4561-4775-afd7-f995c7794eb9\") " pod="openshift-marketplace/certified-operators-klj4j" Jan 21 13:05:46 crc kubenswrapper[4881]: I0121 13:05:46.759929 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x8w72\" (UniqueName: \"kubernetes.io/projected/1c1f2821-4561-4775-afd7-f995c7794eb9-kube-api-access-x8w72\") pod \"certified-operators-klj4j\" (UID: \"1c1f2821-4561-4775-afd7-f995c7794eb9\") " pod="openshift-marketplace/certified-operators-klj4j" Jan 21 13:05:46 crc kubenswrapper[4881]: I0121 13:05:46.759967 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1c1f2821-4561-4775-afd7-f995c7794eb9-catalog-content\") pod \"certified-operators-klj4j\" (UID: \"1c1f2821-4561-4775-afd7-f995c7794eb9\") " pod="openshift-marketplace/certified-operators-klj4j" Jan 21 13:05:46 crc kubenswrapper[4881]: I0121 13:05:46.861626 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1c1f2821-4561-4775-afd7-f995c7794eb9-utilities\") pod \"certified-operators-klj4j\" (UID: \"1c1f2821-4561-4775-afd7-f995c7794eb9\") " pod="openshift-marketplace/certified-operators-klj4j" Jan 21 13:05:46 crc kubenswrapper[4881]: I0121 13:05:46.861672 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x8w72\" (UniqueName: \"kubernetes.io/projected/1c1f2821-4561-4775-afd7-f995c7794eb9-kube-api-access-x8w72\") pod \"certified-operators-klj4j\" (UID: \"1c1f2821-4561-4775-afd7-f995c7794eb9\") " pod="openshift-marketplace/certified-operators-klj4j" Jan 21 13:05:46 crc kubenswrapper[4881]: I0121 13:05:46.861698 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1c1f2821-4561-4775-afd7-f995c7794eb9-catalog-content\") pod \"certified-operators-klj4j\" (UID: \"1c1f2821-4561-4775-afd7-f995c7794eb9\") " pod="openshift-marketplace/certified-operators-klj4j" Jan 21 13:05:46 crc kubenswrapper[4881]: I0121 13:05:46.862460 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1c1f2821-4561-4775-afd7-f995c7794eb9-catalog-content\") pod \"certified-operators-klj4j\" (UID: \"1c1f2821-4561-4775-afd7-f995c7794eb9\") " pod="openshift-marketplace/certified-operators-klj4j" Jan 21 13:05:46 crc kubenswrapper[4881]: I0121 13:05:46.863414 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1c1f2821-4561-4775-afd7-f995c7794eb9-utilities\") pod \"certified-operators-klj4j\" (UID: \"1c1f2821-4561-4775-afd7-f995c7794eb9\") " pod="openshift-marketplace/certified-operators-klj4j" Jan 21 13:05:46 crc kubenswrapper[4881]: I0121 13:05:46.889773 4881 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-x8w72\" (UniqueName: \"kubernetes.io/projected/1c1f2821-4561-4775-afd7-f995c7794eb9-kube-api-access-x8w72\") pod \"certified-operators-klj4j\" (UID: \"1c1f2821-4561-4775-afd7-f995c7794eb9\") " pod="openshift-marketplace/certified-operators-klj4j" Jan 21 13:05:47 crc kubenswrapper[4881]: I0121 13:05:47.044345 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-klj4j" Jan 21 13:05:47 crc kubenswrapper[4881]: I0121 13:05:47.590761 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-klj4j"] Jan 21 13:05:47 crc kubenswrapper[4881]: W0121 13:05:47.593389 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1c1f2821_4561_4775_afd7_f995c7794eb9.slice/crio-5912b65cbe84841f73cf1f4bf22a99a12dcb5a75557795c084a467bac35321b7 WatchSource:0}: Error finding container 5912b65cbe84841f73cf1f4bf22a99a12dcb5a75557795c084a467bac35321b7: Status 404 returned error can't find the container with id 5912b65cbe84841f73cf1f4bf22a99a12dcb5a75557795c084a467bac35321b7 Jan 21 13:05:48 crc kubenswrapper[4881]: I0121 13:05:48.005902 4881 generic.go:334] "Generic (PLEG): container finished" podID="1c1f2821-4561-4775-afd7-f995c7794eb9" containerID="b1795ee85622e8be16c281770c1151c3435236a0db4fe5ab1cd387997e3d12e4" exitCode=0 Jan 21 13:05:48 crc kubenswrapper[4881]: I0121 13:05:48.006007 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-klj4j" event={"ID":"1c1f2821-4561-4775-afd7-f995c7794eb9","Type":"ContainerDied","Data":"b1795ee85622e8be16c281770c1151c3435236a0db4fe5ab1cd387997e3d12e4"} Jan 21 13:05:48 crc kubenswrapper[4881]: I0121 13:05:48.006452 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-klj4j" event={"ID":"1c1f2821-4561-4775-afd7-f995c7794eb9","Type":"ContainerStarted","Data":"5912b65cbe84841f73cf1f4bf22a99a12dcb5a75557795c084a467bac35321b7"} Jan 21 13:05:49 crc kubenswrapper[4881]: I0121 13:05:49.021698 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-klj4j" event={"ID":"1c1f2821-4561-4775-afd7-f995c7794eb9","Type":"ContainerStarted","Data":"71abf720f18590d40c2cbb24e9f89c5138503153c763f029873791959d9b57f6"} Jan 21 13:05:50 crc kubenswrapper[4881]: I0121 13:05:50.042938 4881 generic.go:334] "Generic (PLEG): container finished" podID="1c1f2821-4561-4775-afd7-f995c7794eb9" containerID="71abf720f18590d40c2cbb24e9f89c5138503153c763f029873791959d9b57f6" exitCode=0 Jan 21 13:05:50 crc kubenswrapper[4881]: I0121 13:05:50.043023 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-klj4j" event={"ID":"1c1f2821-4561-4775-afd7-f995c7794eb9","Type":"ContainerDied","Data":"71abf720f18590d40c2cbb24e9f89c5138503153c763f029873791959d9b57f6"} Jan 21 13:05:51 crc kubenswrapper[4881]: I0121 13:05:51.053909 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-klj4j" event={"ID":"1c1f2821-4561-4775-afd7-f995c7794eb9","Type":"ContainerStarted","Data":"f1b1412a13772a8513f335c0801239091b7358315b12cc6ba6559e7a455c8685"} Jan 21 13:05:51 crc kubenswrapper[4881]: I0121 13:05:51.078663 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-klj4j" 
podStartSLOduration=2.405007554 podStartE2EDuration="5.078636914s" podCreationTimestamp="2026-01-21 13:05:46 +0000 UTC" firstStartedPulling="2026-01-21 13:05:48.008612051 +0000 UTC m=+7735.268568540" lastFinishedPulling="2026-01-21 13:05:50.682241401 +0000 UTC m=+7737.942197900" observedRunningTime="2026-01-21 13:05:51.071732097 +0000 UTC m=+7738.331688576" watchObservedRunningTime="2026-01-21 13:05:51.078636914 +0000 UTC m=+7738.338593393" Jan 21 13:05:57 crc kubenswrapper[4881]: I0121 13:05:57.045766 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-klj4j" Jan 21 13:05:57 crc kubenswrapper[4881]: I0121 13:05:57.046340 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-klj4j" Jan 21 13:05:57 crc kubenswrapper[4881]: I0121 13:05:57.119223 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-klj4j" Jan 21 13:05:57 crc kubenswrapper[4881]: I0121 13:05:57.327550 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-klj4j" Jan 21 13:05:57 crc kubenswrapper[4881]: I0121 13:05:57.389231 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-klj4j"] Jan 21 13:05:59 crc kubenswrapper[4881]: I0121 13:05:59.349337 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-klj4j" podUID="1c1f2821-4561-4775-afd7-f995c7794eb9" containerName="registry-server" containerID="cri-o://f1b1412a13772a8513f335c0801239091b7358315b12cc6ba6559e7a455c8685" gracePeriod=2 Jan 21 13:05:59 crc kubenswrapper[4881]: I0121 13:05:59.930392 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-klj4j" Jan 21 13:06:00 crc kubenswrapper[4881]: I0121 13:06:00.057462 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1c1f2821-4561-4775-afd7-f995c7794eb9-catalog-content\") pod \"1c1f2821-4561-4775-afd7-f995c7794eb9\" (UID: \"1c1f2821-4561-4775-afd7-f995c7794eb9\") " Jan 21 13:06:00 crc kubenswrapper[4881]: I0121 13:06:00.057710 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x8w72\" (UniqueName: \"kubernetes.io/projected/1c1f2821-4561-4775-afd7-f995c7794eb9-kube-api-access-x8w72\") pod \"1c1f2821-4561-4775-afd7-f995c7794eb9\" (UID: \"1c1f2821-4561-4775-afd7-f995c7794eb9\") " Jan 21 13:06:00 crc kubenswrapper[4881]: I0121 13:06:00.057730 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1c1f2821-4561-4775-afd7-f995c7794eb9-utilities\") pod \"1c1f2821-4561-4775-afd7-f995c7794eb9\" (UID: \"1c1f2821-4561-4775-afd7-f995c7794eb9\") " Jan 21 13:06:00 crc kubenswrapper[4881]: I0121 13:06:00.059146 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1c1f2821-4561-4775-afd7-f995c7794eb9-utilities" (OuterVolumeSpecName: "utilities") pod "1c1f2821-4561-4775-afd7-f995c7794eb9" (UID: "1c1f2821-4561-4775-afd7-f995c7794eb9"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 13:06:00 crc kubenswrapper[4881]: I0121 13:06:00.064538 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1c1f2821-4561-4775-afd7-f995c7794eb9-kube-api-access-x8w72" (OuterVolumeSpecName: "kube-api-access-x8w72") pod "1c1f2821-4561-4775-afd7-f995c7794eb9" (UID: "1c1f2821-4561-4775-afd7-f995c7794eb9"). InnerVolumeSpecName "kube-api-access-x8w72". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:06:00 crc kubenswrapper[4881]: I0121 13:06:00.107008 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1c1f2821-4561-4775-afd7-f995c7794eb9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1c1f2821-4561-4775-afd7-f995c7794eb9" (UID: "1c1f2821-4561-4775-afd7-f995c7794eb9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 13:06:00 crc kubenswrapper[4881]: I0121 13:06:00.160764 4881 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1c1f2821-4561-4775-afd7-f995c7794eb9-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 13:06:00 crc kubenswrapper[4881]: I0121 13:06:00.160833 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x8w72\" (UniqueName: \"kubernetes.io/projected/1c1f2821-4561-4775-afd7-f995c7794eb9-kube-api-access-x8w72\") on node \"crc\" DevicePath \"\"" Jan 21 13:06:00 crc kubenswrapper[4881]: I0121 13:06:00.160850 4881 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1c1f2821-4561-4775-afd7-f995c7794eb9-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 13:06:00 crc kubenswrapper[4881]: I0121 13:06:00.365754 4881 generic.go:334] "Generic (PLEG): container finished" podID="1c1f2821-4561-4775-afd7-f995c7794eb9" containerID="f1b1412a13772a8513f335c0801239091b7358315b12cc6ba6559e7a455c8685" exitCode=0 Jan 21 13:06:00 crc kubenswrapper[4881]: I0121 13:06:00.365814 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-klj4j" event={"ID":"1c1f2821-4561-4775-afd7-f995c7794eb9","Type":"ContainerDied","Data":"f1b1412a13772a8513f335c0801239091b7358315b12cc6ba6559e7a455c8685"} Jan 21 13:06:00 crc kubenswrapper[4881]: I0121 13:06:00.365855 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-klj4j" event={"ID":"1c1f2821-4561-4775-afd7-f995c7794eb9","Type":"ContainerDied","Data":"5912b65cbe84841f73cf1f4bf22a99a12dcb5a75557795c084a467bac35321b7"} Jan 21 13:06:00 crc kubenswrapper[4881]: I0121 13:06:00.365876 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-klj4j" Jan 21 13:06:00 crc kubenswrapper[4881]: I0121 13:06:00.365901 4881 scope.go:117] "RemoveContainer" containerID="f1b1412a13772a8513f335c0801239091b7358315b12cc6ba6559e7a455c8685" Jan 21 13:06:00 crc kubenswrapper[4881]: I0121 13:06:00.410099 4881 scope.go:117] "RemoveContainer" containerID="71abf720f18590d40c2cbb24e9f89c5138503153c763f029873791959d9b57f6" Jan 21 13:06:00 crc kubenswrapper[4881]: I0121 13:06:00.421201 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-klj4j"] Jan 21 13:06:00 crc kubenswrapper[4881]: I0121 13:06:00.430407 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-klj4j"] Jan 21 13:06:00 crc kubenswrapper[4881]: I0121 13:06:00.440523 4881 scope.go:117] "RemoveContainer" containerID="b1795ee85622e8be16c281770c1151c3435236a0db4fe5ab1cd387997e3d12e4" Jan 21 13:06:00 crc kubenswrapper[4881]: I0121 13:06:00.506960 4881 scope.go:117] "RemoveContainer" containerID="f1b1412a13772a8513f335c0801239091b7358315b12cc6ba6559e7a455c8685" Jan 21 13:06:00 crc kubenswrapper[4881]: E0121 13:06:00.507737 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f1b1412a13772a8513f335c0801239091b7358315b12cc6ba6559e7a455c8685\": container with ID starting with f1b1412a13772a8513f335c0801239091b7358315b12cc6ba6559e7a455c8685 not found: ID does not exist" containerID="f1b1412a13772a8513f335c0801239091b7358315b12cc6ba6559e7a455c8685" Jan 21 13:06:00 crc kubenswrapper[4881]: I0121 13:06:00.507833 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f1b1412a13772a8513f335c0801239091b7358315b12cc6ba6559e7a455c8685"} err="failed to get container status \"f1b1412a13772a8513f335c0801239091b7358315b12cc6ba6559e7a455c8685\": rpc error: code = NotFound desc = could not find container \"f1b1412a13772a8513f335c0801239091b7358315b12cc6ba6559e7a455c8685\": container with ID starting with f1b1412a13772a8513f335c0801239091b7358315b12cc6ba6559e7a455c8685 not found: ID does not exist" Jan 21 13:06:00 crc kubenswrapper[4881]: I0121 13:06:00.507867 4881 scope.go:117] "RemoveContainer" containerID="71abf720f18590d40c2cbb24e9f89c5138503153c763f029873791959d9b57f6" Jan 21 13:06:00 crc kubenswrapper[4881]: E0121 13:06:00.508406 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"71abf720f18590d40c2cbb24e9f89c5138503153c763f029873791959d9b57f6\": container with ID starting with 71abf720f18590d40c2cbb24e9f89c5138503153c763f029873791959d9b57f6 not found: ID does not exist" containerID="71abf720f18590d40c2cbb24e9f89c5138503153c763f029873791959d9b57f6" Jan 21 13:06:00 crc kubenswrapper[4881]: I0121 13:06:00.508437 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"71abf720f18590d40c2cbb24e9f89c5138503153c763f029873791959d9b57f6"} err="failed to get container status \"71abf720f18590d40c2cbb24e9f89c5138503153c763f029873791959d9b57f6\": rpc error: code = NotFound desc = could not find container \"71abf720f18590d40c2cbb24e9f89c5138503153c763f029873791959d9b57f6\": container with ID starting with 71abf720f18590d40c2cbb24e9f89c5138503153c763f029873791959d9b57f6 not found: ID does not exist" Jan 21 13:06:00 crc kubenswrapper[4881]: I0121 13:06:00.508458 4881 scope.go:117] "RemoveContainer" 
containerID="b1795ee85622e8be16c281770c1151c3435236a0db4fe5ab1cd387997e3d12e4" Jan 21 13:06:00 crc kubenswrapper[4881]: E0121 13:06:00.508933 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b1795ee85622e8be16c281770c1151c3435236a0db4fe5ab1cd387997e3d12e4\": container with ID starting with b1795ee85622e8be16c281770c1151c3435236a0db4fe5ab1cd387997e3d12e4 not found: ID does not exist" containerID="b1795ee85622e8be16c281770c1151c3435236a0db4fe5ab1cd387997e3d12e4" Jan 21 13:06:00 crc kubenswrapper[4881]: I0121 13:06:00.508959 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b1795ee85622e8be16c281770c1151c3435236a0db4fe5ab1cd387997e3d12e4"} err="failed to get container status \"b1795ee85622e8be16c281770c1151c3435236a0db4fe5ab1cd387997e3d12e4\": rpc error: code = NotFound desc = could not find container \"b1795ee85622e8be16c281770c1151c3435236a0db4fe5ab1cd387997e3d12e4\": container with ID starting with b1795ee85622e8be16c281770c1151c3435236a0db4fe5ab1cd387997e3d12e4 not found: ID does not exist" Jan 21 13:06:01 crc kubenswrapper[4881]: I0121 13:06:01.339270 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1c1f2821-4561-4775-afd7-f995c7794eb9" path="/var/lib/kubelet/pods/1c1f2821-4561-4775-afd7-f995c7794eb9/volumes" Jan 21 13:06:26 crc kubenswrapper[4881]: I0121 13:06:26.266811 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-gss9b"] Jan 21 13:06:26 crc kubenswrapper[4881]: E0121 13:06:26.267763 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c1f2821-4561-4775-afd7-f995c7794eb9" containerName="extract-utilities" Jan 21 13:06:26 crc kubenswrapper[4881]: I0121 13:06:26.267785 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c1f2821-4561-4775-afd7-f995c7794eb9" containerName="extract-utilities" Jan 21 13:06:26 crc kubenswrapper[4881]: E0121 13:06:26.267825 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c1f2821-4561-4775-afd7-f995c7794eb9" containerName="extract-content" Jan 21 13:06:26 crc kubenswrapper[4881]: I0121 13:06:26.267833 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c1f2821-4561-4775-afd7-f995c7794eb9" containerName="extract-content" Jan 21 13:06:26 crc kubenswrapper[4881]: E0121 13:06:26.267861 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c1f2821-4561-4775-afd7-f995c7794eb9" containerName="registry-server" Jan 21 13:06:26 crc kubenswrapper[4881]: I0121 13:06:26.267871 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c1f2821-4561-4775-afd7-f995c7794eb9" containerName="registry-server" Jan 21 13:06:26 crc kubenswrapper[4881]: I0121 13:06:26.268146 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c1f2821-4561-4775-afd7-f995c7794eb9" containerName="registry-server" Jan 21 13:06:26 crc kubenswrapper[4881]: I0121 13:06:26.270021 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-gss9b" Jan 21 13:06:26 crc kubenswrapper[4881]: I0121 13:06:26.289201 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-gss9b"] Jan 21 13:06:26 crc kubenswrapper[4881]: I0121 13:06:26.298600 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jxxrm\" (UniqueName: \"kubernetes.io/projected/442f5627-e1c1-4ccc-9b75-c011f432c2a8-kube-api-access-jxxrm\") pod \"community-operators-gss9b\" (UID: \"442f5627-e1c1-4ccc-9b75-c011f432c2a8\") " pod="openshift-marketplace/community-operators-gss9b" Jan 21 13:06:26 crc kubenswrapper[4881]: I0121 13:06:26.298889 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/442f5627-e1c1-4ccc-9b75-c011f432c2a8-catalog-content\") pod \"community-operators-gss9b\" (UID: \"442f5627-e1c1-4ccc-9b75-c011f432c2a8\") " pod="openshift-marketplace/community-operators-gss9b" Jan 21 13:06:26 crc kubenswrapper[4881]: I0121 13:06:26.299636 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/442f5627-e1c1-4ccc-9b75-c011f432c2a8-utilities\") pod \"community-operators-gss9b\" (UID: \"442f5627-e1c1-4ccc-9b75-c011f432c2a8\") " pod="openshift-marketplace/community-operators-gss9b" Jan 21 13:06:26 crc kubenswrapper[4881]: I0121 13:06:26.401230 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jxxrm\" (UniqueName: \"kubernetes.io/projected/442f5627-e1c1-4ccc-9b75-c011f432c2a8-kube-api-access-jxxrm\") pod \"community-operators-gss9b\" (UID: \"442f5627-e1c1-4ccc-9b75-c011f432c2a8\") " pod="openshift-marketplace/community-operators-gss9b" Jan 21 13:06:26 crc kubenswrapper[4881]: I0121 13:06:26.401728 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/442f5627-e1c1-4ccc-9b75-c011f432c2a8-catalog-content\") pod \"community-operators-gss9b\" (UID: \"442f5627-e1c1-4ccc-9b75-c011f432c2a8\") " pod="openshift-marketplace/community-operators-gss9b" Jan 21 13:06:26 crc kubenswrapper[4881]: I0121 13:06:26.402042 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/442f5627-e1c1-4ccc-9b75-c011f432c2a8-utilities\") pod \"community-operators-gss9b\" (UID: \"442f5627-e1c1-4ccc-9b75-c011f432c2a8\") " pod="openshift-marketplace/community-operators-gss9b" Jan 21 13:06:26 crc kubenswrapper[4881]: I0121 13:06:26.402529 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/442f5627-e1c1-4ccc-9b75-c011f432c2a8-utilities\") pod \"community-operators-gss9b\" (UID: \"442f5627-e1c1-4ccc-9b75-c011f432c2a8\") " pod="openshift-marketplace/community-operators-gss9b" Jan 21 13:06:26 crc kubenswrapper[4881]: I0121 13:06:26.402789 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/442f5627-e1c1-4ccc-9b75-c011f432c2a8-catalog-content\") pod \"community-operators-gss9b\" (UID: \"442f5627-e1c1-4ccc-9b75-c011f432c2a8\") " pod="openshift-marketplace/community-operators-gss9b" Jan 21 13:06:26 crc kubenswrapper[4881]: I0121 13:06:26.423889 4881 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-jxxrm\" (UniqueName: \"kubernetes.io/projected/442f5627-e1c1-4ccc-9b75-c011f432c2a8-kube-api-access-jxxrm\") pod \"community-operators-gss9b\" (UID: \"442f5627-e1c1-4ccc-9b75-c011f432c2a8\") " pod="openshift-marketplace/community-operators-gss9b" Jan 21 13:06:26 crc kubenswrapper[4881]: I0121 13:06:26.644851 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gss9b" Jan 21 13:06:27 crc kubenswrapper[4881]: I0121 13:06:27.186207 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-gss9b"] Jan 21 13:06:27 crc kubenswrapper[4881]: I0121 13:06:27.790176 4881 generic.go:334] "Generic (PLEG): container finished" podID="442f5627-e1c1-4ccc-9b75-c011f432c2a8" containerID="5e7e7c9ddb17ce2fda50d8009f6372fc579b02d4dfffbc72d9a91591a834ccd8" exitCode=0 Jan 21 13:06:27 crc kubenswrapper[4881]: I0121 13:06:27.790424 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gss9b" event={"ID":"442f5627-e1c1-4ccc-9b75-c011f432c2a8","Type":"ContainerDied","Data":"5e7e7c9ddb17ce2fda50d8009f6372fc579b02d4dfffbc72d9a91591a834ccd8"} Jan 21 13:06:27 crc kubenswrapper[4881]: I0121 13:06:27.790909 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gss9b" event={"ID":"442f5627-e1c1-4ccc-9b75-c011f432c2a8","Type":"ContainerStarted","Data":"13d74233d2fee10bbf68c00871b803fb4c61e118c339bfc524797906efc7d658"} Jan 21 13:06:29 crc kubenswrapper[4881]: I0121 13:06:29.886502 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gss9b" event={"ID":"442f5627-e1c1-4ccc-9b75-c011f432c2a8","Type":"ContainerStarted","Data":"d3455e35e2c9626cf2ac0d5851973407a0f80cd2ba16102bea88b2eca02723df"} Jan 21 13:06:30 crc kubenswrapper[4881]: I0121 13:06:30.902218 4881 generic.go:334] "Generic (PLEG): container finished" podID="442f5627-e1c1-4ccc-9b75-c011f432c2a8" containerID="d3455e35e2c9626cf2ac0d5851973407a0f80cd2ba16102bea88b2eca02723df" exitCode=0 Jan 21 13:06:30 crc kubenswrapper[4881]: I0121 13:06:30.902267 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gss9b" event={"ID":"442f5627-e1c1-4ccc-9b75-c011f432c2a8","Type":"ContainerDied","Data":"d3455e35e2c9626cf2ac0d5851973407a0f80cd2ba16102bea88b2eca02723df"} Jan 21 13:06:30 crc kubenswrapper[4881]: I0121 13:06:30.902732 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gss9b" event={"ID":"442f5627-e1c1-4ccc-9b75-c011f432c2a8","Type":"ContainerStarted","Data":"6ad24cd9e583477a5a1245dcbae85883798e9c12fde8b0dd24ac9d2a5b2f2926"} Jan 21 13:06:30 crc kubenswrapper[4881]: I0121 13:06:30.935988 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-gss9b" podStartSLOduration=2.397782226 podStartE2EDuration="4.935965455s" podCreationTimestamp="2026-01-21 13:06:26 +0000 UTC" firstStartedPulling="2026-01-21 13:06:27.792633796 +0000 UTC m=+7775.052590265" lastFinishedPulling="2026-01-21 13:06:30.330816985 +0000 UTC m=+7777.590773494" observedRunningTime="2026-01-21 13:06:30.924361344 +0000 UTC m=+7778.184317843" watchObservedRunningTime="2026-01-21 13:06:30.935965455 +0000 UTC m=+7778.195921924" Jan 21 13:06:35 crc kubenswrapper[4881]: I0121 13:06:35.440250 4881 scope.go:117] "RemoveContainer" 
containerID="adc0b5280c47db093a6ec180a9e5726fbeb5b4a901615e6f06978e816e37c4a2" Jan 21 13:06:36 crc kubenswrapper[4881]: I0121 13:06:36.645112 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-gss9b" Jan 21 13:06:36 crc kubenswrapper[4881]: I0121 13:06:36.645447 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-gss9b" Jan 21 13:06:36 crc kubenswrapper[4881]: I0121 13:06:36.726202 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-gss9b" Jan 21 13:06:37 crc kubenswrapper[4881]: I0121 13:06:37.036762 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-gss9b" Jan 21 13:06:37 crc kubenswrapper[4881]: I0121 13:06:37.098972 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-gss9b"] Jan 21 13:06:39 crc kubenswrapper[4881]: I0121 13:06:39.004185 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-gss9b" podUID="442f5627-e1c1-4ccc-9b75-c011f432c2a8" containerName="registry-server" containerID="cri-o://6ad24cd9e583477a5a1245dcbae85883798e9c12fde8b0dd24ac9d2a5b2f2926" gracePeriod=2 Jan 21 13:06:39 crc kubenswrapper[4881]: I0121 13:06:39.534934 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gss9b" Jan 21 13:06:39 crc kubenswrapper[4881]: I0121 13:06:39.726077 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jxxrm\" (UniqueName: \"kubernetes.io/projected/442f5627-e1c1-4ccc-9b75-c011f432c2a8-kube-api-access-jxxrm\") pod \"442f5627-e1c1-4ccc-9b75-c011f432c2a8\" (UID: \"442f5627-e1c1-4ccc-9b75-c011f432c2a8\") " Jan 21 13:06:39 crc kubenswrapper[4881]: I0121 13:06:39.726309 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/442f5627-e1c1-4ccc-9b75-c011f432c2a8-utilities\") pod \"442f5627-e1c1-4ccc-9b75-c011f432c2a8\" (UID: \"442f5627-e1c1-4ccc-9b75-c011f432c2a8\") " Jan 21 13:06:39 crc kubenswrapper[4881]: I0121 13:06:39.726356 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/442f5627-e1c1-4ccc-9b75-c011f432c2a8-catalog-content\") pod \"442f5627-e1c1-4ccc-9b75-c011f432c2a8\" (UID: \"442f5627-e1c1-4ccc-9b75-c011f432c2a8\") " Jan 21 13:06:39 crc kubenswrapper[4881]: I0121 13:06:39.727449 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/442f5627-e1c1-4ccc-9b75-c011f432c2a8-utilities" (OuterVolumeSpecName: "utilities") pod "442f5627-e1c1-4ccc-9b75-c011f432c2a8" (UID: "442f5627-e1c1-4ccc-9b75-c011f432c2a8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 13:06:39 crc kubenswrapper[4881]: I0121 13:06:39.739759 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/442f5627-e1c1-4ccc-9b75-c011f432c2a8-kube-api-access-jxxrm" (OuterVolumeSpecName: "kube-api-access-jxxrm") pod "442f5627-e1c1-4ccc-9b75-c011f432c2a8" (UID: "442f5627-e1c1-4ccc-9b75-c011f432c2a8"). InnerVolumeSpecName "kube-api-access-jxxrm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:06:39 crc kubenswrapper[4881]: I0121 13:06:39.829069 4881 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/442f5627-e1c1-4ccc-9b75-c011f432c2a8-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 13:06:39 crc kubenswrapper[4881]: I0121 13:06:39.829114 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jxxrm\" (UniqueName: \"kubernetes.io/projected/442f5627-e1c1-4ccc-9b75-c011f432c2a8-kube-api-access-jxxrm\") on node \"crc\" DevicePath \"\"" Jan 21 13:06:39 crc kubenswrapper[4881]: I0121 13:06:39.888944 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/442f5627-e1c1-4ccc-9b75-c011f432c2a8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "442f5627-e1c1-4ccc-9b75-c011f432c2a8" (UID: "442f5627-e1c1-4ccc-9b75-c011f432c2a8"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 13:06:39 crc kubenswrapper[4881]: I0121 13:06:39.931115 4881 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/442f5627-e1c1-4ccc-9b75-c011f432c2a8-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 13:06:40 crc kubenswrapper[4881]: I0121 13:06:40.016155 4881 generic.go:334] "Generic (PLEG): container finished" podID="442f5627-e1c1-4ccc-9b75-c011f432c2a8" containerID="6ad24cd9e583477a5a1245dcbae85883798e9c12fde8b0dd24ac9d2a5b2f2926" exitCode=0 Jan 21 13:06:40 crc kubenswrapper[4881]: I0121 13:06:40.016213 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gss9b" event={"ID":"442f5627-e1c1-4ccc-9b75-c011f432c2a8","Type":"ContainerDied","Data":"6ad24cd9e583477a5a1245dcbae85883798e9c12fde8b0dd24ac9d2a5b2f2926"} Jan 21 13:06:40 crc kubenswrapper[4881]: I0121 13:06:40.016245 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gss9b" event={"ID":"442f5627-e1c1-4ccc-9b75-c011f432c2a8","Type":"ContainerDied","Data":"13d74233d2fee10bbf68c00871b803fb4c61e118c339bfc524797906efc7d658"} Jan 21 13:06:40 crc kubenswrapper[4881]: I0121 13:06:40.016251 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-gss9b" Jan 21 13:06:40 crc kubenswrapper[4881]: I0121 13:06:40.016264 4881 scope.go:117] "RemoveContainer" containerID="6ad24cd9e583477a5a1245dcbae85883798e9c12fde8b0dd24ac9d2a5b2f2926" Jan 21 13:06:40 crc kubenswrapper[4881]: I0121 13:06:40.047441 4881 scope.go:117] "RemoveContainer" containerID="d3455e35e2c9626cf2ac0d5851973407a0f80cd2ba16102bea88b2eca02723df" Jan 21 13:06:40 crc kubenswrapper[4881]: I0121 13:06:40.091213 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-gss9b"] Jan 21 13:06:40 crc kubenswrapper[4881]: I0121 13:06:40.094460 4881 scope.go:117] "RemoveContainer" containerID="5e7e7c9ddb17ce2fda50d8009f6372fc579b02d4dfffbc72d9a91591a834ccd8" Jan 21 13:06:40 crc kubenswrapper[4881]: I0121 13:06:40.107399 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-gss9b"] Jan 21 13:06:40 crc kubenswrapper[4881]: I0121 13:06:40.137407 4881 scope.go:117] "RemoveContainer" containerID="6ad24cd9e583477a5a1245dcbae85883798e9c12fde8b0dd24ac9d2a5b2f2926" Jan 21 13:06:40 crc kubenswrapper[4881]: E0121 13:06:40.137873 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6ad24cd9e583477a5a1245dcbae85883798e9c12fde8b0dd24ac9d2a5b2f2926\": container with ID starting with 6ad24cd9e583477a5a1245dcbae85883798e9c12fde8b0dd24ac9d2a5b2f2926 not found: ID does not exist" containerID="6ad24cd9e583477a5a1245dcbae85883798e9c12fde8b0dd24ac9d2a5b2f2926" Jan 21 13:06:40 crc kubenswrapper[4881]: I0121 13:06:40.137903 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6ad24cd9e583477a5a1245dcbae85883798e9c12fde8b0dd24ac9d2a5b2f2926"} err="failed to get container status \"6ad24cd9e583477a5a1245dcbae85883798e9c12fde8b0dd24ac9d2a5b2f2926\": rpc error: code = NotFound desc = could not find container \"6ad24cd9e583477a5a1245dcbae85883798e9c12fde8b0dd24ac9d2a5b2f2926\": container with ID starting with 6ad24cd9e583477a5a1245dcbae85883798e9c12fde8b0dd24ac9d2a5b2f2926 not found: ID does not exist" Jan 21 13:06:40 crc kubenswrapper[4881]: I0121 13:06:40.137925 4881 scope.go:117] "RemoveContainer" containerID="d3455e35e2c9626cf2ac0d5851973407a0f80cd2ba16102bea88b2eca02723df" Jan 21 13:06:40 crc kubenswrapper[4881]: E0121 13:06:40.138194 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d3455e35e2c9626cf2ac0d5851973407a0f80cd2ba16102bea88b2eca02723df\": container with ID starting with d3455e35e2c9626cf2ac0d5851973407a0f80cd2ba16102bea88b2eca02723df not found: ID does not exist" containerID="d3455e35e2c9626cf2ac0d5851973407a0f80cd2ba16102bea88b2eca02723df" Jan 21 13:06:40 crc kubenswrapper[4881]: I0121 13:06:40.138213 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d3455e35e2c9626cf2ac0d5851973407a0f80cd2ba16102bea88b2eca02723df"} err="failed to get container status \"d3455e35e2c9626cf2ac0d5851973407a0f80cd2ba16102bea88b2eca02723df\": rpc error: code = NotFound desc = could not find container \"d3455e35e2c9626cf2ac0d5851973407a0f80cd2ba16102bea88b2eca02723df\": container with ID starting with d3455e35e2c9626cf2ac0d5851973407a0f80cd2ba16102bea88b2eca02723df not found: ID does not exist" Jan 21 13:06:40 crc kubenswrapper[4881]: I0121 13:06:40.138226 4881 scope.go:117] "RemoveContainer" 
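The paired "ContainerStatus from runtime service failed" / "DeleteContainer returned error" entries above are benign: the container is already gone, and the runtime answers the status query with gRPC NotFound, which the cleanup path logs and tolerates. A sketch of how such an error can be classified, assuming google.golang.org/grpc is available; this mirrors the tolerant handling visible in the log rather than quoting kubelet source:

package main

import (
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// alreadyRemoved reports whether a CRI call failed only because the
// container no longer exists, which a deletion path can safely ignore.
func alreadyRemoved(err error) bool {
	return status.Code(err) == codes.NotFound
}

func main() {
	// Shape of the error seen in the log entries above.
	err := status.Error(codes.NotFound, "could not find container")
	fmt.Println(alreadyRemoved(err)) // true: log it and move on
}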
containerID="5e7e7c9ddb17ce2fda50d8009f6372fc579b02d4dfffbc72d9a91591a834ccd8" Jan 21 13:06:40 crc kubenswrapper[4881]: E0121 13:06:40.138389 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5e7e7c9ddb17ce2fda50d8009f6372fc579b02d4dfffbc72d9a91591a834ccd8\": container with ID starting with 5e7e7c9ddb17ce2fda50d8009f6372fc579b02d4dfffbc72d9a91591a834ccd8 not found: ID does not exist" containerID="5e7e7c9ddb17ce2fda50d8009f6372fc579b02d4dfffbc72d9a91591a834ccd8" Jan 21 13:06:40 crc kubenswrapper[4881]: I0121 13:06:40.138402 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5e7e7c9ddb17ce2fda50d8009f6372fc579b02d4dfffbc72d9a91591a834ccd8"} err="failed to get container status \"5e7e7c9ddb17ce2fda50d8009f6372fc579b02d4dfffbc72d9a91591a834ccd8\": rpc error: code = NotFound desc = could not find container \"5e7e7c9ddb17ce2fda50d8009f6372fc579b02d4dfffbc72d9a91591a834ccd8\": container with ID starting with 5e7e7c9ddb17ce2fda50d8009f6372fc579b02d4dfffbc72d9a91591a834ccd8 not found: ID does not exist" Jan 21 13:06:41 crc kubenswrapper[4881]: I0121 13:06:41.323591 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="442f5627-e1c1-4ccc-9b75-c011f432c2a8" path="/var/lib/kubelet/pods/442f5627-e1c1-4ccc-9b75-c011f432c2a8/volumes" Jan 21 13:07:59 crc kubenswrapper[4881]: I0121 13:07:59.851541 4881 patch_prober.go:28] interesting pod/machine-config-daemon-fb4fr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 13:07:59 crc kubenswrapper[4881]: I0121 13:07:59.852051 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 13:08:29 crc kubenswrapper[4881]: I0121 13:08:29.851856 4881 patch_prober.go:28] interesting pod/machine-config-daemon-fb4fr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 13:08:29 crc kubenswrapper[4881]: I0121 13:08:29.852706 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 13:08:59 crc kubenswrapper[4881]: I0121 13:08:59.851360 4881 patch_prober.go:28] interesting pod/machine-config-daemon-fb4fr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 13:08:59 crc kubenswrapper[4881]: I0121 13:08:59.851974 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 13:08:59 crc kubenswrapper[4881]: I0121 13:08:59.852063 4881 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" Jan 21 13:08:59 crc kubenswrapper[4881]: I0121 13:08:59.853180 4881 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"982d26bca9ae8535bd5c23122103aa1521012b2265c5406dc793a0fdc4c46b01"} pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 13:08:59 crc kubenswrapper[4881]: I0121 13:08:59.853276 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" containerName="machine-config-daemon" containerID="cri-o://982d26bca9ae8535bd5c23122103aa1521012b2265c5406dc793a0fdc4c46b01" gracePeriod=600 Jan 21 13:09:00 crc kubenswrapper[4881]: I0121 13:09:00.470307 4881 generic.go:334] "Generic (PLEG): container finished" podID="3687b313-1df2-4274-80db-8c758b51bf2d" containerID="982d26bca9ae8535bd5c23122103aa1521012b2265c5406dc793a0fdc4c46b01" exitCode=0 Jan 21 13:09:00 crc kubenswrapper[4881]: I0121 13:09:00.470371 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" event={"ID":"3687b313-1df2-4274-80db-8c758b51bf2d","Type":"ContainerDied","Data":"982d26bca9ae8535bd5c23122103aa1521012b2265c5406dc793a0fdc4c46b01"} Jan 21 13:09:00 crc kubenswrapper[4881]: I0121 13:09:00.471171 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" event={"ID":"3687b313-1df2-4274-80db-8c758b51bf2d","Type":"ContainerStarted","Data":"552bc8eceba0a6e2de711a18f9749360715dc3263cb9e778a7c7f74c86bf256a"} Jan 21 13:09:00 crc kubenswrapper[4881]: I0121 13:09:00.471235 4881 scope.go:117] "RemoveContainer" containerID="c02cffb330d8abf35eb1054cc9e801224a9ea6598c8173b5cd69e9d284d2327a" Jan 21 13:09:35 crc kubenswrapper[4881]: I0121 13:09:35.826405 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-tz87r"] Jan 21 13:09:35 crc kubenswrapper[4881]: E0121 13:09:35.827583 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="442f5627-e1c1-4ccc-9b75-c011f432c2a8" containerName="extract-utilities" Jan 21 13:09:35 crc kubenswrapper[4881]: I0121 13:09:35.827616 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="442f5627-e1c1-4ccc-9b75-c011f432c2a8" containerName="extract-utilities" Jan 21 13:09:35 crc kubenswrapper[4881]: E0121 13:09:35.827644 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="442f5627-e1c1-4ccc-9b75-c011f432c2a8" containerName="extract-content" Jan 21 13:09:35 crc kubenswrapper[4881]: I0121 13:09:35.827652 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="442f5627-e1c1-4ccc-9b75-c011f432c2a8" containerName="extract-content" Jan 21 13:09:35 crc kubenswrapper[4881]: E0121 13:09:35.827663 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="442f5627-e1c1-4ccc-9b75-c011f432c2a8" containerName="registry-server" Jan 21 13:09:35 crc kubenswrapper[4881]: I0121 13:09:35.827671 4881 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="442f5627-e1c1-4ccc-9b75-c011f432c2a8" containerName="registry-server" Jan 21 13:09:35 crc kubenswrapper[4881]: I0121 13:09:35.827963 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="442f5627-e1c1-4ccc-9b75-c011f432c2a8" containerName="registry-server" Jan 21 13:09:35 crc kubenswrapper[4881]: I0121 13:09:35.831595 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-tz87r" Jan 21 13:09:35 crc kubenswrapper[4881]: I0121 13:09:35.848416 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-tz87r"] Jan 21 13:09:35 crc kubenswrapper[4881]: I0121 13:09:35.981462 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-htw9q\" (UniqueName: \"kubernetes.io/projected/7a26c7f3-1ab1-4718-b38e-e7312fe50035-kube-api-access-htw9q\") pod \"redhat-operators-tz87r\" (UID: \"7a26c7f3-1ab1-4718-b38e-e7312fe50035\") " pod="openshift-marketplace/redhat-operators-tz87r" Jan 21 13:09:35 crc kubenswrapper[4881]: I0121 13:09:35.981664 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7a26c7f3-1ab1-4718-b38e-e7312fe50035-utilities\") pod \"redhat-operators-tz87r\" (UID: \"7a26c7f3-1ab1-4718-b38e-e7312fe50035\") " pod="openshift-marketplace/redhat-operators-tz87r" Jan 21 13:09:35 crc kubenswrapper[4881]: I0121 13:09:35.981943 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7a26c7f3-1ab1-4718-b38e-e7312fe50035-catalog-content\") pod \"redhat-operators-tz87r\" (UID: \"7a26c7f3-1ab1-4718-b38e-e7312fe50035\") " pod="openshift-marketplace/redhat-operators-tz87r" Jan 21 13:09:36 crc kubenswrapper[4881]: I0121 13:09:36.084637 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7a26c7f3-1ab1-4718-b38e-e7312fe50035-catalog-content\") pod \"redhat-operators-tz87r\" (UID: \"7a26c7f3-1ab1-4718-b38e-e7312fe50035\") " pod="openshift-marketplace/redhat-operators-tz87r" Jan 21 13:09:36 crc kubenswrapper[4881]: I0121 13:09:36.084748 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-htw9q\" (UniqueName: \"kubernetes.io/projected/7a26c7f3-1ab1-4718-b38e-e7312fe50035-kube-api-access-htw9q\") pod \"redhat-operators-tz87r\" (UID: \"7a26c7f3-1ab1-4718-b38e-e7312fe50035\") " pod="openshift-marketplace/redhat-operators-tz87r" Jan 21 13:09:36 crc kubenswrapper[4881]: I0121 13:09:36.084818 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7a26c7f3-1ab1-4718-b38e-e7312fe50035-utilities\") pod \"redhat-operators-tz87r\" (UID: \"7a26c7f3-1ab1-4718-b38e-e7312fe50035\") " pod="openshift-marketplace/redhat-operators-tz87r" Jan 21 13:09:36 crc kubenswrapper[4881]: I0121 13:09:36.085317 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7a26c7f3-1ab1-4718-b38e-e7312fe50035-utilities\") pod \"redhat-operators-tz87r\" (UID: \"7a26c7f3-1ab1-4718-b38e-e7312fe50035\") " pod="openshift-marketplace/redhat-operators-tz87r" Jan 21 13:09:36 crc kubenswrapper[4881]: I0121 13:09:36.085474 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7a26c7f3-1ab1-4718-b38e-e7312fe50035-catalog-content\") pod \"redhat-operators-tz87r\" (UID: \"7a26c7f3-1ab1-4718-b38e-e7312fe50035\") " pod="openshift-marketplace/redhat-operators-tz87r" Jan 21 13:09:36 crc kubenswrapper[4881]: I0121 13:09:36.113386 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-htw9q\" (UniqueName: \"kubernetes.io/projected/7a26c7f3-1ab1-4718-b38e-e7312fe50035-kube-api-access-htw9q\") pod \"redhat-operators-tz87r\" (UID: \"7a26c7f3-1ab1-4718-b38e-e7312fe50035\") " pod="openshift-marketplace/redhat-operators-tz87r" Jan 21 13:09:36 crc kubenswrapper[4881]: I0121 13:09:36.157085 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-tz87r" Jan 21 13:09:36 crc kubenswrapper[4881]: I0121 13:09:36.651257 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-tz87r"] Jan 21 13:09:36 crc kubenswrapper[4881]: I0121 13:09:36.912851 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tz87r" event={"ID":"7a26c7f3-1ab1-4718-b38e-e7312fe50035","Type":"ContainerStarted","Data":"6fe7338bc95ad2647c2843d63b62e9c74936582099c757697291e6aa090f1c82"} Jan 21 13:09:37 crc kubenswrapper[4881]: I0121 13:09:37.931467 4881 generic.go:334] "Generic (PLEG): container finished" podID="7a26c7f3-1ab1-4718-b38e-e7312fe50035" containerID="e6d278b12f74a0eb2e4b0567a3236a047127fdb81f8d14d9c8935ae978677831" exitCode=0 Jan 21 13:09:37 crc kubenswrapper[4881]: I0121 13:09:37.931733 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tz87r" event={"ID":"7a26c7f3-1ab1-4718-b38e-e7312fe50035","Type":"ContainerDied","Data":"e6d278b12f74a0eb2e4b0567a3236a047127fdb81f8d14d9c8935ae978677831"} Jan 21 13:09:37 crc kubenswrapper[4881]: I0121 13:09:37.936544 4881 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 13:09:40 crc kubenswrapper[4881]: I0121 13:09:40.970857 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tz87r" event={"ID":"7a26c7f3-1ab1-4718-b38e-e7312fe50035","Type":"ContainerStarted","Data":"dff38752b6a3e2077f07d070df230f1781b4a0baa15f1d79eded0d367d0049c2"} Jan 21 13:09:48 crc kubenswrapper[4881]: I0121 13:09:48.091538 4881 generic.go:334] "Generic (PLEG): container finished" podID="7a26c7f3-1ab1-4718-b38e-e7312fe50035" containerID="dff38752b6a3e2077f07d070df230f1781b4a0baa15f1d79eded0d367d0049c2" exitCode=0 Jan 21 13:09:48 crc kubenswrapper[4881]: I0121 13:09:48.091976 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tz87r" event={"ID":"7a26c7f3-1ab1-4718-b38e-e7312fe50035","Type":"ContainerDied","Data":"dff38752b6a3e2077f07d070df230f1781b4a0baa15f1d79eded0d367d0049c2"} Jan 21 13:09:52 crc kubenswrapper[4881]: I0121 13:09:52.133363 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tz87r" event={"ID":"7a26c7f3-1ab1-4718-b38e-e7312fe50035","Type":"ContainerStarted","Data":"7aacdf3842bfabc09a423a31fb163598b5ed593b68754535d446e7346bc57ef0"} Jan 21 13:09:52 crc kubenswrapper[4881]: I0121 13:09:52.160153 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-tz87r" podStartSLOduration=4.592580994 podStartE2EDuration="17.160099374s" 
podCreationTimestamp="2026-01-21 13:09:35 +0000 UTC" firstStartedPulling="2026-01-21 13:09:37.936069784 +0000 UTC m=+7965.196026263" lastFinishedPulling="2026-01-21 13:09:50.503588174 +0000 UTC m=+7977.763544643" observedRunningTime="2026-01-21 13:09:52.154919148 +0000 UTC m=+7979.414875637" watchObservedRunningTime="2026-01-21 13:09:52.160099374 +0000 UTC m=+7979.420055853" Jan 21 13:09:56 crc kubenswrapper[4881]: I0121 13:09:56.158170 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-tz87r" Jan 21 13:09:56 crc kubenswrapper[4881]: I0121 13:09:56.161343 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-tz87r" Jan 21 13:09:57 crc kubenswrapper[4881]: I0121 13:09:57.239618 4881 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-tz87r" podUID="7a26c7f3-1ab1-4718-b38e-e7312fe50035" containerName="registry-server" probeResult="failure" output=< Jan 21 13:09:57 crc kubenswrapper[4881]: timeout: failed to connect service ":50051" within 1s Jan 21 13:09:57 crc kubenswrapper[4881]: > Jan 21 13:10:06 crc kubenswrapper[4881]: I0121 13:10:06.210327 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-tz87r" Jan 21 13:10:06 crc kubenswrapper[4881]: I0121 13:10:06.269877 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-tz87r" Jan 21 13:10:07 crc kubenswrapper[4881]: I0121 13:10:07.032871 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-tz87r"] Jan 21 13:10:07 crc kubenswrapper[4881]: I0121 13:10:07.299082 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-tz87r" podUID="7a26c7f3-1ab1-4718-b38e-e7312fe50035" containerName="registry-server" containerID="cri-o://7aacdf3842bfabc09a423a31fb163598b5ed593b68754535d446e7346bc57ef0" gracePeriod=2 Jan 21 13:10:07 crc kubenswrapper[4881]: I0121 13:10:07.818465 4881 util.go:48] "No ready sandbox for pod can be found. 
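The Startup probe output above, timeout: failed to connect service ":50051" within 1s, is a connect-with-deadline against the registry-server's gRPC port before it had begun listening. A plain TCP dial in Go reproduces the shape of that check; the real probe is a gRPC health check, so treat this as an approximation, with the port and 1s timeout taken from the logged message and the host assumed:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Port and timeout from the probe output above; host is assumed.
	conn, err := net.DialTimeout("tcp", "localhost:50051", time.Second)
	if err != nil {
		// Before the registry-server starts listening, this is the
		// failure the startup probe keeps reporting.
		fmt.Println("startup probe fails:", err)
		return
	}
	conn.Close()
	fmt.Println("startup probe passes")
}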
Jan 21 13:10:07 crc kubenswrapper[4881]: I0121 13:10:07.818465 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-tz87r"
Jan 21 13:10:08 crc kubenswrapper[4881]: I0121 13:10:08.120518 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7a26c7f3-1ab1-4718-b38e-e7312fe50035-catalog-content\") pod \"7a26c7f3-1ab1-4718-b38e-e7312fe50035\" (UID: \"7a26c7f3-1ab1-4718-b38e-e7312fe50035\") "
Jan 21 13:10:08 crc kubenswrapper[4881]: I0121 13:10:08.120739 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htw9q\" (UniqueName: \"kubernetes.io/projected/7a26c7f3-1ab1-4718-b38e-e7312fe50035-kube-api-access-htw9q\") pod \"7a26c7f3-1ab1-4718-b38e-e7312fe50035\" (UID: \"7a26c7f3-1ab1-4718-b38e-e7312fe50035\") "
Jan 21 13:10:08 crc kubenswrapper[4881]: I0121 13:10:08.120820 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7a26c7f3-1ab1-4718-b38e-e7312fe50035-utilities\") pod \"7a26c7f3-1ab1-4718-b38e-e7312fe50035\" (UID: \"7a26c7f3-1ab1-4718-b38e-e7312fe50035\") "
Jan 21 13:10:08 crc kubenswrapper[4881]: I0121 13:10:08.122571 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7a26c7f3-1ab1-4718-b38e-e7312fe50035-utilities" (OuterVolumeSpecName: "utilities") pod "7a26c7f3-1ab1-4718-b38e-e7312fe50035" (UID: "7a26c7f3-1ab1-4718-b38e-e7312fe50035"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 13:10:08 crc kubenswrapper[4881]: I0121 13:10:08.135752 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7a26c7f3-1ab1-4718-b38e-e7312fe50035-kube-api-access-htw9q" (OuterVolumeSpecName: "kube-api-access-htw9q") pod "7a26c7f3-1ab1-4718-b38e-e7312fe50035" (UID: "7a26c7f3-1ab1-4718-b38e-e7312fe50035"). InnerVolumeSpecName "kube-api-access-htw9q". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 13:10:08 crc kubenswrapper[4881]: I0121 13:10:08.223514 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htw9q\" (UniqueName: \"kubernetes.io/projected/7a26c7f3-1ab1-4718-b38e-e7312fe50035-kube-api-access-htw9q\") on node \"crc\" DevicePath \"\""
Jan 21 13:10:08 crc kubenswrapper[4881]: I0121 13:10:08.223543 4881 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7a26c7f3-1ab1-4718-b38e-e7312fe50035-utilities\") on node \"crc\" DevicePath \"\""
Jan 21 13:10:08 crc kubenswrapper[4881]: I0121 13:10:08.238678 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7a26c7f3-1ab1-4718-b38e-e7312fe50035-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7a26c7f3-1ab1-4718-b38e-e7312fe50035" (UID: "7a26c7f3-1ab1-4718-b38e-e7312fe50035"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 13:10:08 crc kubenswrapper[4881]: I0121 13:10:08.316011 4881 generic.go:334] "Generic (PLEG): container finished" podID="7a26c7f3-1ab1-4718-b38e-e7312fe50035" containerID="7aacdf3842bfabc09a423a31fb163598b5ed593b68754535d446e7346bc57ef0" exitCode=0
Jan 21 13:10:08 crc kubenswrapper[4881]: I0121 13:10:08.316068 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tz87r" event={"ID":"7a26c7f3-1ab1-4718-b38e-e7312fe50035","Type":"ContainerDied","Data":"7aacdf3842bfabc09a423a31fb163598b5ed593b68754535d446e7346bc57ef0"}
Jan 21 13:10:08 crc kubenswrapper[4881]: I0121 13:10:08.316149 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tz87r" event={"ID":"7a26c7f3-1ab1-4718-b38e-e7312fe50035","Type":"ContainerDied","Data":"6fe7338bc95ad2647c2843d63b62e9c74936582099c757697291e6aa090f1c82"}
Jan 21 13:10:08 crc kubenswrapper[4881]: I0121 13:10:08.316177 4881 scope.go:117] "RemoveContainer" containerID="7aacdf3842bfabc09a423a31fb163598b5ed593b68754535d446e7346bc57ef0"
Jan 21 13:10:08 crc kubenswrapper[4881]: I0121 13:10:08.318036 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-tz87r"
Jan 21 13:10:08 crc kubenswrapper[4881]: I0121 13:10:08.326200 4881 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7a26c7f3-1ab1-4718-b38e-e7312fe50035-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 21 13:10:08 crc kubenswrapper[4881]: I0121 13:10:08.348871 4881 scope.go:117] "RemoveContainer" containerID="dff38752b6a3e2077f07d070df230f1781b4a0baa15f1d79eded0d367d0049c2"
Jan 21 13:10:08 crc kubenswrapper[4881]: I0121 13:10:08.369701 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-tz87r"]
Jan 21 13:10:08 crc kubenswrapper[4881]: I0121 13:10:08.385599 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-tz87r"]
Jan 21 13:10:08 crc kubenswrapper[4881]: I0121 13:10:08.395338 4881 scope.go:117] "RemoveContainer" containerID="e6d278b12f74a0eb2e4b0567a3236a047127fdb81f8d14d9c8935ae978677831"
Jan 21 13:10:08 crc kubenswrapper[4881]: I0121 13:10:08.436594 4881 scope.go:117] "RemoveContainer" containerID="7aacdf3842bfabc09a423a31fb163598b5ed593b68754535d446e7346bc57ef0"
Jan 21 13:10:08 crc kubenswrapper[4881]: E0121 13:10:08.437086 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7aacdf3842bfabc09a423a31fb163598b5ed593b68754535d446e7346bc57ef0\": container with ID starting with 7aacdf3842bfabc09a423a31fb163598b5ed593b68754535d446e7346bc57ef0 not found: ID does not exist" containerID="7aacdf3842bfabc09a423a31fb163598b5ed593b68754535d446e7346bc57ef0"
Jan 21 13:10:08 crc kubenswrapper[4881]: I0121 13:10:08.437133 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7aacdf3842bfabc09a423a31fb163598b5ed593b68754535d446e7346bc57ef0"} err="failed to get container status \"7aacdf3842bfabc09a423a31fb163598b5ed593b68754535d446e7346bc57ef0\": rpc error: code = NotFound desc = could not find container \"7aacdf3842bfabc09a423a31fb163598b5ed593b68754535d446e7346bc57ef0\": container with ID starting with 7aacdf3842bfabc09a423a31fb163598b5ed593b68754535d446e7346bc57ef0 not found: ID does not exist"
Jan 21 13:10:08 crc kubenswrapper[4881]: I0121 13:10:08.437157 4881 scope.go:117] "RemoveContainer" containerID="dff38752b6a3e2077f07d070df230f1781b4a0baa15f1d79eded0d367d0049c2"
Jan 21 13:10:08 crc kubenswrapper[4881]: E0121 13:10:08.437345 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dff38752b6a3e2077f07d070df230f1781b4a0baa15f1d79eded0d367d0049c2\": container with ID starting with dff38752b6a3e2077f07d070df230f1781b4a0baa15f1d79eded0d367d0049c2 not found: ID does not exist" containerID="dff38752b6a3e2077f07d070df230f1781b4a0baa15f1d79eded0d367d0049c2"
Jan 21 13:10:08 crc kubenswrapper[4881]: I0121 13:10:08.437367 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dff38752b6a3e2077f07d070df230f1781b4a0baa15f1d79eded0d367d0049c2"} err="failed to get container status \"dff38752b6a3e2077f07d070df230f1781b4a0baa15f1d79eded0d367d0049c2\": rpc error: code = NotFound desc = could not find container \"dff38752b6a3e2077f07d070df230f1781b4a0baa15f1d79eded0d367d0049c2\": container with ID starting with dff38752b6a3e2077f07d070df230f1781b4a0baa15f1d79eded0d367d0049c2 not found: ID does not exist"
Jan 21 13:10:08 crc kubenswrapper[4881]: I0121 13:10:08.437379 4881 scope.go:117] "RemoveContainer" containerID="e6d278b12f74a0eb2e4b0567a3236a047127fdb81f8d14d9c8935ae978677831"
Jan 21 13:10:08 crc kubenswrapper[4881]: E0121 13:10:08.437567 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e6d278b12f74a0eb2e4b0567a3236a047127fdb81f8d14d9c8935ae978677831\": container with ID starting with e6d278b12f74a0eb2e4b0567a3236a047127fdb81f8d14d9c8935ae978677831 not found: ID does not exist" containerID="e6d278b12f74a0eb2e4b0567a3236a047127fdb81f8d14d9c8935ae978677831"
Jan 21 13:10:08 crc kubenswrapper[4881]: I0121 13:10:08.437593 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e6d278b12f74a0eb2e4b0567a3236a047127fdb81f8d14d9c8935ae978677831"} err="failed to get container status \"e6d278b12f74a0eb2e4b0567a3236a047127fdb81f8d14d9c8935ae978677831\": rpc error: code = NotFound desc = could not find container \"e6d278b12f74a0eb2e4b0567a3236a047127fdb81f8d14d9c8935ae978677831\": container with ID starting with e6d278b12f74a0eb2e4b0567a3236a047127fdb81f8d14d9c8935ae978677831 not found: ID does not exist"
Jan 21 13:10:09 crc kubenswrapper[4881]: I0121 13:10:09.328712 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7a26c7f3-1ab1-4718-b38e-e7312fe50035" path="/var/lib/kubelet/pods/7a26c7f3-1ab1-4718-b38e-e7312fe50035/volumes"
Jan 21 13:10:35 crc kubenswrapper[4881]: I0121 13:10:35.643064 4881 scope.go:117] "RemoveContainer" containerID="52d6e3407218ada320893735ba478f1369a2a54d0c437542b8c2fab3e35c4b65"
Jan 21 13:10:35 crc kubenswrapper[4881]: I0121 13:10:35.697063 4881 scope.go:117] "RemoveContainer" containerID="7ba9268affb7b36ede0c95f07ffb37c2eedb4287b3034bd5ca41d251a17b650e"
Jan 21 13:10:35 crc kubenswrapper[4881]: I0121 13:10:35.775572 4881 scope.go:117] "RemoveContainer" containerID="a613acb4af5b4ff0151733e528bac6fafdfcaaa1c659f0a6b2cc1730debc40e3"
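The three RemoveContainer entries at 13:10:35 are routine garbage collection of exited containers left over from earlier pods. As an illustration only (the keep-count and data below are invented for the sketch and are not read from kubelet's GC policy), the pruning rule can be modeled as keeping the most recently exited containers and removing the rest:

package main

import (
	"fmt"
	"sort"
	"time"
)

type exited struct {
	id       string
	finished time.Time
}

// prune returns the IDs to remove, keeping only the `keep` most recently
// exited containers. Illustrative model, not kubelet's implementation.
func prune(all []exited, keep int) []string {
	sort.Slice(all, func(i, j int) bool { return all[i].finished.After(all[j].finished) })
	if keep > len(all) {
		keep = len(all)
	}
	ids := make([]string, 0, len(all)-keep)
	for _, c := range all[keep:] {
		ids = append(ids, c.id)
	}
	return ids
}

func main() {
	now := time.Now()
	old := []exited{ // IDs abbreviated from the 13:10:35 entries above
		{"52d6e340...", now.Add(-3 * time.Hour)},
		{"7ba9268a...", now.Add(-2 * time.Hour)},
		{"a613acb4...", now.Add(-1 * time.Hour)},
	}
	fmt.Println(prune(old, 0)) // with keep=0, all three are removed
}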
Jan 21 13:11:29 crc kubenswrapper[4881]: I0121 13:11:29.851330 4881 patch_prober.go:28] interesting pod/machine-config-daemon-fb4fr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 21 13:11:29 crc kubenswrapper[4881]: I0121 13:11:29.852060 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 21 13:11:59 crc kubenswrapper[4881]: I0121 13:11:59.851314 4881 patch_prober.go:28] interesting pod/machine-config-daemon-fb4fr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 21 13:11:59 crc kubenswrapper[4881]: I0121 13:11:59.851831 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 21 13:12:29 crc kubenswrapper[4881]: I0121 13:12:29.851424 4881 patch_prober.go:28] interesting pod/machine-config-daemon-fb4fr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 21 13:12:29 crc kubenswrapper[4881]: I0121 13:12:29.853070 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 21 13:12:29 crc kubenswrapper[4881]: I0121 13:12:29.853334 4881 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr"
Jan 21 13:12:29 crc kubenswrapper[4881]: I0121 13:12:29.854434 4881 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"552bc8eceba0a6e2de711a18f9749360715dc3263cb9e778a7c7f74c86bf256a"} pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 21 13:12:29 crc kubenswrapper[4881]: I0121 13:12:29.854624 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" containerName="machine-config-daemon" containerID="cri-o://552bc8eceba0a6e2de711a18f9749360715dc3263cb9e778a7c7f74c86bf256a" gracePeriod=600
Jan 21 13:12:29 crc kubenswrapper[4881]: E0121 13:12:29.990246 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d"
Jan 21 13:12:30 crc kubenswrapper[4881]: I0121 13:12:30.242713 4881 generic.go:334] "Generic (PLEG): container finished" podID="3687b313-1df2-4274-80db-8c758b51bf2d" containerID="552bc8eceba0a6e2de711a18f9749360715dc3263cb9e778a7c7f74c86bf256a" exitCode=0
Jan 21 13:12:30 crc kubenswrapper[4881]: I0121 13:12:30.242763 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" event={"ID":"3687b313-1df2-4274-80db-8c758b51bf2d","Type":"ContainerDied","Data":"552bc8eceba0a6e2de711a18f9749360715dc3263cb9e778a7c7f74c86bf256a"}
Jan 21 13:12:30 crc kubenswrapper[4881]: I0121 13:12:30.242820 4881 scope.go:117] "RemoveContainer" containerID="982d26bca9ae8535bd5c23122103aa1521012b2265c5406dc793a0fdc4c46b01"
Jan 21 13:12:30 crc kubenswrapper[4881]: I0121 13:12:30.243670 4881 scope.go:117] "RemoveContainer" containerID="552bc8eceba0a6e2de711a18f9749360715dc3263cb9e778a7c7f74c86bf256a"
Jan 21 13:12:30 crc kubenswrapper[4881]: E0121 13:12:30.244065 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d"
Jan 21 13:12:41 crc kubenswrapper[4881]: I0121 13:12:41.312037 4881 scope.go:117] "RemoveContainer" containerID="552bc8eceba0a6e2de711a18f9749360715dc3263cb9e778a7c7f74c86bf256a"
Jan 21 13:12:41 crc kubenswrapper[4881]: E0121 13:12:41.312891 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d"
Jan 21 13:12:56 crc kubenswrapper[4881]: I0121 13:12:56.310680 4881 scope.go:117] "RemoveContainer" containerID="552bc8eceba0a6e2de711a18f9749360715dc3263cb9e778a7c7f74c86bf256a"
Jan 21 13:12:56 crc kubenswrapper[4881]: E0121 13:12:56.313741 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d"
Jan 21 13:13:09 crc kubenswrapper[4881]: I0121 13:13:09.312401 4881 scope.go:117] "RemoveContainer" containerID="552bc8eceba0a6e2de711a18f9749360715dc3263cb9e778a7c7f74c86bf256a"
Jan 21 13:13:09 crc kubenswrapper[4881]: E0121 13:13:09.313366 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d"
Jan 21 13:13:21 crc kubenswrapper[4881]: I0121 13:13:21.311099 4881 scope.go:117] "RemoveContainer" containerID="552bc8eceba0a6e2de711a18f9749360715dc3263cb9e778a7c7f74c86bf256a"
Jan 21 13:13:21 crc kubenswrapper[4881]: E0121 13:13:21.311806 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d"
Jan 21 13:13:32 crc kubenswrapper[4881]: I0121 13:13:32.312519 4881 scope.go:117] "RemoveContainer" containerID="552bc8eceba0a6e2de711a18f9749360715dc3263cb9e778a7c7f74c86bf256a"
Jan 21 13:13:32 crc kubenswrapper[4881]: E0121 13:13:32.313476 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d"
Jan 21 13:13:44 crc kubenswrapper[4881]: I0121 13:13:44.313871 4881 scope.go:117] "RemoveContainer" containerID="552bc8eceba0a6e2de711a18f9749360715dc3263cb9e778a7c7f74c86bf256a"
Jan 21 13:13:44 crc kubenswrapper[4881]: E0121 13:13:44.314497 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d"
Jan 21 13:13:56 crc kubenswrapper[4881]: I0121 13:13:56.310986 4881 scope.go:117] "RemoveContainer" containerID="552bc8eceba0a6e2de711a18f9749360715dc3263cb9e778a7c7f74c86bf256a"
Jan 21 13:13:56 crc kubenswrapper[4881]: E0121 13:13:56.312163 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d"
Jan 21 13:14:09 crc kubenswrapper[4881]: I0121 13:14:09.312452 4881 scope.go:117] "RemoveContainer" containerID="552bc8eceba0a6e2de711a18f9749360715dc3263cb9e778a7c7f74c86bf256a"
Jan 21 13:14:09 crc kubenswrapper[4881]: E0121 13:14:09.313348 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d"
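The cadence above is the pod worker re-logging the same back-off error on every sync attempt, not fresh restarts: the restart delay has already grown to its cap, hence "back-off 5m0s". A Go sketch of the growth rule; the 10s base, doubling, and 5m ceiling are the commonly documented kubelet defaults, assumed here rather than read from this log:

package main

import (
	"fmt"
	"time"
)

func main() {
	delay, maxDelay := 10*time.Second, 5*time.Minute
	for i := 0; i < 8; i++ {
		// Prints 10s 20s 40s 1m20s 2m40s 5m0s 5m0s 5m0s: after a few
		// crashes the delay pins at the cap seen in the log.
		fmt.Println(delay)
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay
		}
	}
}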
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 13:14:33 crc kubenswrapper[4881]: I0121 13:14:33.323509 4881 scope.go:117] "RemoveContainer" containerID="552bc8eceba0a6e2de711a18f9749360715dc3263cb9e778a7c7f74c86bf256a" Jan 21 13:14:33 crc kubenswrapper[4881]: E0121 13:14:33.324856 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 13:14:46 crc kubenswrapper[4881]: I0121 13:14:46.311585 4881 scope.go:117] "RemoveContainer" containerID="552bc8eceba0a6e2de711a18f9749360715dc3263cb9e778a7c7f74c86bf256a" Jan 21 13:14:46 crc kubenswrapper[4881]: E0121 13:14:46.312959 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 13:14:59 crc kubenswrapper[4881]: I0121 13:14:59.310707 4881 scope.go:117] "RemoveContainer" containerID="552bc8eceba0a6e2de711a18f9749360715dc3263cb9e778a7c7f74c86bf256a" Jan 21 13:14:59 crc kubenswrapper[4881]: E0121 13:14:59.311545 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 13:15:00 crc kubenswrapper[4881]: I0121 13:15:00.202926 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483355-2ccpq"] Jan 21 13:15:00 crc kubenswrapper[4881]: E0121 13:15:00.203860 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a26c7f3-1ab1-4718-b38e-e7312fe50035" containerName="registry-server" Jan 21 13:15:00 crc kubenswrapper[4881]: I0121 13:15:00.203885 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a26c7f3-1ab1-4718-b38e-e7312fe50035" containerName="registry-server" Jan 21 13:15:00 crc kubenswrapper[4881]: E0121 13:15:00.203916 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a26c7f3-1ab1-4718-b38e-e7312fe50035" containerName="extract-utilities" Jan 21 13:15:00 crc kubenswrapper[4881]: I0121 13:15:00.203923 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a26c7f3-1ab1-4718-b38e-e7312fe50035" containerName="extract-utilities" Jan 21 13:15:00 crc kubenswrapper[4881]: E0121 13:15:00.203935 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a26c7f3-1ab1-4718-b38e-e7312fe50035" containerName="extract-content" Jan 21 13:15:00 crc 
kubenswrapper[4881]: I0121 13:15:00.203940 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a26c7f3-1ab1-4718-b38e-e7312fe50035" containerName="extract-content" Jan 21 13:15:00 crc kubenswrapper[4881]: I0121 13:15:00.204149 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="7a26c7f3-1ab1-4718-b38e-e7312fe50035" containerName="registry-server" Jan 21 13:15:00 crc kubenswrapper[4881]: I0121 13:15:00.204971 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483355-2ccpq" Jan 21 13:15:00 crc kubenswrapper[4881]: I0121 13:15:00.207920 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 21 13:15:00 crc kubenswrapper[4881]: I0121 13:15:00.213800 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 21 13:15:00 crc kubenswrapper[4881]: I0121 13:15:00.230722 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483355-2ccpq"] Jan 21 13:15:00 crc kubenswrapper[4881]: I0121 13:15:00.284959 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f55151a2-6511-456d-b38a-be9f5a21c93c-secret-volume\") pod \"collect-profiles-29483355-2ccpq\" (UID: \"f55151a2-6511-456d-b38a-be9f5a21c93c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483355-2ccpq" Jan 21 13:15:00 crc kubenswrapper[4881]: I0121 13:15:00.285021 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r9q2q\" (UniqueName: \"kubernetes.io/projected/f55151a2-6511-456d-b38a-be9f5a21c93c-kube-api-access-r9q2q\") pod \"collect-profiles-29483355-2ccpq\" (UID: \"f55151a2-6511-456d-b38a-be9f5a21c93c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483355-2ccpq" Jan 21 13:15:00 crc kubenswrapper[4881]: I0121 13:15:00.285643 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f55151a2-6511-456d-b38a-be9f5a21c93c-config-volume\") pod \"collect-profiles-29483355-2ccpq\" (UID: \"f55151a2-6511-456d-b38a-be9f5a21c93c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483355-2ccpq" Jan 21 13:15:00 crc kubenswrapper[4881]: I0121 13:15:00.387579 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f55151a2-6511-456d-b38a-be9f5a21c93c-config-volume\") pod \"collect-profiles-29483355-2ccpq\" (UID: \"f55151a2-6511-456d-b38a-be9f5a21c93c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483355-2ccpq" Jan 21 13:15:00 crc kubenswrapper[4881]: I0121 13:15:00.387676 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f55151a2-6511-456d-b38a-be9f5a21c93c-secret-volume\") pod \"collect-profiles-29483355-2ccpq\" (UID: \"f55151a2-6511-456d-b38a-be9f5a21c93c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483355-2ccpq" Jan 21 13:15:00 crc kubenswrapper[4881]: I0121 13:15:00.387696 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r9q2q\" (UniqueName: 
\"kubernetes.io/projected/f55151a2-6511-456d-b38a-be9f5a21c93c-kube-api-access-r9q2q\") pod \"collect-profiles-29483355-2ccpq\" (UID: \"f55151a2-6511-456d-b38a-be9f5a21c93c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483355-2ccpq" Jan 21 13:15:00 crc kubenswrapper[4881]: I0121 13:15:00.389149 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f55151a2-6511-456d-b38a-be9f5a21c93c-config-volume\") pod \"collect-profiles-29483355-2ccpq\" (UID: \"f55151a2-6511-456d-b38a-be9f5a21c93c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483355-2ccpq" Jan 21 13:15:00 crc kubenswrapper[4881]: I0121 13:15:00.402758 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f55151a2-6511-456d-b38a-be9f5a21c93c-secret-volume\") pod \"collect-profiles-29483355-2ccpq\" (UID: \"f55151a2-6511-456d-b38a-be9f5a21c93c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483355-2ccpq" Jan 21 13:15:00 crc kubenswrapper[4881]: I0121 13:15:00.417528 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r9q2q\" (UniqueName: \"kubernetes.io/projected/f55151a2-6511-456d-b38a-be9f5a21c93c-kube-api-access-r9q2q\") pod \"collect-profiles-29483355-2ccpq\" (UID: \"f55151a2-6511-456d-b38a-be9f5a21c93c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483355-2ccpq" Jan 21 13:15:00 crc kubenswrapper[4881]: I0121 13:15:00.523154 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483355-2ccpq" Jan 21 13:15:01 crc kubenswrapper[4881]: I0121 13:15:01.019143 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483355-2ccpq"] Jan 21 13:15:01 crc kubenswrapper[4881]: I0121 13:15:01.119083 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483355-2ccpq" event={"ID":"f55151a2-6511-456d-b38a-be9f5a21c93c","Type":"ContainerStarted","Data":"78a03f2d314a3b74edc7187f8196a6192aa8a9d1e02cd6b8dc0699796d7cd89d"} Jan 21 13:15:02 crc kubenswrapper[4881]: I0121 13:15:02.137241 4881 generic.go:334] "Generic (PLEG): container finished" podID="f55151a2-6511-456d-b38a-be9f5a21c93c" containerID="e0dd23d233b9caa539382c8a1564b0d40bb269edd2ad3466941af737a67501dd" exitCode=0 Jan 21 13:15:02 crc kubenswrapper[4881]: I0121 13:15:02.137306 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483355-2ccpq" event={"ID":"f55151a2-6511-456d-b38a-be9f5a21c93c","Type":"ContainerDied","Data":"e0dd23d233b9caa539382c8a1564b0d40bb269edd2ad3466941af737a67501dd"} Jan 21 13:15:03 crc kubenswrapper[4881]: I0121 13:15:03.538431 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483355-2ccpq" Jan 21 13:15:03 crc kubenswrapper[4881]: I0121 13:15:03.674081 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f55151a2-6511-456d-b38a-be9f5a21c93c-config-volume\") pod \"f55151a2-6511-456d-b38a-be9f5a21c93c\" (UID: \"f55151a2-6511-456d-b38a-be9f5a21c93c\") " Jan 21 13:15:03 crc kubenswrapper[4881]: I0121 13:15:03.674508 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f55151a2-6511-456d-b38a-be9f5a21c93c-secret-volume\") pod \"f55151a2-6511-456d-b38a-be9f5a21c93c\" (UID: \"f55151a2-6511-456d-b38a-be9f5a21c93c\") " Jan 21 13:15:03 crc kubenswrapper[4881]: I0121 13:15:03.674610 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r9q2q\" (UniqueName: \"kubernetes.io/projected/f55151a2-6511-456d-b38a-be9f5a21c93c-kube-api-access-r9q2q\") pod \"f55151a2-6511-456d-b38a-be9f5a21c93c\" (UID: \"f55151a2-6511-456d-b38a-be9f5a21c93c\") " Jan 21 13:15:03 crc kubenswrapper[4881]: I0121 13:15:03.674980 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f55151a2-6511-456d-b38a-be9f5a21c93c-config-volume" (OuterVolumeSpecName: "config-volume") pod "f55151a2-6511-456d-b38a-be9f5a21c93c" (UID: "f55151a2-6511-456d-b38a-be9f5a21c93c"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:15:03 crc kubenswrapper[4881]: I0121 13:15:03.675212 4881 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f55151a2-6511-456d-b38a-be9f5a21c93c-config-volume\") on node \"crc\" DevicePath \"\"" Jan 21 13:15:03 crc kubenswrapper[4881]: I0121 13:15:03.687395 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f55151a2-6511-456d-b38a-be9f5a21c93c-kube-api-access-r9q2q" (OuterVolumeSpecName: "kube-api-access-r9q2q") pod "f55151a2-6511-456d-b38a-be9f5a21c93c" (UID: "f55151a2-6511-456d-b38a-be9f5a21c93c"). InnerVolumeSpecName "kube-api-access-r9q2q". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:15:03 crc kubenswrapper[4881]: I0121 13:15:03.688134 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f55151a2-6511-456d-b38a-be9f5a21c93c-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "f55151a2-6511-456d-b38a-be9f5a21c93c" (UID: "f55151a2-6511-456d-b38a-be9f5a21c93c"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:15:03 crc kubenswrapper[4881]: I0121 13:15:03.777897 4881 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f55151a2-6511-456d-b38a-be9f5a21c93c-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 21 13:15:03 crc kubenswrapper[4881]: I0121 13:15:03.777947 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r9q2q\" (UniqueName: \"kubernetes.io/projected/f55151a2-6511-456d-b38a-be9f5a21c93c-kube-api-access-r9q2q\") on node \"crc\" DevicePath \"\"" Jan 21 13:15:04 crc kubenswrapper[4881]: I0121 13:15:04.167731 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483355-2ccpq" event={"ID":"f55151a2-6511-456d-b38a-be9f5a21c93c","Type":"ContainerDied","Data":"78a03f2d314a3b74edc7187f8196a6192aa8a9d1e02cd6b8dc0699796d7cd89d"} Jan 21 13:15:04 crc kubenswrapper[4881]: I0121 13:15:04.167767 4881 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="78a03f2d314a3b74edc7187f8196a6192aa8a9d1e02cd6b8dc0699796d7cd89d" Jan 21 13:15:04 crc kubenswrapper[4881]: I0121 13:15:04.167824 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483355-2ccpq" Jan 21 13:15:04 crc kubenswrapper[4881]: I0121 13:15:04.639739 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483310-ntw6g"] Jan 21 13:15:04 crc kubenswrapper[4881]: I0121 13:15:04.651426 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483310-ntw6g"] Jan 21 13:15:05 crc kubenswrapper[4881]: I0121 13:15:05.330479 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5368d7c4-a23a-46aa-8dea-1fde26f5df53" path="/var/lib/kubelet/pods/5368d7c4-a23a-46aa-8dea-1fde26f5df53/volumes" Jan 21 13:15:07 crc kubenswrapper[4881]: I0121 13:15:07.422661 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-9zbwp"] Jan 21 13:15:07 crc kubenswrapper[4881]: E0121 13:15:07.425677 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f55151a2-6511-456d-b38a-be9f5a21c93c" containerName="collect-profiles" Jan 21 13:15:07 crc kubenswrapper[4881]: I0121 13:15:07.425707 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="f55151a2-6511-456d-b38a-be9f5a21c93c" containerName="collect-profiles" Jan 21 13:15:07 crc kubenswrapper[4881]: I0121 13:15:07.426070 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="f55151a2-6511-456d-b38a-be9f5a21c93c" containerName="collect-profiles" Jan 21 13:15:07 crc kubenswrapper[4881]: I0121 13:15:07.429136 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9zbwp" Jan 21 13:15:07 crc kubenswrapper[4881]: I0121 13:15:07.442084 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9zbwp"] Jan 21 13:15:07 crc kubenswrapper[4881]: I0121 13:15:07.478338 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc894132-ff81-4462-808c-04b91aa131c5-catalog-content\") pod \"redhat-marketplace-9zbwp\" (UID: \"cc894132-ff81-4462-808c-04b91aa131c5\") " pod="openshift-marketplace/redhat-marketplace-9zbwp" Jan 21 13:15:07 crc kubenswrapper[4881]: I0121 13:15:07.478391 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc894132-ff81-4462-808c-04b91aa131c5-utilities\") pod \"redhat-marketplace-9zbwp\" (UID: \"cc894132-ff81-4462-808c-04b91aa131c5\") " pod="openshift-marketplace/redhat-marketplace-9zbwp" Jan 21 13:15:07 crc kubenswrapper[4881]: I0121 13:15:07.478497 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vcpql\" (UniqueName: \"kubernetes.io/projected/cc894132-ff81-4462-808c-04b91aa131c5-kube-api-access-vcpql\") pod \"redhat-marketplace-9zbwp\" (UID: \"cc894132-ff81-4462-808c-04b91aa131c5\") " pod="openshift-marketplace/redhat-marketplace-9zbwp" Jan 21 13:15:07 crc kubenswrapper[4881]: I0121 13:15:07.580899 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vcpql\" (UniqueName: \"kubernetes.io/projected/cc894132-ff81-4462-808c-04b91aa131c5-kube-api-access-vcpql\") pod \"redhat-marketplace-9zbwp\" (UID: \"cc894132-ff81-4462-808c-04b91aa131c5\") " pod="openshift-marketplace/redhat-marketplace-9zbwp" Jan 21 13:15:07 crc kubenswrapper[4881]: I0121 13:15:07.581330 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc894132-ff81-4462-808c-04b91aa131c5-catalog-content\") pod \"redhat-marketplace-9zbwp\" (UID: \"cc894132-ff81-4462-808c-04b91aa131c5\") " pod="openshift-marketplace/redhat-marketplace-9zbwp" Jan 21 13:15:07 crc kubenswrapper[4881]: I0121 13:15:07.581366 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc894132-ff81-4462-808c-04b91aa131c5-utilities\") pod \"redhat-marketplace-9zbwp\" (UID: \"cc894132-ff81-4462-808c-04b91aa131c5\") " pod="openshift-marketplace/redhat-marketplace-9zbwp" Jan 21 13:15:07 crc kubenswrapper[4881]: I0121 13:15:07.581842 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc894132-ff81-4462-808c-04b91aa131c5-catalog-content\") pod \"redhat-marketplace-9zbwp\" (UID: \"cc894132-ff81-4462-808c-04b91aa131c5\") " pod="openshift-marketplace/redhat-marketplace-9zbwp" Jan 21 13:15:07 crc kubenswrapper[4881]: I0121 13:15:07.581946 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc894132-ff81-4462-808c-04b91aa131c5-utilities\") pod \"redhat-marketplace-9zbwp\" (UID: \"cc894132-ff81-4462-808c-04b91aa131c5\") " pod="openshift-marketplace/redhat-marketplace-9zbwp" Jan 21 13:15:07 crc kubenswrapper[4881]: I0121 13:15:07.613975 4881 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-vcpql\" (UniqueName: \"kubernetes.io/projected/cc894132-ff81-4462-808c-04b91aa131c5-kube-api-access-vcpql\") pod \"redhat-marketplace-9zbwp\" (UID: \"cc894132-ff81-4462-808c-04b91aa131c5\") " pod="openshift-marketplace/redhat-marketplace-9zbwp" Jan 21 13:15:07 crc kubenswrapper[4881]: I0121 13:15:07.756473 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9zbwp" Jan 21 13:15:08 crc kubenswrapper[4881]: I0121 13:15:08.321922 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9zbwp"] Jan 21 13:15:09 crc kubenswrapper[4881]: I0121 13:15:09.225863 4881 generic.go:334] "Generic (PLEG): container finished" podID="cc894132-ff81-4462-808c-04b91aa131c5" containerID="1f0cf2aba23d64564f86d3e47e178b26c66b88713e2c1b4e63ada03ff3001e47" exitCode=0 Jan 21 13:15:09 crc kubenswrapper[4881]: I0121 13:15:09.225920 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9zbwp" event={"ID":"cc894132-ff81-4462-808c-04b91aa131c5","Type":"ContainerDied","Data":"1f0cf2aba23d64564f86d3e47e178b26c66b88713e2c1b4e63ada03ff3001e47"} Jan 21 13:15:09 crc kubenswrapper[4881]: I0121 13:15:09.230700 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9zbwp" event={"ID":"cc894132-ff81-4462-808c-04b91aa131c5","Type":"ContainerStarted","Data":"8051ab9dc5d632a0547e190564f726d71a1e7a469f81499a6307d3d35f95846e"} Jan 21 13:15:09 crc kubenswrapper[4881]: I0121 13:15:09.228925 4881 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 13:15:10 crc kubenswrapper[4881]: I0121 13:15:10.310581 4881 scope.go:117] "RemoveContainer" containerID="552bc8eceba0a6e2de711a18f9749360715dc3263cb9e778a7c7f74c86bf256a" Jan 21 13:15:10 crc kubenswrapper[4881]: E0121 13:15:10.311966 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 13:15:11 crc kubenswrapper[4881]: I0121 13:15:11.261489 4881 generic.go:334] "Generic (PLEG): container finished" podID="cc894132-ff81-4462-808c-04b91aa131c5" containerID="7905ef1bd8eb4c2a74ecd66dee0f7a7d01738c48ab72e0bfb49efb8ba199940b" exitCode=0 Jan 21 13:15:11 crc kubenswrapper[4881]: I0121 13:15:11.261561 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9zbwp" event={"ID":"cc894132-ff81-4462-808c-04b91aa131c5","Type":"ContainerDied","Data":"7905ef1bd8eb4c2a74ecd66dee0f7a7d01738c48ab72e0bfb49efb8ba199940b"} Jan 21 13:15:12 crc kubenswrapper[4881]: I0121 13:15:12.272402 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9zbwp" event={"ID":"cc894132-ff81-4462-808c-04b91aa131c5","Type":"ContainerStarted","Data":"b2480cdd412677da34ca1262943186b4f02a412993e268c2cc5a3c46d5441e61"} Jan 21 13:15:12 crc kubenswrapper[4881]: I0121 13:15:12.321137 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-9zbwp" podStartSLOduration=2.715422987 podStartE2EDuration="5.321106479s" 
podCreationTimestamp="2026-01-21 13:15:07 +0000 UTC" firstStartedPulling="2026-01-21 13:15:09.228582772 +0000 UTC m=+8296.488539261" lastFinishedPulling="2026-01-21 13:15:11.834266244 +0000 UTC m=+8299.094222753" observedRunningTime="2026-01-21 13:15:12.301390098 +0000 UTC m=+8299.561346577" watchObservedRunningTime="2026-01-21 13:15:12.321106479 +0000 UTC m=+8299.581062948" Jan 21 13:15:17 crc kubenswrapper[4881]: I0121 13:15:17.757021 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-9zbwp" Jan 21 13:15:17 crc kubenswrapper[4881]: I0121 13:15:17.759892 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-9zbwp" Jan 21 13:15:17 crc kubenswrapper[4881]: I0121 13:15:17.813297 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-9zbwp" Jan 21 13:15:18 crc kubenswrapper[4881]: I0121 13:15:18.402717 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-9zbwp" Jan 21 13:15:19 crc kubenswrapper[4881]: I0121 13:15:19.590833 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-9zbwp"] Jan 21 13:15:21 crc kubenswrapper[4881]: I0121 13:15:21.421860 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-9zbwp" podUID="cc894132-ff81-4462-808c-04b91aa131c5" containerName="registry-server" containerID="cri-o://b2480cdd412677da34ca1262943186b4f02a412993e268c2cc5a3c46d5441e61" gracePeriod=2 Jan 21 13:15:22 crc kubenswrapper[4881]: I0121 13:15:22.311217 4881 scope.go:117] "RemoveContainer" containerID="552bc8eceba0a6e2de711a18f9749360715dc3263cb9e778a7c7f74c86bf256a" Jan 21 13:15:22 crc kubenswrapper[4881]: E0121 13:15:22.312166 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 13:15:22 crc kubenswrapper[4881]: I0121 13:15:22.434055 4881 generic.go:334] "Generic (PLEG): container finished" podID="cc894132-ff81-4462-808c-04b91aa131c5" containerID="b2480cdd412677da34ca1262943186b4f02a412993e268c2cc5a3c46d5441e61" exitCode=0 Jan 21 13:15:22 crc kubenswrapper[4881]: I0121 13:15:22.434109 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9zbwp" event={"ID":"cc894132-ff81-4462-808c-04b91aa131c5","Type":"ContainerDied","Data":"b2480cdd412677da34ca1262943186b4f02a412993e268c2cc5a3c46d5441e61"} Jan 21 13:15:22 crc kubenswrapper[4881]: I0121 13:15:22.434175 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9zbwp" event={"ID":"cc894132-ff81-4462-808c-04b91aa131c5","Type":"ContainerDied","Data":"8051ab9dc5d632a0547e190564f726d71a1e7a469f81499a6307d3d35f95846e"} Jan 21 13:15:22 crc kubenswrapper[4881]: I0121 13:15:22.434195 4881 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8051ab9dc5d632a0547e190564f726d71a1e7a469f81499a6307d3d35f95846e" Jan 21 13:15:22 crc kubenswrapper[4881]: I0121 13:15:22.493603 4881 util.go:48] "No ready 
Jan 21 13:15:22 crc kubenswrapper[4881]: I0121 13:15:22.493603 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9zbwp"
Jan 21 13:15:22 crc kubenswrapper[4881]: I0121 13:15:22.683931 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc894132-ff81-4462-808c-04b91aa131c5-utilities\") pod \"cc894132-ff81-4462-808c-04b91aa131c5\" (UID: \"cc894132-ff81-4462-808c-04b91aa131c5\") "
Jan 21 13:15:22 crc kubenswrapper[4881]: I0121 13:15:22.684159 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc894132-ff81-4462-808c-04b91aa131c5-catalog-content\") pod \"cc894132-ff81-4462-808c-04b91aa131c5\" (UID: \"cc894132-ff81-4462-808c-04b91aa131c5\") "
Jan 21 13:15:22 crc kubenswrapper[4881]: I0121 13:15:22.684244 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vcpql\" (UniqueName: \"kubernetes.io/projected/cc894132-ff81-4462-808c-04b91aa131c5-kube-api-access-vcpql\") pod \"cc894132-ff81-4462-808c-04b91aa131c5\" (UID: \"cc894132-ff81-4462-808c-04b91aa131c5\") "
Jan 21 13:15:22 crc kubenswrapper[4881]: I0121 13:15:22.692134 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc894132-ff81-4462-808c-04b91aa131c5-utilities" (OuterVolumeSpecName: "utilities") pod "cc894132-ff81-4462-808c-04b91aa131c5" (UID: "cc894132-ff81-4462-808c-04b91aa131c5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 13:15:22 crc kubenswrapper[4881]: I0121 13:15:22.702371 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc894132-ff81-4462-808c-04b91aa131c5-kube-api-access-vcpql" (OuterVolumeSpecName: "kube-api-access-vcpql") pod "cc894132-ff81-4462-808c-04b91aa131c5" (UID: "cc894132-ff81-4462-808c-04b91aa131c5"). InnerVolumeSpecName "kube-api-access-vcpql". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 13:15:22 crc kubenswrapper[4881]: I0121 13:15:22.714399 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc894132-ff81-4462-808c-04b91aa131c5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cc894132-ff81-4462-808c-04b91aa131c5" (UID: "cc894132-ff81-4462-808c-04b91aa131c5"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 13:15:22 crc kubenswrapper[4881]: I0121 13:15:22.787251 4881 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc894132-ff81-4462-808c-04b91aa131c5-utilities\") on node \"crc\" DevicePath \"\""
Jan 21 13:15:22 crc kubenswrapper[4881]: I0121 13:15:22.788338 4881 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc894132-ff81-4462-808c-04b91aa131c5-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 21 13:15:22 crc kubenswrapper[4881]: I0121 13:15:22.788404 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vcpql\" (UniqueName: \"kubernetes.io/projected/cc894132-ff81-4462-808c-04b91aa131c5-kube-api-access-vcpql\") on node \"crc\" DevicePath \"\""
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9zbwp" Jan 21 13:15:23 crc kubenswrapper[4881]: I0121 13:15:23.470521 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-9zbwp"] Jan 21 13:15:23 crc kubenswrapper[4881]: I0121 13:15:23.481073 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-9zbwp"] Jan 21 13:15:25 crc kubenswrapper[4881]: I0121 13:15:25.327601 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cc894132-ff81-4462-808c-04b91aa131c5" path="/var/lib/kubelet/pods/cc894132-ff81-4462-808c-04b91aa131c5/volumes" Jan 21 13:15:34 crc kubenswrapper[4881]: I0121 13:15:34.310935 4881 scope.go:117] "RemoveContainer" containerID="552bc8eceba0a6e2de711a18f9749360715dc3263cb9e778a7c7f74c86bf256a" Jan 21 13:15:34 crc kubenswrapper[4881]: E0121 13:15:34.311747 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 13:15:35 crc kubenswrapper[4881]: I0121 13:15:35.955444 4881 scope.go:117] "RemoveContainer" containerID="b60782b6ad5aeb71531d28ab48543fd988c6726bf0975c069d2238cd6237f3ab" Jan 21 13:15:48 crc kubenswrapper[4881]: I0121 13:15:48.311845 4881 scope.go:117] "RemoveContainer" containerID="552bc8eceba0a6e2de711a18f9749360715dc3263cb9e778a7c7f74c86bf256a" Jan 21 13:15:48 crc kubenswrapper[4881]: E0121 13:15:48.312729 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 13:16:00 crc kubenswrapper[4881]: I0121 13:16:00.311291 4881 scope.go:117] "RemoveContainer" containerID="552bc8eceba0a6e2de711a18f9749360715dc3263cb9e778a7c7f74c86bf256a" Jan 21 13:16:00 crc kubenswrapper[4881]: E0121 13:16:00.312112 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 13:16:15 crc kubenswrapper[4881]: I0121 13:16:15.311603 4881 scope.go:117] "RemoveContainer" containerID="552bc8eceba0a6e2de711a18f9749360715dc3263cb9e778a7c7f74c86bf256a" Jan 21 13:16:15 crc kubenswrapper[4881]: E0121 13:16:15.312510 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 13:16:25 
Jan 21 13:16:25 crc kubenswrapper[4881]: I0121 13:16:25.408924 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-9zw6q"]
Jan 21 13:16:25 crc kubenswrapper[4881]: E0121 13:16:25.411673 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc894132-ff81-4462-808c-04b91aa131c5" containerName="registry-server"
Jan 21 13:16:25 crc kubenswrapper[4881]: I0121 13:16:25.411701 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc894132-ff81-4462-808c-04b91aa131c5" containerName="registry-server"
Jan 21 13:16:25 crc kubenswrapper[4881]: E0121 13:16:25.411837 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc894132-ff81-4462-808c-04b91aa131c5" containerName="extract-utilities"
Jan 21 13:16:25 crc kubenswrapper[4881]: I0121 13:16:25.411851 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc894132-ff81-4462-808c-04b91aa131c5" containerName="extract-utilities"
Jan 21 13:16:25 crc kubenswrapper[4881]: E0121 13:16:25.411906 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc894132-ff81-4462-808c-04b91aa131c5" containerName="extract-content"
Jan 21 13:16:25 crc kubenswrapper[4881]: I0121 13:16:25.411924 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc894132-ff81-4462-808c-04b91aa131c5" containerName="extract-content"
Jan 21 13:16:25 crc kubenswrapper[4881]: I0121 13:16:25.412428 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc894132-ff81-4462-808c-04b91aa131c5" containerName="registry-server"
Jan 21 13:16:25 crc kubenswrapper[4881]: I0121 13:16:25.414818 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9zw6q"
Jan 21 13:16:25 crc kubenswrapper[4881]: I0121 13:16:25.418604 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/66383caa-595c-4dad-b9a9-a2878ef04277-catalog-content\") pod \"certified-operators-9zw6q\" (UID: \"66383caa-595c-4dad-b9a9-a2878ef04277\") " pod="openshift-marketplace/certified-operators-9zw6q"
Jan 21 13:16:25 crc kubenswrapper[4881]: I0121 13:16:25.418670 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5cnwb\" (UniqueName: \"kubernetes.io/projected/66383caa-595c-4dad-b9a9-a2878ef04277-kube-api-access-5cnwb\") pod \"certified-operators-9zw6q\" (UID: \"66383caa-595c-4dad-b9a9-a2878ef04277\") " pod="openshift-marketplace/certified-operators-9zw6q"
Jan 21 13:16:25 crc kubenswrapper[4881]: I0121 13:16:25.418723 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/66383caa-595c-4dad-b9a9-a2878ef04277-utilities\") pod \"certified-operators-9zw6q\" (UID: \"66383caa-595c-4dad-b9a9-a2878ef04277\") " pod="openshift-marketplace/certified-operators-9zw6q"
Jan 21 13:16:25 crc kubenswrapper[4881]: I0121 13:16:25.441235 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9zw6q"]
pod="openshift-marketplace/certified-operators-9zw6q" Jan 21 13:16:25 crc kubenswrapper[4881]: I0121 13:16:25.521927 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5cnwb\" (UniqueName: \"kubernetes.io/projected/66383caa-595c-4dad-b9a9-a2878ef04277-kube-api-access-5cnwb\") pod \"certified-operators-9zw6q\" (UID: \"66383caa-595c-4dad-b9a9-a2878ef04277\") " pod="openshift-marketplace/certified-operators-9zw6q" Jan 21 13:16:25 crc kubenswrapper[4881]: I0121 13:16:25.521965 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/66383caa-595c-4dad-b9a9-a2878ef04277-utilities\") pod \"certified-operators-9zw6q\" (UID: \"66383caa-595c-4dad-b9a9-a2878ef04277\") " pod="openshift-marketplace/certified-operators-9zw6q" Jan 21 13:16:25 crc kubenswrapper[4881]: I0121 13:16:25.522454 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/66383caa-595c-4dad-b9a9-a2878ef04277-catalog-content\") pod \"certified-operators-9zw6q\" (UID: \"66383caa-595c-4dad-b9a9-a2878ef04277\") " pod="openshift-marketplace/certified-operators-9zw6q" Jan 21 13:16:25 crc kubenswrapper[4881]: I0121 13:16:25.522728 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/66383caa-595c-4dad-b9a9-a2878ef04277-utilities\") pod \"certified-operators-9zw6q\" (UID: \"66383caa-595c-4dad-b9a9-a2878ef04277\") " pod="openshift-marketplace/certified-operators-9zw6q" Jan 21 13:16:25 crc kubenswrapper[4881]: I0121 13:16:25.557434 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5cnwb\" (UniqueName: \"kubernetes.io/projected/66383caa-595c-4dad-b9a9-a2878ef04277-kube-api-access-5cnwb\") pod \"certified-operators-9zw6q\" (UID: \"66383caa-595c-4dad-b9a9-a2878ef04277\") " pod="openshift-marketplace/certified-operators-9zw6q" Jan 21 13:16:25 crc kubenswrapper[4881]: I0121 13:16:25.749406 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-9zw6q" Jan 21 13:16:26 crc kubenswrapper[4881]: I0121 13:16:26.308136 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9zw6q"] Jan 21 13:16:26 crc kubenswrapper[4881]: I0121 13:16:26.313048 4881 scope.go:117] "RemoveContainer" containerID="552bc8eceba0a6e2de711a18f9749360715dc3263cb9e778a7c7f74c86bf256a" Jan 21 13:16:26 crc kubenswrapper[4881]: E0121 13:16:26.313400 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 13:16:27 crc kubenswrapper[4881]: I0121 13:16:27.237513 4881 generic.go:334] "Generic (PLEG): container finished" podID="66383caa-595c-4dad-b9a9-a2878ef04277" containerID="888c145ca396a50869d27c201487ff33c86b2ddf4c4044b3820855e98578a9e7" exitCode=0 Jan 21 13:16:27 crc kubenswrapper[4881]: I0121 13:16:27.238055 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9zw6q" event={"ID":"66383caa-595c-4dad-b9a9-a2878ef04277","Type":"ContainerDied","Data":"888c145ca396a50869d27c201487ff33c86b2ddf4c4044b3820855e98578a9e7"} Jan 21 13:16:27 crc kubenswrapper[4881]: I0121 13:16:27.238632 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9zw6q" event={"ID":"66383caa-595c-4dad-b9a9-a2878ef04277","Type":"ContainerStarted","Data":"fa71987c36f90575b883cf28f8ac5bdfa3d896fa89f6a90865690b81487ced82"} Jan 21 13:16:30 crc kubenswrapper[4881]: I0121 13:16:30.295380 4881 generic.go:334] "Generic (PLEG): container finished" podID="66383caa-595c-4dad-b9a9-a2878ef04277" containerID="bff63c9802c0398fc00e1986f634cb55138bcb056e7756d4c61b3750ac66677f" exitCode=0 Jan 21 13:16:30 crc kubenswrapper[4881]: I0121 13:16:30.295945 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9zw6q" event={"ID":"66383caa-595c-4dad-b9a9-a2878ef04277","Type":"ContainerDied","Data":"bff63c9802c0398fc00e1986f634cb55138bcb056e7756d4c61b3750ac66677f"} Jan 21 13:16:33 crc kubenswrapper[4881]: I0121 13:16:33.354224 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9zw6q" event={"ID":"66383caa-595c-4dad-b9a9-a2878ef04277","Type":"ContainerStarted","Data":"d74a903ddc5fab46287d1a5319c135f5bf5966234a33aa48043f7dab675fc495"} Jan 21 13:16:33 crc kubenswrapper[4881]: I0121 13:16:33.383463 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-9zw6q" podStartSLOduration=3.335513833 podStartE2EDuration="8.383429028s" podCreationTimestamp="2026-01-21 13:16:25 +0000 UTC" firstStartedPulling="2026-01-21 13:16:27.244560593 +0000 UTC m=+8374.504517062" lastFinishedPulling="2026-01-21 13:16:32.292475788 +0000 UTC m=+8379.552432257" observedRunningTime="2026-01-21 13:16:33.37237841 +0000 UTC m=+8380.632334889" watchObservedRunningTime="2026-01-21 13:16:33.383429028 +0000 UTC m=+8380.643385527" Jan 21 13:16:35 crc kubenswrapper[4881]: I0121 13:16:35.750183 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-9zw6q" Jan 21 
Jan 21 13:16:35 crc kubenswrapper[4881]: I0121 13:16:35.750633 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-9zw6q"
Jan 21 13:16:35 crc kubenswrapper[4881]: I0121 13:16:35.831501 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-9zw6q"
Jan 21 13:16:37 crc kubenswrapper[4881]: I0121 13:16:37.311841 4881 scope.go:117] "RemoveContainer" containerID="552bc8eceba0a6e2de711a18f9749360715dc3263cb9e778a7c7f74c86bf256a"
Jan 21 13:16:37 crc kubenswrapper[4881]: E0121 13:16:37.312743 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d"
Jan 21 13:16:46 crc kubenswrapper[4881]: I0121 13:16:46.073425 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-9zw6q"
Jan 21 13:16:46 crc kubenswrapper[4881]: I0121 13:16:46.196167 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-9zw6q"]
Jan 21 13:16:46 crc kubenswrapper[4881]: I0121 13:16:46.730768 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-9zw6q" podUID="66383caa-595c-4dad-b9a9-a2878ef04277" containerName="registry-server" containerID="cri-o://d74a903ddc5fab46287d1a5319c135f5bf5966234a33aa48043f7dab675fc495" gracePeriod=2
Jan 21 13:16:47 crc kubenswrapper[4881]: I0121 13:16:47.291686 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9zw6q"
Jan 21 13:16:47 crc kubenswrapper[4881]: I0121 13:16:47.469710 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5cnwb\" (UniqueName: \"kubernetes.io/projected/66383caa-595c-4dad-b9a9-a2878ef04277-kube-api-access-5cnwb\") pod \"66383caa-595c-4dad-b9a9-a2878ef04277\" (UID: \"66383caa-595c-4dad-b9a9-a2878ef04277\") "
Jan 21 13:16:47 crc kubenswrapper[4881]: I0121 13:16:47.469766 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/66383caa-595c-4dad-b9a9-a2878ef04277-utilities\") pod \"66383caa-595c-4dad-b9a9-a2878ef04277\" (UID: \"66383caa-595c-4dad-b9a9-a2878ef04277\") "
Jan 21 13:16:47 crc kubenswrapper[4881]: I0121 13:16:47.469988 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/66383caa-595c-4dad-b9a9-a2878ef04277-catalog-content\") pod \"66383caa-595c-4dad-b9a9-a2878ef04277\" (UID: \"66383caa-595c-4dad-b9a9-a2878ef04277\") "
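"Killing container with a grace period" above is the SIGTERM-then-SIGKILL contract: the runtime signals the container, waits up to the grace period (gracePeriod=2 for this registry-server; gracePeriod=600 later for machine-config-daemon), then forces the kill. The real work happens inside CRI-O; the sketch below only mimics the shape locally with a child process:

// Illustrative SIGTERM-then-SIGKILL flow behind "Killing container with a
// grace period". Unix-only; not kubelet or CRI-O code.
package main

import (
	"fmt"
	"os/exec"
	"syscall"
	"time"
)

func stopWithGrace(cmd *exec.Cmd, grace time.Duration) {
	_ = cmd.Process.Signal(syscall.SIGTERM) // polite stop
	done := make(chan error, 1)
	go func() { done <- cmd.Wait() }()
	select {
	case <-done: // exited within the grace period (exitCode=0 above)
	case <-time.After(grace):
		_ = cmd.Process.Kill() // hard stop after the deadline
		<-done
	}
}

func main() {
	cmd := exec.Command("sleep", "60")
	_ = cmd.Start()
	stopWithGrace(cmd, 2*time.Second) // gracePeriod=2, as for registry-server
	fmt.Println("stopped")
}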
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 13:16:47 crc kubenswrapper[4881]: I0121 13:16:47.496859 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/66383caa-595c-4dad-b9a9-a2878ef04277-kube-api-access-5cnwb" (OuterVolumeSpecName: "kube-api-access-5cnwb") pod "66383caa-595c-4dad-b9a9-a2878ef04277" (UID: "66383caa-595c-4dad-b9a9-a2878ef04277"). InnerVolumeSpecName "kube-api-access-5cnwb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:16:47 crc kubenswrapper[4881]: I0121 13:16:47.519616 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/66383caa-595c-4dad-b9a9-a2878ef04277-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "66383caa-595c-4dad-b9a9-a2878ef04277" (UID: "66383caa-595c-4dad-b9a9-a2878ef04277"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 13:16:47 crc kubenswrapper[4881]: I0121 13:16:47.572233 4881 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/66383caa-595c-4dad-b9a9-a2878ef04277-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 13:16:47 crc kubenswrapper[4881]: I0121 13:16:47.572263 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5cnwb\" (UniqueName: \"kubernetes.io/projected/66383caa-595c-4dad-b9a9-a2878ef04277-kube-api-access-5cnwb\") on node \"crc\" DevicePath \"\"" Jan 21 13:16:47 crc kubenswrapper[4881]: I0121 13:16:47.572276 4881 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/66383caa-595c-4dad-b9a9-a2878ef04277-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 13:16:47 crc kubenswrapper[4881]: I0121 13:16:47.746497 4881 generic.go:334] "Generic (PLEG): container finished" podID="66383caa-595c-4dad-b9a9-a2878ef04277" containerID="d74a903ddc5fab46287d1a5319c135f5bf5966234a33aa48043f7dab675fc495" exitCode=0 Jan 21 13:16:47 crc kubenswrapper[4881]: I0121 13:16:47.746558 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9zw6q" event={"ID":"66383caa-595c-4dad-b9a9-a2878ef04277","Type":"ContainerDied","Data":"d74a903ddc5fab46287d1a5319c135f5bf5966234a33aa48043f7dab675fc495"} Jan 21 13:16:47 crc kubenswrapper[4881]: I0121 13:16:47.746594 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9zw6q" event={"ID":"66383caa-595c-4dad-b9a9-a2878ef04277","Type":"ContainerDied","Data":"fa71987c36f90575b883cf28f8ac5bdfa3d896fa89f6a90865690b81487ced82"} Jan 21 13:16:47 crc kubenswrapper[4881]: I0121 13:16:47.746616 4881 scope.go:117] "RemoveContainer" containerID="d74a903ddc5fab46287d1a5319c135f5bf5966234a33aa48043f7dab675fc495" Jan 21 13:16:47 crc kubenswrapper[4881]: I0121 13:16:47.746813 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-9zw6q" Jan 21 13:16:47 crc kubenswrapper[4881]: I0121 13:16:47.793373 4881 scope.go:117] "RemoveContainer" containerID="bff63c9802c0398fc00e1986f634cb55138bcb056e7756d4c61b3750ac66677f" Jan 21 13:16:47 crc kubenswrapper[4881]: I0121 13:16:47.815043 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-9zw6q"] Jan 21 13:16:47 crc kubenswrapper[4881]: I0121 13:16:47.817232 4881 scope.go:117] "RemoveContainer" containerID="888c145ca396a50869d27c201487ff33c86b2ddf4c4044b3820855e98578a9e7" Jan 21 13:16:47 crc kubenswrapper[4881]: I0121 13:16:47.822892 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-9zw6q"] Jan 21 13:16:47 crc kubenswrapper[4881]: I0121 13:16:47.883404 4881 scope.go:117] "RemoveContainer" containerID="d74a903ddc5fab46287d1a5319c135f5bf5966234a33aa48043f7dab675fc495" Jan 21 13:16:47 crc kubenswrapper[4881]: E0121 13:16:47.886910 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d74a903ddc5fab46287d1a5319c135f5bf5966234a33aa48043f7dab675fc495\": container with ID starting with d74a903ddc5fab46287d1a5319c135f5bf5966234a33aa48043f7dab675fc495 not found: ID does not exist" containerID="d74a903ddc5fab46287d1a5319c135f5bf5966234a33aa48043f7dab675fc495" Jan 21 13:16:47 crc kubenswrapper[4881]: I0121 13:16:47.887117 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d74a903ddc5fab46287d1a5319c135f5bf5966234a33aa48043f7dab675fc495"} err="failed to get container status \"d74a903ddc5fab46287d1a5319c135f5bf5966234a33aa48043f7dab675fc495\": rpc error: code = NotFound desc = could not find container \"d74a903ddc5fab46287d1a5319c135f5bf5966234a33aa48043f7dab675fc495\": container with ID starting with d74a903ddc5fab46287d1a5319c135f5bf5966234a33aa48043f7dab675fc495 not found: ID does not exist" Jan 21 13:16:47 crc kubenswrapper[4881]: I0121 13:16:47.887219 4881 scope.go:117] "RemoveContainer" containerID="bff63c9802c0398fc00e1986f634cb55138bcb056e7756d4c61b3750ac66677f" Jan 21 13:16:47 crc kubenswrapper[4881]: E0121 13:16:47.888165 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bff63c9802c0398fc00e1986f634cb55138bcb056e7756d4c61b3750ac66677f\": container with ID starting with bff63c9802c0398fc00e1986f634cb55138bcb056e7756d4c61b3750ac66677f not found: ID does not exist" containerID="bff63c9802c0398fc00e1986f634cb55138bcb056e7756d4c61b3750ac66677f" Jan 21 13:16:47 crc kubenswrapper[4881]: I0121 13:16:47.888193 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bff63c9802c0398fc00e1986f634cb55138bcb056e7756d4c61b3750ac66677f"} err="failed to get container status \"bff63c9802c0398fc00e1986f634cb55138bcb056e7756d4c61b3750ac66677f\": rpc error: code = NotFound desc = could not find container \"bff63c9802c0398fc00e1986f634cb55138bcb056e7756d4c61b3750ac66677f\": container with ID starting with bff63c9802c0398fc00e1986f634cb55138bcb056e7756d4c61b3750ac66677f not found: ID does not exist" Jan 21 13:16:47 crc kubenswrapper[4881]: I0121 13:16:47.888212 4881 scope.go:117] "RemoveContainer" containerID="888c145ca396a50869d27c201487ff33c86b2ddf4c4044b3820855e98578a9e7" Jan 21 13:16:47 crc kubenswrapper[4881]: E0121 13:16:47.888505 4881 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"888c145ca396a50869d27c201487ff33c86b2ddf4c4044b3820855e98578a9e7\": container with ID starting with 888c145ca396a50869d27c201487ff33c86b2ddf4c4044b3820855e98578a9e7 not found: ID does not exist" containerID="888c145ca396a50869d27c201487ff33c86b2ddf4c4044b3820855e98578a9e7" Jan 21 13:16:47 crc kubenswrapper[4881]: I0121 13:16:47.888549 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"888c145ca396a50869d27c201487ff33c86b2ddf4c4044b3820855e98578a9e7"} err="failed to get container status \"888c145ca396a50869d27c201487ff33c86b2ddf4c4044b3820855e98578a9e7\": rpc error: code = NotFound desc = could not find container \"888c145ca396a50869d27c201487ff33c86b2ddf4c4044b3820855e98578a9e7\": container with ID starting with 888c145ca396a50869d27c201487ff33c86b2ddf4c4044b3820855e98578a9e7 not found: ID does not exist" Jan 21 13:16:49 crc kubenswrapper[4881]: I0121 13:16:49.334868 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="66383caa-595c-4dad-b9a9-a2878ef04277" path="/var/lib/kubelet/pods/66383caa-595c-4dad-b9a9-a2878ef04277/volumes" Jan 21 13:16:50 crc kubenswrapper[4881]: I0121 13:16:50.310900 4881 scope.go:117] "RemoveContainer" containerID="552bc8eceba0a6e2de711a18f9749360715dc3263cb9e778a7c7f74c86bf256a" Jan 21 13:16:50 crc kubenswrapper[4881]: E0121 13:16:50.311281 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 13:17:01 crc kubenswrapper[4881]: I0121 13:17:01.319345 4881 scope.go:117] "RemoveContainer" containerID="552bc8eceba0a6e2de711a18f9749360715dc3263cb9e778a7c7f74c86bf256a" Jan 21 13:17:01 crc kubenswrapper[4881]: E0121 13:17:01.320391 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 13:17:16 crc kubenswrapper[4881]: I0121 13:17:16.311455 4881 scope.go:117] "RemoveContainer" containerID="552bc8eceba0a6e2de711a18f9749360715dc3263cb9e778a7c7f74c86bf256a" Jan 21 13:17:16 crc kubenswrapper[4881]: E0121 13:17:16.312278 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 13:17:31 crc kubenswrapper[4881]: I0121 13:17:31.312054 4881 scope.go:117] "RemoveContainer" containerID="552bc8eceba0a6e2de711a18f9749360715dc3263cb9e778a7c7f74c86bf256a" Jan 21 13:17:32 crc kubenswrapper[4881]: I0121 13:17:32.303258 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" event={"ID":"3687b313-1df2-4274-80db-8c758b51bf2d","Type":"ContainerStarted","Data":"3ae329a055e11a6e18e47ddb94b164ca6b139ccd6dac8d7c44083794de49a8f4"} Jan 21 13:19:59 crc kubenswrapper[4881]: I0121 13:19:59.851622 4881 patch_prober.go:28] interesting pod/machine-config-daemon-fb4fr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 13:19:59 crc kubenswrapper[4881]: I0121 13:19:59.853500 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 13:20:29 crc kubenswrapper[4881]: I0121 13:20:29.851074 4881 patch_prober.go:28] interesting pod/machine-config-daemon-fb4fr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 13:20:29 crc kubenswrapper[4881]: I0121 13:20:29.852842 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 13:20:52 crc kubenswrapper[4881]: I0121 13:20:52.446114 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-w2b5n"] Jan 21 13:20:52 crc kubenswrapper[4881]: E0121 13:20:52.447445 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66383caa-595c-4dad-b9a9-a2878ef04277" containerName="registry-server" Jan 21 13:20:52 crc kubenswrapper[4881]: I0121 13:20:52.447479 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="66383caa-595c-4dad-b9a9-a2878ef04277" containerName="registry-server" Jan 21 13:20:52 crc kubenswrapper[4881]: E0121 13:20:52.447512 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66383caa-595c-4dad-b9a9-a2878ef04277" containerName="extract-content" Jan 21 13:20:52 crc kubenswrapper[4881]: I0121 13:20:52.447520 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="66383caa-595c-4dad-b9a9-a2878ef04277" containerName="extract-content" Jan 21 13:20:52 crc kubenswrapper[4881]: E0121 13:20:52.447541 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66383caa-595c-4dad-b9a9-a2878ef04277" containerName="extract-utilities" Jan 21 13:20:52 crc kubenswrapper[4881]: I0121 13:20:52.447550 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="66383caa-595c-4dad-b9a9-a2878ef04277" containerName="extract-utilities" Jan 21 13:20:52 crc kubenswrapper[4881]: I0121 13:20:52.447833 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="66383caa-595c-4dad-b9a9-a2878ef04277" containerName="registry-server" Jan 21 13:20:52 crc kubenswrapper[4881]: I0121 13:20:52.450022 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-w2b5n" Jan 21 13:20:52 crc kubenswrapper[4881]: I0121 13:20:52.476287 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-w2b5n"] Jan 21 13:20:52 crc kubenswrapper[4881]: I0121 13:20:52.628448 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db19ebef-05c6-4b18-9143-641c362c472a-utilities\") pod \"redhat-operators-w2b5n\" (UID: \"db19ebef-05c6-4b18-9143-641c362c472a\") " pod="openshift-marketplace/redhat-operators-w2b5n" Jan 21 13:20:52 crc kubenswrapper[4881]: I0121 13:20:52.628526 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k5mcz\" (UniqueName: \"kubernetes.io/projected/db19ebef-05c6-4b18-9143-641c362c472a-kube-api-access-k5mcz\") pod \"redhat-operators-w2b5n\" (UID: \"db19ebef-05c6-4b18-9143-641c362c472a\") " pod="openshift-marketplace/redhat-operators-w2b5n" Jan 21 13:20:52 crc kubenswrapper[4881]: I0121 13:20:52.628747 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db19ebef-05c6-4b18-9143-641c362c472a-catalog-content\") pod \"redhat-operators-w2b5n\" (UID: \"db19ebef-05c6-4b18-9143-641c362c472a\") " pod="openshift-marketplace/redhat-operators-w2b5n" Jan 21 13:20:52 crc kubenswrapper[4881]: I0121 13:20:52.730931 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db19ebef-05c6-4b18-9143-641c362c472a-catalog-content\") pod \"redhat-operators-w2b5n\" (UID: \"db19ebef-05c6-4b18-9143-641c362c472a\") " pod="openshift-marketplace/redhat-operators-w2b5n" Jan 21 13:20:52 crc kubenswrapper[4881]: I0121 13:20:52.731200 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db19ebef-05c6-4b18-9143-641c362c472a-utilities\") pod \"redhat-operators-w2b5n\" (UID: \"db19ebef-05c6-4b18-9143-641c362c472a\") " pod="openshift-marketplace/redhat-operators-w2b5n" Jan 21 13:20:52 crc kubenswrapper[4881]: I0121 13:20:52.731236 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k5mcz\" (UniqueName: \"kubernetes.io/projected/db19ebef-05c6-4b18-9143-641c362c472a-kube-api-access-k5mcz\") pod \"redhat-operators-w2b5n\" (UID: \"db19ebef-05c6-4b18-9143-641c362c472a\") " pod="openshift-marketplace/redhat-operators-w2b5n" Jan 21 13:20:52 crc kubenswrapper[4881]: I0121 13:20:52.733001 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db19ebef-05c6-4b18-9143-641c362c472a-utilities\") pod \"redhat-operators-w2b5n\" (UID: \"db19ebef-05c6-4b18-9143-641c362c472a\") " pod="openshift-marketplace/redhat-operators-w2b5n" Jan 21 13:20:52 crc kubenswrapper[4881]: I0121 13:20:52.733437 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db19ebef-05c6-4b18-9143-641c362c472a-catalog-content\") pod \"redhat-operators-w2b5n\" (UID: \"db19ebef-05c6-4b18-9143-641c362c472a\") " pod="openshift-marketplace/redhat-operators-w2b5n" Jan 21 13:20:52 crc kubenswrapper[4881]: I0121 13:20:52.762540 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-k5mcz\" (UniqueName: \"kubernetes.io/projected/db19ebef-05c6-4b18-9143-641c362c472a-kube-api-access-k5mcz\") pod \"redhat-operators-w2b5n\" (UID: \"db19ebef-05c6-4b18-9143-641c362c472a\") " pod="openshift-marketplace/redhat-operators-w2b5n" Jan 21 13:20:52 crc kubenswrapper[4881]: I0121 13:20:52.779924 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-w2b5n" Jan 21 13:20:53 crc kubenswrapper[4881]: I0121 13:20:53.750985 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-w2b5n"] Jan 21 13:20:54 crc kubenswrapper[4881]: I0121 13:20:54.749460 4881 generic.go:334] "Generic (PLEG): container finished" podID="db19ebef-05c6-4b18-9143-641c362c472a" containerID="6dbe2048b7bc3a11cae2e8d7d9c920a0149d21d882e0a5f95950ab0f8e3a03a8" exitCode=0 Jan 21 13:20:54 crc kubenswrapper[4881]: I0121 13:20:54.749845 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-w2b5n" event={"ID":"db19ebef-05c6-4b18-9143-641c362c472a","Type":"ContainerDied","Data":"6dbe2048b7bc3a11cae2e8d7d9c920a0149d21d882e0a5f95950ab0f8e3a03a8"} Jan 21 13:20:54 crc kubenswrapper[4881]: I0121 13:20:54.749908 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-w2b5n" event={"ID":"db19ebef-05c6-4b18-9143-641c362c472a","Type":"ContainerStarted","Data":"ddf4df98d45221ed009798fb432f66248e0003d2feeb478daa19954df3572ec4"} Jan 21 13:20:54 crc kubenswrapper[4881]: I0121 13:20:54.752657 4881 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 13:20:55 crc kubenswrapper[4881]: I0121 13:20:55.760936 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-w2b5n" event={"ID":"db19ebef-05c6-4b18-9143-641c362c472a","Type":"ContainerStarted","Data":"29cdfe4f4a58d6bfbac2fda27216d3e6f54bfa02a8c3395266542bab5b1d563e"} Jan 21 13:20:59 crc kubenswrapper[4881]: I0121 13:20:59.811250 4881 generic.go:334] "Generic (PLEG): container finished" podID="db19ebef-05c6-4b18-9143-641c362c472a" containerID="29cdfe4f4a58d6bfbac2fda27216d3e6f54bfa02a8c3395266542bab5b1d563e" exitCode=0 Jan 21 13:20:59 crc kubenswrapper[4881]: I0121 13:20:59.811345 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-w2b5n" event={"ID":"db19ebef-05c6-4b18-9143-641c362c472a","Type":"ContainerDied","Data":"29cdfe4f4a58d6bfbac2fda27216d3e6f54bfa02a8c3395266542bab5b1d563e"} Jan 21 13:20:59 crc kubenswrapper[4881]: I0121 13:20:59.851159 4881 patch_prober.go:28] interesting pod/machine-config-daemon-fb4fr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 13:20:59 crc kubenswrapper[4881]: I0121 13:20:59.851252 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 13:20:59 crc kubenswrapper[4881]: I0121 13:20:59.851300 4881 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" Jan 21 13:20:59 crc 
Jan 21 13:20:59 crc kubenswrapper[4881]: I0121 13:20:59.852179 4881 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3ae329a055e11a6e18e47ddb94b164ca6b139ccd6dac8d7c44083794de49a8f4"} pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 21 13:20:59 crc kubenswrapper[4881]: I0121 13:20:59.852257 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" containerName="machine-config-daemon" containerID="cri-o://3ae329a055e11a6e18e47ddb94b164ca6b139ccd6dac8d7c44083794de49a8f4" gracePeriod=600
Jan 21 13:21:00 crc kubenswrapper[4881]: I0121 13:21:00.827831 4881 generic.go:334] "Generic (PLEG): container finished" podID="3687b313-1df2-4274-80db-8c758b51bf2d" containerID="3ae329a055e11a6e18e47ddb94b164ca6b139ccd6dac8d7c44083794de49a8f4" exitCode=0
Jan 21 13:21:00 crc kubenswrapper[4881]: I0121 13:21:00.827894 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" event={"ID":"3687b313-1df2-4274-80db-8c758b51bf2d","Type":"ContainerDied","Data":"3ae329a055e11a6e18e47ddb94b164ca6b139ccd6dac8d7c44083794de49a8f4"}
Jan 21 13:21:00 crc kubenswrapper[4881]: I0121 13:21:00.828341 4881 scope.go:117] "RemoveContainer" containerID="552bc8eceba0a6e2de711a18f9749360715dc3263cb9e778a7c7f74c86bf256a"
Jan 21 13:21:01 crc kubenswrapper[4881]: I0121 13:21:01.844587 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" event={"ID":"3687b313-1df2-4274-80db-8c758b51bf2d","Type":"ContainerStarted","Data":"4886b3658c033dd32099ac18722859936d425d0f630ab9d495cc3181594e8fc4"}
Jan 21 13:21:03 crc kubenswrapper[4881]: I0121 13:21:03.866116 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-w2b5n" event={"ID":"db19ebef-05c6-4b18-9143-641c362c472a","Type":"ContainerStarted","Data":"18fba6b402b1f7f76f16f037f9cbdc79db022c2952c74431b3a0f07a73053da5"}
Jan 21 13:21:03 crc kubenswrapper[4881]: I0121 13:21:03.911446 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-w2b5n" podStartSLOduration=3.526557338 podStartE2EDuration="11.911400621s" podCreationTimestamp="2026-01-21 13:20:52 +0000 UTC" firstStartedPulling="2026-01-21 13:20:54.752240727 +0000 UTC m=+8642.012197196" lastFinishedPulling="2026-01-21 13:21:03.13708401 +0000 UTC m=+8650.397040479" observedRunningTime="2026-01-21 13:21:03.894176328 +0000 UTC m=+8651.154132797" watchObservedRunningTime="2026-01-21 13:21:03.911400621 +0000 UTC m=+8651.171357090"
Jan 21 13:21:12 crc kubenswrapper[4881]: I0121 13:21:12.785317 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-w2b5n"
Jan 21 13:21:12 crc kubenswrapper[4881]: I0121 13:21:12.786012 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-w2b5n"
Jan 21 13:21:12 crc kubenswrapper[4881]: I0121 13:21:12.846041 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-w2b5n"
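
The pod_startup_latency_tracker entry above carries enough fields to check its own arithmetic: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp (13:21:03.911400621 - 13:20:52 = 11.911400621s), and podStartSLOduration is that end-to-end figure minus the image-pull window lastFinishedPulling - firstStartedPulling (11.911400621 - 8.384843283 = 3.526557338s). A small Go check of the same numbers, as a reading of the logged fields rather than kubelet's internal code:

// Verifies the startup-latency arithmetic using the timestamps from the log.
package main

import (
    "fmt"
    "time"
)

func main() {
    parse := func(s string) time.Time {
        t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
        if err != nil {
            panic(err)
        }
        return t
    }
    created := parse("2026-01-21 13:20:52 +0000 UTC")
    firstPull := parse("2026-01-21 13:20:54.752240727 +0000 UTC")
    lastPull := parse("2026-01-21 13:21:03.13708401 +0000 UTC")
    running := parse("2026-01-21 13:21:03.911400621 +0000 UTC") // watchObservedRunningTime

    e2e := running.Sub(created)          // 11.911400621s, matches podStartE2EDuration
    slo := e2e - lastPull.Sub(firstPull) // 3.526557338s, matches podStartSLOduration
    fmt.Println(e2e, slo)
}
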
status="ready" pod="openshift-marketplace/redhat-operators-w2b5n" Jan 21 13:21:13 crc kubenswrapper[4881]: I0121 13:21:13.091233 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-w2b5n"] Jan 21 13:21:14 crc kubenswrapper[4881]: I0121 13:21:14.987132 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-w2b5n" podUID="db19ebef-05c6-4b18-9143-641c362c472a" containerName="registry-server" containerID="cri-o://18fba6b402b1f7f76f16f037f9cbdc79db022c2952c74431b3a0f07a73053da5" gracePeriod=2 Jan 21 13:21:15 crc kubenswrapper[4881]: I0121 13:21:15.504363 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-w2b5n" Jan 21 13:21:15 crc kubenswrapper[4881]: I0121 13:21:15.683302 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db19ebef-05c6-4b18-9143-641c362c472a-catalog-content\") pod \"db19ebef-05c6-4b18-9143-641c362c472a\" (UID: \"db19ebef-05c6-4b18-9143-641c362c472a\") " Jan 21 13:21:15 crc kubenswrapper[4881]: I0121 13:21:15.683801 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db19ebef-05c6-4b18-9143-641c362c472a-utilities\") pod \"db19ebef-05c6-4b18-9143-641c362c472a\" (UID: \"db19ebef-05c6-4b18-9143-641c362c472a\") " Jan 21 13:21:15 crc kubenswrapper[4881]: I0121 13:21:15.683875 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k5mcz\" (UniqueName: \"kubernetes.io/projected/db19ebef-05c6-4b18-9143-641c362c472a-kube-api-access-k5mcz\") pod \"db19ebef-05c6-4b18-9143-641c362c472a\" (UID: \"db19ebef-05c6-4b18-9143-641c362c472a\") " Jan 21 13:21:15 crc kubenswrapper[4881]: I0121 13:21:15.685158 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/db19ebef-05c6-4b18-9143-641c362c472a-utilities" (OuterVolumeSpecName: "utilities") pod "db19ebef-05c6-4b18-9143-641c362c472a" (UID: "db19ebef-05c6-4b18-9143-641c362c472a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 13:21:15 crc kubenswrapper[4881]: I0121 13:21:15.692147 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db19ebef-05c6-4b18-9143-641c362c472a-kube-api-access-k5mcz" (OuterVolumeSpecName: "kube-api-access-k5mcz") pod "db19ebef-05c6-4b18-9143-641c362c472a" (UID: "db19ebef-05c6-4b18-9143-641c362c472a"). InnerVolumeSpecName "kube-api-access-k5mcz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:21:15 crc kubenswrapper[4881]: I0121 13:21:15.786260 4881 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db19ebef-05c6-4b18-9143-641c362c472a-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 13:21:15 crc kubenswrapper[4881]: I0121 13:21:15.786301 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k5mcz\" (UniqueName: \"kubernetes.io/projected/db19ebef-05c6-4b18-9143-641c362c472a-kube-api-access-k5mcz\") on node \"crc\" DevicePath \"\"" Jan 21 13:21:15 crc kubenswrapper[4881]: I0121 13:21:15.812407 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/db19ebef-05c6-4b18-9143-641c362c472a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "db19ebef-05c6-4b18-9143-641c362c472a" (UID: "db19ebef-05c6-4b18-9143-641c362c472a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 13:21:15 crc kubenswrapper[4881]: I0121 13:21:15.889310 4881 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db19ebef-05c6-4b18-9143-641c362c472a-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 13:21:16 crc kubenswrapper[4881]: I0121 13:21:16.002748 4881 generic.go:334] "Generic (PLEG): container finished" podID="db19ebef-05c6-4b18-9143-641c362c472a" containerID="18fba6b402b1f7f76f16f037f9cbdc79db022c2952c74431b3a0f07a73053da5" exitCode=0 Jan 21 13:21:16 crc kubenswrapper[4881]: I0121 13:21:16.002818 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-w2b5n" event={"ID":"db19ebef-05c6-4b18-9143-641c362c472a","Type":"ContainerDied","Data":"18fba6b402b1f7f76f16f037f9cbdc79db022c2952c74431b3a0f07a73053da5"} Jan 21 13:21:16 crc kubenswrapper[4881]: I0121 13:21:16.002859 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-w2b5n" event={"ID":"db19ebef-05c6-4b18-9143-641c362c472a","Type":"ContainerDied","Data":"ddf4df98d45221ed009798fb432f66248e0003d2feeb478daa19954df3572ec4"} Jan 21 13:21:16 crc kubenswrapper[4881]: I0121 13:21:16.002884 4881 scope.go:117] "RemoveContainer" containerID="18fba6b402b1f7f76f16f037f9cbdc79db022c2952c74431b3a0f07a73053da5" Jan 21 13:21:16 crc kubenswrapper[4881]: I0121 13:21:16.003084 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-w2b5n" Jan 21 13:21:16 crc kubenswrapper[4881]: I0121 13:21:16.035859 4881 scope.go:117] "RemoveContainer" containerID="29cdfe4f4a58d6bfbac2fda27216d3e6f54bfa02a8c3395266542bab5b1d563e" Jan 21 13:21:16 crc kubenswrapper[4881]: I0121 13:21:16.069016 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-w2b5n"] Jan 21 13:21:16 crc kubenswrapper[4881]: I0121 13:21:16.090218 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-w2b5n"] Jan 21 13:21:16 crc kubenswrapper[4881]: I0121 13:21:16.090881 4881 scope.go:117] "RemoveContainer" containerID="6dbe2048b7bc3a11cae2e8d7d9c920a0149d21d882e0a5f95950ab0f8e3a03a8" Jan 21 13:21:16 crc kubenswrapper[4881]: I0121 13:21:16.138832 4881 scope.go:117] "RemoveContainer" containerID="18fba6b402b1f7f76f16f037f9cbdc79db022c2952c74431b3a0f07a73053da5" Jan 21 13:21:16 crc kubenswrapper[4881]: E0121 13:21:16.139512 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"18fba6b402b1f7f76f16f037f9cbdc79db022c2952c74431b3a0f07a73053da5\": container with ID starting with 18fba6b402b1f7f76f16f037f9cbdc79db022c2952c74431b3a0f07a73053da5 not found: ID does not exist" containerID="18fba6b402b1f7f76f16f037f9cbdc79db022c2952c74431b3a0f07a73053da5" Jan 21 13:21:16 crc kubenswrapper[4881]: I0121 13:21:16.139582 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"18fba6b402b1f7f76f16f037f9cbdc79db022c2952c74431b3a0f07a73053da5"} err="failed to get container status \"18fba6b402b1f7f76f16f037f9cbdc79db022c2952c74431b3a0f07a73053da5\": rpc error: code = NotFound desc = could not find container \"18fba6b402b1f7f76f16f037f9cbdc79db022c2952c74431b3a0f07a73053da5\": container with ID starting with 18fba6b402b1f7f76f16f037f9cbdc79db022c2952c74431b3a0f07a73053da5 not found: ID does not exist" Jan 21 13:21:16 crc kubenswrapper[4881]: I0121 13:21:16.139621 4881 scope.go:117] "RemoveContainer" containerID="29cdfe4f4a58d6bfbac2fda27216d3e6f54bfa02a8c3395266542bab5b1d563e" Jan 21 13:21:16 crc kubenswrapper[4881]: E0121 13:21:16.140294 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"29cdfe4f4a58d6bfbac2fda27216d3e6f54bfa02a8c3395266542bab5b1d563e\": container with ID starting with 29cdfe4f4a58d6bfbac2fda27216d3e6f54bfa02a8c3395266542bab5b1d563e not found: ID does not exist" containerID="29cdfe4f4a58d6bfbac2fda27216d3e6f54bfa02a8c3395266542bab5b1d563e" Jan 21 13:21:16 crc kubenswrapper[4881]: I0121 13:21:16.140347 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"29cdfe4f4a58d6bfbac2fda27216d3e6f54bfa02a8c3395266542bab5b1d563e"} err="failed to get container status \"29cdfe4f4a58d6bfbac2fda27216d3e6f54bfa02a8c3395266542bab5b1d563e\": rpc error: code = NotFound desc = could not find container \"29cdfe4f4a58d6bfbac2fda27216d3e6f54bfa02a8c3395266542bab5b1d563e\": container with ID starting with 29cdfe4f4a58d6bfbac2fda27216d3e6f54bfa02a8c3395266542bab5b1d563e not found: ID does not exist" Jan 21 13:21:16 crc kubenswrapper[4881]: I0121 13:21:16.140385 4881 scope.go:117] "RemoveContainer" containerID="6dbe2048b7bc3a11cae2e8d7d9c920a0149d21d882e0a5f95950ab0f8e3a03a8" Jan 21 13:21:16 crc kubenswrapper[4881]: E0121 13:21:16.140760 4881 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"6dbe2048b7bc3a11cae2e8d7d9c920a0149d21d882e0a5f95950ab0f8e3a03a8\": container with ID starting with 6dbe2048b7bc3a11cae2e8d7d9c920a0149d21d882e0a5f95950ab0f8e3a03a8 not found: ID does not exist" containerID="6dbe2048b7bc3a11cae2e8d7d9c920a0149d21d882e0a5f95950ab0f8e3a03a8" Jan 21 13:21:16 crc kubenswrapper[4881]: I0121 13:21:16.140824 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6dbe2048b7bc3a11cae2e8d7d9c920a0149d21d882e0a5f95950ab0f8e3a03a8"} err="failed to get container status \"6dbe2048b7bc3a11cae2e8d7d9c920a0149d21d882e0a5f95950ab0f8e3a03a8\": rpc error: code = NotFound desc = could not find container \"6dbe2048b7bc3a11cae2e8d7d9c920a0149d21d882e0a5f95950ab0f8e3a03a8\": container with ID starting with 6dbe2048b7bc3a11cae2e8d7d9c920a0149d21d882e0a5f95950ab0f8e3a03a8 not found: ID does not exist" Jan 21 13:21:17 crc kubenswrapper[4881]: I0121 13:21:17.325335 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="db19ebef-05c6-4b18-9143-641c362c472a" path="/var/lib/kubelet/pods/db19ebef-05c6-4b18-9143-641c362c472a/volumes" Jan 21 13:21:36 crc kubenswrapper[4881]: I0121 13:21:36.188160 4881 scope.go:117] "RemoveContainer" containerID="b2480cdd412677da34ca1262943186b4f02a412993e268c2cc5a3c46d5441e61" Jan 21 13:21:36 crc kubenswrapper[4881]: I0121 13:21:36.232780 4881 scope.go:117] "RemoveContainer" containerID="7905ef1bd8eb4c2a74ecd66dee0f7a7d01738c48ab72e0bfb49efb8ba199940b" Jan 21 13:21:36 crc kubenswrapper[4881]: I0121 13:21:36.262535 4881 scope.go:117] "RemoveContainer" containerID="1f0cf2aba23d64564f86d3e47e178b26c66b88713e2c1b4e63ada03ff3001e47" Jan 21 13:21:54 crc kubenswrapper[4881]: E0121 13:21:54.351596 4881 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.042s" Jan 21 13:23:29 crc kubenswrapper[4881]: I0121 13:23:29.851378 4881 patch_prober.go:28] interesting pod/machine-config-daemon-fb4fr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 13:23:29 crc kubenswrapper[4881]: I0121 13:23:29.852219 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 13:23:47 crc kubenswrapper[4881]: I0121 13:23:47.242922 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-vhjdq"] Jan 21 13:23:47 crc kubenswrapper[4881]: E0121 13:23:47.245396 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db19ebef-05c6-4b18-9143-641c362c472a" containerName="registry-server" Jan 21 13:23:47 crc kubenswrapper[4881]: I0121 13:23:47.245496 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="db19ebef-05c6-4b18-9143-641c362c472a" containerName="registry-server" Jan 21 13:23:47 crc kubenswrapper[4881]: E0121 13:23:47.245572 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db19ebef-05c6-4b18-9143-641c362c472a" containerName="extract-utilities" Jan 21 13:23:47 crc kubenswrapper[4881]: I0121 13:23:47.245633 4881 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="db19ebef-05c6-4b18-9143-641c362c472a" containerName="extract-utilities" Jan 21 13:23:47 crc kubenswrapper[4881]: E0121 13:23:47.245828 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db19ebef-05c6-4b18-9143-641c362c472a" containerName="extract-content" Jan 21 13:23:47 crc kubenswrapper[4881]: I0121 13:23:47.245899 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="db19ebef-05c6-4b18-9143-641c362c472a" containerName="extract-content" Jan 21 13:23:47 crc kubenswrapper[4881]: I0121 13:23:47.247294 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="db19ebef-05c6-4b18-9143-641c362c472a" containerName="registry-server" Jan 21 13:23:47 crc kubenswrapper[4881]: I0121 13:23:47.251678 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vhjdq" Jan 21 13:23:47 crc kubenswrapper[4881]: I0121 13:23:47.257615 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vhjdq"] Jan 21 13:23:47 crc kubenswrapper[4881]: I0121 13:23:47.378959 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ftpn2\" (UniqueName: \"kubernetes.io/projected/e5931128-9209-474d-b0c0-430405aba54d-kube-api-access-ftpn2\") pod \"community-operators-vhjdq\" (UID: \"e5931128-9209-474d-b0c0-430405aba54d\") " pod="openshift-marketplace/community-operators-vhjdq" Jan 21 13:23:47 crc kubenswrapper[4881]: I0121 13:23:47.379440 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e5931128-9209-474d-b0c0-430405aba54d-utilities\") pod \"community-operators-vhjdq\" (UID: \"e5931128-9209-474d-b0c0-430405aba54d\") " pod="openshift-marketplace/community-operators-vhjdq" Jan 21 13:23:47 crc kubenswrapper[4881]: I0121 13:23:47.379568 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e5931128-9209-474d-b0c0-430405aba54d-catalog-content\") pod \"community-operators-vhjdq\" (UID: \"e5931128-9209-474d-b0c0-430405aba54d\") " pod="openshift-marketplace/community-operators-vhjdq" Jan 21 13:23:47 crc kubenswrapper[4881]: I0121 13:23:47.481934 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e5931128-9209-474d-b0c0-430405aba54d-utilities\") pod \"community-operators-vhjdq\" (UID: \"e5931128-9209-474d-b0c0-430405aba54d\") " pod="openshift-marketplace/community-operators-vhjdq" Jan 21 13:23:47 crc kubenswrapper[4881]: I0121 13:23:47.482073 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e5931128-9209-474d-b0c0-430405aba54d-catalog-content\") pod \"community-operators-vhjdq\" (UID: \"e5931128-9209-474d-b0c0-430405aba54d\") " pod="openshift-marketplace/community-operators-vhjdq" Jan 21 13:23:47 crc kubenswrapper[4881]: I0121 13:23:47.482221 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ftpn2\" (UniqueName: \"kubernetes.io/projected/e5931128-9209-474d-b0c0-430405aba54d-kube-api-access-ftpn2\") pod \"community-operators-vhjdq\" (UID: \"e5931128-9209-474d-b0c0-430405aba54d\") " pod="openshift-marketplace/community-operators-vhjdq" Jan 21 13:23:47 crc kubenswrapper[4881]: I0121 13:23:47.483342 4881 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e5931128-9209-474d-b0c0-430405aba54d-utilities\") pod \"community-operators-vhjdq\" (UID: \"e5931128-9209-474d-b0c0-430405aba54d\") " pod="openshift-marketplace/community-operators-vhjdq" Jan 21 13:23:47 crc kubenswrapper[4881]: I0121 13:23:47.484628 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e5931128-9209-474d-b0c0-430405aba54d-catalog-content\") pod \"community-operators-vhjdq\" (UID: \"e5931128-9209-474d-b0c0-430405aba54d\") " pod="openshift-marketplace/community-operators-vhjdq" Jan 21 13:23:47 crc kubenswrapper[4881]: I0121 13:23:47.519431 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ftpn2\" (UniqueName: \"kubernetes.io/projected/e5931128-9209-474d-b0c0-430405aba54d-kube-api-access-ftpn2\") pod \"community-operators-vhjdq\" (UID: \"e5931128-9209-474d-b0c0-430405aba54d\") " pod="openshift-marketplace/community-operators-vhjdq" Jan 21 13:23:47 crc kubenswrapper[4881]: I0121 13:23:47.611874 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vhjdq" Jan 21 13:23:48 crc kubenswrapper[4881]: I0121 13:23:48.247370 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vhjdq"] Jan 21 13:23:48 crc kubenswrapper[4881]: I0121 13:23:48.876893 4881 generic.go:334] "Generic (PLEG): container finished" podID="e5931128-9209-474d-b0c0-430405aba54d" containerID="1dcf807907c7c61b3327bd841ffb67f6eaa94ff76bb682a90885d8c3edaa4561" exitCode=0 Jan 21 13:23:48 crc kubenswrapper[4881]: I0121 13:23:48.876963 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vhjdq" event={"ID":"e5931128-9209-474d-b0c0-430405aba54d","Type":"ContainerDied","Data":"1dcf807907c7c61b3327bd841ffb67f6eaa94ff76bb682a90885d8c3edaa4561"} Jan 21 13:23:48 crc kubenswrapper[4881]: I0121 13:23:48.877007 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vhjdq" event={"ID":"e5931128-9209-474d-b0c0-430405aba54d","Type":"ContainerStarted","Data":"4fee55b896d0ecbf9818e45d47464bfc2bc9c8ad108315cfabe6f1907d2c198c"} Jan 21 13:23:50 crc kubenswrapper[4881]: I0121 13:23:50.898316 4881 generic.go:334] "Generic (PLEG): container finished" podID="e5931128-9209-474d-b0c0-430405aba54d" containerID="fcc04433138715defe86fdd8c9275c671e551753f371db62c8278823285624d2" exitCode=0 Jan 21 13:23:50 crc kubenswrapper[4881]: I0121 13:23:50.899658 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vhjdq" event={"ID":"e5931128-9209-474d-b0c0-430405aba54d","Type":"ContainerDied","Data":"fcc04433138715defe86fdd8c9275c671e551753f371db62c8278823285624d2"} Jan 21 13:23:51 crc kubenswrapper[4881]: I0121 13:23:51.911616 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vhjdq" event={"ID":"e5931128-9209-474d-b0c0-430405aba54d","Type":"ContainerStarted","Data":"9b34621f5a1bcd3c8d5cb4fca7e17a589967058ec4af1cef1580ef5949ba4bc7"} Jan 21 13:23:51 crc kubenswrapper[4881]: I0121 13:23:51.944564 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-vhjdq" podStartSLOduration=2.4966168460000002 podStartE2EDuration="4.944539616s" 
podCreationTimestamp="2026-01-21 13:23:47 +0000 UTC" firstStartedPulling="2026-01-21 13:23:48.879889269 +0000 UTC m=+8816.139845738" lastFinishedPulling="2026-01-21 13:23:51.327812039 +0000 UTC m=+8818.587768508" observedRunningTime="2026-01-21 13:23:51.936535522 +0000 UTC m=+8819.196492011" watchObservedRunningTime="2026-01-21 13:23:51.944539616 +0000 UTC m=+8819.204496085" Jan 21 13:23:57 crc kubenswrapper[4881]: I0121 13:23:57.620490 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-vhjdq" Jan 21 13:23:57 crc kubenswrapper[4881]: I0121 13:23:57.622008 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-vhjdq" Jan 21 13:23:57 crc kubenswrapper[4881]: I0121 13:23:57.701860 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-vhjdq" Jan 21 13:23:58 crc kubenswrapper[4881]: I0121 13:23:58.045269 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-vhjdq" Jan 21 13:23:58 crc kubenswrapper[4881]: I0121 13:23:58.115454 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-vhjdq"] Jan 21 13:23:59 crc kubenswrapper[4881]: I0121 13:23:59.851257 4881 patch_prober.go:28] interesting pod/machine-config-daemon-fb4fr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 13:23:59 crc kubenswrapper[4881]: I0121 13:23:59.851869 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 13:24:00 crc kubenswrapper[4881]: I0121 13:24:00.008699 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-vhjdq" podUID="e5931128-9209-474d-b0c0-430405aba54d" containerName="registry-server" containerID="cri-o://9b34621f5a1bcd3c8d5cb4fca7e17a589967058ec4af1cef1580ef5949ba4bc7" gracePeriod=2 Jan 21 13:24:00 crc kubenswrapper[4881]: I0121 13:24:00.530780 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-vhjdq" Jan 21 13:24:00 crc kubenswrapper[4881]: I0121 13:24:00.592022 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e5931128-9209-474d-b0c0-430405aba54d-catalog-content\") pod \"e5931128-9209-474d-b0c0-430405aba54d\" (UID: \"e5931128-9209-474d-b0c0-430405aba54d\") " Jan 21 13:24:00 crc kubenswrapper[4881]: I0121 13:24:00.592297 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ftpn2\" (UniqueName: \"kubernetes.io/projected/e5931128-9209-474d-b0c0-430405aba54d-kube-api-access-ftpn2\") pod \"e5931128-9209-474d-b0c0-430405aba54d\" (UID: \"e5931128-9209-474d-b0c0-430405aba54d\") " Jan 21 13:24:00 crc kubenswrapper[4881]: I0121 13:24:00.592347 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e5931128-9209-474d-b0c0-430405aba54d-utilities\") pod \"e5931128-9209-474d-b0c0-430405aba54d\" (UID: \"e5931128-9209-474d-b0c0-430405aba54d\") " Jan 21 13:24:00 crc kubenswrapper[4881]: I0121 13:24:00.593200 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e5931128-9209-474d-b0c0-430405aba54d-utilities" (OuterVolumeSpecName: "utilities") pod "e5931128-9209-474d-b0c0-430405aba54d" (UID: "e5931128-9209-474d-b0c0-430405aba54d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 13:24:00 crc kubenswrapper[4881]: I0121 13:24:00.598701 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e5931128-9209-474d-b0c0-430405aba54d-kube-api-access-ftpn2" (OuterVolumeSpecName: "kube-api-access-ftpn2") pod "e5931128-9209-474d-b0c0-430405aba54d" (UID: "e5931128-9209-474d-b0c0-430405aba54d"). InnerVolumeSpecName "kube-api-access-ftpn2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:24:00 crc kubenswrapper[4881]: I0121 13:24:00.696237 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ftpn2\" (UniqueName: \"kubernetes.io/projected/e5931128-9209-474d-b0c0-430405aba54d-kube-api-access-ftpn2\") on node \"crc\" DevicePath \"\"" Jan 21 13:24:00 crc kubenswrapper[4881]: I0121 13:24:00.696283 4881 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e5931128-9209-474d-b0c0-430405aba54d-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 13:24:00 crc kubenswrapper[4881]: I0121 13:24:00.967135 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e5931128-9209-474d-b0c0-430405aba54d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e5931128-9209-474d-b0c0-430405aba54d" (UID: "e5931128-9209-474d-b0c0-430405aba54d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 13:24:01 crc kubenswrapper[4881]: I0121 13:24:01.009190 4881 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e5931128-9209-474d-b0c0-430405aba54d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 13:24:01 crc kubenswrapper[4881]: I0121 13:24:01.026541 4881 generic.go:334] "Generic (PLEG): container finished" podID="e5931128-9209-474d-b0c0-430405aba54d" containerID="9b34621f5a1bcd3c8d5cb4fca7e17a589967058ec4af1cef1580ef5949ba4bc7" exitCode=0 Jan 21 13:24:01 crc kubenswrapper[4881]: I0121 13:24:01.026603 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vhjdq" event={"ID":"e5931128-9209-474d-b0c0-430405aba54d","Type":"ContainerDied","Data":"9b34621f5a1bcd3c8d5cb4fca7e17a589967058ec4af1cef1580ef5949ba4bc7"} Jan 21 13:24:01 crc kubenswrapper[4881]: I0121 13:24:01.026633 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vhjdq" event={"ID":"e5931128-9209-474d-b0c0-430405aba54d","Type":"ContainerDied","Data":"4fee55b896d0ecbf9818e45d47464bfc2bc9c8ad108315cfabe6f1907d2c198c"} Jan 21 13:24:01 crc kubenswrapper[4881]: I0121 13:24:01.026650 4881 scope.go:117] "RemoveContainer" containerID="9b34621f5a1bcd3c8d5cb4fca7e17a589967058ec4af1cef1580ef5949ba4bc7" Jan 21 13:24:01 crc kubenswrapper[4881]: I0121 13:24:01.026946 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vhjdq" Jan 21 13:24:01 crc kubenswrapper[4881]: I0121 13:24:01.072525 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-vhjdq"] Jan 21 13:24:01 crc kubenswrapper[4881]: I0121 13:24:01.076993 4881 scope.go:117] "RemoveContainer" containerID="fcc04433138715defe86fdd8c9275c671e551753f371db62c8278823285624d2" Jan 21 13:24:01 crc kubenswrapper[4881]: I0121 13:24:01.086561 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-vhjdq"] Jan 21 13:24:01 crc kubenswrapper[4881]: I0121 13:24:01.115331 4881 scope.go:117] "RemoveContainer" containerID="1dcf807907c7c61b3327bd841ffb67f6eaa94ff76bb682a90885d8c3edaa4561" Jan 21 13:24:01 crc kubenswrapper[4881]: I0121 13:24:01.171070 4881 scope.go:117] "RemoveContainer" containerID="9b34621f5a1bcd3c8d5cb4fca7e17a589967058ec4af1cef1580ef5949ba4bc7" Jan 21 13:24:01 crc kubenswrapper[4881]: E0121 13:24:01.171975 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9b34621f5a1bcd3c8d5cb4fca7e17a589967058ec4af1cef1580ef5949ba4bc7\": container with ID starting with 9b34621f5a1bcd3c8d5cb4fca7e17a589967058ec4af1cef1580ef5949ba4bc7 not found: ID does not exist" containerID="9b34621f5a1bcd3c8d5cb4fca7e17a589967058ec4af1cef1580ef5949ba4bc7" Jan 21 13:24:01 crc kubenswrapper[4881]: I0121 13:24:01.172019 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9b34621f5a1bcd3c8d5cb4fca7e17a589967058ec4af1cef1580ef5949ba4bc7"} err="failed to get container status \"9b34621f5a1bcd3c8d5cb4fca7e17a589967058ec4af1cef1580ef5949ba4bc7\": rpc error: code = NotFound desc = could not find container \"9b34621f5a1bcd3c8d5cb4fca7e17a589967058ec4af1cef1580ef5949ba4bc7\": container with ID starting with 9b34621f5a1bcd3c8d5cb4fca7e17a589967058ec4af1cef1580ef5949ba4bc7 not found: ID does not exist" Jan 21 
13:24:01 crc kubenswrapper[4881]: I0121 13:24:01.172047 4881 scope.go:117] "RemoveContainer" containerID="fcc04433138715defe86fdd8c9275c671e551753f371db62c8278823285624d2" Jan 21 13:24:01 crc kubenswrapper[4881]: E0121 13:24:01.172531 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fcc04433138715defe86fdd8c9275c671e551753f371db62c8278823285624d2\": container with ID starting with fcc04433138715defe86fdd8c9275c671e551753f371db62c8278823285624d2 not found: ID does not exist" containerID="fcc04433138715defe86fdd8c9275c671e551753f371db62c8278823285624d2" Jan 21 13:24:01 crc kubenswrapper[4881]: I0121 13:24:01.172563 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fcc04433138715defe86fdd8c9275c671e551753f371db62c8278823285624d2"} err="failed to get container status \"fcc04433138715defe86fdd8c9275c671e551753f371db62c8278823285624d2\": rpc error: code = NotFound desc = could not find container \"fcc04433138715defe86fdd8c9275c671e551753f371db62c8278823285624d2\": container with ID starting with fcc04433138715defe86fdd8c9275c671e551753f371db62c8278823285624d2 not found: ID does not exist" Jan 21 13:24:01 crc kubenswrapper[4881]: I0121 13:24:01.172580 4881 scope.go:117] "RemoveContainer" containerID="1dcf807907c7c61b3327bd841ffb67f6eaa94ff76bb682a90885d8c3edaa4561" Jan 21 13:24:01 crc kubenswrapper[4881]: E0121 13:24:01.173364 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1dcf807907c7c61b3327bd841ffb67f6eaa94ff76bb682a90885d8c3edaa4561\": container with ID starting with 1dcf807907c7c61b3327bd841ffb67f6eaa94ff76bb682a90885d8c3edaa4561 not found: ID does not exist" containerID="1dcf807907c7c61b3327bd841ffb67f6eaa94ff76bb682a90885d8c3edaa4561" Jan 21 13:24:01 crc kubenswrapper[4881]: I0121 13:24:01.173390 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1dcf807907c7c61b3327bd841ffb67f6eaa94ff76bb682a90885d8c3edaa4561"} err="failed to get container status \"1dcf807907c7c61b3327bd841ffb67f6eaa94ff76bb682a90885d8c3edaa4561\": rpc error: code = NotFound desc = could not find container \"1dcf807907c7c61b3327bd841ffb67f6eaa94ff76bb682a90885d8c3edaa4561\": container with ID starting with 1dcf807907c7c61b3327bd841ffb67f6eaa94ff76bb682a90885d8c3edaa4561 not found: ID does not exist" Jan 21 13:24:01 crc kubenswrapper[4881]: I0121 13:24:01.335343 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e5931128-9209-474d-b0c0-430405aba54d" path="/var/lib/kubelet/pods/e5931128-9209-474d-b0c0-430405aba54d/volumes" Jan 21 13:24:29 crc kubenswrapper[4881]: I0121 13:24:29.851337 4881 patch_prober.go:28] interesting pod/machine-config-daemon-fb4fr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 13:24:29 crc kubenswrapper[4881]: I0121 13:24:29.852043 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 13:24:29 crc kubenswrapper[4881]: I0121 13:24:29.852119 4881 kubelet.go:2542] 
"SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" Jan 21 13:24:29 crc kubenswrapper[4881]: I0121 13:24:29.853249 4881 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"4886b3658c033dd32099ac18722859936d425d0f630ab9d495cc3181594e8fc4"} pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 13:24:29 crc kubenswrapper[4881]: I0121 13:24:29.853326 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" containerName="machine-config-daemon" containerID="cri-o://4886b3658c033dd32099ac18722859936d425d0f630ab9d495cc3181594e8fc4" gracePeriod=600 Jan 21 13:24:29 crc kubenswrapper[4881]: E0121 13:24:29.979527 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 13:24:30 crc kubenswrapper[4881]: I0121 13:24:30.392478 4881 generic.go:334] "Generic (PLEG): container finished" podID="3687b313-1df2-4274-80db-8c758b51bf2d" containerID="4886b3658c033dd32099ac18722859936d425d0f630ab9d495cc3181594e8fc4" exitCode=0 Jan 21 13:24:30 crc kubenswrapper[4881]: I0121 13:24:30.392739 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" event={"ID":"3687b313-1df2-4274-80db-8c758b51bf2d","Type":"ContainerDied","Data":"4886b3658c033dd32099ac18722859936d425d0f630ab9d495cc3181594e8fc4"} Jan 21 13:24:30 crc kubenswrapper[4881]: I0121 13:24:30.393133 4881 scope.go:117] "RemoveContainer" containerID="3ae329a055e11a6e18e47ddb94b164ca6b139ccd6dac8d7c44083794de49a8f4" Jan 21 13:24:30 crc kubenswrapper[4881]: I0121 13:24:30.394273 4881 scope.go:117] "RemoveContainer" containerID="4886b3658c033dd32099ac18722859936d425d0f630ab9d495cc3181594e8fc4" Jan 21 13:24:30 crc kubenswrapper[4881]: E0121 13:24:30.394956 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 13:24:43 crc kubenswrapper[4881]: I0121 13:24:43.328263 4881 scope.go:117] "RemoveContainer" containerID="4886b3658c033dd32099ac18722859936d425d0f630ab9d495cc3181594e8fc4" Jan 21 13:24:43 crc kubenswrapper[4881]: E0121 13:24:43.328930 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" 
podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 13:24:57 crc kubenswrapper[4881]: I0121 13:24:57.311182 4881 scope.go:117] "RemoveContainer" containerID="4886b3658c033dd32099ac18722859936d425d0f630ab9d495cc3181594e8fc4" Jan 21 13:24:57 crc kubenswrapper[4881]: E0121 13:24:57.312101 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 13:25:09 crc kubenswrapper[4881]: I0121 13:25:09.312237 4881 scope.go:117] "RemoveContainer" containerID="4886b3658c033dd32099ac18722859936d425d0f630ab9d495cc3181594e8fc4" Jan 21 13:25:09 crc kubenswrapper[4881]: E0121 13:25:09.313603 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 13:25:23 crc kubenswrapper[4881]: I0121 13:25:23.327688 4881 scope.go:117] "RemoveContainer" containerID="4886b3658c033dd32099ac18722859936d425d0f630ab9d495cc3181594e8fc4" Jan 21 13:25:23 crc kubenswrapper[4881]: E0121 13:25:23.328567 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 13:25:35 crc kubenswrapper[4881]: I0121 13:25:35.312186 4881 scope.go:117] "RemoveContainer" containerID="4886b3658c033dd32099ac18722859936d425d0f630ab9d495cc3181594e8fc4" Jan 21 13:25:35 crc kubenswrapper[4881]: E0121 13:25:35.313413 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 13:25:48 crc kubenswrapper[4881]: I0121 13:25:48.310581 4881 scope.go:117] "RemoveContainer" containerID="4886b3658c033dd32099ac18722859936d425d0f630ab9d495cc3181594e8fc4" Jan 21 13:25:48 crc kubenswrapper[4881]: E0121 13:25:48.311902 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 13:26:03 crc kubenswrapper[4881]: I0121 13:26:03.317582 4881 scope.go:117] "RemoveContainer" 
containerID="4886b3658c033dd32099ac18722859936d425d0f630ab9d495cc3181594e8fc4" Jan 21 13:26:03 crc kubenswrapper[4881]: E0121 13:26:03.318313 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 13:26:12 crc kubenswrapper[4881]: I0121 13:26:12.797667 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-vb5m2"] Jan 21 13:26:12 crc kubenswrapper[4881]: E0121 13:26:12.799097 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5931128-9209-474d-b0c0-430405aba54d" containerName="registry-server" Jan 21 13:26:12 crc kubenswrapper[4881]: I0121 13:26:12.799125 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5931128-9209-474d-b0c0-430405aba54d" containerName="registry-server" Jan 21 13:26:12 crc kubenswrapper[4881]: E0121 13:26:12.799169 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5931128-9209-474d-b0c0-430405aba54d" containerName="extract-utilities" Jan 21 13:26:12 crc kubenswrapper[4881]: I0121 13:26:12.799180 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5931128-9209-474d-b0c0-430405aba54d" containerName="extract-utilities" Jan 21 13:26:12 crc kubenswrapper[4881]: E0121 13:26:12.799238 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5931128-9209-474d-b0c0-430405aba54d" containerName="extract-content" Jan 21 13:26:12 crc kubenswrapper[4881]: I0121 13:26:12.799251 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5931128-9209-474d-b0c0-430405aba54d" containerName="extract-content" Jan 21 13:26:12 crc kubenswrapper[4881]: I0121 13:26:12.799585 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="e5931128-9209-474d-b0c0-430405aba54d" containerName="registry-server" Jan 21 13:26:12 crc kubenswrapper[4881]: I0121 13:26:12.802030 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vb5m2" Jan 21 13:26:12 crc kubenswrapper[4881]: I0121 13:26:12.811760 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vb5m2"] Jan 21 13:26:12 crc kubenswrapper[4881]: I0121 13:26:12.853942 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b86nz\" (UniqueName: \"kubernetes.io/projected/bb6f50a9-e997-4629-bec7-5b36f8467213-kube-api-access-b86nz\") pod \"redhat-marketplace-vb5m2\" (UID: \"bb6f50a9-e997-4629-bec7-5b36f8467213\") " pod="openshift-marketplace/redhat-marketplace-vb5m2" Jan 21 13:26:12 crc kubenswrapper[4881]: I0121 13:26:12.854117 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb6f50a9-e997-4629-bec7-5b36f8467213-catalog-content\") pod \"redhat-marketplace-vb5m2\" (UID: \"bb6f50a9-e997-4629-bec7-5b36f8467213\") " pod="openshift-marketplace/redhat-marketplace-vb5m2" Jan 21 13:26:12 crc kubenswrapper[4881]: I0121 13:26:12.854182 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb6f50a9-e997-4629-bec7-5b36f8467213-utilities\") pod \"redhat-marketplace-vb5m2\" (UID: \"bb6f50a9-e997-4629-bec7-5b36f8467213\") " pod="openshift-marketplace/redhat-marketplace-vb5m2" Jan 21 13:26:12 crc kubenswrapper[4881]: I0121 13:26:12.957052 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b86nz\" (UniqueName: \"kubernetes.io/projected/bb6f50a9-e997-4629-bec7-5b36f8467213-kube-api-access-b86nz\") pod \"redhat-marketplace-vb5m2\" (UID: \"bb6f50a9-e997-4629-bec7-5b36f8467213\") " pod="openshift-marketplace/redhat-marketplace-vb5m2" Jan 21 13:26:12 crc kubenswrapper[4881]: I0121 13:26:12.957143 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb6f50a9-e997-4629-bec7-5b36f8467213-catalog-content\") pod \"redhat-marketplace-vb5m2\" (UID: \"bb6f50a9-e997-4629-bec7-5b36f8467213\") " pod="openshift-marketplace/redhat-marketplace-vb5m2" Jan 21 13:26:12 crc kubenswrapper[4881]: I0121 13:26:12.957176 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb6f50a9-e997-4629-bec7-5b36f8467213-utilities\") pod \"redhat-marketplace-vb5m2\" (UID: \"bb6f50a9-e997-4629-bec7-5b36f8467213\") " pod="openshift-marketplace/redhat-marketplace-vb5m2" Jan 21 13:26:12 crc kubenswrapper[4881]: I0121 13:26:12.958016 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb6f50a9-e997-4629-bec7-5b36f8467213-utilities\") pod \"redhat-marketplace-vb5m2\" (UID: \"bb6f50a9-e997-4629-bec7-5b36f8467213\") " pod="openshift-marketplace/redhat-marketplace-vb5m2" Jan 21 13:26:12 crc kubenswrapper[4881]: I0121 13:26:12.958077 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb6f50a9-e997-4629-bec7-5b36f8467213-catalog-content\") pod \"redhat-marketplace-vb5m2\" (UID: \"bb6f50a9-e997-4629-bec7-5b36f8467213\") " pod="openshift-marketplace/redhat-marketplace-vb5m2" Jan 21 13:26:12 crc kubenswrapper[4881]: I0121 13:26:12.992834 4881 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-b86nz\" (UniqueName: \"kubernetes.io/projected/bb6f50a9-e997-4629-bec7-5b36f8467213-kube-api-access-b86nz\") pod \"redhat-marketplace-vb5m2\" (UID: \"bb6f50a9-e997-4629-bec7-5b36f8467213\") " pod="openshift-marketplace/redhat-marketplace-vb5m2" Jan 21 13:26:13 crc kubenswrapper[4881]: I0121 13:26:13.181496 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vb5m2" Jan 21 13:26:13 crc kubenswrapper[4881]: I0121 13:26:13.768385 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vb5m2"] Jan 21 13:26:13 crc kubenswrapper[4881]: W0121 13:26:13.786489 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbb6f50a9_e997_4629_bec7_5b36f8467213.slice/crio-135065841fcb0e210ebbee24ed1ceeaff870895357eed67f8b8b185d7ce2cb2f WatchSource:0}: Error finding container 135065841fcb0e210ebbee24ed1ceeaff870895357eed67f8b8b185d7ce2cb2f: Status 404 returned error can't find the container with id 135065841fcb0e210ebbee24ed1ceeaff870895357eed67f8b8b185d7ce2cb2f Jan 21 13:26:14 crc kubenswrapper[4881]: I0121 13:26:14.311317 4881 scope.go:117] "RemoveContainer" containerID="4886b3658c033dd32099ac18722859936d425d0f630ab9d495cc3181594e8fc4" Jan 21 13:26:14 crc kubenswrapper[4881]: E0121 13:26:14.311639 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 13:26:14 crc kubenswrapper[4881]: I0121 13:26:14.721817 4881 generic.go:334] "Generic (PLEG): container finished" podID="bb6f50a9-e997-4629-bec7-5b36f8467213" containerID="e4fcd361049a40e2cf6013975baa76bd341d51cb8094aec3651fcae44987a113" exitCode=0 Jan 21 13:26:14 crc kubenswrapper[4881]: I0121 13:26:14.721915 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vb5m2" event={"ID":"bb6f50a9-e997-4629-bec7-5b36f8467213","Type":"ContainerDied","Data":"e4fcd361049a40e2cf6013975baa76bd341d51cb8094aec3651fcae44987a113"} Jan 21 13:26:14 crc kubenswrapper[4881]: I0121 13:26:14.722165 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vb5m2" event={"ID":"bb6f50a9-e997-4629-bec7-5b36f8467213","Type":"ContainerStarted","Data":"135065841fcb0e210ebbee24ed1ceeaff870895357eed67f8b8b185d7ce2cb2f"} Jan 21 13:26:14 crc kubenswrapper[4881]: I0121 13:26:14.726573 4881 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 13:26:16 crc kubenswrapper[4881]: I0121 13:26:16.748293 4881 generic.go:334] "Generic (PLEG): container finished" podID="bb6f50a9-e997-4629-bec7-5b36f8467213" containerID="b3303c4261f3440d0169b7caa34aa63d1b3bd27bea62238bec06aabbf1a04789" exitCode=0 Jan 21 13:26:16 crc kubenswrapper[4881]: I0121 13:26:16.748386 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vb5m2" event={"ID":"bb6f50a9-e997-4629-bec7-5b36f8467213","Type":"ContainerDied","Data":"b3303c4261f3440d0169b7caa34aa63d1b3bd27bea62238bec06aabbf1a04789"} Jan 21 13:26:17 crc kubenswrapper[4881]: I0121 
13:26:17.766896 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vb5m2" event={"ID":"bb6f50a9-e997-4629-bec7-5b36f8467213","Type":"ContainerStarted","Data":"7dd8a762e96ff6a259c0d7125968f53fa2dfb55f33c0dcefc1a43c070370cd41"} Jan 21 13:26:17 crc kubenswrapper[4881]: I0121 13:26:17.807122 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-vb5m2" podStartSLOduration=3.34028762 podStartE2EDuration="5.80707773s" podCreationTimestamp="2026-01-21 13:26:12 +0000 UTC" firstStartedPulling="2026-01-21 13:26:14.726321359 +0000 UTC m=+8961.986277828" lastFinishedPulling="2026-01-21 13:26:17.193111459 +0000 UTC m=+8964.453067938" observedRunningTime="2026-01-21 13:26:17.794132484 +0000 UTC m=+8965.054088973" watchObservedRunningTime="2026-01-21 13:26:17.80707773 +0000 UTC m=+8965.067034209" Jan 21 13:26:23 crc kubenswrapper[4881]: I0121 13:26:23.182691 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-vb5m2" Jan 21 13:26:23 crc kubenswrapper[4881]: I0121 13:26:23.184344 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-vb5m2" Jan 21 13:26:23 crc kubenswrapper[4881]: I0121 13:26:23.229338 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-vb5m2" Jan 21 13:26:23 crc kubenswrapper[4881]: I0121 13:26:23.536396 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-vb5m2" Jan 21 13:26:23 crc kubenswrapper[4881]: I0121 13:26:23.586378 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vb5m2"] Jan 21 13:26:25 crc kubenswrapper[4881]: I0121 13:26:25.476565 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-vb5m2" podUID="bb6f50a9-e997-4629-bec7-5b36f8467213" containerName="registry-server" containerID="cri-o://7dd8a762e96ff6a259c0d7125968f53fa2dfb55f33c0dcefc1a43c070370cd41" gracePeriod=2 Jan 21 13:26:26 crc kubenswrapper[4881]: I0121 13:26:26.257577 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vb5m2" Jan 21 13:26:26 crc kubenswrapper[4881]: I0121 13:26:26.360816 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b86nz\" (UniqueName: \"kubernetes.io/projected/bb6f50a9-e997-4629-bec7-5b36f8467213-kube-api-access-b86nz\") pod \"bb6f50a9-e997-4629-bec7-5b36f8467213\" (UID: \"bb6f50a9-e997-4629-bec7-5b36f8467213\") " Jan 21 13:26:26 crc kubenswrapper[4881]: I0121 13:26:26.360936 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb6f50a9-e997-4629-bec7-5b36f8467213-catalog-content\") pod \"bb6f50a9-e997-4629-bec7-5b36f8467213\" (UID: \"bb6f50a9-e997-4629-bec7-5b36f8467213\") " Jan 21 13:26:26 crc kubenswrapper[4881]: I0121 13:26:26.361060 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb6f50a9-e997-4629-bec7-5b36f8467213-utilities\") pod \"bb6f50a9-e997-4629-bec7-5b36f8467213\" (UID: \"bb6f50a9-e997-4629-bec7-5b36f8467213\") " Jan 21 13:26:26 crc kubenswrapper[4881]: I0121 13:26:26.362019 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bb6f50a9-e997-4629-bec7-5b36f8467213-utilities" (OuterVolumeSpecName: "utilities") pod "bb6f50a9-e997-4629-bec7-5b36f8467213" (UID: "bb6f50a9-e997-4629-bec7-5b36f8467213"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 13:26:26 crc kubenswrapper[4881]: I0121 13:26:26.373086 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb6f50a9-e997-4629-bec7-5b36f8467213-kube-api-access-b86nz" (OuterVolumeSpecName: "kube-api-access-b86nz") pod "bb6f50a9-e997-4629-bec7-5b36f8467213" (UID: "bb6f50a9-e997-4629-bec7-5b36f8467213"). InnerVolumeSpecName "kube-api-access-b86nz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:26:26 crc kubenswrapper[4881]: I0121 13:26:26.391088 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bb6f50a9-e997-4629-bec7-5b36f8467213-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bb6f50a9-e997-4629-bec7-5b36f8467213" (UID: "bb6f50a9-e997-4629-bec7-5b36f8467213"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 13:26:26 crc kubenswrapper[4881]: I0121 13:26:26.465109 4881 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb6f50a9-e997-4629-bec7-5b36f8467213-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 13:26:26 crc kubenswrapper[4881]: I0121 13:26:26.465379 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b86nz\" (UniqueName: \"kubernetes.io/projected/bb6f50a9-e997-4629-bec7-5b36f8467213-kube-api-access-b86nz\") on node \"crc\" DevicePath \"\"" Jan 21 13:26:26 crc kubenswrapper[4881]: I0121 13:26:26.465394 4881 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb6f50a9-e997-4629-bec7-5b36f8467213-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 13:26:26 crc kubenswrapper[4881]: I0121 13:26:26.490059 4881 generic.go:334] "Generic (PLEG): container finished" podID="bb6f50a9-e997-4629-bec7-5b36f8467213" containerID="7dd8a762e96ff6a259c0d7125968f53fa2dfb55f33c0dcefc1a43c070370cd41" exitCode=0 Jan 21 13:26:26 crc kubenswrapper[4881]: I0121 13:26:26.490097 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vb5m2" Jan 21 13:26:26 crc kubenswrapper[4881]: I0121 13:26:26.490102 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vb5m2" event={"ID":"bb6f50a9-e997-4629-bec7-5b36f8467213","Type":"ContainerDied","Data":"7dd8a762e96ff6a259c0d7125968f53fa2dfb55f33c0dcefc1a43c070370cd41"} Jan 21 13:26:26 crc kubenswrapper[4881]: I0121 13:26:26.490132 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vb5m2" event={"ID":"bb6f50a9-e997-4629-bec7-5b36f8467213","Type":"ContainerDied","Data":"135065841fcb0e210ebbee24ed1ceeaff870895357eed67f8b8b185d7ce2cb2f"} Jan 21 13:26:26 crc kubenswrapper[4881]: I0121 13:26:26.490154 4881 scope.go:117] "RemoveContainer" containerID="7dd8a762e96ff6a259c0d7125968f53fa2dfb55f33c0dcefc1a43c070370cd41" Jan 21 13:26:26 crc kubenswrapper[4881]: I0121 13:26:26.529950 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vb5m2"] Jan 21 13:26:26 crc kubenswrapper[4881]: I0121 13:26:26.535931 4881 scope.go:117] "RemoveContainer" containerID="b3303c4261f3440d0169b7caa34aa63d1b3bd27bea62238bec06aabbf1a04789" Jan 21 13:26:26 crc kubenswrapper[4881]: I0121 13:26:26.541028 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-vb5m2"] Jan 21 13:26:26 crc kubenswrapper[4881]: I0121 13:26:26.563472 4881 scope.go:117] "RemoveContainer" containerID="e4fcd361049a40e2cf6013975baa76bd341d51cb8094aec3651fcae44987a113" Jan 21 13:26:26 crc kubenswrapper[4881]: I0121 13:26:26.622597 4881 scope.go:117] "RemoveContainer" containerID="7dd8a762e96ff6a259c0d7125968f53fa2dfb55f33c0dcefc1a43c070370cd41" Jan 21 13:26:26 crc kubenswrapper[4881]: E0121 13:26:26.624029 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7dd8a762e96ff6a259c0d7125968f53fa2dfb55f33c0dcefc1a43c070370cd41\": container with ID starting with 7dd8a762e96ff6a259c0d7125968f53fa2dfb55f33c0dcefc1a43c070370cd41 not found: ID does not exist" containerID="7dd8a762e96ff6a259c0d7125968f53fa2dfb55f33c0dcefc1a43c070370cd41" Jan 21 13:26:26 crc kubenswrapper[4881]: I0121 13:26:26.624074 4881 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7dd8a762e96ff6a259c0d7125968f53fa2dfb55f33c0dcefc1a43c070370cd41"} err="failed to get container status \"7dd8a762e96ff6a259c0d7125968f53fa2dfb55f33c0dcefc1a43c070370cd41\": rpc error: code = NotFound desc = could not find container \"7dd8a762e96ff6a259c0d7125968f53fa2dfb55f33c0dcefc1a43c070370cd41\": container with ID starting with 7dd8a762e96ff6a259c0d7125968f53fa2dfb55f33c0dcefc1a43c070370cd41 not found: ID does not exist" Jan 21 13:26:26 crc kubenswrapper[4881]: I0121 13:26:26.624103 4881 scope.go:117] "RemoveContainer" containerID="b3303c4261f3440d0169b7caa34aa63d1b3bd27bea62238bec06aabbf1a04789" Jan 21 13:26:26 crc kubenswrapper[4881]: E0121 13:26:26.624690 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b3303c4261f3440d0169b7caa34aa63d1b3bd27bea62238bec06aabbf1a04789\": container with ID starting with b3303c4261f3440d0169b7caa34aa63d1b3bd27bea62238bec06aabbf1a04789 not found: ID does not exist" containerID="b3303c4261f3440d0169b7caa34aa63d1b3bd27bea62238bec06aabbf1a04789" Jan 21 13:26:26 crc kubenswrapper[4881]: I0121 13:26:26.624720 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b3303c4261f3440d0169b7caa34aa63d1b3bd27bea62238bec06aabbf1a04789"} err="failed to get container status \"b3303c4261f3440d0169b7caa34aa63d1b3bd27bea62238bec06aabbf1a04789\": rpc error: code = NotFound desc = could not find container \"b3303c4261f3440d0169b7caa34aa63d1b3bd27bea62238bec06aabbf1a04789\": container with ID starting with b3303c4261f3440d0169b7caa34aa63d1b3bd27bea62238bec06aabbf1a04789 not found: ID does not exist" Jan 21 13:26:26 crc kubenswrapper[4881]: I0121 13:26:26.624741 4881 scope.go:117] "RemoveContainer" containerID="e4fcd361049a40e2cf6013975baa76bd341d51cb8094aec3651fcae44987a113" Jan 21 13:26:26 crc kubenswrapper[4881]: E0121 13:26:26.625113 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e4fcd361049a40e2cf6013975baa76bd341d51cb8094aec3651fcae44987a113\": container with ID starting with e4fcd361049a40e2cf6013975baa76bd341d51cb8094aec3651fcae44987a113 not found: ID does not exist" containerID="e4fcd361049a40e2cf6013975baa76bd341d51cb8094aec3651fcae44987a113" Jan 21 13:26:26 crc kubenswrapper[4881]: I0121 13:26:26.625167 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e4fcd361049a40e2cf6013975baa76bd341d51cb8094aec3651fcae44987a113"} err="failed to get container status \"e4fcd361049a40e2cf6013975baa76bd341d51cb8094aec3651fcae44987a113\": rpc error: code = NotFound desc = could not find container \"e4fcd361049a40e2cf6013975baa76bd341d51cb8094aec3651fcae44987a113\": container with ID starting with e4fcd361049a40e2cf6013975baa76bd341d51cb8094aec3651fcae44987a113 not found: ID does not exist" Jan 21 13:26:27 crc kubenswrapper[4881]: I0121 13:26:27.335696 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bb6f50a9-e997-4629-bec7-5b36f8467213" path="/var/lib/kubelet/pods/bb6f50a9-e997-4629-bec7-5b36f8467213/volumes" Jan 21 13:26:29 crc kubenswrapper[4881]: I0121 13:26:29.312024 4881 scope.go:117] "RemoveContainer" containerID="4886b3658c033dd32099ac18722859936d425d0f630ab9d495cc3181594e8fc4" Jan 21 13:26:29 crc kubenswrapper[4881]: E0121 13:26:29.313165 4881 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 13:26:43 crc kubenswrapper[4881]: I0121 13:26:43.323332 4881 scope.go:117] "RemoveContainer" containerID="4886b3658c033dd32099ac18722859936d425d0f630ab9d495cc3181594e8fc4" Jan 21 13:26:43 crc kubenswrapper[4881]: E0121 13:26:43.324183 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 13:26:56 crc kubenswrapper[4881]: I0121 13:26:56.311274 4881 scope.go:117] "RemoveContainer" containerID="4886b3658c033dd32099ac18722859936d425d0f630ab9d495cc3181594e8fc4" Jan 21 13:26:56 crc kubenswrapper[4881]: E0121 13:26:56.312198 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 13:27:03 crc kubenswrapper[4881]: I0121 13:27:03.001496 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-8jbx6"] Jan 21 13:27:03 crc kubenswrapper[4881]: E0121 13:27:03.002900 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb6f50a9-e997-4629-bec7-5b36f8467213" containerName="registry-server" Jan 21 13:27:03 crc kubenswrapper[4881]: I0121 13:27:03.002926 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb6f50a9-e997-4629-bec7-5b36f8467213" containerName="registry-server" Jan 21 13:27:03 crc kubenswrapper[4881]: E0121 13:27:03.002949 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb6f50a9-e997-4629-bec7-5b36f8467213" containerName="extract-utilities" Jan 21 13:27:03 crc kubenswrapper[4881]: I0121 13:27:03.002961 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb6f50a9-e997-4629-bec7-5b36f8467213" containerName="extract-utilities" Jan 21 13:27:03 crc kubenswrapper[4881]: E0121 13:27:03.003077 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb6f50a9-e997-4629-bec7-5b36f8467213" containerName="extract-content" Jan 21 13:27:03 crc kubenswrapper[4881]: I0121 13:27:03.003092 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb6f50a9-e997-4629-bec7-5b36f8467213" containerName="extract-content" Jan 21 13:27:03 crc kubenswrapper[4881]: I0121 13:27:03.003419 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb6f50a9-e997-4629-bec7-5b36f8467213" containerName="registry-server" Jan 21 13:27:03 crc kubenswrapper[4881]: I0121 13:27:03.006162 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-8jbx6" Jan 21 13:27:03 crc kubenswrapper[4881]: I0121 13:27:03.037962 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8jbx6"] Jan 21 13:27:03 crc kubenswrapper[4881]: I0121 13:27:03.153567 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/51bcc54a-e7f1-455f-a90e-6dbb13e2ca49-catalog-content\") pod \"certified-operators-8jbx6\" (UID: \"51bcc54a-e7f1-455f-a90e-6dbb13e2ca49\") " pod="openshift-marketplace/certified-operators-8jbx6" Jan 21 13:27:03 crc kubenswrapper[4881]: I0121 13:27:03.153641 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/51bcc54a-e7f1-455f-a90e-6dbb13e2ca49-utilities\") pod \"certified-operators-8jbx6\" (UID: \"51bcc54a-e7f1-455f-a90e-6dbb13e2ca49\") " pod="openshift-marketplace/certified-operators-8jbx6" Jan 21 13:27:03 crc kubenswrapper[4881]: I0121 13:27:03.153723 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gmq6z\" (UniqueName: \"kubernetes.io/projected/51bcc54a-e7f1-455f-a90e-6dbb13e2ca49-kube-api-access-gmq6z\") pod \"certified-operators-8jbx6\" (UID: \"51bcc54a-e7f1-455f-a90e-6dbb13e2ca49\") " pod="openshift-marketplace/certified-operators-8jbx6" Jan 21 13:27:03 crc kubenswrapper[4881]: I0121 13:27:03.256277 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gmq6z\" (UniqueName: \"kubernetes.io/projected/51bcc54a-e7f1-455f-a90e-6dbb13e2ca49-kube-api-access-gmq6z\") pod \"certified-operators-8jbx6\" (UID: \"51bcc54a-e7f1-455f-a90e-6dbb13e2ca49\") " pod="openshift-marketplace/certified-operators-8jbx6" Jan 21 13:27:03 crc kubenswrapper[4881]: I0121 13:27:03.256567 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/51bcc54a-e7f1-455f-a90e-6dbb13e2ca49-catalog-content\") pod \"certified-operators-8jbx6\" (UID: \"51bcc54a-e7f1-455f-a90e-6dbb13e2ca49\") " pod="openshift-marketplace/certified-operators-8jbx6" Jan 21 13:27:03 crc kubenswrapper[4881]: I0121 13:27:03.256610 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/51bcc54a-e7f1-455f-a90e-6dbb13e2ca49-utilities\") pod \"certified-operators-8jbx6\" (UID: \"51bcc54a-e7f1-455f-a90e-6dbb13e2ca49\") " pod="openshift-marketplace/certified-operators-8jbx6" Jan 21 13:27:03 crc kubenswrapper[4881]: I0121 13:27:03.258510 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/51bcc54a-e7f1-455f-a90e-6dbb13e2ca49-catalog-content\") pod \"certified-operators-8jbx6\" (UID: \"51bcc54a-e7f1-455f-a90e-6dbb13e2ca49\") " pod="openshift-marketplace/certified-operators-8jbx6" Jan 21 13:27:03 crc kubenswrapper[4881]: I0121 13:27:03.258548 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/51bcc54a-e7f1-455f-a90e-6dbb13e2ca49-utilities\") pod \"certified-operators-8jbx6\" (UID: \"51bcc54a-e7f1-455f-a90e-6dbb13e2ca49\") " pod="openshift-marketplace/certified-operators-8jbx6" Jan 21 13:27:03 crc kubenswrapper[4881]: I0121 13:27:03.286459 4881 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-gmq6z\" (UniqueName: \"kubernetes.io/projected/51bcc54a-e7f1-455f-a90e-6dbb13e2ca49-kube-api-access-gmq6z\") pod \"certified-operators-8jbx6\" (UID: \"51bcc54a-e7f1-455f-a90e-6dbb13e2ca49\") " pod="openshift-marketplace/certified-operators-8jbx6" Jan 21 13:27:03 crc kubenswrapper[4881]: I0121 13:27:03.331077 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8jbx6" Jan 21 13:27:04 crc kubenswrapper[4881]: I0121 13:27:04.197435 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8jbx6"] Jan 21 13:27:05 crc kubenswrapper[4881]: I0121 13:27:05.218026 4881 generic.go:334] "Generic (PLEG): container finished" podID="51bcc54a-e7f1-455f-a90e-6dbb13e2ca49" containerID="4ecb599d45491627d891719f8405002438c76fc0d4d316a1bfd6cd193b1f3a08" exitCode=0 Jan 21 13:27:05 crc kubenswrapper[4881]: I0121 13:27:05.218142 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8jbx6" event={"ID":"51bcc54a-e7f1-455f-a90e-6dbb13e2ca49","Type":"ContainerDied","Data":"4ecb599d45491627d891719f8405002438c76fc0d4d316a1bfd6cd193b1f3a08"} Jan 21 13:27:05 crc kubenswrapper[4881]: I0121 13:27:05.218487 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8jbx6" event={"ID":"51bcc54a-e7f1-455f-a90e-6dbb13e2ca49","Type":"ContainerStarted","Data":"e7c213d8ebb50ed1685eb2246532fa5ff812d040f27b3f2e8c8f1a768c916445"} Jan 21 13:27:07 crc kubenswrapper[4881]: I0121 13:27:07.248109 4881 generic.go:334] "Generic (PLEG): container finished" podID="51bcc54a-e7f1-455f-a90e-6dbb13e2ca49" containerID="bdabdbe9600c58209cbe056b6045ff78c4f1191546fbd41819662998d09e62ca" exitCode=0 Jan 21 13:27:07 crc kubenswrapper[4881]: I0121 13:27:07.248353 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8jbx6" event={"ID":"51bcc54a-e7f1-455f-a90e-6dbb13e2ca49","Type":"ContainerDied","Data":"bdabdbe9600c58209cbe056b6045ff78c4f1191546fbd41819662998d09e62ca"} Jan 21 13:27:07 crc kubenswrapper[4881]: I0121 13:27:07.310641 4881 scope.go:117] "RemoveContainer" containerID="4886b3658c033dd32099ac18722859936d425d0f630ab9d495cc3181594e8fc4" Jan 21 13:27:07 crc kubenswrapper[4881]: E0121 13:27:07.310943 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 13:27:08 crc kubenswrapper[4881]: I0121 13:27:08.262798 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8jbx6" event={"ID":"51bcc54a-e7f1-455f-a90e-6dbb13e2ca49","Type":"ContainerStarted","Data":"c3618c5333a6cef78d377467f75aa4f9b175d77174b183489d5d38d8e5caa08c"} Jan 21 13:27:08 crc kubenswrapper[4881]: I0121 13:27:08.282041 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-8jbx6" podStartSLOduration=3.686252389 podStartE2EDuration="6.282014001s" podCreationTimestamp="2026-01-21 13:27:02 +0000 UTC" firstStartedPulling="2026-01-21 13:27:05.220624153 +0000 UTC m=+9012.480580632" 
lastFinishedPulling="2026-01-21 13:27:07.816385775 +0000 UTC m=+9015.076342244" observedRunningTime="2026-01-21 13:27:08.281282674 +0000 UTC m=+9015.541239153" watchObservedRunningTime="2026-01-21 13:27:08.282014001 +0000 UTC m=+9015.541970470" Jan 21 13:27:13 crc kubenswrapper[4881]: I0121 13:27:13.331338 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-8jbx6" Jan 21 13:27:13 crc kubenswrapper[4881]: I0121 13:27:13.331863 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-8jbx6" Jan 21 13:27:13 crc kubenswrapper[4881]: I0121 13:27:13.391421 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-8jbx6" Jan 21 13:27:14 crc kubenswrapper[4881]: I0121 13:27:14.516605 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-8jbx6" Jan 21 13:27:14 crc kubenswrapper[4881]: I0121 13:27:14.574397 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8jbx6"] Jan 21 13:27:16 crc kubenswrapper[4881]: I0121 13:27:16.473239 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-8jbx6" podUID="51bcc54a-e7f1-455f-a90e-6dbb13e2ca49" containerName="registry-server" containerID="cri-o://c3618c5333a6cef78d377467f75aa4f9b175d77174b183489d5d38d8e5caa08c" gracePeriod=2 Jan 21 13:27:18 crc kubenswrapper[4881]: I0121 13:27:18.139910 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8jbx6" Jan 21 13:27:18 crc kubenswrapper[4881]: I0121 13:27:18.255128 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gmq6z\" (UniqueName: \"kubernetes.io/projected/51bcc54a-e7f1-455f-a90e-6dbb13e2ca49-kube-api-access-gmq6z\") pod \"51bcc54a-e7f1-455f-a90e-6dbb13e2ca49\" (UID: \"51bcc54a-e7f1-455f-a90e-6dbb13e2ca49\") " Jan 21 13:27:18 crc kubenswrapper[4881]: I0121 13:27:18.255268 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/51bcc54a-e7f1-455f-a90e-6dbb13e2ca49-catalog-content\") pod \"51bcc54a-e7f1-455f-a90e-6dbb13e2ca49\" (UID: \"51bcc54a-e7f1-455f-a90e-6dbb13e2ca49\") " Jan 21 13:27:18 crc kubenswrapper[4881]: I0121 13:27:18.255302 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/51bcc54a-e7f1-455f-a90e-6dbb13e2ca49-utilities\") pod \"51bcc54a-e7f1-455f-a90e-6dbb13e2ca49\" (UID: \"51bcc54a-e7f1-455f-a90e-6dbb13e2ca49\") " Jan 21 13:27:18 crc kubenswrapper[4881]: I0121 13:27:18.256732 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/51bcc54a-e7f1-455f-a90e-6dbb13e2ca49-utilities" (OuterVolumeSpecName: "utilities") pod "51bcc54a-e7f1-455f-a90e-6dbb13e2ca49" (UID: "51bcc54a-e7f1-455f-a90e-6dbb13e2ca49"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 13:27:18 crc kubenswrapper[4881]: I0121 13:27:18.269005 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/51bcc54a-e7f1-455f-a90e-6dbb13e2ca49-kube-api-access-gmq6z" (OuterVolumeSpecName: "kube-api-access-gmq6z") pod "51bcc54a-e7f1-455f-a90e-6dbb13e2ca49" (UID: "51bcc54a-e7f1-455f-a90e-6dbb13e2ca49"). InnerVolumeSpecName "kube-api-access-gmq6z". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:27:18 crc kubenswrapper[4881]: I0121 13:27:18.303498 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/51bcc54a-e7f1-455f-a90e-6dbb13e2ca49-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "51bcc54a-e7f1-455f-a90e-6dbb13e2ca49" (UID: "51bcc54a-e7f1-455f-a90e-6dbb13e2ca49"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 13:27:18 crc kubenswrapper[4881]: I0121 13:27:18.359837 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gmq6z\" (UniqueName: \"kubernetes.io/projected/51bcc54a-e7f1-455f-a90e-6dbb13e2ca49-kube-api-access-gmq6z\") on node \"crc\" DevicePath \"\"" Jan 21 13:27:18 crc kubenswrapper[4881]: I0121 13:27:18.360104 4881 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/51bcc54a-e7f1-455f-a90e-6dbb13e2ca49-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 13:27:18 crc kubenswrapper[4881]: I0121 13:27:18.360115 4881 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/51bcc54a-e7f1-455f-a90e-6dbb13e2ca49-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 13:27:18 crc kubenswrapper[4881]: I0121 13:27:18.493577 4881 generic.go:334] "Generic (PLEG): container finished" podID="51bcc54a-e7f1-455f-a90e-6dbb13e2ca49" containerID="c3618c5333a6cef78d377467f75aa4f9b175d77174b183489d5d38d8e5caa08c" exitCode=0 Jan 21 13:27:18 crc kubenswrapper[4881]: I0121 13:27:18.493624 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8jbx6" event={"ID":"51bcc54a-e7f1-455f-a90e-6dbb13e2ca49","Type":"ContainerDied","Data":"c3618c5333a6cef78d377467f75aa4f9b175d77174b183489d5d38d8e5caa08c"} Jan 21 13:27:18 crc kubenswrapper[4881]: I0121 13:27:18.493653 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8jbx6" event={"ID":"51bcc54a-e7f1-455f-a90e-6dbb13e2ca49","Type":"ContainerDied","Data":"e7c213d8ebb50ed1685eb2246532fa5ff812d040f27b3f2e8c8f1a768c916445"} Jan 21 13:27:18 crc kubenswrapper[4881]: I0121 13:27:18.493659 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-8jbx6" Jan 21 13:27:18 crc kubenswrapper[4881]: I0121 13:27:18.493671 4881 scope.go:117] "RemoveContainer" containerID="c3618c5333a6cef78d377467f75aa4f9b175d77174b183489d5d38d8e5caa08c" Jan 21 13:27:18 crc kubenswrapper[4881]: I0121 13:27:18.514492 4881 scope.go:117] "RemoveContainer" containerID="bdabdbe9600c58209cbe056b6045ff78c4f1191546fbd41819662998d09e62ca" Jan 21 13:27:18 crc kubenswrapper[4881]: I0121 13:27:18.548267 4881 scope.go:117] "RemoveContainer" containerID="4ecb599d45491627d891719f8405002438c76fc0d4d316a1bfd6cd193b1f3a08" Jan 21 13:27:18 crc kubenswrapper[4881]: I0121 13:27:18.560142 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8jbx6"] Jan 21 13:27:18 crc kubenswrapper[4881]: I0121 13:27:18.575568 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-8jbx6"] Jan 21 13:27:18 crc kubenswrapper[4881]: I0121 13:27:18.612388 4881 scope.go:117] "RemoveContainer" containerID="c3618c5333a6cef78d377467f75aa4f9b175d77174b183489d5d38d8e5caa08c" Jan 21 13:27:18 crc kubenswrapper[4881]: E0121 13:27:18.612946 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c3618c5333a6cef78d377467f75aa4f9b175d77174b183489d5d38d8e5caa08c\": container with ID starting with c3618c5333a6cef78d377467f75aa4f9b175d77174b183489d5d38d8e5caa08c not found: ID does not exist" containerID="c3618c5333a6cef78d377467f75aa4f9b175d77174b183489d5d38d8e5caa08c" Jan 21 13:27:18 crc kubenswrapper[4881]: I0121 13:27:18.612983 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c3618c5333a6cef78d377467f75aa4f9b175d77174b183489d5d38d8e5caa08c"} err="failed to get container status \"c3618c5333a6cef78d377467f75aa4f9b175d77174b183489d5d38d8e5caa08c\": rpc error: code = NotFound desc = could not find container \"c3618c5333a6cef78d377467f75aa4f9b175d77174b183489d5d38d8e5caa08c\": container with ID starting with c3618c5333a6cef78d377467f75aa4f9b175d77174b183489d5d38d8e5caa08c not found: ID does not exist" Jan 21 13:27:18 crc kubenswrapper[4881]: I0121 13:27:18.613125 4881 scope.go:117] "RemoveContainer" containerID="bdabdbe9600c58209cbe056b6045ff78c4f1191546fbd41819662998d09e62ca" Jan 21 13:27:18 crc kubenswrapper[4881]: E0121 13:27:18.613551 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bdabdbe9600c58209cbe056b6045ff78c4f1191546fbd41819662998d09e62ca\": container with ID starting with bdabdbe9600c58209cbe056b6045ff78c4f1191546fbd41819662998d09e62ca not found: ID does not exist" containerID="bdabdbe9600c58209cbe056b6045ff78c4f1191546fbd41819662998d09e62ca" Jan 21 13:27:18 crc kubenswrapper[4881]: I0121 13:27:18.613589 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bdabdbe9600c58209cbe056b6045ff78c4f1191546fbd41819662998d09e62ca"} err="failed to get container status \"bdabdbe9600c58209cbe056b6045ff78c4f1191546fbd41819662998d09e62ca\": rpc error: code = NotFound desc = could not find container \"bdabdbe9600c58209cbe056b6045ff78c4f1191546fbd41819662998d09e62ca\": container with ID starting with bdabdbe9600c58209cbe056b6045ff78c4f1191546fbd41819662998d09e62ca not found: ID does not exist" Jan 21 13:27:18 crc kubenswrapper[4881]: I0121 13:27:18.613610 4881 scope.go:117] "RemoveContainer" 
containerID="4ecb599d45491627d891719f8405002438c76fc0d4d316a1bfd6cd193b1f3a08" Jan 21 13:27:18 crc kubenswrapper[4881]: E0121 13:27:18.613998 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4ecb599d45491627d891719f8405002438c76fc0d4d316a1bfd6cd193b1f3a08\": container with ID starting with 4ecb599d45491627d891719f8405002438c76fc0d4d316a1bfd6cd193b1f3a08 not found: ID does not exist" containerID="4ecb599d45491627d891719f8405002438c76fc0d4d316a1bfd6cd193b1f3a08" Jan 21 13:27:18 crc kubenswrapper[4881]: I0121 13:27:18.614018 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4ecb599d45491627d891719f8405002438c76fc0d4d316a1bfd6cd193b1f3a08"} err="failed to get container status \"4ecb599d45491627d891719f8405002438c76fc0d4d316a1bfd6cd193b1f3a08\": rpc error: code = NotFound desc = could not find container \"4ecb599d45491627d891719f8405002438c76fc0d4d316a1bfd6cd193b1f3a08\": container with ID starting with 4ecb599d45491627d891719f8405002438c76fc0d4d316a1bfd6cd193b1f3a08 not found: ID does not exist" Jan 21 13:27:19 crc kubenswrapper[4881]: I0121 13:27:19.329749 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="51bcc54a-e7f1-455f-a90e-6dbb13e2ca49" path="/var/lib/kubelet/pods/51bcc54a-e7f1-455f-a90e-6dbb13e2ca49/volumes" Jan 21 13:27:22 crc kubenswrapper[4881]: I0121 13:27:22.312229 4881 scope.go:117] "RemoveContainer" containerID="4886b3658c033dd32099ac18722859936d425d0f630ab9d495cc3181594e8fc4" Jan 21 13:27:22 crc kubenswrapper[4881]: E0121 13:27:22.313054 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 13:27:33 crc kubenswrapper[4881]: I0121 13:27:33.319301 4881 scope.go:117] "RemoveContainer" containerID="4886b3658c033dd32099ac18722859936d425d0f630ab9d495cc3181594e8fc4" Jan 21 13:27:33 crc kubenswrapper[4881]: E0121 13:27:33.320097 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 13:27:48 crc kubenswrapper[4881]: I0121 13:27:48.312026 4881 scope.go:117] "RemoveContainer" containerID="4886b3658c033dd32099ac18722859936d425d0f630ab9d495cc3181594e8fc4" Jan 21 13:27:48 crc kubenswrapper[4881]: E0121 13:27:48.313218 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 13:28:02 crc kubenswrapper[4881]: I0121 13:28:02.311297 4881 scope.go:117] "RemoveContainer" 
containerID="4886b3658c033dd32099ac18722859936d425d0f630ab9d495cc3181594e8fc4" Jan 21 13:28:02 crc kubenswrapper[4881]: E0121 13:28:02.312205 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 13:28:13 crc kubenswrapper[4881]: I0121 13:28:13.324759 4881 scope.go:117] "RemoveContainer" containerID="4886b3658c033dd32099ac18722859936d425d0f630ab9d495cc3181594e8fc4" Jan 21 13:28:13 crc kubenswrapper[4881]: E0121 13:28:13.325775 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 13:28:25 crc kubenswrapper[4881]: I0121 13:28:25.311820 4881 scope.go:117] "RemoveContainer" containerID="4886b3658c033dd32099ac18722859936d425d0f630ab9d495cc3181594e8fc4" Jan 21 13:28:25 crc kubenswrapper[4881]: E0121 13:28:25.312971 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 13:28:36 crc kubenswrapper[4881]: I0121 13:28:36.312442 4881 scope.go:117] "RemoveContainer" containerID="4886b3658c033dd32099ac18722859936d425d0f630ab9d495cc3181594e8fc4" Jan 21 13:28:36 crc kubenswrapper[4881]: E0121 13:28:36.313692 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 13:28:51 crc kubenswrapper[4881]: I0121 13:28:51.315554 4881 scope.go:117] "RemoveContainer" containerID="4886b3658c033dd32099ac18722859936d425d0f630ab9d495cc3181594e8fc4" Jan 21 13:28:51 crc kubenswrapper[4881]: E0121 13:28:51.317081 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 13:29:04 crc kubenswrapper[4881]: I0121 13:29:04.310619 4881 scope.go:117] "RemoveContainer" containerID="4886b3658c033dd32099ac18722859936d425d0f630ab9d495cc3181594e8fc4" Jan 21 13:29:04 crc kubenswrapper[4881]: E0121 13:29:04.311718 4881 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 13:29:18 crc kubenswrapper[4881]: I0121 13:29:18.311455 4881 scope.go:117] "RemoveContainer" containerID="4886b3658c033dd32099ac18722859936d425d0f630ab9d495cc3181594e8fc4" Jan 21 13:29:18 crc kubenswrapper[4881]: E0121 13:29:18.312344 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 13:29:29 crc kubenswrapper[4881]: I0121 13:29:29.315934 4881 scope.go:117] "RemoveContainer" containerID="4886b3658c033dd32099ac18722859936d425d0f630ab9d495cc3181594e8fc4" Jan 21 13:29:29 crc kubenswrapper[4881]: E0121 13:29:29.316829 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 13:29:44 crc kubenswrapper[4881]: I0121 13:29:44.312087 4881 scope.go:117] "RemoveContainer" containerID="4886b3658c033dd32099ac18722859936d425d0f630ab9d495cc3181594e8fc4" Jan 21 13:29:44 crc kubenswrapper[4881]: I0121 13:29:44.689361 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" event={"ID":"3687b313-1df2-4274-80db-8c758b51bf2d","Type":"ContainerStarted","Data":"9e57748be28be159b55c45e3fa90ee30718fb2ed9c755f793bb76672c2c13826"} Jan 21 13:30:00 crc kubenswrapper[4881]: I0121 13:30:00.159588 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483370-2sh2t"] Jan 21 13:30:00 crc kubenswrapper[4881]: E0121 13:30:00.160667 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51bcc54a-e7f1-455f-a90e-6dbb13e2ca49" containerName="registry-server" Jan 21 13:30:00 crc kubenswrapper[4881]: I0121 13:30:00.160693 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="51bcc54a-e7f1-455f-a90e-6dbb13e2ca49" containerName="registry-server" Jan 21 13:30:00 crc kubenswrapper[4881]: E0121 13:30:00.160712 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51bcc54a-e7f1-455f-a90e-6dbb13e2ca49" containerName="extract-content" Jan 21 13:30:00 crc kubenswrapper[4881]: I0121 13:30:00.160717 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="51bcc54a-e7f1-455f-a90e-6dbb13e2ca49" containerName="extract-content" Jan 21 13:30:00 crc kubenswrapper[4881]: E0121 13:30:00.160740 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51bcc54a-e7f1-455f-a90e-6dbb13e2ca49" containerName="extract-utilities" Jan 21 13:30:00 crc kubenswrapper[4881]: I0121 13:30:00.160746 4881 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="51bcc54a-e7f1-455f-a90e-6dbb13e2ca49" containerName="extract-utilities" Jan 21 13:30:00 crc kubenswrapper[4881]: I0121 13:30:00.161062 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="51bcc54a-e7f1-455f-a90e-6dbb13e2ca49" containerName="registry-server" Jan 21 13:30:00 crc kubenswrapper[4881]: I0121 13:30:00.161947 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483370-2sh2t" Jan 21 13:30:00 crc kubenswrapper[4881]: I0121 13:30:00.164535 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 21 13:30:00 crc kubenswrapper[4881]: I0121 13:30:00.176911 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 21 13:30:00 crc kubenswrapper[4881]: I0121 13:30:00.188551 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483370-2sh2t"] Jan 21 13:30:00 crc kubenswrapper[4881]: I0121 13:30:00.291217 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/66b8a832-c205-40a4-9a2f-e70e2f246734-config-volume\") pod \"collect-profiles-29483370-2sh2t\" (UID: \"66b8a832-c205-40a4-9a2f-e70e2f246734\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483370-2sh2t" Jan 21 13:30:00 crc kubenswrapper[4881]: I0121 13:30:00.291284 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/66b8a832-c205-40a4-9a2f-e70e2f246734-secret-volume\") pod \"collect-profiles-29483370-2sh2t\" (UID: \"66b8a832-c205-40a4-9a2f-e70e2f246734\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483370-2sh2t" Jan 21 13:30:00 crc kubenswrapper[4881]: I0121 13:30:00.291313 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bjrnb\" (UniqueName: \"kubernetes.io/projected/66b8a832-c205-40a4-9a2f-e70e2f246734-kube-api-access-bjrnb\") pod \"collect-profiles-29483370-2sh2t\" (UID: \"66b8a832-c205-40a4-9a2f-e70e2f246734\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483370-2sh2t" Jan 21 13:30:00 crc kubenswrapper[4881]: I0121 13:30:00.394146 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/66b8a832-c205-40a4-9a2f-e70e2f246734-config-volume\") pod \"collect-profiles-29483370-2sh2t\" (UID: \"66b8a832-c205-40a4-9a2f-e70e2f246734\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483370-2sh2t" Jan 21 13:30:00 crc kubenswrapper[4881]: I0121 13:30:00.394244 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/66b8a832-c205-40a4-9a2f-e70e2f246734-secret-volume\") pod \"collect-profiles-29483370-2sh2t\" (UID: \"66b8a832-c205-40a4-9a2f-e70e2f246734\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483370-2sh2t" Jan 21 13:30:00 crc kubenswrapper[4881]: I0121 13:30:00.394271 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bjrnb\" (UniqueName: \"kubernetes.io/projected/66b8a832-c205-40a4-9a2f-e70e2f246734-kube-api-access-bjrnb\") pod 
\"collect-profiles-29483370-2sh2t\" (UID: \"66b8a832-c205-40a4-9a2f-e70e2f246734\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483370-2sh2t" Jan 21 13:30:00 crc kubenswrapper[4881]: I0121 13:30:00.395470 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/66b8a832-c205-40a4-9a2f-e70e2f246734-config-volume\") pod \"collect-profiles-29483370-2sh2t\" (UID: \"66b8a832-c205-40a4-9a2f-e70e2f246734\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483370-2sh2t" Jan 21 13:30:00 crc kubenswrapper[4881]: I0121 13:30:00.411847 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/66b8a832-c205-40a4-9a2f-e70e2f246734-secret-volume\") pod \"collect-profiles-29483370-2sh2t\" (UID: \"66b8a832-c205-40a4-9a2f-e70e2f246734\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483370-2sh2t" Jan 21 13:30:00 crc kubenswrapper[4881]: I0121 13:30:00.419333 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bjrnb\" (UniqueName: \"kubernetes.io/projected/66b8a832-c205-40a4-9a2f-e70e2f246734-kube-api-access-bjrnb\") pod \"collect-profiles-29483370-2sh2t\" (UID: \"66b8a832-c205-40a4-9a2f-e70e2f246734\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483370-2sh2t" Jan 21 13:30:00 crc kubenswrapper[4881]: I0121 13:30:00.489512 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483370-2sh2t" Jan 21 13:30:00 crc kubenswrapper[4881]: I0121 13:30:00.969841 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483370-2sh2t"] Jan 21 13:30:00 crc kubenswrapper[4881]: W0121 13:30:00.974248 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod66b8a832_c205_40a4_9a2f_e70e2f246734.slice/crio-76503e9459d784f9580048265a4a432916dab198fb1de1da70bef9523f127374 WatchSource:0}: Error finding container 76503e9459d784f9580048265a4a432916dab198fb1de1da70bef9523f127374: Status 404 returned error can't find the container with id 76503e9459d784f9580048265a4a432916dab198fb1de1da70bef9523f127374 Jan 21 13:30:01 crc kubenswrapper[4881]: I0121 13:30:01.883320 4881 generic.go:334] "Generic (PLEG): container finished" podID="66b8a832-c205-40a4-9a2f-e70e2f246734" containerID="0ba5d71f6335983529a141b9ebebd16f047678d591839e108c1dd405896d81e3" exitCode=0 Jan 21 13:30:01 crc kubenswrapper[4881]: I0121 13:30:01.883491 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483370-2sh2t" event={"ID":"66b8a832-c205-40a4-9a2f-e70e2f246734","Type":"ContainerDied","Data":"0ba5d71f6335983529a141b9ebebd16f047678d591839e108c1dd405896d81e3"} Jan 21 13:30:01 crc kubenswrapper[4881]: I0121 13:30:01.883675 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483370-2sh2t" event={"ID":"66b8a832-c205-40a4-9a2f-e70e2f246734","Type":"ContainerStarted","Data":"76503e9459d784f9580048265a4a432916dab198fb1de1da70bef9523f127374"} Jan 21 13:30:03 crc kubenswrapper[4881]: I0121 13:30:03.307349 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483370-2sh2t" Jan 21 13:30:03 crc kubenswrapper[4881]: I0121 13:30:03.467637 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bjrnb\" (UniqueName: \"kubernetes.io/projected/66b8a832-c205-40a4-9a2f-e70e2f246734-kube-api-access-bjrnb\") pod \"66b8a832-c205-40a4-9a2f-e70e2f246734\" (UID: \"66b8a832-c205-40a4-9a2f-e70e2f246734\") " Jan 21 13:30:03 crc kubenswrapper[4881]: I0121 13:30:03.468234 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/66b8a832-c205-40a4-9a2f-e70e2f246734-secret-volume\") pod \"66b8a832-c205-40a4-9a2f-e70e2f246734\" (UID: \"66b8a832-c205-40a4-9a2f-e70e2f246734\") " Jan 21 13:30:03 crc kubenswrapper[4881]: I0121 13:30:03.468611 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/66b8a832-c205-40a4-9a2f-e70e2f246734-config-volume\") pod \"66b8a832-c205-40a4-9a2f-e70e2f246734\" (UID: \"66b8a832-c205-40a4-9a2f-e70e2f246734\") " Jan 21 13:30:03 crc kubenswrapper[4881]: I0121 13:30:03.472563 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/66b8a832-c205-40a4-9a2f-e70e2f246734-config-volume" (OuterVolumeSpecName: "config-volume") pod "66b8a832-c205-40a4-9a2f-e70e2f246734" (UID: "66b8a832-c205-40a4-9a2f-e70e2f246734"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:30:03 crc kubenswrapper[4881]: I0121 13:30:03.475751 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/66b8a832-c205-40a4-9a2f-e70e2f246734-kube-api-access-bjrnb" (OuterVolumeSpecName: "kube-api-access-bjrnb") pod "66b8a832-c205-40a4-9a2f-e70e2f246734" (UID: "66b8a832-c205-40a4-9a2f-e70e2f246734"). InnerVolumeSpecName "kube-api-access-bjrnb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:30:03 crc kubenswrapper[4881]: I0121 13:30:03.486612 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/66b8a832-c205-40a4-9a2f-e70e2f246734-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "66b8a832-c205-40a4-9a2f-e70e2f246734" (UID: "66b8a832-c205-40a4-9a2f-e70e2f246734"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:30:03 crc kubenswrapper[4881]: I0121 13:30:03.571910 4881 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/66b8a832-c205-40a4-9a2f-e70e2f246734-config-volume\") on node \"crc\" DevicePath \"\"" Jan 21 13:30:03 crc kubenswrapper[4881]: I0121 13:30:03.571964 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bjrnb\" (UniqueName: \"kubernetes.io/projected/66b8a832-c205-40a4-9a2f-e70e2f246734-kube-api-access-bjrnb\") on node \"crc\" DevicePath \"\"" Jan 21 13:30:03 crc kubenswrapper[4881]: I0121 13:30:03.571977 4881 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/66b8a832-c205-40a4-9a2f-e70e2f246734-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 21 13:30:03 crc kubenswrapper[4881]: I0121 13:30:03.924442 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483370-2sh2t" event={"ID":"66b8a832-c205-40a4-9a2f-e70e2f246734","Type":"ContainerDied","Data":"76503e9459d784f9580048265a4a432916dab198fb1de1da70bef9523f127374"} Jan 21 13:30:03 crc kubenswrapper[4881]: I0121 13:30:03.924915 4881 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="76503e9459d784f9580048265a4a432916dab198fb1de1da70bef9523f127374" Jan 21 13:30:03 crc kubenswrapper[4881]: I0121 13:30:03.925118 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483370-2sh2t" Jan 21 13:30:04 crc kubenswrapper[4881]: I0121 13:30:04.403174 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483325-rzms8"] Jan 21 13:30:04 crc kubenswrapper[4881]: I0121 13:30:04.411164 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483325-rzms8"] Jan 21 13:30:05 crc kubenswrapper[4881]: I0121 13:30:05.326498 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e92a1004-4ae7-4c9f-8ed8-1cb1a78dd2b7" path="/var/lib/kubelet/pods/e92a1004-4ae7-4c9f-8ed8-1cb1a78dd2b7/volumes" Jan 21 13:30:36 crc kubenswrapper[4881]: I0121 13:30:36.646112 4881 scope.go:117] "RemoveContainer" containerID="77513d54cf4d9f5496abf1ce9933fa0d7aa3da0530b4c165a7c1ed70ba94b89c" Jan 21 13:31:48 crc kubenswrapper[4881]: I0121 13:31:48.414192 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-ft2l4"] Jan 21 13:31:48 crc kubenswrapper[4881]: E0121 13:31:48.415274 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66b8a832-c205-40a4-9a2f-e70e2f246734" containerName="collect-profiles" Jan 21 13:31:48 crc kubenswrapper[4881]: I0121 13:31:48.415291 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="66b8a832-c205-40a4-9a2f-e70e2f246734" containerName="collect-profiles" Jan 21 13:31:48 crc kubenswrapper[4881]: I0121 13:31:48.415564 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="66b8a832-c205-40a4-9a2f-e70e2f246734" containerName="collect-profiles" Jan 21 13:31:48 crc kubenswrapper[4881]: I0121 13:31:48.421887 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-ft2l4" Jan 21 13:31:48 crc kubenswrapper[4881]: I0121 13:31:48.431501 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ft2l4"] Jan 21 13:31:48 crc kubenswrapper[4881]: I0121 13:31:48.549208 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c759a886-be2c-47df-a1d7-1208d82c2f59-utilities\") pod \"redhat-operators-ft2l4\" (UID: \"c759a886-be2c-47df-a1d7-1208d82c2f59\") " pod="openshift-marketplace/redhat-operators-ft2l4" Jan 21 13:31:48 crc kubenswrapper[4881]: I0121 13:31:48.549292 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c759a886-be2c-47df-a1d7-1208d82c2f59-catalog-content\") pod \"redhat-operators-ft2l4\" (UID: \"c759a886-be2c-47df-a1d7-1208d82c2f59\") " pod="openshift-marketplace/redhat-operators-ft2l4" Jan 21 13:31:48 crc kubenswrapper[4881]: I0121 13:31:48.549602 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-clkkc\" (UniqueName: \"kubernetes.io/projected/c759a886-be2c-47df-a1d7-1208d82c2f59-kube-api-access-clkkc\") pod \"redhat-operators-ft2l4\" (UID: \"c759a886-be2c-47df-a1d7-1208d82c2f59\") " pod="openshift-marketplace/redhat-operators-ft2l4" Jan 21 13:31:48 crc kubenswrapper[4881]: I0121 13:31:48.652755 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c759a886-be2c-47df-a1d7-1208d82c2f59-utilities\") pod \"redhat-operators-ft2l4\" (UID: \"c759a886-be2c-47df-a1d7-1208d82c2f59\") " pod="openshift-marketplace/redhat-operators-ft2l4" Jan 21 13:31:48 crc kubenswrapper[4881]: I0121 13:31:48.652854 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c759a886-be2c-47df-a1d7-1208d82c2f59-catalog-content\") pod \"redhat-operators-ft2l4\" (UID: \"c759a886-be2c-47df-a1d7-1208d82c2f59\") " pod="openshift-marketplace/redhat-operators-ft2l4" Jan 21 13:31:48 crc kubenswrapper[4881]: I0121 13:31:48.652915 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-clkkc\" (UniqueName: \"kubernetes.io/projected/c759a886-be2c-47df-a1d7-1208d82c2f59-kube-api-access-clkkc\") pod \"redhat-operators-ft2l4\" (UID: \"c759a886-be2c-47df-a1d7-1208d82c2f59\") " pod="openshift-marketplace/redhat-operators-ft2l4" Jan 21 13:31:48 crc kubenswrapper[4881]: I0121 13:31:48.653457 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c759a886-be2c-47df-a1d7-1208d82c2f59-utilities\") pod \"redhat-operators-ft2l4\" (UID: \"c759a886-be2c-47df-a1d7-1208d82c2f59\") " pod="openshift-marketplace/redhat-operators-ft2l4" Jan 21 13:31:48 crc kubenswrapper[4881]: I0121 13:31:48.653494 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c759a886-be2c-47df-a1d7-1208d82c2f59-catalog-content\") pod \"redhat-operators-ft2l4\" (UID: \"c759a886-be2c-47df-a1d7-1208d82c2f59\") " pod="openshift-marketplace/redhat-operators-ft2l4" Jan 21 13:31:48 crc kubenswrapper[4881]: I0121 13:31:48.676329 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-clkkc\" (UniqueName: \"kubernetes.io/projected/c759a886-be2c-47df-a1d7-1208d82c2f59-kube-api-access-clkkc\") pod \"redhat-operators-ft2l4\" (UID: \"c759a886-be2c-47df-a1d7-1208d82c2f59\") " pod="openshift-marketplace/redhat-operators-ft2l4" Jan 21 13:31:48 crc kubenswrapper[4881]: I0121 13:31:48.769987 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ft2l4" Jan 21 13:31:49 crc kubenswrapper[4881]: I0121 13:31:49.283894 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ft2l4"] Jan 21 13:31:49 crc kubenswrapper[4881]: I0121 13:31:49.500116 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ft2l4" event={"ID":"c759a886-be2c-47df-a1d7-1208d82c2f59","Type":"ContainerStarted","Data":"2df816c5e6752dbeb71a7b9bbfa33d75710ad8f517cdada1359b9256fa202c34"} Jan 21 13:31:50 crc kubenswrapper[4881]: I0121 13:31:50.512996 4881 generic.go:334] "Generic (PLEG): container finished" podID="c759a886-be2c-47df-a1d7-1208d82c2f59" containerID="268d2958c35060cfcd098ead85774caebc987e2f07b6521892e13e27bbd7542e" exitCode=0 Jan 21 13:31:50 crc kubenswrapper[4881]: I0121 13:31:50.513119 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ft2l4" event={"ID":"c759a886-be2c-47df-a1d7-1208d82c2f59","Type":"ContainerDied","Data":"268d2958c35060cfcd098ead85774caebc987e2f07b6521892e13e27bbd7542e"} Jan 21 13:31:50 crc kubenswrapper[4881]: I0121 13:31:50.515538 4881 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 13:31:52 crc kubenswrapper[4881]: I0121 13:31:52.535589 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ft2l4" event={"ID":"c759a886-be2c-47df-a1d7-1208d82c2f59","Type":"ContainerStarted","Data":"645a629d574e21d4164a272af8a2d18057eaf2429750011101612452f6c847c3"} Jan 21 13:31:56 crc kubenswrapper[4881]: I0121 13:31:56.602534 4881 generic.go:334] "Generic (PLEG): container finished" podID="c759a886-be2c-47df-a1d7-1208d82c2f59" containerID="645a629d574e21d4164a272af8a2d18057eaf2429750011101612452f6c847c3" exitCode=0 Jan 21 13:31:56 crc kubenswrapper[4881]: I0121 13:31:56.602638 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ft2l4" event={"ID":"c759a886-be2c-47df-a1d7-1208d82c2f59","Type":"ContainerDied","Data":"645a629d574e21d4164a272af8a2d18057eaf2429750011101612452f6c847c3"} Jan 21 13:31:57 crc kubenswrapper[4881]: I0121 13:31:57.614299 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ft2l4" event={"ID":"c759a886-be2c-47df-a1d7-1208d82c2f59","Type":"ContainerStarted","Data":"99f9dfbb0e65c7e6f7b6294407d45e0162afa7411415e9ceed83cccdb2a31aa8"} Jan 21 13:31:57 crc kubenswrapper[4881]: I0121 13:31:57.646772 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-ft2l4" podStartSLOduration=2.927036289 podStartE2EDuration="9.646720409s" podCreationTimestamp="2026-01-21 13:31:48 +0000 UTC" firstStartedPulling="2026-01-21 13:31:50.515215159 +0000 UTC m=+9297.775171618" lastFinishedPulling="2026-01-21 13:31:57.234899259 +0000 UTC m=+9304.494855738" observedRunningTime="2026-01-21 13:31:57.635122286 +0000 UTC m=+9304.895078765" watchObservedRunningTime="2026-01-21 13:31:57.646720409 +0000 UTC m=+9304.906676878" Jan 21 13:31:58 crc 
kubenswrapper[4881]: I0121 13:31:58.770768 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-ft2l4" Jan 21 13:31:58 crc kubenswrapper[4881]: I0121 13:31:58.771187 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-ft2l4" Jan 21 13:31:59 crc kubenswrapper[4881]: I0121 13:31:59.828023 4881 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-ft2l4" podUID="c759a886-be2c-47df-a1d7-1208d82c2f59" containerName="registry-server" probeResult="failure" output=< Jan 21 13:31:59 crc kubenswrapper[4881]: timeout: failed to connect service ":50051" within 1s Jan 21 13:31:59 crc kubenswrapper[4881]: > Jan 21 13:32:00 crc kubenswrapper[4881]: I0121 13:31:59.851475 4881 patch_prober.go:28] interesting pod/machine-config-daemon-fb4fr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 13:32:00 crc kubenswrapper[4881]: I0121 13:31:59.851574 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 13:32:08 crc kubenswrapper[4881]: I0121 13:32:08.854106 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-ft2l4" Jan 21 13:32:08 crc kubenswrapper[4881]: I0121 13:32:08.917068 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-ft2l4" Jan 21 13:32:09 crc kubenswrapper[4881]: I0121 13:32:09.110919 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-ft2l4"] Jan 21 13:32:10 crc kubenswrapper[4881]: I0121 13:32:10.766345 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-ft2l4" podUID="c759a886-be2c-47df-a1d7-1208d82c2f59" containerName="registry-server" containerID="cri-o://99f9dfbb0e65c7e6f7b6294407d45e0162afa7411415e9ceed83cccdb2a31aa8" gracePeriod=2 Jan 21 13:32:11 crc kubenswrapper[4881]: I0121 13:32:11.377427 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-ft2l4" Jan 21 13:32:11 crc kubenswrapper[4881]: I0121 13:32:11.560136 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c759a886-be2c-47df-a1d7-1208d82c2f59-catalog-content\") pod \"c759a886-be2c-47df-a1d7-1208d82c2f59\" (UID: \"c759a886-be2c-47df-a1d7-1208d82c2f59\") " Jan 21 13:32:11 crc kubenswrapper[4881]: I0121 13:32:11.560282 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-clkkc\" (UniqueName: \"kubernetes.io/projected/c759a886-be2c-47df-a1d7-1208d82c2f59-kube-api-access-clkkc\") pod \"c759a886-be2c-47df-a1d7-1208d82c2f59\" (UID: \"c759a886-be2c-47df-a1d7-1208d82c2f59\") " Jan 21 13:32:11 crc kubenswrapper[4881]: I0121 13:32:11.560337 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c759a886-be2c-47df-a1d7-1208d82c2f59-utilities\") pod \"c759a886-be2c-47df-a1d7-1208d82c2f59\" (UID: \"c759a886-be2c-47df-a1d7-1208d82c2f59\") " Jan 21 13:32:11 crc kubenswrapper[4881]: I0121 13:32:11.561659 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c759a886-be2c-47df-a1d7-1208d82c2f59-utilities" (OuterVolumeSpecName: "utilities") pod "c759a886-be2c-47df-a1d7-1208d82c2f59" (UID: "c759a886-be2c-47df-a1d7-1208d82c2f59"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 13:32:11 crc kubenswrapper[4881]: I0121 13:32:11.576839 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c759a886-be2c-47df-a1d7-1208d82c2f59-kube-api-access-clkkc" (OuterVolumeSpecName: "kube-api-access-clkkc") pod "c759a886-be2c-47df-a1d7-1208d82c2f59" (UID: "c759a886-be2c-47df-a1d7-1208d82c2f59"). InnerVolumeSpecName "kube-api-access-clkkc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:32:11 crc kubenswrapper[4881]: I0121 13:32:11.663419 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-clkkc\" (UniqueName: \"kubernetes.io/projected/c759a886-be2c-47df-a1d7-1208d82c2f59-kube-api-access-clkkc\") on node \"crc\" DevicePath \"\"" Jan 21 13:32:11 crc kubenswrapper[4881]: I0121 13:32:11.663466 4881 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c759a886-be2c-47df-a1d7-1208d82c2f59-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 13:32:11 crc kubenswrapper[4881]: I0121 13:32:11.714569 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c759a886-be2c-47df-a1d7-1208d82c2f59-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c759a886-be2c-47df-a1d7-1208d82c2f59" (UID: "c759a886-be2c-47df-a1d7-1208d82c2f59"). InnerVolumeSpecName "catalog-content". 
Jan 21 13:32:11 crc kubenswrapper[4881]: I0121 13:32:11.765588 4881 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c759a886-be2c-47df-a1d7-1208d82c2f59-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 21 13:32:11 crc kubenswrapper[4881]: I0121 13:32:11.779963 4881 generic.go:334] "Generic (PLEG): container finished" podID="c759a886-be2c-47df-a1d7-1208d82c2f59" containerID="99f9dfbb0e65c7e6f7b6294407d45e0162afa7411415e9ceed83cccdb2a31aa8" exitCode=0
Jan 21 13:32:11 crc kubenswrapper[4881]: I0121 13:32:11.780017 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ft2l4" event={"ID":"c759a886-be2c-47df-a1d7-1208d82c2f59","Type":"ContainerDied","Data":"99f9dfbb0e65c7e6f7b6294407d45e0162afa7411415e9ceed83cccdb2a31aa8"}
Jan 21 13:32:11 crc kubenswrapper[4881]: I0121 13:32:11.780059 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ft2l4" event={"ID":"c759a886-be2c-47df-a1d7-1208d82c2f59","Type":"ContainerDied","Data":"2df816c5e6752dbeb71a7b9bbfa33d75710ad8f517cdada1359b9256fa202c34"}
Jan 21 13:32:11 crc kubenswrapper[4881]: I0121 13:32:11.780081 4881 scope.go:117] "RemoveContainer" containerID="99f9dfbb0e65c7e6f7b6294407d45e0162afa7411415e9ceed83cccdb2a31aa8"
Jan 21 13:32:11 crc kubenswrapper[4881]: I0121 13:32:11.780262 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ft2l4"
Jan 21 13:32:11 crc kubenswrapper[4881]: I0121 13:32:11.800325 4881 scope.go:117] "RemoveContainer" containerID="645a629d574e21d4164a272af8a2d18057eaf2429750011101612452f6c847c3"
Jan 21 13:32:11 crc kubenswrapper[4881]: I0121 13:32:11.830008 4881 scope.go:117] "RemoveContainer" containerID="268d2958c35060cfcd098ead85774caebc987e2f07b6521892e13e27bbd7542e"
Jan 21 13:32:11 crc kubenswrapper[4881]: I0121 13:32:11.833416 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-ft2l4"]
Jan 21 13:32:11 crc kubenswrapper[4881]: I0121 13:32:11.841610 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-ft2l4"]
Jan 21 13:32:11 crc kubenswrapper[4881]: I0121 13:32:11.880712 4881 scope.go:117] "RemoveContainer" containerID="99f9dfbb0e65c7e6f7b6294407d45e0162afa7411415e9ceed83cccdb2a31aa8"
Jan 21 13:32:11 crc kubenswrapper[4881]: E0121 13:32:11.883388 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"99f9dfbb0e65c7e6f7b6294407d45e0162afa7411415e9ceed83cccdb2a31aa8\": container with ID starting with 99f9dfbb0e65c7e6f7b6294407d45e0162afa7411415e9ceed83cccdb2a31aa8 not found: ID does not exist" containerID="99f9dfbb0e65c7e6f7b6294407d45e0162afa7411415e9ceed83cccdb2a31aa8"
Jan 21 13:32:11 crc kubenswrapper[4881]: I0121 13:32:11.883437 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"99f9dfbb0e65c7e6f7b6294407d45e0162afa7411415e9ceed83cccdb2a31aa8"} err="failed to get container status \"99f9dfbb0e65c7e6f7b6294407d45e0162afa7411415e9ceed83cccdb2a31aa8\": rpc error: code = NotFound desc = could not find container \"99f9dfbb0e65c7e6f7b6294407d45e0162afa7411415e9ceed83cccdb2a31aa8\": container with ID starting with 99f9dfbb0e65c7e6f7b6294407d45e0162afa7411415e9ceed83cccdb2a31aa8 not found: ID does not exist"
Jan 21 13:32:11 crc kubenswrapper[4881]: I0121 13:32:11.883469 4881 scope.go:117] "RemoveContainer" containerID="645a629d574e21d4164a272af8a2d18057eaf2429750011101612452f6c847c3"
Jan 21 13:32:11 crc kubenswrapper[4881]: E0121 13:32:11.883850 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"645a629d574e21d4164a272af8a2d18057eaf2429750011101612452f6c847c3\": container with ID starting with 645a629d574e21d4164a272af8a2d18057eaf2429750011101612452f6c847c3 not found: ID does not exist" containerID="645a629d574e21d4164a272af8a2d18057eaf2429750011101612452f6c847c3"
Jan 21 13:32:11 crc kubenswrapper[4881]: I0121 13:32:11.883872 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"645a629d574e21d4164a272af8a2d18057eaf2429750011101612452f6c847c3"} err="failed to get container status \"645a629d574e21d4164a272af8a2d18057eaf2429750011101612452f6c847c3\": rpc error: code = NotFound desc = could not find container \"645a629d574e21d4164a272af8a2d18057eaf2429750011101612452f6c847c3\": container with ID starting with 645a629d574e21d4164a272af8a2d18057eaf2429750011101612452f6c847c3 not found: ID does not exist"
Jan 21 13:32:11 crc kubenswrapper[4881]: I0121 13:32:11.883886 4881 scope.go:117] "RemoveContainer" containerID="268d2958c35060cfcd098ead85774caebc987e2f07b6521892e13e27bbd7542e"
Jan 21 13:32:11 crc kubenswrapper[4881]: E0121 13:32:11.884114 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"268d2958c35060cfcd098ead85774caebc987e2f07b6521892e13e27bbd7542e\": container with ID starting with 268d2958c35060cfcd098ead85774caebc987e2f07b6521892e13e27bbd7542e not found: ID does not exist" containerID="268d2958c35060cfcd098ead85774caebc987e2f07b6521892e13e27bbd7542e"
Jan 21 13:32:11 crc kubenswrapper[4881]: I0121 13:32:11.884139 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"268d2958c35060cfcd098ead85774caebc987e2f07b6521892e13e27bbd7542e"} err="failed to get container status \"268d2958c35060cfcd098ead85774caebc987e2f07b6521892e13e27bbd7542e\": rpc error: code = NotFound desc = could not find container \"268d2958c35060cfcd098ead85774caebc987e2f07b6521892e13e27bbd7542e\": container with ID starting with 268d2958c35060cfcd098ead85774caebc987e2f07b6521892e13e27bbd7542e not found: ID does not exist"
Jan 21 13:32:13 crc kubenswrapper[4881]: I0121 13:32:13.335814 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c759a886-be2c-47df-a1d7-1208d82c2f59" path="/var/lib/kubelet/pods/c759a886-be2c-47df-a1d7-1208d82c2f59/volumes"
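
The "ContainerStatus from runtime service failed" / "DeleteContainer returned error" pairs above are the benign tail of cleanup: the kubelet asks CRI-O about container IDs it has already removed, gets gRPC NotFound back, and carries on. A minimal sketch of that idempotent-delete pattern, assuming NotFound can safely be treated as success (removeContainer is a hypothetical stand-in for the runtime call, not the kubelet's actual code):

    package main

    import (
        "fmt"

        "google.golang.org/grpc/codes"
        "google.golang.org/grpc/status"
    )

    // removeContainer is a hypothetical stand-in for the CRI RemoveContainer RPC.
    func removeContainer(id string) error {
        return status.Error(codes.NotFound, "could not find container "+id)
    }

    // deleteIfPresent treats NotFound as "already gone", mirroring the log.
    func deleteIfPresent(id string) error {
        if err := removeContainer(id); err != nil && status.Code(err) != codes.NotFound {
            return err
        }
        return nil
    }

    func main() {
        fmt.Println("cleanup error:", deleteIfPresent("99f9dfbb")) // prints <nil>
    }
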
Jan 21 13:32:29 crc kubenswrapper[4881]: I0121 13:32:29.850887 4881 patch_prober.go:28] interesting pod/machine-config-daemon-fb4fr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 21 13:32:29 crc kubenswrapper[4881]: I0121 13:32:29.851481 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 21 13:32:59 crc kubenswrapper[4881]: I0121 13:32:59.850885 4881 patch_prober.go:28] interesting pod/machine-config-daemon-fb4fr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 21 13:32:59 crc kubenswrapper[4881]: I0121 13:32:59.851346 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 21 13:32:59 crc kubenswrapper[4881]: I0121 13:32:59.851404 4881 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr"
Jan 21 13:32:59 crc kubenswrapper[4881]: I0121 13:32:59.852449 4881 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"9e57748be28be159b55c45e3fa90ee30718fb2ed9c755f793bb76672c2c13826"} pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 21 13:32:59 crc kubenswrapper[4881]: I0121 13:32:59.852514 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" containerName="machine-config-daemon" containerID="cri-o://9e57748be28be159b55c45e3fa90ee30718fb2ed9c755f793bb76672c2c13826" gracePeriod=600
Jan 21 13:33:00 crc kubenswrapper[4881]: I0121 13:33:00.468943 4881 generic.go:334] "Generic (PLEG): container finished" podID="3687b313-1df2-4274-80db-8c758b51bf2d" containerID="9e57748be28be159b55c45e3fa90ee30718fb2ed9c755f793bb76672c2c13826" exitCode=0
Jan 21 13:33:00 crc kubenswrapper[4881]: I0121 13:33:00.469036 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" event={"ID":"3687b313-1df2-4274-80db-8c758b51bf2d","Type":"ContainerDied","Data":"9e57748be28be159b55c45e3fa90ee30718fb2ed9c755f793bb76672c2c13826"}
Jan 21 13:33:00 crc kubenswrapper[4881]: I0121 13:33:00.469318 4881 scope.go:117] "RemoveContainer" containerID="4886b3658c033dd32099ac18722859936d425d0f630ab9d495cc3181594e8fc4"
Jan 21 13:33:01 crc kubenswrapper[4881]: I0121 13:33:01.483874 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" event={"ID":"3687b313-1df2-4274-80db-8c758b51bf2d","Type":"ContainerStarted","Data":"3422468dfa40d9a7a02df7eba322e19f98dd660e5b5e992afb43cdf01389ca85"}
Jan 21 13:35:29 crc kubenswrapper[4881]: I0121 13:35:29.851316 4881 patch_prober.go:28] interesting pod/machine-config-daemon-fb4fr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 21 13:35:29 crc kubenswrapper[4881]: I0121 13:35:29.851953 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
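
Every one of these liveness failures is the same symptom: nothing is listening on the machine-config-daemon's health port, so the kubelet's HTTP GET is refused at the TCP layer rather than returning a bad status code. A minimal sketch of an equivalent check, using the address and path from the log (127.0.0.1:8798, /health); the 1s client timeout is an illustrative assumption:

    package main

    import (
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{Timeout: 1 * time.Second} // timeout is illustrative
        resp, err := client.Get("http://127.0.0.1:8798/health")
        if err != nil {
            // With no listener this fails exactly like the log:
            // "dial tcp 127.0.0.1:8798: connect: connection refused"
            fmt.Println("liveness failure:", err)
            return
        }
        defer resp.Body.Close()
        fmt.Println("liveness status:", resp.Status)
    }
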
Jan 21 13:35:59 crc kubenswrapper[4881]: I0121 13:35:59.851526 4881 patch_prober.go:28] interesting pod/machine-config-daemon-fb4fr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 21 13:35:59 crc kubenswrapper[4881]: I0121 13:35:59.852179 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 21 13:36:29 crc kubenswrapper[4881]: I0121 13:36:29.851409 4881 patch_prober.go:28] interesting pod/machine-config-daemon-fb4fr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 21 13:36:29 crc kubenswrapper[4881]: I0121 13:36:29.852023 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 21 13:36:29 crc kubenswrapper[4881]: I0121 13:36:29.852076 4881 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr"
Jan 21 13:36:29 crc kubenswrapper[4881]: I0121 13:36:29.852985 4881 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3422468dfa40d9a7a02df7eba322e19f98dd660e5b5e992afb43cdf01389ca85"} pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 21 13:36:29 crc kubenswrapper[4881]: I0121 13:36:29.853041 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" containerName="machine-config-daemon" containerID="cri-o://3422468dfa40d9a7a02df7eba322e19f98dd660e5b5e992afb43cdf01389ca85" gracePeriod=600
Jan 21 13:36:29 crc kubenswrapper[4881]: E0121 13:36:29.990286 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d"
Jan 21 13:36:30 crc kubenswrapper[4881]: I0121 13:36:30.192299 4881 generic.go:334] "Generic (PLEG): container finished" podID="3687b313-1df2-4274-80db-8c758b51bf2d" containerID="3422468dfa40d9a7a02df7eba322e19f98dd660e5b5e992afb43cdf01389ca85" exitCode=0
Jan 21 13:36:30 crc kubenswrapper[4881]: I0121 13:36:30.192357 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" event={"ID":"3687b313-1df2-4274-80db-8c758b51bf2d","Type":"ContainerDied","Data":"3422468dfa40d9a7a02df7eba322e19f98dd660e5b5e992afb43cdf01389ca85"}
Jan 21 13:36:30 crc kubenswrapper[4881]: I0121 13:36:30.192399 4881 scope.go:117] "RemoveContainer" containerID="9e57748be28be159b55c45e3fa90ee30718fb2ed9c755f793bb76672c2c13826"
Jan 21 13:36:30 crc kubenswrapper[4881]: I0121 13:36:30.193746 4881 scope.go:117] "RemoveContainer" containerID="3422468dfa40d9a7a02df7eba322e19f98dd660e5b5e992afb43cdf01389ca85"
Jan 21 13:36:30 crc kubenswrapper[4881]: E0121 13:36:30.194611 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d"
Jan 21 13:36:44 crc kubenswrapper[4881]: I0121 13:36:44.311095 4881 scope.go:117] "RemoveContainer" containerID="3422468dfa40d9a7a02df7eba322e19f98dd660e5b5e992afb43cdf01389ca85"
Jan 21 13:36:44 crc kubenswrapper[4881]: E0121 13:36:44.311953 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d"
Jan 21 13:36:57 crc kubenswrapper[4881]: I0121 13:36:57.311501 4881 scope.go:117] "RemoveContainer" containerID="3422468dfa40d9a7a02df7eba322e19f98dd660e5b5e992afb43cdf01389ca85"
Jan 21 13:36:57 crc kubenswrapper[4881]: E0121 13:36:57.312495 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d"
Jan 21 13:37:12 crc kubenswrapper[4881]: I0121 13:37:12.311557 4881 scope.go:117] "RemoveContainer" containerID="3422468dfa40d9a7a02df7eba322e19f98dd660e5b5e992afb43cdf01389ca85"
Jan 21 13:37:12 crc kubenswrapper[4881]: E0121 13:37:12.312907 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d"
Jan 21 13:37:26 crc kubenswrapper[4881]: I0121 13:37:26.310849 4881 scope.go:117] "RemoveContainer" containerID="3422468dfa40d9a7a02df7eba322e19f98dd660e5b5e992afb43cdf01389ca85"
Jan 21 13:37:26 crc kubenswrapper[4881]: E0121 13:37:26.311875 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d"
Jan 21 13:37:39 crc kubenswrapper[4881]: I0121 13:37:39.311234 4881 scope.go:117] "RemoveContainer" containerID="3422468dfa40d9a7a02df7eba322e19f98dd660e5b5e992afb43cdf01389ca85"
Jan 21 13:37:39 crc kubenswrapper[4881]: E0121 13:37:39.312268 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d"
Jan 21 13:37:53 crc kubenswrapper[4881]: I0121 13:37:53.323739 4881 scope.go:117] "RemoveContainer" containerID="3422468dfa40d9a7a02df7eba322e19f98dd660e5b5e992afb43cdf01389ca85"
Jan 21 13:37:53 crc kubenswrapper[4881]: E0121 13:37:53.324545 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d"
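
The run of "RemoveContainer" / "Error syncing pod, skipping" pairs above is the kubelet's per-container CrashLoopBackOff: each failed restart roughly doubles the wait before the next attempt, and "back-off 5m0s" means the ceiling has been reached, so the recurring entries are sync retries rather than actual restarts. A sketch of that schedule, assuming the commonly cited kubelet defaults of a 10s initial delay doubling to a 5m cap (assumed values, not read from this log):

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Assumed kubelet defaults: 10s initial backoff, doubling, 5m ceiling.
        delay, maxDelay := 10*time.Second, 5*time.Minute
        for attempt := 1; attempt <= 8; attempt++ {
            fmt.Printf("restart attempt %d: back-off %v\n", attempt, delay)
            delay *= 2
            if delay > maxDelay {
                delay = maxDelay // from here on the log repeats "back-off 5m0s"
            }
        }
    }
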
Jan 21 13:37:57 crc kubenswrapper[4881]: I0121 13:37:57.018050 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-s88gg"]
Jan 21 13:37:57 crc kubenswrapper[4881]: E0121 13:37:57.019131 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c759a886-be2c-47df-a1d7-1208d82c2f59" containerName="extract-content"
Jan 21 13:37:57 crc kubenswrapper[4881]: I0121 13:37:57.019148 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="c759a886-be2c-47df-a1d7-1208d82c2f59" containerName="extract-content"
Jan 21 13:37:57 crc kubenswrapper[4881]: E0121 13:37:57.019193 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c759a886-be2c-47df-a1d7-1208d82c2f59" containerName="extract-utilities"
Jan 21 13:37:57 crc kubenswrapper[4881]: I0121 13:37:57.019202 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="c759a886-be2c-47df-a1d7-1208d82c2f59" containerName="extract-utilities"
Jan 21 13:37:57 crc kubenswrapper[4881]: E0121 13:37:57.019223 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c759a886-be2c-47df-a1d7-1208d82c2f59" containerName="registry-server"
Jan 21 13:37:57 crc kubenswrapper[4881]: I0121 13:37:57.019230 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="c759a886-be2c-47df-a1d7-1208d82c2f59" containerName="registry-server"
Jan 21 13:37:57 crc kubenswrapper[4881]: I0121 13:37:57.019527 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="c759a886-be2c-47df-a1d7-1208d82c2f59" containerName="registry-server"
Jan 21 13:37:57 crc kubenswrapper[4881]: I0121 13:37:57.021561 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-s88gg"
Jan 21 13:37:57 crc kubenswrapper[4881]: I0121 13:37:57.027927 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-s88gg"]
Jan 21 13:37:57 crc kubenswrapper[4881]: I0121 13:37:57.034211 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ed14e1b3-9440-4f92-a793-683eb01e4401-catalog-content\") pod \"redhat-marketplace-s88gg\" (UID: \"ed14e1b3-9440-4f92-a793-683eb01e4401\") " pod="openshift-marketplace/redhat-marketplace-s88gg"
Jan 21 13:37:57 crc kubenswrapper[4881]: I0121 13:37:57.034395 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ed14e1b3-9440-4f92-a793-683eb01e4401-utilities\") pod \"redhat-marketplace-s88gg\" (UID: \"ed14e1b3-9440-4f92-a793-683eb01e4401\") " pod="openshift-marketplace/redhat-marketplace-s88gg"
Jan 21 13:37:57 crc kubenswrapper[4881]: I0121 13:37:57.034542 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7b4dz\" (UniqueName: \"kubernetes.io/projected/ed14e1b3-9440-4f92-a793-683eb01e4401-kube-api-access-7b4dz\") pod \"redhat-marketplace-s88gg\" (UID: \"ed14e1b3-9440-4f92-a793-683eb01e4401\") " pod="openshift-marketplace/redhat-marketplace-s88gg"
Jan 21 13:37:57 crc kubenswrapper[4881]: I0121 13:37:57.135636 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7b4dz\" (UniqueName: \"kubernetes.io/projected/ed14e1b3-9440-4f92-a793-683eb01e4401-kube-api-access-7b4dz\") pod \"redhat-marketplace-s88gg\" (UID: \"ed14e1b3-9440-4f92-a793-683eb01e4401\") " pod="openshift-marketplace/redhat-marketplace-s88gg"
Jan 21 13:37:57 crc kubenswrapper[4881]: I0121 13:37:57.135721 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ed14e1b3-9440-4f92-a793-683eb01e4401-catalog-content\") pod \"redhat-marketplace-s88gg\" (UID: \"ed14e1b3-9440-4f92-a793-683eb01e4401\") " pod="openshift-marketplace/redhat-marketplace-s88gg"
Jan 21 13:37:57 crc kubenswrapper[4881]: I0121 13:37:57.135860 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ed14e1b3-9440-4f92-a793-683eb01e4401-utilities\") pod \"redhat-marketplace-s88gg\" (UID: \"ed14e1b3-9440-4f92-a793-683eb01e4401\") " pod="openshift-marketplace/redhat-marketplace-s88gg"
Jan 21 13:37:57 crc kubenswrapper[4881]: I0121 13:37:57.136406 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ed14e1b3-9440-4f92-a793-683eb01e4401-catalog-content\") pod \"redhat-marketplace-s88gg\" (UID: \"ed14e1b3-9440-4f92-a793-683eb01e4401\") " pod="openshift-marketplace/redhat-marketplace-s88gg"
Jan 21 13:37:57 crc kubenswrapper[4881]: I0121 13:37:57.136436 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ed14e1b3-9440-4f92-a793-683eb01e4401-utilities\") pod \"redhat-marketplace-s88gg\" (UID: \"ed14e1b3-9440-4f92-a793-683eb01e4401\") " pod="openshift-marketplace/redhat-marketplace-s88gg"
Jan 21 13:37:57 crc kubenswrapper[4881]: I0121 13:37:57.163040 4881 operation_generator.go:637] "MountVolume.SetUp
succeeded for volume \"kube-api-access-7b4dz\" (UniqueName: \"kubernetes.io/projected/ed14e1b3-9440-4f92-a793-683eb01e4401-kube-api-access-7b4dz\") pod \"redhat-marketplace-s88gg\" (UID: \"ed14e1b3-9440-4f92-a793-683eb01e4401\") " pod="openshift-marketplace/redhat-marketplace-s88gg" Jan 21 13:37:57 crc kubenswrapper[4881]: I0121 13:37:57.209734 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-lr68z"] Jan 21 13:37:57 crc kubenswrapper[4881]: I0121 13:37:57.211971 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-lr68z" Jan 21 13:37:57 crc kubenswrapper[4881]: I0121 13:37:57.220726 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-lr68z"] Jan 21 13:37:57 crc kubenswrapper[4881]: I0121 13:37:57.238983 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/03907694-a0e6-40d6-8142-9f20169ffe16-utilities\") pod \"certified-operators-lr68z\" (UID: \"03907694-a0e6-40d6-8142-9f20169ffe16\") " pod="openshift-marketplace/certified-operators-lr68z" Jan 21 13:37:57 crc kubenswrapper[4881]: I0121 13:37:57.239404 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pcnx5\" (UniqueName: \"kubernetes.io/projected/03907694-a0e6-40d6-8142-9f20169ffe16-kube-api-access-pcnx5\") pod \"certified-operators-lr68z\" (UID: \"03907694-a0e6-40d6-8142-9f20169ffe16\") " pod="openshift-marketplace/certified-operators-lr68z" Jan 21 13:37:57 crc kubenswrapper[4881]: I0121 13:37:57.239447 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/03907694-a0e6-40d6-8142-9f20169ffe16-catalog-content\") pod \"certified-operators-lr68z\" (UID: \"03907694-a0e6-40d6-8142-9f20169ffe16\") " pod="openshift-marketplace/certified-operators-lr68z" Jan 21 13:37:57 crc kubenswrapper[4881]: I0121 13:37:57.340745 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pcnx5\" (UniqueName: \"kubernetes.io/projected/03907694-a0e6-40d6-8142-9f20169ffe16-kube-api-access-pcnx5\") pod \"certified-operators-lr68z\" (UID: \"03907694-a0e6-40d6-8142-9f20169ffe16\") " pod="openshift-marketplace/certified-operators-lr68z" Jan 21 13:37:57 crc kubenswrapper[4881]: I0121 13:37:57.340810 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/03907694-a0e6-40d6-8142-9f20169ffe16-catalog-content\") pod \"certified-operators-lr68z\" (UID: \"03907694-a0e6-40d6-8142-9f20169ffe16\") " pod="openshift-marketplace/certified-operators-lr68z" Jan 21 13:37:57 crc kubenswrapper[4881]: I0121 13:37:57.341372 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/03907694-a0e6-40d6-8142-9f20169ffe16-catalog-content\") pod \"certified-operators-lr68z\" (UID: \"03907694-a0e6-40d6-8142-9f20169ffe16\") " pod="openshift-marketplace/certified-operators-lr68z" Jan 21 13:37:57 crc kubenswrapper[4881]: I0121 13:37:57.340903 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/03907694-a0e6-40d6-8142-9f20169ffe16-utilities\") pod \"certified-operators-lr68z\" (UID: 
\"03907694-a0e6-40d6-8142-9f20169ffe16\") " pod="openshift-marketplace/certified-operators-lr68z" Jan 21 13:37:57 crc kubenswrapper[4881]: I0121 13:37:57.341933 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-s88gg" Jan 21 13:37:57 crc kubenswrapper[4881]: I0121 13:37:57.343073 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/03907694-a0e6-40d6-8142-9f20169ffe16-utilities\") pod \"certified-operators-lr68z\" (UID: \"03907694-a0e6-40d6-8142-9f20169ffe16\") " pod="openshift-marketplace/certified-operators-lr68z" Jan 21 13:37:57 crc kubenswrapper[4881]: I0121 13:37:57.357757 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pcnx5\" (UniqueName: \"kubernetes.io/projected/03907694-a0e6-40d6-8142-9f20169ffe16-kube-api-access-pcnx5\") pod \"certified-operators-lr68z\" (UID: \"03907694-a0e6-40d6-8142-9f20169ffe16\") " pod="openshift-marketplace/certified-operators-lr68z" Jan 21 13:37:57 crc kubenswrapper[4881]: I0121 13:37:57.556115 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-lr68z" Jan 21 13:37:58 crc kubenswrapper[4881]: I0121 13:37:57.954847 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-s88gg"] Jan 21 13:37:58 crc kubenswrapper[4881]: W0121 13:37:57.980595 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poded14e1b3_9440_4f92_a793_683eb01e4401.slice/crio-3128a5db6427ef859cbb254cfb64ea3e3fe6d1a8d86c2c240331ac40ce10660b WatchSource:0}: Error finding container 3128a5db6427ef859cbb254cfb64ea3e3fe6d1a8d86c2c240331ac40ce10660b: Status 404 returned error can't find the container with id 3128a5db6427ef859cbb254cfb64ea3e3fe6d1a8d86c2c240331ac40ce10660b Jan 21 13:37:58 crc kubenswrapper[4881]: I0121 13:37:58.399856 4881 generic.go:334] "Generic (PLEG): container finished" podID="ed14e1b3-9440-4f92-a793-683eb01e4401" containerID="a4abcb1bb7fc4e7fbb86a1d2bb48c302a0121a5ee39ee48b7817d62657b97100" exitCode=0 Jan 21 13:37:58 crc kubenswrapper[4881]: I0121 13:37:58.400145 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s88gg" event={"ID":"ed14e1b3-9440-4f92-a793-683eb01e4401","Type":"ContainerDied","Data":"a4abcb1bb7fc4e7fbb86a1d2bb48c302a0121a5ee39ee48b7817d62657b97100"} Jan 21 13:37:58 crc kubenswrapper[4881]: I0121 13:37:58.400172 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s88gg" event={"ID":"ed14e1b3-9440-4f92-a793-683eb01e4401","Type":"ContainerStarted","Data":"3128a5db6427ef859cbb254cfb64ea3e3fe6d1a8d86c2c240331ac40ce10660b"} Jan 21 13:37:58 crc kubenswrapper[4881]: I0121 13:37:58.403460 4881 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 13:37:58 crc kubenswrapper[4881]: I0121 13:37:58.941311 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-lr68z"] Jan 21 13:37:59 crc kubenswrapper[4881]: I0121 13:37:59.412691 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lr68z" event={"ID":"03907694-a0e6-40d6-8142-9f20169ffe16","Type":"ContainerStarted","Data":"17ba08f13a57d780ffed935060230f2347bf7739e7e17de9f0c3d10f0e502757"} Jan 21 13:37:59 crc 
kubenswrapper[4881]: I0121 13:37:59.616477 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-zw7gl"] Jan 21 13:37:59 crc kubenswrapper[4881]: I0121 13:37:59.619466 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-zw7gl" Jan 21 13:37:59 crc kubenswrapper[4881]: I0121 13:37:59.679674 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-zw7gl"] Jan 21 13:37:59 crc kubenswrapper[4881]: I0121 13:37:59.702042 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bf33fd22-6287-45a0-a95d-52c731fdda8d-catalog-content\") pod \"community-operators-zw7gl\" (UID: \"bf33fd22-6287-45a0-a95d-52c731fdda8d\") " pod="openshift-marketplace/community-operators-zw7gl" Jan 21 13:37:59 crc kubenswrapper[4881]: I0121 13:37:59.702101 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bf33fd22-6287-45a0-a95d-52c731fdda8d-utilities\") pod \"community-operators-zw7gl\" (UID: \"bf33fd22-6287-45a0-a95d-52c731fdda8d\") " pod="openshift-marketplace/community-operators-zw7gl" Jan 21 13:37:59 crc kubenswrapper[4881]: I0121 13:37:59.702251 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8rrct\" (UniqueName: \"kubernetes.io/projected/bf33fd22-6287-45a0-a95d-52c731fdda8d-kube-api-access-8rrct\") pod \"community-operators-zw7gl\" (UID: \"bf33fd22-6287-45a0-a95d-52c731fdda8d\") " pod="openshift-marketplace/community-operators-zw7gl" Jan 21 13:37:59 crc kubenswrapper[4881]: I0121 13:37:59.804251 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bf33fd22-6287-45a0-a95d-52c731fdda8d-catalog-content\") pod \"community-operators-zw7gl\" (UID: \"bf33fd22-6287-45a0-a95d-52c731fdda8d\") " pod="openshift-marketplace/community-operators-zw7gl" Jan 21 13:37:59 crc kubenswrapper[4881]: I0121 13:37:59.804609 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bf33fd22-6287-45a0-a95d-52c731fdda8d-utilities\") pod \"community-operators-zw7gl\" (UID: \"bf33fd22-6287-45a0-a95d-52c731fdda8d\") " pod="openshift-marketplace/community-operators-zw7gl" Jan 21 13:37:59 crc kubenswrapper[4881]: I0121 13:37:59.804754 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8rrct\" (UniqueName: \"kubernetes.io/projected/bf33fd22-6287-45a0-a95d-52c731fdda8d-kube-api-access-8rrct\") pod \"community-operators-zw7gl\" (UID: \"bf33fd22-6287-45a0-a95d-52c731fdda8d\") " pod="openshift-marketplace/community-operators-zw7gl" Jan 21 13:37:59 crc kubenswrapper[4881]: I0121 13:37:59.805335 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bf33fd22-6287-45a0-a95d-52c731fdda8d-utilities\") pod \"community-operators-zw7gl\" (UID: \"bf33fd22-6287-45a0-a95d-52c731fdda8d\") " pod="openshift-marketplace/community-operators-zw7gl" Jan 21 13:37:59 crc kubenswrapper[4881]: I0121 13:37:59.805390 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/bf33fd22-6287-45a0-a95d-52c731fdda8d-catalog-content\") pod \"community-operators-zw7gl\" (UID: \"bf33fd22-6287-45a0-a95d-52c731fdda8d\") " pod="openshift-marketplace/community-operators-zw7gl" Jan 21 13:37:59 crc kubenswrapper[4881]: I0121 13:37:59.830655 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8rrct\" (UniqueName: \"kubernetes.io/projected/bf33fd22-6287-45a0-a95d-52c731fdda8d-kube-api-access-8rrct\") pod \"community-operators-zw7gl\" (UID: \"bf33fd22-6287-45a0-a95d-52c731fdda8d\") " pod="openshift-marketplace/community-operators-zw7gl" Jan 21 13:38:00 crc kubenswrapper[4881]: I0121 13:38:00.019801 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-zw7gl" Jan 21 13:38:00 crc kubenswrapper[4881]: I0121 13:38:00.440231 4881 generic.go:334] "Generic (PLEG): container finished" podID="03907694-a0e6-40d6-8142-9f20169ffe16" containerID="2eb3085ce73aca7857a7b8d8990101a886d19523c869ba8ce6f26a66f122249d" exitCode=0 Jan 21 13:38:00 crc kubenswrapper[4881]: I0121 13:38:00.440708 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lr68z" event={"ID":"03907694-a0e6-40d6-8142-9f20169ffe16","Type":"ContainerDied","Data":"2eb3085ce73aca7857a7b8d8990101a886d19523c869ba8ce6f26a66f122249d"} Jan 21 13:38:00 crc kubenswrapper[4881]: I0121 13:38:00.456373 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s88gg" event={"ID":"ed14e1b3-9440-4f92-a793-683eb01e4401","Type":"ContainerStarted","Data":"84d46420b749da45b6e3d003b8e0f9f987868b2f6b740919fb1f4b6f4381ec22"} Jan 21 13:38:00 crc kubenswrapper[4881]: I0121 13:38:00.864174 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-zw7gl"] Jan 21 13:38:00 crc kubenswrapper[4881]: W0121 13:38:00.866150 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbf33fd22_6287_45a0_a95d_52c731fdda8d.slice/crio-7d972578e89879daf6c160e9a56e6d8e189f16edd9dc6acd027b280469b2b64a WatchSource:0}: Error finding container 7d972578e89879daf6c160e9a56e6d8e189f16edd9dc6acd027b280469b2b64a: Status 404 returned error can't find the container with id 7d972578e89879daf6c160e9a56e6d8e189f16edd9dc6acd027b280469b2b64a Jan 21 13:38:01 crc kubenswrapper[4881]: I0121 13:38:01.479154 4881 generic.go:334] "Generic (PLEG): container finished" podID="bf33fd22-6287-45a0-a95d-52c731fdda8d" containerID="c8b5a836281ab5b467d91cb111b8bde5e2a3b2341cf2889f854337a51110a7f2" exitCode=0 Jan 21 13:38:01 crc kubenswrapper[4881]: I0121 13:38:01.479494 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zw7gl" event={"ID":"bf33fd22-6287-45a0-a95d-52c731fdda8d","Type":"ContainerDied","Data":"c8b5a836281ab5b467d91cb111b8bde5e2a3b2341cf2889f854337a51110a7f2"} Jan 21 13:38:01 crc kubenswrapper[4881]: I0121 13:38:01.479988 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zw7gl" event={"ID":"bf33fd22-6287-45a0-a95d-52c731fdda8d","Type":"ContainerStarted","Data":"7d972578e89879daf6c160e9a56e6d8e189f16edd9dc6acd027b280469b2b64a"} Jan 21 13:38:01 crc kubenswrapper[4881]: I0121 13:38:01.486456 4881 generic.go:334] "Generic (PLEG): container finished" podID="ed14e1b3-9440-4f92-a793-683eb01e4401" 
containerID="84d46420b749da45b6e3d003b8e0f9f987868b2f6b740919fb1f4b6f4381ec22" exitCode=0
Jan 21 13:38:01 crc kubenswrapper[4881]: I0121 13:38:01.486508 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s88gg" event={"ID":"ed14e1b3-9440-4f92-a793-683eb01e4401","Type":"ContainerDied","Data":"84d46420b749da45b6e3d003b8e0f9f987868b2f6b740919fb1f4b6f4381ec22"}
Jan 21 13:38:02 crc kubenswrapper[4881]: I0121 13:38:02.502799 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zw7gl" event={"ID":"bf33fd22-6287-45a0-a95d-52c731fdda8d","Type":"ContainerStarted","Data":"1464e8dac96b23af6bad563afba50c099ee6ffdb3c7eb1c93e0ab2b66618e523"}
Jan 21 13:38:02 crc kubenswrapper[4881]: I0121 13:38:02.511391 4881 generic.go:334] "Generic (PLEG): container finished" podID="03907694-a0e6-40d6-8142-9f20169ffe16" containerID="b7713d44fbace2dbc23c6335fe5f1e40542531f096c7a2e71ced23cf196b9cb8" exitCode=0
Jan 21 13:38:02 crc kubenswrapper[4881]: I0121 13:38:02.511492 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lr68z" event={"ID":"03907694-a0e6-40d6-8142-9f20169ffe16","Type":"ContainerDied","Data":"b7713d44fbace2dbc23c6335fe5f1e40542531f096c7a2e71ced23cf196b9cb8"}
Jan 21 13:38:02 crc kubenswrapper[4881]: I0121 13:38:02.518021 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s88gg" event={"ID":"ed14e1b3-9440-4f92-a793-683eb01e4401","Type":"ContainerStarted","Data":"68684b89b8dc294a8040b9a52afa09badc56e4921abf15ec54176d3e4b23f734"}
Jan 21 13:38:02 crc kubenswrapper[4881]: I0121 13:38:02.561255 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-s88gg" podStartSLOduration=3.05382083 podStartE2EDuration="6.561222184s" podCreationTimestamp="2026-01-21 13:37:56 +0000 UTC" firstStartedPulling="2026-01-21 13:37:58.403104017 +0000 UTC m=+9665.663060486" lastFinishedPulling="2026-01-21 13:38:01.910505361 +0000 UTC m=+9669.170461840" observedRunningTime="2026-01-21 13:38:02.551699132 +0000 UTC m=+9669.811655601" watchObservedRunningTime="2026-01-21 13:38:02.561222184 +0000 UTC m=+9669.821178653"
Jan 21 13:38:05 crc kubenswrapper[4881]: I0121 13:38:05.761252 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lr68z" event={"ID":"03907694-a0e6-40d6-8142-9f20169ffe16","Type":"ContainerStarted","Data":"fa5a4de2f4e98ac0f222c55956633702fd594d9072f2c4646b5748469e7268b6"}
Jan 21 13:38:05 crc kubenswrapper[4881]: I0121 13:38:05.766657 4881 generic.go:334] "Generic (PLEG): container finished" podID="bf33fd22-6287-45a0-a95d-52c731fdda8d" containerID="1464e8dac96b23af6bad563afba50c099ee6ffdb3c7eb1c93e0ab2b66618e523" exitCode=0
Jan 21 13:38:05 crc kubenswrapper[4881]: I0121 13:38:05.766715 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zw7gl" event={"ID":"bf33fd22-6287-45a0-a95d-52c731fdda8d","Type":"ContainerDied","Data":"1464e8dac96b23af6bad563afba50c099ee6ffdb3c7eb1c93e0ab2b66618e523"}
Jan 21 13:38:05 crc kubenswrapper[4881]: I0121 13:38:05.789555 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-lr68z" podStartSLOduration=5.393748249 podStartE2EDuration="8.789532068s" podCreationTimestamp="2026-01-21 13:37:57 +0000 UTC" firstStartedPulling="2026-01-21 13:38:00.448530092 +0000 UTC m=+9667.708486561" lastFinishedPulling="2026-01-21 13:38:03.844313911 +0000 UTC m=+9671.104270380" observedRunningTime="2026-01-21 13:38:05.785233463 +0000 UTC m=+9673.045189962" watchObservedRunningTime="2026-01-21 13:38:05.789532068 +0000 UTC m=+9673.049488537"
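
The two "Observed pod startup duration" lines above are internally consistent: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration subtracts the image-pull window (lastFinishedPulling minus firstStartedPulling) from it. For redhat-marketplace-s88gg: 13:38:02.561222184 - 13:37:56 = 6.561222184s end to end; the pull window is 13:38:01.910505361 - 13:37:58.403104017 = 3.507401344s; and 6.561222184 - 3.507401344 = 3.05382084s, the logged SLO figure up to float rounding (the certified-operators-lr68z line works out exactly: 8.789532068 - 3.395783819 = 5.393748249). The same check in Go, with values hard-coded from the log:

    package main

    import "fmt"

    func main() {
        e2e := 6.561222184  // watchObservedRunningTime - podCreationTimestamp (s)
        pull := 3.507401344 // lastFinishedPulling - firstStartedPulling (s)
        fmt.Printf("podStartSLOduration ~= %.8fs\n", e2e-pull) // ~= 3.05382084s
    }
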
Jan 21 13:38:06 crc kubenswrapper[4881]: I0121 13:38:06.798544 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zw7gl" event={"ID":"bf33fd22-6287-45a0-a95d-52c731fdda8d","Type":"ContainerStarted","Data":"005380c69dc02bb03b813c5b9b36612ee450bef0fa7fc34d08e62eb7b603f7e6"}
Jan 21 13:38:06 crc kubenswrapper[4881]: I0121 13:38:06.820451 4881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-zw7gl" podStartSLOduration=2.8995577949999998 podStartE2EDuration="7.82042906s" podCreationTimestamp="2026-01-21 13:37:59 +0000 UTC" firstStartedPulling="2026-01-21 13:38:01.482289193 +0000 UTC m=+9668.742245672" lastFinishedPulling="2026-01-21 13:38:06.403160468 +0000 UTC m=+9673.663116937" observedRunningTime="2026-01-21 13:38:06.81718012 +0000 UTC m=+9674.077136599" watchObservedRunningTime="2026-01-21 13:38:06.82042906 +0000 UTC m=+9674.080385529"
Jan 21 13:38:07 crc kubenswrapper[4881]: I0121 13:38:07.311131 4881 scope.go:117] "RemoveContainer" containerID="3422468dfa40d9a7a02df7eba322e19f98dd660e5b5e992afb43cdf01389ca85"
Jan 21 13:38:07 crc kubenswrapper[4881]: E0121 13:38:07.311679 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d"
Jan 21 13:38:07 crc kubenswrapper[4881]: I0121 13:38:07.342907 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-s88gg"
Jan 21 13:38:07 crc kubenswrapper[4881]: I0121 13:38:07.342965 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-s88gg"
Jan 21 13:38:07 crc kubenswrapper[4881]: I0121 13:38:07.398148 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-s88gg"
Jan 21 13:38:07 crc kubenswrapper[4881]: I0121 13:38:07.557123 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-lr68z"
Jan 21 13:38:07 crc kubenswrapper[4881]: I0121 13:38:07.557431 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-lr68z"
Jan 21 13:38:07 crc kubenswrapper[4881]: I0121 13:38:07.642477 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-lr68z"
Jan 21 13:38:07 crc kubenswrapper[4881]: I0121 13:38:07.853727 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-s88gg"
Jan 21 13:38:10 crc kubenswrapper[4881]: I0121 13:38:10.020855 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-zw7gl"
Jan 21 13:38:10 crc kubenswrapper[4881]: I0121 13:38:10.021280 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status=""
pod="openshift-marketplace/community-operators-zw7gl" Jan 21 13:38:10 crc kubenswrapper[4881]: I0121 13:38:10.069109 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-zw7gl" Jan 21 13:38:10 crc kubenswrapper[4881]: I0121 13:38:10.193308 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-s88gg"] Jan 21 13:38:10 crc kubenswrapper[4881]: I0121 13:38:10.193583 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-s88gg" podUID="ed14e1b3-9440-4f92-a793-683eb01e4401" containerName="registry-server" containerID="cri-o://68684b89b8dc294a8040b9a52afa09badc56e4921abf15ec54176d3e4b23f734" gracePeriod=2 Jan 21 13:38:10 crc kubenswrapper[4881]: I0121 13:38:10.706697 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-s88gg" Jan 21 13:38:10 crc kubenswrapper[4881]: I0121 13:38:10.750373 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ed14e1b3-9440-4f92-a793-683eb01e4401-utilities\") pod \"ed14e1b3-9440-4f92-a793-683eb01e4401\" (UID: \"ed14e1b3-9440-4f92-a793-683eb01e4401\") " Jan 21 13:38:10 crc kubenswrapper[4881]: I0121 13:38:10.750466 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ed14e1b3-9440-4f92-a793-683eb01e4401-catalog-content\") pod \"ed14e1b3-9440-4f92-a793-683eb01e4401\" (UID: \"ed14e1b3-9440-4f92-a793-683eb01e4401\") " Jan 21 13:38:10 crc kubenswrapper[4881]: I0121 13:38:10.750686 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7b4dz\" (UniqueName: \"kubernetes.io/projected/ed14e1b3-9440-4f92-a793-683eb01e4401-kube-api-access-7b4dz\") pod \"ed14e1b3-9440-4f92-a793-683eb01e4401\" (UID: \"ed14e1b3-9440-4f92-a793-683eb01e4401\") " Jan 21 13:38:10 crc kubenswrapper[4881]: I0121 13:38:10.751413 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ed14e1b3-9440-4f92-a793-683eb01e4401-utilities" (OuterVolumeSpecName: "utilities") pod "ed14e1b3-9440-4f92-a793-683eb01e4401" (UID: "ed14e1b3-9440-4f92-a793-683eb01e4401"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 13:38:10 crc kubenswrapper[4881]: I0121 13:38:10.757155 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ed14e1b3-9440-4f92-a793-683eb01e4401-kube-api-access-7b4dz" (OuterVolumeSpecName: "kube-api-access-7b4dz") pod "ed14e1b3-9440-4f92-a793-683eb01e4401" (UID: "ed14e1b3-9440-4f92-a793-683eb01e4401"). InnerVolumeSpecName "kube-api-access-7b4dz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:38:10 crc kubenswrapper[4881]: I0121 13:38:10.791836 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ed14e1b3-9440-4f92-a793-683eb01e4401-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ed14e1b3-9440-4f92-a793-683eb01e4401" (UID: "ed14e1b3-9440-4f92-a793-683eb01e4401"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 13:38:10 crc kubenswrapper[4881]: I0121 13:38:10.853132 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7b4dz\" (UniqueName: \"kubernetes.io/projected/ed14e1b3-9440-4f92-a793-683eb01e4401-kube-api-access-7b4dz\") on node \"crc\" DevicePath \"\"" Jan 21 13:38:10 crc kubenswrapper[4881]: I0121 13:38:10.853169 4881 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ed14e1b3-9440-4f92-a793-683eb01e4401-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 13:38:10 crc kubenswrapper[4881]: I0121 13:38:10.853179 4881 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ed14e1b3-9440-4f92-a793-683eb01e4401-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 13:38:10 crc kubenswrapper[4881]: I0121 13:38:10.956381 4881 generic.go:334] "Generic (PLEG): container finished" podID="ed14e1b3-9440-4f92-a793-683eb01e4401" containerID="68684b89b8dc294a8040b9a52afa09badc56e4921abf15ec54176d3e4b23f734" exitCode=0 Jan 21 13:38:10 crc kubenswrapper[4881]: I0121 13:38:10.956445 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-s88gg" Jan 21 13:38:10 crc kubenswrapper[4881]: I0121 13:38:10.956554 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s88gg" event={"ID":"ed14e1b3-9440-4f92-a793-683eb01e4401","Type":"ContainerDied","Data":"68684b89b8dc294a8040b9a52afa09badc56e4921abf15ec54176d3e4b23f734"} Jan 21 13:38:10 crc kubenswrapper[4881]: I0121 13:38:10.956604 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s88gg" event={"ID":"ed14e1b3-9440-4f92-a793-683eb01e4401","Type":"ContainerDied","Data":"3128a5db6427ef859cbb254cfb64ea3e3fe6d1a8d86c2c240331ac40ce10660b"} Jan 21 13:38:10 crc kubenswrapper[4881]: I0121 13:38:10.956624 4881 scope.go:117] "RemoveContainer" containerID="68684b89b8dc294a8040b9a52afa09badc56e4921abf15ec54176d3e4b23f734" Jan 21 13:38:10 crc kubenswrapper[4881]: I0121 13:38:10.980348 4881 scope.go:117] "RemoveContainer" containerID="84d46420b749da45b6e3d003b8e0f9f987868b2f6b740919fb1f4b6f4381ec22" Jan 21 13:38:11 crc kubenswrapper[4881]: I0121 13:38:11.001292 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-s88gg"] Jan 21 13:38:11 crc kubenswrapper[4881]: I0121 13:38:11.017011 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-s88gg"] Jan 21 13:38:11 crc kubenswrapper[4881]: I0121 13:38:11.022650 4881 scope.go:117] "RemoveContainer" containerID="a4abcb1bb7fc4e7fbb86a1d2bb48c302a0121a5ee39ee48b7817d62657b97100" Jan 21 13:38:11 crc kubenswrapper[4881]: I0121 13:38:11.070023 4881 scope.go:117] "RemoveContainer" containerID="68684b89b8dc294a8040b9a52afa09badc56e4921abf15ec54176d3e4b23f734" Jan 21 13:38:11 crc kubenswrapper[4881]: E0121 13:38:11.070568 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"68684b89b8dc294a8040b9a52afa09badc56e4921abf15ec54176d3e4b23f734\": container with ID starting with 68684b89b8dc294a8040b9a52afa09badc56e4921abf15ec54176d3e4b23f734 not found: ID does not exist" containerID="68684b89b8dc294a8040b9a52afa09badc56e4921abf15ec54176d3e4b23f734" Jan 21 13:38:11 crc kubenswrapper[4881]: I0121 13:38:11.070631 4881 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"68684b89b8dc294a8040b9a52afa09badc56e4921abf15ec54176d3e4b23f734"} err="failed to get container status \"68684b89b8dc294a8040b9a52afa09badc56e4921abf15ec54176d3e4b23f734\": rpc error: code = NotFound desc = could not find container \"68684b89b8dc294a8040b9a52afa09badc56e4921abf15ec54176d3e4b23f734\": container with ID starting with 68684b89b8dc294a8040b9a52afa09badc56e4921abf15ec54176d3e4b23f734 not found: ID does not exist" Jan 21 13:38:11 crc kubenswrapper[4881]: I0121 13:38:11.070667 4881 scope.go:117] "RemoveContainer" containerID="84d46420b749da45b6e3d003b8e0f9f987868b2f6b740919fb1f4b6f4381ec22" Jan 21 13:38:11 crc kubenswrapper[4881]: E0121 13:38:11.071176 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"84d46420b749da45b6e3d003b8e0f9f987868b2f6b740919fb1f4b6f4381ec22\": container with ID starting with 84d46420b749da45b6e3d003b8e0f9f987868b2f6b740919fb1f4b6f4381ec22 not found: ID does not exist" containerID="84d46420b749da45b6e3d003b8e0f9f987868b2f6b740919fb1f4b6f4381ec22" Jan 21 13:38:11 crc kubenswrapper[4881]: I0121 13:38:11.071207 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"84d46420b749da45b6e3d003b8e0f9f987868b2f6b740919fb1f4b6f4381ec22"} err="failed to get container status \"84d46420b749da45b6e3d003b8e0f9f987868b2f6b740919fb1f4b6f4381ec22\": rpc error: code = NotFound desc = could not find container \"84d46420b749da45b6e3d003b8e0f9f987868b2f6b740919fb1f4b6f4381ec22\": container with ID starting with 84d46420b749da45b6e3d003b8e0f9f987868b2f6b740919fb1f4b6f4381ec22 not found: ID does not exist" Jan 21 13:38:11 crc kubenswrapper[4881]: I0121 13:38:11.071229 4881 scope.go:117] "RemoveContainer" containerID="a4abcb1bb7fc4e7fbb86a1d2bb48c302a0121a5ee39ee48b7817d62657b97100" Jan 21 13:38:11 crc kubenswrapper[4881]: E0121 13:38:11.071453 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a4abcb1bb7fc4e7fbb86a1d2bb48c302a0121a5ee39ee48b7817d62657b97100\": container with ID starting with a4abcb1bb7fc4e7fbb86a1d2bb48c302a0121a5ee39ee48b7817d62657b97100 not found: ID does not exist" containerID="a4abcb1bb7fc4e7fbb86a1d2bb48c302a0121a5ee39ee48b7817d62657b97100" Jan 21 13:38:11 crc kubenswrapper[4881]: I0121 13:38:11.071484 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a4abcb1bb7fc4e7fbb86a1d2bb48c302a0121a5ee39ee48b7817d62657b97100"} err="failed to get container status \"a4abcb1bb7fc4e7fbb86a1d2bb48c302a0121a5ee39ee48b7817d62657b97100\": rpc error: code = NotFound desc = could not find container \"a4abcb1bb7fc4e7fbb86a1d2bb48c302a0121a5ee39ee48b7817d62657b97100\": container with ID starting with a4abcb1bb7fc4e7fbb86a1d2bb48c302a0121a5ee39ee48b7817d62657b97100 not found: ID does not exist" Jan 21 13:38:11 crc kubenswrapper[4881]: I0121 13:38:11.327962 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ed14e1b3-9440-4f92-a793-683eb01e4401" path="/var/lib/kubelet/pods/ed14e1b3-9440-4f92-a793-683eb01e4401/volumes" Jan 21 13:38:17 crc kubenswrapper[4881]: I0121 13:38:17.607975 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-lr68z" Jan 21 13:38:17 crc kubenswrapper[4881]: I0121 13:38:17.664566 4881 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openshift-marketplace/certified-operators-lr68z"] Jan 21 13:38:18 crc kubenswrapper[4881]: I0121 13:38:18.047693 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-lr68z" podUID="03907694-a0e6-40d6-8142-9f20169ffe16" containerName="registry-server" containerID="cri-o://fa5a4de2f4e98ac0f222c55956633702fd594d9072f2c4646b5748469e7268b6" gracePeriod=2 Jan 21 13:38:19 crc kubenswrapper[4881]: I0121 13:38:19.017683 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-lr68z" Jan 21 13:38:19 crc kubenswrapper[4881]: I0121 13:38:19.129706 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/03907694-a0e6-40d6-8142-9f20169ffe16-utilities\") pod \"03907694-a0e6-40d6-8142-9f20169ffe16\" (UID: \"03907694-a0e6-40d6-8142-9f20169ffe16\") " Jan 21 13:38:19 crc kubenswrapper[4881]: I0121 13:38:19.130044 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/03907694-a0e6-40d6-8142-9f20169ffe16-catalog-content\") pod \"03907694-a0e6-40d6-8142-9f20169ffe16\" (UID: \"03907694-a0e6-40d6-8142-9f20169ffe16\") " Jan 21 13:38:19 crc kubenswrapper[4881]: I0121 13:38:19.130077 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcnx5\" (UniqueName: \"kubernetes.io/projected/03907694-a0e6-40d6-8142-9f20169ffe16-kube-api-access-pcnx5\") pod \"03907694-a0e6-40d6-8142-9f20169ffe16\" (UID: \"03907694-a0e6-40d6-8142-9f20169ffe16\") " Jan 21 13:38:19 crc kubenswrapper[4881]: I0121 13:38:19.132693 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/03907694-a0e6-40d6-8142-9f20169ffe16-utilities" (OuterVolumeSpecName: "utilities") pod "03907694-a0e6-40d6-8142-9f20169ffe16" (UID: "03907694-a0e6-40d6-8142-9f20169ffe16"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 13:38:19 crc kubenswrapper[4881]: I0121 13:38:19.137448 4881 generic.go:334] "Generic (PLEG): container finished" podID="03907694-a0e6-40d6-8142-9f20169ffe16" containerID="fa5a4de2f4e98ac0f222c55956633702fd594d9072f2c4646b5748469e7268b6" exitCode=0 Jan 21 13:38:19 crc kubenswrapper[4881]: I0121 13:38:19.137505 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lr68z" event={"ID":"03907694-a0e6-40d6-8142-9f20169ffe16","Type":"ContainerDied","Data":"fa5a4de2f4e98ac0f222c55956633702fd594d9072f2c4646b5748469e7268b6"} Jan 21 13:38:19 crc kubenswrapper[4881]: I0121 13:38:19.137541 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-lr68z" Jan 21 13:38:19 crc kubenswrapper[4881]: I0121 13:38:19.137554 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lr68z" event={"ID":"03907694-a0e6-40d6-8142-9f20169ffe16","Type":"ContainerDied","Data":"17ba08f13a57d780ffed935060230f2347bf7739e7e17de9f0c3d10f0e502757"} Jan 21 13:38:19 crc kubenswrapper[4881]: I0121 13:38:19.137607 4881 scope.go:117] "RemoveContainer" containerID="fa5a4de2f4e98ac0f222c55956633702fd594d9072f2c4646b5748469e7268b6" Jan 21 13:38:19 crc kubenswrapper[4881]: I0121 13:38:19.141706 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/03907694-a0e6-40d6-8142-9f20169ffe16-kube-api-access-pcnx5" (OuterVolumeSpecName: "kube-api-access-pcnx5") pod "03907694-a0e6-40d6-8142-9f20169ffe16" (UID: "03907694-a0e6-40d6-8142-9f20169ffe16"). InnerVolumeSpecName "kube-api-access-pcnx5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:38:19 crc kubenswrapper[4881]: I0121 13:38:19.183893 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/03907694-a0e6-40d6-8142-9f20169ffe16-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "03907694-a0e6-40d6-8142-9f20169ffe16" (UID: "03907694-a0e6-40d6-8142-9f20169ffe16"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 13:38:19 crc kubenswrapper[4881]: I0121 13:38:19.218171 4881 scope.go:117] "RemoveContainer" containerID="b7713d44fbace2dbc23c6335fe5f1e40542531f096c7a2e71ced23cf196b9cb8" Jan 21 13:38:19 crc kubenswrapper[4881]: I0121 13:38:19.233304 4881 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/03907694-a0e6-40d6-8142-9f20169ffe16-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 13:38:19 crc kubenswrapper[4881]: I0121 13:38:19.233347 4881 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/03907694-a0e6-40d6-8142-9f20169ffe16-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 13:38:19 crc kubenswrapper[4881]: I0121 13:38:19.233364 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcnx5\" (UniqueName: \"kubernetes.io/projected/03907694-a0e6-40d6-8142-9f20169ffe16-kube-api-access-pcnx5\") on node \"crc\" DevicePath \"\"" Jan 21 13:38:19 crc kubenswrapper[4881]: I0121 13:38:19.245198 4881 scope.go:117] "RemoveContainer" containerID="2eb3085ce73aca7857a7b8d8990101a886d19523c869ba8ce6f26a66f122249d" Jan 21 13:38:19 crc kubenswrapper[4881]: I0121 13:38:19.300359 4881 scope.go:117] "RemoveContainer" containerID="fa5a4de2f4e98ac0f222c55956633702fd594d9072f2c4646b5748469e7268b6" Jan 21 13:38:19 crc kubenswrapper[4881]: E0121 13:38:19.301964 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fa5a4de2f4e98ac0f222c55956633702fd594d9072f2c4646b5748469e7268b6\": container with ID starting with fa5a4de2f4e98ac0f222c55956633702fd594d9072f2c4646b5748469e7268b6 not found: ID does not exist" containerID="fa5a4de2f4e98ac0f222c55956633702fd594d9072f2c4646b5748469e7268b6" Jan 21 13:38:19 crc kubenswrapper[4881]: I0121 13:38:19.302018 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fa5a4de2f4e98ac0f222c55956633702fd594d9072f2c4646b5748469e7268b6"} err="failed to get 
container status \"fa5a4de2f4e98ac0f222c55956633702fd594d9072f2c4646b5748469e7268b6\": rpc error: code = NotFound desc = could not find container \"fa5a4de2f4e98ac0f222c55956633702fd594d9072f2c4646b5748469e7268b6\": container with ID starting with fa5a4de2f4e98ac0f222c55956633702fd594d9072f2c4646b5748469e7268b6 not found: ID does not exist" Jan 21 13:38:19 crc kubenswrapper[4881]: I0121 13:38:19.302061 4881 scope.go:117] "RemoveContainer" containerID="b7713d44fbace2dbc23c6335fe5f1e40542531f096c7a2e71ced23cf196b9cb8" Jan 21 13:38:19 crc kubenswrapper[4881]: E0121 13:38:19.304068 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b7713d44fbace2dbc23c6335fe5f1e40542531f096c7a2e71ced23cf196b9cb8\": container with ID starting with b7713d44fbace2dbc23c6335fe5f1e40542531f096c7a2e71ced23cf196b9cb8 not found: ID does not exist" containerID="b7713d44fbace2dbc23c6335fe5f1e40542531f096c7a2e71ced23cf196b9cb8" Jan 21 13:38:19 crc kubenswrapper[4881]: I0121 13:38:19.304116 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b7713d44fbace2dbc23c6335fe5f1e40542531f096c7a2e71ced23cf196b9cb8"} err="failed to get container status \"b7713d44fbace2dbc23c6335fe5f1e40542531f096c7a2e71ced23cf196b9cb8\": rpc error: code = NotFound desc = could not find container \"b7713d44fbace2dbc23c6335fe5f1e40542531f096c7a2e71ced23cf196b9cb8\": container with ID starting with b7713d44fbace2dbc23c6335fe5f1e40542531f096c7a2e71ced23cf196b9cb8 not found: ID does not exist" Jan 21 13:38:19 crc kubenswrapper[4881]: I0121 13:38:19.304150 4881 scope.go:117] "RemoveContainer" containerID="2eb3085ce73aca7857a7b8d8990101a886d19523c869ba8ce6f26a66f122249d" Jan 21 13:38:19 crc kubenswrapper[4881]: E0121 13:38:19.304451 4881 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2eb3085ce73aca7857a7b8d8990101a886d19523c869ba8ce6f26a66f122249d\": container with ID starting with 2eb3085ce73aca7857a7b8d8990101a886d19523c869ba8ce6f26a66f122249d not found: ID does not exist" containerID="2eb3085ce73aca7857a7b8d8990101a886d19523c869ba8ce6f26a66f122249d" Jan 21 13:38:19 crc kubenswrapper[4881]: I0121 13:38:19.304481 4881 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2eb3085ce73aca7857a7b8d8990101a886d19523c869ba8ce6f26a66f122249d"} err="failed to get container status \"2eb3085ce73aca7857a7b8d8990101a886d19523c869ba8ce6f26a66f122249d\": rpc error: code = NotFound desc = could not find container \"2eb3085ce73aca7857a7b8d8990101a886d19523c869ba8ce6f26a66f122249d\": container with ID starting with 2eb3085ce73aca7857a7b8d8990101a886d19523c869ba8ce6f26a66f122249d not found: ID does not exist" Jan 21 13:38:19 crc kubenswrapper[4881]: I0121 13:38:19.311176 4881 scope.go:117] "RemoveContainer" containerID="3422468dfa40d9a7a02df7eba322e19f98dd660e5b5e992afb43cdf01389ca85" Jan 21 13:38:19 crc kubenswrapper[4881]: E0121 13:38:19.311737 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 13:38:19 crc kubenswrapper[4881]: I0121 13:38:19.467039 
Jan 21 13:38:19 crc kubenswrapper[4881]: I0121 13:38:19.476464 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-lr68z"]
Jan 21 13:38:20 crc kubenswrapper[4881]: I0121 13:38:20.092498 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-zw7gl"
Jan 21 13:38:21 crc kubenswrapper[4881]: I0121 13:38:21.337851 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="03907694-a0e6-40d6-8142-9f20169ffe16" path="/var/lib/kubelet/pods/03907694-a0e6-40d6-8142-9f20169ffe16/volumes"
Jan 21 13:38:22 crc kubenswrapper[4881]: I0121 13:38:22.453566 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-zw7gl"]
Jan 21 13:38:22 crc kubenswrapper[4881]: I0121 13:38:22.454210 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-zw7gl" podUID="bf33fd22-6287-45a0-a95d-52c731fdda8d" containerName="registry-server" containerID="cri-o://005380c69dc02bb03b813c5b9b36612ee450bef0fa7fc34d08e62eb7b603f7e6" gracePeriod=2
Jan 21 13:38:23 crc kubenswrapper[4881]: I0121 13:38:23.184101 4881 generic.go:334] "Generic (PLEG): container finished" podID="bf33fd22-6287-45a0-a95d-52c731fdda8d" containerID="005380c69dc02bb03b813c5b9b36612ee450bef0fa7fc34d08e62eb7b603f7e6" exitCode=0
Jan 21 13:38:23 crc kubenswrapper[4881]: I0121 13:38:23.184150 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zw7gl" event={"ID":"bf33fd22-6287-45a0-a95d-52c731fdda8d","Type":"ContainerDied","Data":"005380c69dc02bb03b813c5b9b36612ee450bef0fa7fc34d08e62eb7b603f7e6"}
Jan 21 13:38:24 crc kubenswrapper[4881]: I0121 13:38:24.262187 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zw7gl" event={"ID":"bf33fd22-6287-45a0-a95d-52c731fdda8d","Type":"ContainerDied","Data":"7d972578e89879daf6c160e9a56e6d8e189f16edd9dc6acd027b280469b2b64a"}
Jan 21 13:38:24 crc kubenswrapper[4881]: I0121 13:38:24.262246 4881 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7d972578e89879daf6c160e9a56e6d8e189f16edd9dc6acd027b280469b2b64a"
Jan 21 13:38:24 crc kubenswrapper[4881]: I0121 13:38:24.332868 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-zw7gl"
Need to start a new one" pod="openshift-marketplace/community-operators-zw7gl" Jan 21 13:38:24 crc kubenswrapper[4881]: I0121 13:38:24.457276 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bf33fd22-6287-45a0-a95d-52c731fdda8d-catalog-content\") pod \"bf33fd22-6287-45a0-a95d-52c731fdda8d\" (UID: \"bf33fd22-6287-45a0-a95d-52c731fdda8d\") " Jan 21 13:38:24 crc kubenswrapper[4881]: I0121 13:38:24.457581 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bf33fd22-6287-45a0-a95d-52c731fdda8d-utilities\") pod \"bf33fd22-6287-45a0-a95d-52c731fdda8d\" (UID: \"bf33fd22-6287-45a0-a95d-52c731fdda8d\") " Jan 21 13:38:24 crc kubenswrapper[4881]: I0121 13:38:24.457639 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8rrct\" (UniqueName: \"kubernetes.io/projected/bf33fd22-6287-45a0-a95d-52c731fdda8d-kube-api-access-8rrct\") pod \"bf33fd22-6287-45a0-a95d-52c731fdda8d\" (UID: \"bf33fd22-6287-45a0-a95d-52c731fdda8d\") " Jan 21 13:38:24 crc kubenswrapper[4881]: I0121 13:38:24.458747 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bf33fd22-6287-45a0-a95d-52c731fdda8d-utilities" (OuterVolumeSpecName: "utilities") pod "bf33fd22-6287-45a0-a95d-52c731fdda8d" (UID: "bf33fd22-6287-45a0-a95d-52c731fdda8d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 13:38:24 crc kubenswrapper[4881]: I0121 13:38:24.484674 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf33fd22-6287-45a0-a95d-52c731fdda8d-kube-api-access-8rrct" (OuterVolumeSpecName: "kube-api-access-8rrct") pod "bf33fd22-6287-45a0-a95d-52c731fdda8d" (UID: "bf33fd22-6287-45a0-a95d-52c731fdda8d"). InnerVolumeSpecName "kube-api-access-8rrct". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:38:24 crc kubenswrapper[4881]: I0121 13:38:24.518810 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bf33fd22-6287-45a0-a95d-52c731fdda8d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bf33fd22-6287-45a0-a95d-52c731fdda8d" (UID: "bf33fd22-6287-45a0-a95d-52c731fdda8d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 13:38:24 crc kubenswrapper[4881]: I0121 13:38:24.560740 4881 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bf33fd22-6287-45a0-a95d-52c731fdda8d-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 13:38:24 crc kubenswrapper[4881]: I0121 13:38:24.560774 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8rrct\" (UniqueName: \"kubernetes.io/projected/bf33fd22-6287-45a0-a95d-52c731fdda8d-kube-api-access-8rrct\") on node \"crc\" DevicePath \"\"" Jan 21 13:38:24 crc kubenswrapper[4881]: I0121 13:38:24.560801 4881 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bf33fd22-6287-45a0-a95d-52c731fdda8d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 13:38:25 crc kubenswrapper[4881]: I0121 13:38:25.275530 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-zw7gl" Jan 21 13:38:25 crc kubenswrapper[4881]: I0121 13:38:25.355815 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-zw7gl"] Jan 21 13:38:25 crc kubenswrapper[4881]: I0121 13:38:25.358899 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-zw7gl"] Jan 21 13:38:27 crc kubenswrapper[4881]: I0121 13:38:27.324380 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf33fd22-6287-45a0-a95d-52c731fdda8d" path="/var/lib/kubelet/pods/bf33fd22-6287-45a0-a95d-52c731fdda8d/volumes" Jan 21 13:38:33 crc kubenswrapper[4881]: I0121 13:38:33.318170 4881 scope.go:117] "RemoveContainer" containerID="3422468dfa40d9a7a02df7eba322e19f98dd660e5b5e992afb43cdf01389ca85" Jan 21 13:38:33 crc kubenswrapper[4881]: E0121 13:38:33.320742 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 13:38:33 crc kubenswrapper[4881]: I0121 13:38:33.768824 4881 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="cd1973a5-773b-438b-aab7-709fb828324d" containerName="galera" probeResult="failure" output="command timed out" Jan 21 13:38:33 crc kubenswrapper[4881]: I0121 13:38:33.768894 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-cell1-galera-0" podUID="cd1973a5-773b-438b-aab7-709fb828324d" containerName="galera" probeResult="failure" output="command timed out" Jan 21 13:38:44 crc kubenswrapper[4881]: I0121 13:38:44.459514 4881 scope.go:117] "RemoveContainer" containerID="3422468dfa40d9a7a02df7eba322e19f98dd660e5b5e992afb43cdf01389ca85" Jan 21 13:38:44 crc kubenswrapper[4881]: E0121 13:38:44.460190 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 13:38:55 crc kubenswrapper[4881]: I0121 13:38:55.311819 4881 scope.go:117] "RemoveContainer" containerID="3422468dfa40d9a7a02df7eba322e19f98dd660e5b5e992afb43cdf01389ca85" Jan 21 13:38:55 crc kubenswrapper[4881]: E0121 13:38:55.312750 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 13:39:09 crc kubenswrapper[4881]: I0121 13:39:09.311280 4881 scope.go:117] "RemoveContainer" containerID="3422468dfa40d9a7a02df7eba322e19f98dd660e5b5e992afb43cdf01389ca85" Jan 21 13:39:09 crc kubenswrapper[4881]: E0121 13:39:09.313073 4881 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 13:39:20 crc kubenswrapper[4881]: I0121 13:39:20.311360 4881 scope.go:117] "RemoveContainer" containerID="3422468dfa40d9a7a02df7eba322e19f98dd660e5b5e992afb43cdf01389ca85" Jan 21 13:39:20 crc kubenswrapper[4881]: E0121 13:39:20.312823 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 13:39:35 crc kubenswrapper[4881]: I0121 13:39:35.311681 4881 scope.go:117] "RemoveContainer" containerID="3422468dfa40d9a7a02df7eba322e19f98dd660e5b5e992afb43cdf01389ca85" Jan 21 13:39:35 crc kubenswrapper[4881]: E0121 13:39:35.313700 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 13:39:49 crc kubenswrapper[4881]: I0121 13:39:49.311302 4881 scope.go:117] "RemoveContainer" containerID="3422468dfa40d9a7a02df7eba322e19f98dd660e5b5e992afb43cdf01389ca85" Jan 21 13:39:49 crc kubenswrapper[4881]: E0121 13:39:49.312073 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 13:40:00 crc kubenswrapper[4881]: I0121 13:40:00.312167 4881 scope.go:117] "RemoveContainer" containerID="3422468dfa40d9a7a02df7eba322e19f98dd660e5b5e992afb43cdf01389ca85" Jan 21 13:40:00 crc kubenswrapper[4881]: E0121 13:40:00.313460 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 13:40:11 crc kubenswrapper[4881]: I0121 13:40:11.317312 4881 scope.go:117] "RemoveContainer" containerID="3422468dfa40d9a7a02df7eba322e19f98dd660e5b5e992afb43cdf01389ca85" Jan 21 13:40:11 crc kubenswrapper[4881]: E0121 13:40:11.318102 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
Jan 21 13:40:23 crc kubenswrapper[4881]: I0121 13:40:23.318401 4881 scope.go:117] "RemoveContainer" containerID="3422468dfa40d9a7a02df7eba322e19f98dd660e5b5e992afb43cdf01389ca85"
Jan 21 13:40:23 crc kubenswrapper[4881]: E0121 13:40:23.324251 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d"
Jan 21 13:40:38 crc kubenswrapper[4881]: I0121 13:40:38.312663 4881 scope.go:117] "RemoveContainer" containerID="3422468dfa40d9a7a02df7eba322e19f98dd660e5b5e992afb43cdf01389ca85"
Jan 21 13:40:38 crc kubenswrapper[4881]: E0121 13:40:38.313522 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d"
Jan 21 13:40:51 crc kubenswrapper[4881]: I0121 13:40:51.311566 4881 scope.go:117] "RemoveContainer" containerID="3422468dfa40d9a7a02df7eba322e19f98dd660e5b5e992afb43cdf01389ca85"
Jan 21 13:40:51 crc kubenswrapper[4881]: E0121 13:40:51.312819 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d"
Jan 21 13:41:06 crc kubenswrapper[4881]: I0121 13:41:06.312078 4881 scope.go:117] "RemoveContainer" containerID="3422468dfa40d9a7a02df7eba322e19f98dd660e5b5e992afb43cdf01389ca85"
Jan 21 13:41:06 crc kubenswrapper[4881]: E0121 13:41:06.313276 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d"
Jan 21 13:41:17 crc kubenswrapper[4881]: I0121 13:41:17.311311 4881 scope.go:117] "RemoveContainer" containerID="3422468dfa40d9a7a02df7eba322e19f98dd660e5b5e992afb43cdf01389ca85"
Jan 21 13:41:17 crc kubenswrapper[4881]: E0121 13:41:17.311987 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d"
podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 13:41:29 crc kubenswrapper[4881]: I0121 13:41:29.310740 4881 scope.go:117] "RemoveContainer" containerID="3422468dfa40d9a7a02df7eba322e19f98dd660e5b5e992afb43cdf01389ca85" Jan 21 13:41:29 crc kubenswrapper[4881]: E0121 13:41:29.311929 4881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-fb4fr_openshift-machine-config-operator(3687b313-1df2-4274-80db-8c758b51bf2d)\"" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" Jan 21 13:41:41 crc kubenswrapper[4881]: I0121 13:41:41.311019 4881 scope.go:117] "RemoveContainer" containerID="3422468dfa40d9a7a02df7eba322e19f98dd660e5b5e992afb43cdf01389ca85" Jan 21 13:41:42 crc kubenswrapper[4881]: I0121 13:41:42.363730 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" event={"ID":"3687b313-1df2-4274-80db-8c758b51bf2d","Type":"ContainerStarted","Data":"5d5a67903992fb662b7e04fe2469b9d92cb257eabe2ba374576c606306072e01"} Jan 21 13:43:59 crc kubenswrapper[4881]: I0121 13:43:59.852444 4881 patch_prober.go:28] interesting pod/machine-config-daemon-fb4fr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 13:43:59 crc kubenswrapper[4881]: I0121 13:43:59.853304 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 13:44:29 crc kubenswrapper[4881]: I0121 13:44:29.851491 4881 patch_prober.go:28] interesting pod/machine-config-daemon-fb4fr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 13:44:29 crc kubenswrapper[4881]: I0121 13:44:29.852126 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 13:44:37 crc kubenswrapper[4881]: I0121 13:44:37.134548 4881 scope.go:117] "RemoveContainer" containerID="005380c69dc02bb03b813c5b9b36612ee450bef0fa7fc34d08e62eb7b603f7e6" Jan 21 13:44:37 crc kubenswrapper[4881]: I0121 13:44:37.180058 4881 scope.go:117] "RemoveContainer" containerID="c8b5a836281ab5b467d91cb111b8bde5e2a3b2341cf2889f854337a51110a7f2" Jan 21 13:44:37 crc kubenswrapper[4881]: I0121 13:44:37.272547 4881 scope.go:117] "RemoveContainer" containerID="1464e8dac96b23af6bad563afba50c099ee6ffdb3c7eb1c93e0ab2b66618e523" Jan 21 13:44:59 crc kubenswrapper[4881]: I0121 13:44:59.851625 4881 patch_prober.go:28] interesting pod/machine-config-daemon-fb4fr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": 
Jan 21 13:44:59 crc kubenswrapper[4881]: I0121 13:44:59.852279 4881 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 21 13:44:59 crc kubenswrapper[4881]: I0121 13:44:59.852339 4881 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr"
Jan 21 13:44:59 crc kubenswrapper[4881]: I0121 13:44:59.853383 4881 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5d5a67903992fb662b7e04fe2469b9d92cb257eabe2ba374576c606306072e01"} pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 21 13:44:59 crc kubenswrapper[4881]: I0121 13:44:59.853476 4881 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" podUID="3687b313-1df2-4274-80db-8c758b51bf2d" containerName="machine-config-daemon" containerID="cri-o://5d5a67903992fb662b7e04fe2469b9d92cb257eabe2ba374576c606306072e01" gracePeriod=600
Jan 21 13:45:00 crc kubenswrapper[4881]: I0121 13:45:00.064983 4881 generic.go:334] "Generic (PLEG): container finished" podID="3687b313-1df2-4274-80db-8c758b51bf2d" containerID="5d5a67903992fb662b7e04fe2469b9d92cb257eabe2ba374576c606306072e01" exitCode=0
Jan 21 13:45:00 crc kubenswrapper[4881]: I0121 13:45:00.065052 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" event={"ID":"3687b313-1df2-4274-80db-8c758b51bf2d","Type":"ContainerDied","Data":"5d5a67903992fb662b7e04fe2469b9d92cb257eabe2ba374576c606306072e01"}
Jan 21 13:45:00 crc kubenswrapper[4881]: I0121 13:45:00.065228 4881 scope.go:117] "RemoveContainer" containerID="3422468dfa40d9a7a02df7eba322e19f98dd660e5b5e992afb43cdf01389ca85"
Jan 21 13:45:00 crc kubenswrapper[4881]: I0121 13:45:00.161079 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483385-f28d8"]
Jan 21 13:45:00 crc kubenswrapper[4881]: E0121 13:45:00.161598 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03907694-a0e6-40d6-8142-9f20169ffe16" containerName="registry-server"
Jan 21 13:45:00 crc kubenswrapper[4881]: I0121 13:45:00.161617 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="03907694-a0e6-40d6-8142-9f20169ffe16" containerName="registry-server"
Jan 21 13:45:00 crc kubenswrapper[4881]: E0121 13:45:00.161631 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed14e1b3-9440-4f92-a793-683eb01e4401" containerName="registry-server"
Jan 21 13:45:00 crc kubenswrapper[4881]: I0121 13:45:00.161637 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed14e1b3-9440-4f92-a793-683eb01e4401" containerName="registry-server"
Jan 21 13:45:00 crc kubenswrapper[4881]: E0121 13:45:00.161655 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf33fd22-6287-45a0-a95d-52c731fdda8d" containerName="registry-server"
Jan 21 13:45:00 crc kubenswrapper[4881]: I0121 13:45:00.161661 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf33fd22-6287-45a0-a95d-52c731fdda8d" containerName="registry-server"
CPUSet assignment" podUID="bf33fd22-6287-45a0-a95d-52c731fdda8d" containerName="registry-server" Jan 21 13:45:00 crc kubenswrapper[4881]: E0121 13:45:00.161672 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf33fd22-6287-45a0-a95d-52c731fdda8d" containerName="extract-utilities" Jan 21 13:45:00 crc kubenswrapper[4881]: I0121 13:45:00.161733 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf33fd22-6287-45a0-a95d-52c731fdda8d" containerName="extract-utilities" Jan 21 13:45:00 crc kubenswrapper[4881]: E0121 13:45:00.161745 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03907694-a0e6-40d6-8142-9f20169ffe16" containerName="extract-content" Jan 21 13:45:00 crc kubenswrapper[4881]: I0121 13:45:00.161750 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="03907694-a0e6-40d6-8142-9f20169ffe16" containerName="extract-content" Jan 21 13:45:00 crc kubenswrapper[4881]: E0121 13:45:00.161765 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed14e1b3-9440-4f92-a793-683eb01e4401" containerName="extract-utilities" Jan 21 13:45:00 crc kubenswrapper[4881]: I0121 13:45:00.161770 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed14e1b3-9440-4f92-a793-683eb01e4401" containerName="extract-utilities" Jan 21 13:45:00 crc kubenswrapper[4881]: E0121 13:45:00.161780 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03907694-a0e6-40d6-8142-9f20169ffe16" containerName="extract-utilities" Jan 21 13:45:00 crc kubenswrapper[4881]: I0121 13:45:00.161799 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="03907694-a0e6-40d6-8142-9f20169ffe16" containerName="extract-utilities" Jan 21 13:45:00 crc kubenswrapper[4881]: E0121 13:45:00.161816 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed14e1b3-9440-4f92-a793-683eb01e4401" containerName="extract-content" Jan 21 13:45:00 crc kubenswrapper[4881]: I0121 13:45:00.161822 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed14e1b3-9440-4f92-a793-683eb01e4401" containerName="extract-content" Jan 21 13:45:00 crc kubenswrapper[4881]: E0121 13:45:00.161840 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf33fd22-6287-45a0-a95d-52c731fdda8d" containerName="extract-content" Jan 21 13:45:00 crc kubenswrapper[4881]: I0121 13:45:00.161846 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf33fd22-6287-45a0-a95d-52c731fdda8d" containerName="extract-content" Jan 21 13:45:00 crc kubenswrapper[4881]: I0121 13:45:00.162080 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="03907694-a0e6-40d6-8142-9f20169ffe16" containerName="registry-server" Jan 21 13:45:00 crc kubenswrapper[4881]: I0121 13:45:00.162103 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed14e1b3-9440-4f92-a793-683eb01e4401" containerName="registry-server" Jan 21 13:45:00 crc kubenswrapper[4881]: I0121 13:45:00.162112 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf33fd22-6287-45a0-a95d-52c731fdda8d" containerName="registry-server" Jan 21 13:45:00 crc kubenswrapper[4881]: I0121 13:45:00.162956 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483385-f28d8" Jan 21 13:45:00 crc kubenswrapper[4881]: I0121 13:45:00.165462 4881 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 21 13:45:00 crc kubenswrapper[4881]: I0121 13:45:00.166857 4881 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 21 13:45:00 crc kubenswrapper[4881]: I0121 13:45:00.191642 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483385-f28d8"] Jan 21 13:45:00 crc kubenswrapper[4881]: I0121 13:45:00.215034 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8kkdt\" (UniqueName: \"kubernetes.io/projected/26bc618a-da67-42a8-a7bb-d387e43c3b07-kube-api-access-8kkdt\") pod \"collect-profiles-29483385-f28d8\" (UID: \"26bc618a-da67-42a8-a7bb-d387e43c3b07\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483385-f28d8" Jan 21 13:45:00 crc kubenswrapper[4881]: I0121 13:45:00.215107 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/26bc618a-da67-42a8-a7bb-d387e43c3b07-secret-volume\") pod \"collect-profiles-29483385-f28d8\" (UID: \"26bc618a-da67-42a8-a7bb-d387e43c3b07\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483385-f28d8" Jan 21 13:45:00 crc kubenswrapper[4881]: I0121 13:45:00.215215 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/26bc618a-da67-42a8-a7bb-d387e43c3b07-config-volume\") pod \"collect-profiles-29483385-f28d8\" (UID: \"26bc618a-da67-42a8-a7bb-d387e43c3b07\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483385-f28d8" Jan 21 13:45:00 crc kubenswrapper[4881]: I0121 13:45:00.317096 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8kkdt\" (UniqueName: \"kubernetes.io/projected/26bc618a-da67-42a8-a7bb-d387e43c3b07-kube-api-access-8kkdt\") pod \"collect-profiles-29483385-f28d8\" (UID: \"26bc618a-da67-42a8-a7bb-d387e43c3b07\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483385-f28d8" Jan 21 13:45:00 crc kubenswrapper[4881]: I0121 13:45:00.317341 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/26bc618a-da67-42a8-a7bb-d387e43c3b07-secret-volume\") pod \"collect-profiles-29483385-f28d8\" (UID: \"26bc618a-da67-42a8-a7bb-d387e43c3b07\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483385-f28d8" Jan 21 13:45:00 crc kubenswrapper[4881]: I0121 13:45:00.317387 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/26bc618a-da67-42a8-a7bb-d387e43c3b07-config-volume\") pod \"collect-profiles-29483385-f28d8\" (UID: \"26bc618a-da67-42a8-a7bb-d387e43c3b07\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483385-f28d8" Jan 21 13:45:00 crc kubenswrapper[4881]: I0121 13:45:00.318386 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/26bc618a-da67-42a8-a7bb-d387e43c3b07-config-volume\") pod 
\"collect-profiles-29483385-f28d8\" (UID: \"26bc618a-da67-42a8-a7bb-d387e43c3b07\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483385-f28d8" Jan 21 13:45:00 crc kubenswrapper[4881]: I0121 13:45:00.334531 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/26bc618a-da67-42a8-a7bb-d387e43c3b07-secret-volume\") pod \"collect-profiles-29483385-f28d8\" (UID: \"26bc618a-da67-42a8-a7bb-d387e43c3b07\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483385-f28d8" Jan 21 13:45:00 crc kubenswrapper[4881]: I0121 13:45:00.340668 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8kkdt\" (UniqueName: \"kubernetes.io/projected/26bc618a-da67-42a8-a7bb-d387e43c3b07-kube-api-access-8kkdt\") pod \"collect-profiles-29483385-f28d8\" (UID: \"26bc618a-da67-42a8-a7bb-d387e43c3b07\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483385-f28d8" Jan 21 13:45:00 crc kubenswrapper[4881]: I0121 13:45:00.484088 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483385-f28d8" Jan 21 13:45:01 crc kubenswrapper[4881]: I0121 13:45:01.023162 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483385-f28d8"] Jan 21 13:45:01 crc kubenswrapper[4881]: I0121 13:45:01.077505 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483385-f28d8" event={"ID":"26bc618a-da67-42a8-a7bb-d387e43c3b07","Type":"ContainerStarted","Data":"7d144563e9481a5fd3724ac8a32737ad5c62afd07039e96c32a51ff9a35213a8"} Jan 21 13:45:01 crc kubenswrapper[4881]: I0121 13:45:01.079746 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fb4fr" event={"ID":"3687b313-1df2-4274-80db-8c758b51bf2d","Type":"ContainerStarted","Data":"113d1373287853d89aa9f3d38980901d710b940a1b9ccbd9225bbeb2e3770216"} Jan 21 13:45:02 crc kubenswrapper[4881]: I0121 13:45:02.092077 4881 generic.go:334] "Generic (PLEG): container finished" podID="26bc618a-da67-42a8-a7bb-d387e43c3b07" containerID="719e3859e6f66471c6e2f81f0e16f40576c22800a6d3e0c44b5d268011817fa6" exitCode=0 Jan 21 13:45:02 crc kubenswrapper[4881]: I0121 13:45:02.092212 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483385-f28d8" event={"ID":"26bc618a-da67-42a8-a7bb-d387e43c3b07","Type":"ContainerDied","Data":"719e3859e6f66471c6e2f81f0e16f40576c22800a6d3e0c44b5d268011817fa6"} Jan 21 13:45:04 crc kubenswrapper[4881]: I0121 13:45:04.098717 4881 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483385-f28d8" Jan 21 13:45:04 crc kubenswrapper[4881]: I0121 13:45:04.115386 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483385-f28d8" event={"ID":"26bc618a-da67-42a8-a7bb-d387e43c3b07","Type":"ContainerDied","Data":"7d144563e9481a5fd3724ac8a32737ad5c62afd07039e96c32a51ff9a35213a8"} Jan 21 13:45:04 crc kubenswrapper[4881]: I0121 13:45:04.115459 4881 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7d144563e9481a5fd3724ac8a32737ad5c62afd07039e96c32a51ff9a35213a8" Jan 21 13:45:04 crc kubenswrapper[4881]: I0121 13:45:04.115549 4881 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483385-f28d8" Jan 21 13:45:04 crc kubenswrapper[4881]: I0121 13:45:04.205294 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/26bc618a-da67-42a8-a7bb-d387e43c3b07-secret-volume\") pod \"26bc618a-da67-42a8-a7bb-d387e43c3b07\" (UID: \"26bc618a-da67-42a8-a7bb-d387e43c3b07\") " Jan 21 13:45:04 crc kubenswrapper[4881]: I0121 13:45:04.205592 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8kkdt\" (UniqueName: \"kubernetes.io/projected/26bc618a-da67-42a8-a7bb-d387e43c3b07-kube-api-access-8kkdt\") pod \"26bc618a-da67-42a8-a7bb-d387e43c3b07\" (UID: \"26bc618a-da67-42a8-a7bb-d387e43c3b07\") " Jan 21 13:45:04 crc kubenswrapper[4881]: I0121 13:45:04.205678 4881 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/26bc618a-da67-42a8-a7bb-d387e43c3b07-config-volume\") pod \"26bc618a-da67-42a8-a7bb-d387e43c3b07\" (UID: \"26bc618a-da67-42a8-a7bb-d387e43c3b07\") " Jan 21 13:45:04 crc kubenswrapper[4881]: I0121 13:45:04.206944 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/26bc618a-da67-42a8-a7bb-d387e43c3b07-config-volume" (OuterVolumeSpecName: "config-volume") pod "26bc618a-da67-42a8-a7bb-d387e43c3b07" (UID: "26bc618a-da67-42a8-a7bb-d387e43c3b07"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 13:45:04 crc kubenswrapper[4881]: I0121 13:45:04.213779 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26bc618a-da67-42a8-a7bb-d387e43c3b07-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "26bc618a-da67-42a8-a7bb-d387e43c3b07" (UID: "26bc618a-da67-42a8-a7bb-d387e43c3b07"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 13:45:04 crc kubenswrapper[4881]: I0121 13:45:04.213938 4881 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/26bc618a-da67-42a8-a7bb-d387e43c3b07-kube-api-access-8kkdt" (OuterVolumeSpecName: "kube-api-access-8kkdt") pod "26bc618a-da67-42a8-a7bb-d387e43c3b07" (UID: "26bc618a-da67-42a8-a7bb-d387e43c3b07"). InnerVolumeSpecName "kube-api-access-8kkdt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 13:45:04 crc kubenswrapper[4881]: I0121 13:45:04.308910 4881 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8kkdt\" (UniqueName: \"kubernetes.io/projected/26bc618a-da67-42a8-a7bb-d387e43c3b07-kube-api-access-8kkdt\") on node \"crc\" DevicePath \"\"" Jan 21 13:45:04 crc kubenswrapper[4881]: I0121 13:45:04.308945 4881 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/26bc618a-da67-42a8-a7bb-d387e43c3b07-config-volume\") on node \"crc\" DevicePath \"\"" Jan 21 13:45:04 crc kubenswrapper[4881]: I0121 13:45:04.308959 4881 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/26bc618a-da67-42a8-a7bb-d387e43c3b07-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 21 13:45:05 crc kubenswrapper[4881]: I0121 13:45:05.203407 4881 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483340-9mvx4"] Jan 21 13:45:05 crc kubenswrapper[4881]: I0121 13:45:05.216799 4881 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483340-9mvx4"] Jan 21 13:45:05 crc kubenswrapper[4881]: I0121 13:45:05.323384 4881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a3d03c94-fe93-4321-a2a8-44fc4e42cecf" path="/var/lib/kubelet/pods/a3d03c94-fe93-4321-a2a8-44fc4e42cecf/volumes" Jan 21 13:45:37 crc kubenswrapper[4881]: I0121 13:45:37.351552 4881 scope.go:117] "RemoveContainer" containerID="c991ea82acb208ee5146cd2f274afea24486b30d08f10d3df4a9a9be6e57a12c" Jan 21 13:45:38 crc kubenswrapper[4881]: I0121 13:45:38.568180 4881 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-925l6"] Jan 21 13:45:38 crc kubenswrapper[4881]: E0121 13:45:38.569197 4881 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26bc618a-da67-42a8-a7bb-d387e43c3b07" containerName="collect-profiles" Jan 21 13:45:38 crc kubenswrapper[4881]: I0121 13:45:38.569215 4881 state_mem.go:107] "Deleted CPUSet assignment" podUID="26bc618a-da67-42a8-a7bb-d387e43c3b07" containerName="collect-profiles" Jan 21 13:45:38 crc kubenswrapper[4881]: I0121 13:45:38.569473 4881 memory_manager.go:354] "RemoveStaleState removing state" podUID="26bc618a-da67-42a8-a7bb-d387e43c3b07" containerName="collect-profiles" Jan 21 13:45:38 crc kubenswrapper[4881]: I0121 13:45:38.571129 4881 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-925l6" Jan 21 13:45:38 crc kubenswrapper[4881]: I0121 13:45:38.580661 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-925l6"] Jan 21 13:45:38 crc kubenswrapper[4881]: I0121 13:45:38.682445 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wqxv5\" (UniqueName: \"kubernetes.io/projected/11625cbb-4258-4797-9dd3-77d7f130a5c4-kube-api-access-wqxv5\") pod \"redhat-operators-925l6\" (UID: \"11625cbb-4258-4797-9dd3-77d7f130a5c4\") " pod="openshift-marketplace/redhat-operators-925l6" Jan 21 13:45:38 crc kubenswrapper[4881]: I0121 13:45:38.682628 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/11625cbb-4258-4797-9dd3-77d7f130a5c4-utilities\") pod \"redhat-operators-925l6\" (UID: \"11625cbb-4258-4797-9dd3-77d7f130a5c4\") " pod="openshift-marketplace/redhat-operators-925l6" Jan 21 13:45:38 crc kubenswrapper[4881]: I0121 13:45:38.682755 4881 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/11625cbb-4258-4797-9dd3-77d7f130a5c4-catalog-content\") pod \"redhat-operators-925l6\" (UID: \"11625cbb-4258-4797-9dd3-77d7f130a5c4\") " pod="openshift-marketplace/redhat-operators-925l6" Jan 21 13:45:38 crc kubenswrapper[4881]: I0121 13:45:38.788592 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/11625cbb-4258-4797-9dd3-77d7f130a5c4-catalog-content\") pod \"redhat-operators-925l6\" (UID: \"11625cbb-4258-4797-9dd3-77d7f130a5c4\") " pod="openshift-marketplace/redhat-operators-925l6" Jan 21 13:45:38 crc kubenswrapper[4881]: I0121 13:45:38.789057 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wqxv5\" (UniqueName: \"kubernetes.io/projected/11625cbb-4258-4797-9dd3-77d7f130a5c4-kube-api-access-wqxv5\") pod \"redhat-operators-925l6\" (UID: \"11625cbb-4258-4797-9dd3-77d7f130a5c4\") " pod="openshift-marketplace/redhat-operators-925l6" Jan 21 13:45:38 crc kubenswrapper[4881]: I0121 13:45:38.789220 4881 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/11625cbb-4258-4797-9dd3-77d7f130a5c4-utilities\") pod \"redhat-operators-925l6\" (UID: \"11625cbb-4258-4797-9dd3-77d7f130a5c4\") " pod="openshift-marketplace/redhat-operators-925l6" Jan 21 13:45:38 crc kubenswrapper[4881]: I0121 13:45:38.789998 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/11625cbb-4258-4797-9dd3-77d7f130a5c4-utilities\") pod \"redhat-operators-925l6\" (UID: \"11625cbb-4258-4797-9dd3-77d7f130a5c4\") " pod="openshift-marketplace/redhat-operators-925l6" Jan 21 13:45:38 crc kubenswrapper[4881]: I0121 13:45:38.790306 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/11625cbb-4258-4797-9dd3-77d7f130a5c4-catalog-content\") pod \"redhat-operators-925l6\" (UID: \"11625cbb-4258-4797-9dd3-77d7f130a5c4\") " pod="openshift-marketplace/redhat-operators-925l6" Jan 21 13:45:38 crc kubenswrapper[4881]: I0121 13:45:38.834514 4881 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-wqxv5\" (UniqueName: \"kubernetes.io/projected/11625cbb-4258-4797-9dd3-77d7f130a5c4-kube-api-access-wqxv5\") pod \"redhat-operators-925l6\" (UID: \"11625cbb-4258-4797-9dd3-77d7f130a5c4\") " pod="openshift-marketplace/redhat-operators-925l6" Jan 21 13:45:38 crc kubenswrapper[4881]: I0121 13:45:38.907123 4881 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-925l6" Jan 21 13:45:39 crc kubenswrapper[4881]: I0121 13:45:39.483849 4881 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-925l6"] Jan 21 13:45:39 crc kubenswrapper[4881]: W0121 13:45:39.490671 4881 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod11625cbb_4258_4797_9dd3_77d7f130a5c4.slice/crio-7c7a6d88bf4f7e95253e7ca12b0da292dc5197f4cca316be03ef2493d95bc95c WatchSource:0}: Error finding container 7c7a6d88bf4f7e95253e7ca12b0da292dc5197f4cca316be03ef2493d95bc95c: Status 404 returned error can't find the container with id 7c7a6d88bf4f7e95253e7ca12b0da292dc5197f4cca316be03ef2493d95bc95c Jan 21 13:45:39 crc kubenswrapper[4881]: I0121 13:45:39.519152 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-925l6" event={"ID":"11625cbb-4258-4797-9dd3-77d7f130a5c4","Type":"ContainerStarted","Data":"7c7a6d88bf4f7e95253e7ca12b0da292dc5197f4cca316be03ef2493d95bc95c"} Jan 21 13:45:40 crc kubenswrapper[4881]: I0121 13:45:40.535725 4881 generic.go:334] "Generic (PLEG): container finished" podID="11625cbb-4258-4797-9dd3-77d7f130a5c4" containerID="3c91dca6c145548c065b1c82b5b0712c2b2b48cdfe089d0af35f72e3766f188f" exitCode=0 Jan 21 13:45:40 crc kubenswrapper[4881]: I0121 13:45:40.535855 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-925l6" event={"ID":"11625cbb-4258-4797-9dd3-77d7f130a5c4","Type":"ContainerDied","Data":"3c91dca6c145548c065b1c82b5b0712c2b2b48cdfe089d0af35f72e3766f188f"} Jan 21 13:45:40 crc kubenswrapper[4881]: I0121 13:45:40.539755 4881 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 13:45:42 crc kubenswrapper[4881]: I0121 13:45:42.560341 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-925l6" event={"ID":"11625cbb-4258-4797-9dd3-77d7f130a5c4","Type":"ContainerStarted","Data":"19fb3965c3fd57b60d01d90930021e7c27ac818b0d94454d0c4cdf0661e2cb68"} Jan 21 13:45:46 crc kubenswrapper[4881]: I0121 13:45:46.601114 4881 generic.go:334] "Generic (PLEG): container finished" podID="11625cbb-4258-4797-9dd3-77d7f130a5c4" containerID="19fb3965c3fd57b60d01d90930021e7c27ac818b0d94454d0c4cdf0661e2cb68" exitCode=0 Jan 21 13:45:46 crc kubenswrapper[4881]: I0121 13:45:46.601483 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-925l6" event={"ID":"11625cbb-4258-4797-9dd3-77d7f130a5c4","Type":"ContainerDied","Data":"19fb3965c3fd57b60d01d90930021e7c27ac818b0d94454d0c4cdf0661e2cb68"} Jan 21 13:45:47 crc kubenswrapper[4881]: I0121 13:45:47.617825 4881 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-925l6" event={"ID":"11625cbb-4258-4797-9dd3-77d7f130a5c4","Type":"ContainerStarted","Data":"6d8ec2fe28f5db8a3db49b58203dde54cf989a4aa627b678c26ca0bbf857d247"} Jan 21 13:45:47 crc kubenswrapper[4881]: I0121 13:45:47.640826 4881 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openshift-marketplace/redhat-operators-925l6" podStartSLOduration=3.180776915 podStartE2EDuration="9.64079556s" podCreationTimestamp="2026-01-21 13:45:38 +0000 UTC" firstStartedPulling="2026-01-21 13:45:40.539402144 +0000 UTC m=+10127.799358613" lastFinishedPulling="2026-01-21 13:45:46.999420749 +0000 UTC m=+10134.259377258" observedRunningTime="2026-01-21 13:45:47.636580479 +0000 UTC m=+10134.896536948" watchObservedRunningTime="2026-01-21 13:45:47.64079556 +0000 UTC m=+10134.900752029" Jan 21 13:45:48 crc kubenswrapper[4881]: I0121 13:45:48.907745 4881 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-925l6" Jan 21 13:45:48 crc kubenswrapper[4881]: I0121 13:45:48.908136 4881 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-925l6" Jan 21 13:45:50 crc kubenswrapper[4881]: I0121 13:45:50.320445 4881 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-925l6" podUID="11625cbb-4258-4797-9dd3-77d7f130a5c4" containerName="registry-server" probeResult="failure" output=< Jan 21 13:45:50 crc kubenswrapper[4881]: timeout: failed to connect service ":50051" within 1s Jan 21 13:45:50 crc kubenswrapper[4881]: > var/home/core/zuul-output/logs/crc-cloud-workdir-crc-all-logs.tar.gz0000644000175000000000000000005515134154234024447 0ustar coreroot  Om77'(var/home/core/zuul-output/logs/crc-cloud/0000755000175000000000000000000015134154235017365 5ustar corerootvar/home/core/zuul-output/artifacts/0000755000175000017500000000000015134127757016521 5ustar corecorevar/home/core/zuul-output/docs/0000755000175000017500000000000015134127760015463 5ustar corecore